Widely used index may have overestimated drought

For decades, scientists have used sophisticated instruments and computer models to track and predict droughts. With the threat of climate change looming large, most of these models have predicted increasingly frequent and severe droughts worldwide. But a recent study from a team of researchers at Princeton University and the Australian National University suggests that one of the most widely used tools – the Palmer Drought Severity Index (PDSI) – may be incorrect.

The PDSI was developed in the 1960s as a way to convert multiyear temperature and precipitation data into a single number representing relative wetness for each region of the United States. The original PDSI, however, estimates potential evaporation crudely, from temperature alone, even though evaporation also depends on solar radiation, wind speed and humidity. The new model developed by Justin Sheffield, a hydrologist at Princeton and the lead author of the study, and his team corrects this deficiency – and produces different numbers. Has the reported increase in drought over the last 60 years been overestimated? And what might that mean for the future?
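The contrast at issue can be illustrated with the temperature-only potential-evapotranspiration estimate traditionally paired with the PDSI. Below is a minimal sketch of the classic Thornthwaite formulation (standard published coefficients, with the day-length correction omitted for brevity); note that radiation, wind and humidity never enter the calculation:

```python
def thornthwaite_pet(monthly_temps_c):
    """Monthly potential evapotranspiration (mm), from temperature only.

    A sketch of the Thornthwaite (1948) equation, ignoring the
    day-length/latitude correction factor.
    """
    # Annual heat index from the 12 monthly mean temperatures
    heat_index = sum((max(t, 0.0) / 5.0) ** 1.514 for t in monthly_temps_c)
    a = (6.75e-7 * heat_index**3 - 7.71e-5 * heat_index**2
         + 1.792e-2 * heat_index + 0.49239)
    # PET in mm/month; months at or below freezing contribute nothing
    return [16.0 * (10.0 * t / heat_index) ** a if t > 0 else 0.0
            for t in monthly_temps_c]

# Warmer months yield higher PET, regardless of sun, wind or humidity
pet = thornthwaite_pet([5, 6, 8, 12, 16, 20, 24, 23, 19, 14, 9, 6])
```

Because temperature is the only driver, a warming trend alone pushes this estimate up, which is the kind of sensitivity the Princeton study examined.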

Researchers say a comet killed the dinosaurs

Professors Mukul Sharma (left) and Jason Moore of the Department of Earth Sciences revisit the departure of the dinosaurs. – Eli Burakian, Dartmouth College

In a geological moment about 66 million years ago, something killed off almost all the dinosaurs and some 70 percent of all other species living on Earth. Only those dinosaurs related to birds appear to have survived. Most scientists agree that the culprit in this extinction was extraterrestrial, and the prevailing opinion has been that the party crasher was an asteroid.

Not so, say two Dartmouth researchers. Professors Jason Moore and Mukul Sharma of the Department of Earth Sciences favor another explanation, asserting that a high-velocity comet led to the demise of the dinosaurs.

Recently, asteroids have been in the headlines. On February 15, 2013, an asteroid exploded in the skies over Siberia. Later that day, another swept past the Earth in what some regard as a close call – just 17,000 miles away.

The asteroid impact theory of extinction began with discoveries by the late physicist and Nobel Laureate Luis Alvarez and his son, the geologist Walter Alvarez, a professor at the University of California, Berkeley. In 1980 they identified extremely high concentrations of the element iridium in a layer of rock known as the K-Pg (formerly called K-T) boundary. The layer marks the end of the Cretaceous period (abbreviated “K”), the epoch of the dinosaurs, and the beginning of the Paleogene period, with its notable absence of the large lizards.

While iridium is rare in the Earth’s crust, it is a common trace element in rocky space debris such as asteroids. Based on the elevated levels of iridium found worldwide in the boundary layer, the Alvarezes suggested that this signaled a major asteroid strike around the time of the K-Pg boundary – about 66 million years ago. Debate surrounded their theory until 2010, when a panel of 41 scientists published a report in support of the Alvarezes’ theory. The panel confirmed that a major asteroid impact had occurred at the K-Pg boundary and was responsible for mass extinctions.

The scientific community today looks to the deeply buried and partially submerged, 110-mile-wide Chicxulub crater in Mexico’s Yucatán as the place where the death-dealing asteroid landed. The 66-million-year age of Chicxulub, discovered in 1990, coincides with the K-Pg boundary, leading to the conclusion that what caused the crater also wiped out the dinosaurs.

Moore and Sharma do agree with fellow scientists that Chicxulub was the impact zone, but dispute the characterization of the object from space as an asteroid. In a paper presented to the 44th Lunar and Planetary Science Conference on March 22, 2013, they described their somewhat controversial findings.

Moore notes that in the past geochemists toiled away, isolated from their geophysicist colleagues, each focused on his or her particular area of expertise. “There hadn’t been a concerted synthesis of all the data from these two camps,” says Moore. “That’s what we’ve tried to do.”

The Dartmouth duo compiled all the published data on iridium from the K-Pg boundary. They also included the K-Pg data on osmium – another element common in space rock. In sifting through all this they found a wide range of variability, and consequently kept only the figures they could demonstrate to be most reliable. “Because we are bringing a fresh set of eyes into this field, we feel our decisions are objective and unbiased,” says Sharma.

For example, they deleted data drawn from deep ocean cores where there were very high amounts of iridium. “We discovered that even then there was a huge variation. It was much worse in the oceans than on the continents,” Sharma said. “We figured out that the oceanic variations are likely caused by preferential concentration of iridium-bearing minerals in marine sediments.”

In the final analysis, the overall trace-element levels were much lower than those scientists had been using for decades, and levels this low weaken the argument for an asteroid impact. A comet, however, reconciles the conflicting evidence of a huge impact crater with the revised, lower iridium and osmium levels at the K-Pg boundary.

“We are proposing a comet because that conclusion hits a ‘sweet spot.’ Comets have a lower percentage of iridium and osmium than asteroids, relative to their mass, yet a high-velocity comet would have sufficient energy to create a 110-mile-wide crater,” says Moore. “Comets travel much faster than asteroids, so they have more energy on impact, which in combination with their being partially ice means they are not contributing as much iridium or osmium.”
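Moore’s velocity argument is simple kinematics: kinetic energy scales with mass times velocity squared, so a faster impactor needs far less mass to deliver the same crater-forming energy. A back-of-envelope sketch, using illustrative speeds and an illustrative energy (these specific numbers are assumptions for the example, not figures from the study):

```python
def impactor_mass(energy_joules, speed_m_s):
    # invert E = 1/2 * m * v^2  ->  m = 2E / v^2
    return 2.0 * energy_joules / speed_m_s**2

E = 4.0e23                                # illustrative impact energy, joules
m_asteroid = impactor_mass(E, 20_000.0)   # asteroid at ~20 km/s (assumed)
m_comet = impactor_mass(E, 50_000.0)      # fast comet at ~50 km/s (assumed)
mass_ratio = m_comet / m_asteroid         # (20/50)^2 = 0.16

# The comet needs only ~16% of the asteroid's mass for the same energy,
# so it delivers correspondingly less iridium and osmium to the boundary.
```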

Moore attributes much of the early resistance to a comet impact theory to a lack of knowledge about comets in general. “We weren’t certain whether they were dirty snowballs or icy dirt balls,” he says. “Today, we are inclined toward the icy dirt ball description.”

Comet composition and physical structure were unknown, but with the advent of NASA missions to comets like “Deep Impact” in 2010, a much larger database has been developed. “We now have a much better understanding of what a comet may be like and it is still consistent with the K-Pg boundary data we are seeing,” Moore adds.

Sharma says, “In synthesizing the data generated by two very disparate fields of research – geochemistry and geophysics – we are now 99.9 percent sure that what we are dealing with is a 66-million-year-old comet impact, not an asteroid.”

Discovery of 1,800-year-old ‘Rosetta Stone’ for tropical ice cores

This photo from a 1977 expedition to Quelccaya Ice Cap in Peru shows clearly defined annual layers of ice and dust visible in the ice cap’s margin. Researchers at the Ohio State University are using a set of ice cores taken from Quelccaya as a ‘Rosetta Stone’ for studying other ice cores taken from around the world. – Photo by Lonnie Thompson, Courtesy of Ohio State University.

Two annually dated ice cores drawn from the tropical Peruvian Andes reveal Earth’s tropical climate history in unprecedented detail – year by year, for nearly 1,800 years.

Researchers at The Ohio State University retrieved the cores from a Peruvian ice cap in 2003, and then noticed some startling similarities to other ice cores that they had retrieved from Tibet and the Himalayas. Patterns in the chemical composition of certain layers matched up, even though the cores were taken from opposite sides of the planet.

In the April 4, 2013 online edition of the journal Science Express, they describe the find, which they call the first annually resolved “Rosetta Stone” with which to compare other climate histories from Earth’s tropical and subtropical regions over the last two millennia.

The cores provide a new tool for researchers to study Earth’s past climate, and better understand the climate changes that are happening today.

“These ice cores provide the longest and highest-resolution tropical ice core record to date,” said Lonnie Thompson, distinguished university professor of earth sciences at Ohio State and lead author of the study.

“In fact, having drilled ice cores throughout the tropics for more than 30 years, we now know that this is the highest-resolution tropical ice core record that is likely to be retrieved.”

The new cores, drilled from Peru’s Quelccaya Ice Cap, are special because most of their 1,800-year history exists as clearly defined layers of light and dark: light from the accumulated snow of the wet season, and dark from the accumulated dust of the dry season.

They are also special because of where they formed, atop the high Andean altiplano in southern Peru. Most of the moisture in the area comes from the east, in snowstorms fueled by moist air rising from the Amazon Basin. But the ice core-derived climate records from the Andes are also impacted from the west – specifically by El Niño, a temporary change in climate, which is driven by sea surface temperatures in the tropical Pacific.

El Niño thus leaves its mark on the Quelccaya ice cap as a chemical signature (especially in oxygen isotopes) indicating sea surface temperatures in the equatorial Pacific Ocean over much of the past 1,800 years.
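The oxygen-isotope signature referred to here is conventionally reported in “delta” notation: the sample’s 18O/16O ratio relative to a reference standard, in parts per thousand. A minimal sketch (the VSMOW reference ratio used as the default is a standard literature value, assumed here rather than taken from the study):

```python
VSMOW_RATIO = 0.0020052  # 18O/16O ratio of the VSMOW reference standard

def delta_o18_permil(sample_ratio, standard_ratio=VSMOW_RATIO):
    # delta-18O = (R_sample / R_standard - 1) * 1000, in per mil
    return (sample_ratio / standard_ratio - 1.0) * 1000.0

# Ice depleted in 18O relative to the standard gives a negative delta;
# shifts in this value through the layers carry the climate signal.
depleted = delta_o18_permil(0.0019)
```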

“We have been able to derive a proxy for sea surface temperatures that reaches back long before humans were able to make such measurements, and long before humans began to affect Earth’s climate,” Thompson said.

Ellen Mosley-Thompson, distinguished university professor of geography at Ohio State and director of the Byrd Polar Research Center, explained that the 2003 expedition to Quelccaya was the culmination of 20 years of work.

The Thompsons have drilled ice cores from glaciers atop the most remote areas of the planet – the Chinese Himalayas, the Tibetan Plateau, Kilimanjaro in Africa, and Papua, Indonesia, among others – to gauge Earth’s past climate. Each new core has provided a piece of the puzzle, as the researchers measured the concentrations of key chemicals preserved in thousands of years of accumulated ice.

A 1983 trip to Quelccaya yielded cores that earned the research team their first series of papers in Science. The remoteness of the site and the technology available at the time limited the quality of samples they could obtain, however. The nearest road was a two-day walk from the ice cap, so they were forced to melt the cores in the field and carry samples back as bottles of water. This made some chemical measurements impossible, and diminished the time resolution available from the cores.

“Due to the remoteness of the ice cap, we had to develop new tools, such as a lightweight drill powered by solar panels, to collect the 1983 cores. However, we knew there was much more information the cores could provide,” Mosley-Thompson said. “Now the ice cap is just a six-hour walk from a new access road where a freezer truck can be positioned to preserve the cores. So we can now make better dust measurements along with a suite of chemical analyses that we couldn’t make before.”

The cores will provide a permanent record for future use by climate scientists, Thompson added. This is very important, as plants captured by the advancing ice cap 6,000 years ago are now emerging along its retreating margins, which shows that Quelccaya is now smaller than it has been in six thousand years.

“The frozen history from this tropical ice cap – which is melting away as Earth continues to warm – is archived in freezers at -30°C so that creative people will have access to it 20 years from now, using instruments and techniques that don’t even exist today,” he said.

Congestion in the Earth’s mantle

This is Mineralogist Prof. Dr. Falko Langenhorst from Jena University (Germany). – Photo: Anne Guenther/FSU

The Earth is dynamic. What we perceive as solid ground beneath our feet is in reality constantly changing. Every year, Africa and the Americas drift a few centimeters farther apart along the Mid-Atlantic Ridge, while the floor of the Pacific Ocean is subducted beneath the South American continent. “In 100 million years’ time Africa will be pulled apart and northern Australia will be at the equator,” says Prof. Dr. Falko Langenhorst from the Friedrich Schiller University Jena (Germany). Plate tectonics is leading to a permanent renewal of the ocean floors, the mineralogist explains. The gaps between the drifting slabs are filled by rising melt, which solidifies into new oceanic crust. In other regions the slabs dive into the deep interior of the Earth and mix with the surrounding mantle.

The Earth is the only planet in our solar system that undergoes such a ‘facelift’ on a regular basis. But this continuous turnover of the Earth’s crust doesn’t run smoothly everywhere. “Seismic measurements show that in some mantle regions, where one slab is subducted underneath another, the movement stagnates as soon as the rocks have reached a certain depth,” says Prof. Langenhorst. The causes of this ‘congestion’ in the subducted plate have until now been unknown. In the current issue of the journal Nature Geoscience, Prof. Langenhorst and earth scientists at Bayreuth University explain the phenomenon for the first time (DOI: 10.1038/NGEO1772).

According to their findings, the rocks of the submerging ocean plate stall at a depth of 440 to 650 kilometers – in the transition zone between the upper and the lower mantle. “The reason for that can be found in the slow diffusion and transformation of mineral components,” mineralogist Langenhorst explains. High-pressure experiments allowed the scientists to clarify this: at the pressures and temperatures of this depth, the exchange of elements between the main minerals of the subducted ocean plate – pyroxene and garnet – is slowed down to an extreme extent. “The diffusion of a pyroxene component in garnet is so slow that the submerging rocks don’t become denser and heavier, and therefore stagnate,” the Jena scientist says.

Interestingly, congestion occurs in the Earth’s mantle exactly where the ocean floor submerges particularly fast into the interior of the Earth. “In the Tonga subduction zone in the southwest Pacific, for example, the speed of subduction is very high,” Prof. Langenhorst states. As a result, the submerging rocks of the oceanic plate stay relatively cold to great depth, which makes the exchange of elements between the mineral components exceptionally difficult. “It takes about 100 million years for pyroxene crystals only 1 mm in size to diffuse into the garnet. For this amount of time the submerging plate stagnates,” Langenhorst says of the rock congestion. The blockage is probably resolved only at the boundary of the lower mantle, where the higher pressure at a depth of 650 kilometers transforms pyroxene into the mineral akimotoite. “This could lead to an immediate rise in rock density and would enable the plate to submerge to greater depths.”
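The quoted 100-million-year stall for millimeter-sized grains is consistent with the usual characteristic diffusion timescale, t ≈ x²/D. Back-solving for the implied diffusivity gives a number on the order of 10⁻²² m²/s – our own illustrative calculation, not a figure from the paper:

```python
SECONDS_PER_YEAR = 3.156e7

def diffusion_time_years(length_m, diffusivity_m2_s):
    # characteristic diffusion time: t ~ x^2 / D
    return length_m**2 / diffusivity_m2_s / SECONDS_PER_YEAR

# Back-solve: what diffusivity makes a 1 mm grain take ~100 million years?
implied_D = (1e-3)**2 / (100e6 * SECONDS_PER_YEAR)  # ~3e-22 m^2/s
t_years = diffusion_time_years(1e-3, implied_D)     # recovers ~1e8 years
```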

The North American Cordillera: Constructive collisions

The mountain ranges of the North American Cordillera are made up of dozens of distinct crustal blocks. A new study clarifies their mode of origin and identifies a previously unknown oceanic plate that contributed to their assembly.

The extensive area of elevated topography that dominates the western reaches of North America is exceptionally broad, encompassing the coastal ranges, the Rocky Mountains and the high plateaus in between. In fact, this mountain belt consists of dozens of crustal blocks of varying age and origin, which have been welded onto the American continent over the past 200 million years. “How these blocks arrived in North America has long been a puzzle,” says LMU geophysicist Karin Sigloch, who has now taken a closer look at the problem, in collaboration with the Canadian geologist Mitchell Mihalynuk.

Collisions and continental growth

One popular model for the accretion process postulates that a huge oceanic plate – the Farallon Plate – acted as a conveyor belt to sweep crustal fragments eastwards to the margin of the American Plate, to which they were attached as the denser Farallon Plate was subducted under it. However, this scenario is at variance with several geological findings, and does not explain why the same phenomenon is not observed on the west coast of South America, the classical case of subduction of oceanic crust beneath a continental plate. The precise source of the crustal blocks themselves has also remained enigmatic, although geological studies suggest that they derive from several groups of volcanic islands. “The geological strata in North America have been highly deformed over the course of time, and are extremely difficult to interpret, so these findings have not been followed up,” says Sigloch.

Sigloch and Mihalynuk have now succeeded in assembling a comprehensive picture of the accretion process by incorporating geophysical findings obtained by seismic tomography. This technique makes it possible to probe the structure of the Earth’s interior down to the level of the lower mantle by analyzing the propagation velocities of seismic waves. The method can image, at great depths, the remnants of ancient tectonic plates – ocean floor that subducted, i.e., disappeared from the surface and sank back into the mantle, long ago.

Intra-oceanic subduction of the Farallon Plate

Most surprisingly, the new data suggest that the Farallon Plate was far smaller than had been assumed, and underwent subduction well to the west of what was then the continental margin of North America. Instead it collided with, and subducted under, an intervening and previously unrecognized oceanic plate. Sigloch and Mihalynuk were able to locate the remnants of several deep-sea trenches that mark subduction sites at which oceanic plates plunge at a steep angle into the mantle and are drawn almost vertically into its depths. “The volcanic activity that accompanies the subduction process will have generated lots of new crustal material, which emerged in the form of island arcs along the line of the trenches, and provided the material for the crustal blocks,” Sigloch explains.

As these events were going on, the American Plate was advancing steadily westwards, as indicated by striped patterns of magnetized seafloor in the North Atlantic. The first to get consumed was the previously unknown oceanic plate, which can be detected seismologically beneath today’s east coast of North America. Only then did the continent begin to encounter the Farallon plate. On its westward journey, North America overrode one intervening island arc after another – annexing ever more of them for the construction of its wide mountains of the West.

Fracking: Challenges and opportunities

A technology vital for tapping much-needed energy or one that’s environmentally destructive? That’s the question a panel of experts will explore at the Technology and Society Forum session on fracking April 10, 2013 from 3 – 4:30 p.m. in the Campus Center Ballroom. The NJIT Technology and Society Forum is free and open to the public.

Fracking, short for hydraulic fracturing, injects fluid underground at high pressure to fracture rock formations in order to extract previously inaccessible oil and gas. Opponents point to the negatives, including groundwater contamination, risks to air quality, and migration of toxic chemicals to the surface.

The panel looking at both sides of fracking will be chaired by Michel Boufadel, NJIT professor of civil and environmental engineering and director of the university’s Center for Natural Resources Development and Protection. Boufadel’s wide range of environmental research includes assessing the effects of the Exxon Valdez spill in Alaska and the BP Deepwater Horizon blowout in the Gulf of Mexico.

Panelist Fred Baldassare is a senior geoscientist at ECHELON Applied Geoscience Consulting as well as the owner of the practice. He has been a leader in applying isotope geochemistry to identification of the source and type of gases in soils, aquifers and other geologic features of the Appalachian Basin.

Tracy Carluccio is assistant director of the Delaware Riverkeeper Network (DRN), a nonprofit whose staff and volunteers work throughout the entire Delaware River Watershed. DRN is engaged in environmental advocacy, volunteer monitoring, stream-restoration assistance and educational initiatives.

Daniel Soeder is a scientist with the U.S. Department of Energy’s National Energy Technology Laboratory in West Virginia. His research interests include geology, energy and environmental issues related to unconventional fossil fuel resources such as shale gas, oil shale, enhanced oil recovery, and the geological sequestration of carbon dioxide.

Earth is ‘lazy’ when forming faults like those near San Andreas

Cooke’s UMass Amherst lab is one of only a handful worldwide to use a relatively new modeling technique that uses kaolin clay rather than sand to better understand the behavior of Earth’s crust. – UMass Amherst

Geoscientist Michele Cooke and colleagues at the University of Massachusetts Amherst take an uncommon, “Earth is lazy” approach to modeling fault development in the crust that is providing new insights into how faults grow. In particular, they study irregularities along strike-slip faults, the active zones where plates slip past each other such as at the San Andreas Fault of southern California.

Until now there has been a great deal of uncertainty among geologists about the factors that govern how new faults grow in regions where one plate slides past or over another around a bend, says Cooke. In their study published in an early online edition of the Journal of Structural Geology, she and colleagues offer the first systematic exploration of fault evolution around fault bends based on modeling in a clay box.

Testing ideas about how the Earth’s crust behaves in real time is impossible because actions unfold over many thousands of years, and success in reconstructing events after the fact is limited. A good analog for laboratory experiments has been a goal for decades. “Geologists don’t agree on how the earth’s crust handles restraining bends along faults. There’s just a lack of evidence. When researchers go out in the field to measure faults, they can’t always tell which one came first, for example,” Cooke says.

Unlike most geoscience researchers, she takes a mechanical efficiency approach to study dynamic fault systems’ effectiveness at transforming input energy into force and movement. For example, a straight fault is more efficient at accommodating strain than a bumpy fault. For this reason Cooke is very interested in how the efficiency of fault bends evolves with increasing deformation.

Her data suggest that at restraining bends, the crust behaves in accord with “work minimization” principles, an idea she dubs the “Lazy Earth” hypothesis. “Our approach offers some of the first system-type evidence of how faults evolve around restraining bends,” she says.

Further, Cooke’s UMass Amherst lab is one of only a handful worldwide to use a relatively new modeling technique that uses kaolin clay rather than sand to better understand the behavior of Earth’s crust.

For these experiments, she and colleagues Mariel Schottenfeld and Steve Buchanan, both undergraduates at the time, used a clay box, or tray, loaded with kaolin – also known as china clay – prepared very carefully so that its viscosity scales to that of the Earth’s crust. When scaled properly, data from clay experiments conducted over several hours in a table-top device are useful in modeling restraining bend evolution over thousands of years and at the scale of tens of kilometers.
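Scaled analog models of this kind work through ratios: a length scale maps centimeters of clay onto kilometers of crust, a time scale maps laboratory hours onto geological millennia, and velocities then scale as length over time. A sketch with purely illustrative scale factors (assumed for the example, not the lab’s actual values):

```python
SECONDS_PER_YEAR = 3.156e7

# Illustrative scale factors (assumed, not from the study):
LENGTH_RATIO = 0.30 / 30_000.0                          # 30 cm of clay ~ 30 km of crust
TIME_RATIO = (4 * 3600.0) / (4_000 * SECONDS_PER_YEAR)  # a 4-hour run ~ 4,000 years

def model_velocity(natural_velocity_m_s):
    # velocities scale as (length ratio) / (time ratio)
    return natural_velocity_m_s * LENGTH_RATIO / TIME_RATIO

# A plate slipping 3 cm/yr in nature maps to roughly 0.3 mm/hr in the clay box
v_natural = 0.03 / SECONDS_PER_YEAR
v_model_mm_per_hr = model_velocity(v_natural) * 1000.0 * 3600.0
```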

Cooke says sand doesn’t remember faults the way kaolin can. In an experiment of a bend in a fault, sand will just keep forming new faults. But clay will remember an old fault until it’s so inefficient at accommodating the slip that a new fault will eventually form in a manner much more similar to what geologists see on the ground.

Another innovation Cooke and colleagues use is a laser scan to map the clay’s deformation over time and to collect quantitative data about the system’s efficiency. “It’s a different approach than the conventional one,” Cooke acknowledges. “I think about fault evolution in terms of work and efficiency. With this experiment we now have compelling evidence from the clay box that the development of new faults increases the efficiency of the system. There is good evidence to support the belief that faults grow to improve efficiency in the Earth’s crust as well.”

“We’re moving toward much more precision within laboratory experiments,” she adds. “This whole field has been revolutionized in the past six years. It’s an exciting time to be doing this sort of modeling. Our paper demonstrates the mastery we now have over this method.”

The observation that a fault’s active zone can shift location significantly over 10,000 years is very revealing, Cooke says, and has important implications for understanding seismic hazards. The more geologists understand fault development, the better they may be able to predict earthquake hazards and understand Earth’s evolution, she points out.

Team achieves petaflop-level earthquake simulations on GPU-powered supercomputers

A team of researchers at the San Diego Supercomputer Center (SDSC) and the Department of Electronic and Computer Engineering at the University of California, San Diego, has developed a highly scalable computer code that promises to dramatically cut both research times and energy costs in simulating seismic hazards throughout California and elsewhere.

The team, led by Yifeng Cui, a computational scientist at SDSC, developed the scalable GPU (graphical processing units) accelerated code for use in earthquake engineering and disaster management through regional earthquake simulations at the petascale level as part of a larger computational effort coordinated by the Southern California Earthquake Center (SCEC). San Diego State University (SDSU) is also part of this collaborative effort in pushing the envelope toward extreme-scale earthquake computing.

“The increased capability of GPUs, combined with the high-level GPU programming language CUDA, has provided tremendous horsepower required for acceleration of numerically intensive 3D simulation of earthquake ground motions,” said Cui, who recently presented the team’s new development at the NVIDIA 2013 GPU Technology Conference (GTC) in San Jose, Calif.

A technical paper based on this work will be presented June 5-7 at the 2013 International Conference on Computational Science Conference in Barcelona, Spain.

The accelerated code, which runs on GPUs rather than CPUs (central processing units), is based on a widely used wave propagation code called AWP-ODC, which stands for Anelastic Wave Propagation by Olsen, Day and Cui. It is named after Kim Olsen and Steven Day, geological science professors at San Diego State University (SDSU), and SDSC’s Cui. The research team restructured the code to exploit high performance and throughput, memory locality, and overlapping of computation and communication, which made it possible to scale the code linearly to more than 8,000 NVIDIA Kepler GPU accelerators.

Sustained One Petaflop/s Performance

The team performed GPU-based benchmark simulations of the 5.4 magnitude earthquake that occurred in July 2008 below Chino Hills, near Los Angeles. Compute systems included Keeneland – managed by Georgia Tech, Oak Ridge National Laboratory (ORNL) and the National Institute for Computational Sciences (NICS), and part of the National Science Foundation’s (NSF) eXtreme Science and Engineering Discovery Environment (XSEDE) – and Blue Waters, based at the National Center for Supercomputing Applications (NCSA). Also used was the Titan supercomputer, based at ORNL and funded by the U.S. Department of Energy. Titan is a Cray XK7 system equipped with NVIDIA’s Tesla K20X GPU accelerators.

The benchmarks, run on Titan, showed a five-fold speedup over the heavily optimized CPU code on the same system, and a sustained performance of one petaflop/s (one quadrillion calculations per second) on the tested system. A previous benchmark of the AWP-ODC code reached only 200 teraflops (trillions of calculations per second) of sustained performance.
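The reported figures allow a quick sanity check: the sustained rate is five times the earlier CPU benchmark, and dividing it across the more than 8,000 GPUs implies a per-accelerator throughput on the order of 125 gigaflops (our derived estimate, not a number reported by the team):

```python
sustained = 1.0e15      # sustained rate on Titan: 1 petaflop/s (from the article)
prior_cpu = 2.0e14      # earlier CPU benchmark: 200 teraflop/s (from the article)
n_gpus = 8000           # GPU count the code was scaled to (from the article)

speedup = sustained / prior_cpu            # 5x over the previous benchmark
per_gpu_gflops = sustained / n_gpus / 1e9  # ~125 Gflop/s per accelerator
```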

By delivering a significantly higher level of computational power, researchers can produce more accurate simulations of earthquake ground motion, with increased physical realism and resolution, with the potential to save lives and minimize property damage.

“This is an impressive achievement that has made petascale-level computing a reality for us, opening up some new and really interesting possibilities for earthquake research,” said Thomas Jordan, director of SCEC, which has been collaborating with UC San Diego and SDSU researchers on this and other seismic research projects, such as the simulation of a magnitude 8.0 earthquake, the largest such simulation to date.

“Substantially faster and more energy-efficient earthquake codes are urgently needed for improved seismic hazard evaluation,” said Cui, citing the recent destructive earthquakes in China, Haiti, Chile, New Zealand, and Japan.

Next Steps

While the GPU-based AWP-ODC code is already in research use, further enhancements are being planned for use on hybrid heterogeneous architectures such as Titan and Blue Waters.

“One goal going forward is to use this code to calculate an improved probabilistic seismic hazard forecast for the California region under a collaborative effort coordinated by SCEC,” said Cui. “Our ultimate goal is to support development of a CyberShake model that can assimilate information during earthquake cascades so we can improve our operational forecasting and early warning systems.”

CyberShake is a SCEC project focused on developing new approaches to performing seismic hazard analyses using 3D waveform modeling. The GPU-based code has the potential to save the hundreds of millions of CPU-hours that the planned statewide seismic hazard map calculations would otherwise require.

Additional members on the UC San Diego research team include Jun Zhou and Efecan Poyraz, graduate students with the university’s Department of Electrical and Computer Engineering (Zhou devoted his graduate research to this development work); SDSC researcher Dong Ju Choi; and Clark C. Guest, an associate professor of electrical and computer engineering at UC San Diego’s Jacobs School of Engineering.

Compute resources used for this research are supported by XSEDE under NSF grant number OCI-1053575, while additional funding for research was provided through XSEDE’s Extended Collaborative Support Service (ECSS) program.

“ECSS exists for exactly this reason, to help a research team make significant performance gains and take their simulations to the next level,” said Nancy Wilkins-Diehr, co-director of the ECSS program and SDSC’s associate director. “We’re very pleased with the results we were able to achieve for PI Thomas Jordan and his team. ECSS projects are typically conducted over several months to a year. This type of targeted support may be requested by anyone through the XSEDE allocations process.”

Models will enable safer deepwater oil production

Rice University researchers are developing a comprehensive model that will predict how brine, oil and gas drawn from ultra-deep wells react to everything they encounter on the way to the surface, and will suggest strategies to maintain the flow.

Deepwater production involves not only hydrocarbons but also formation water (brine), chemicals, and the materials that make up the complex machinery of modern oil extraction. Under high pressures and temperatures, the brine can form acidic mixtures that corrode pipes, or solid mineral deposits, called scale, that inhibit flow in a well.

Rice professors Walter Chapman, Kenneth Cox and Mason Tomson will combine their expertise in materials and computer modeling to analyze the brine and its environment on the molecular to macro scales. Their research is supported by $1.2 million awarded last fall by the Research Partnership to Secure Energy for America, a contractor for the US Department of Energy, through Brine Chemistry Solutions LLC.

A molecular-level model based on Chapman’s Statistical Associating Fluid Theory (SAFT) equation and on Tomson’s expertise with the Brine Chemistry Consortium will predict the likelihood that scale and corrosion will hinder the flow in a well based on variables found at the site. That will save money, cut risk and make deepwater production safer and more environmentally sound, Chapman said.
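A common way to summarize the scaling tendency such a model predicts is a mineral's saturation index, SI = log10(IAP/Ksp), where a positive value indicates the brine is supersaturated and scale may precipitate. The snippet below is a minimal illustrative sketch of that calculation for calcite (CaCO3), not the SAFT-based model described here; the ion activities and solubility product are placeholder values chosen only for demonstration.

```python
import math

def saturation_index(activity_ca: float, activity_co3: float, ksp: float) -> float:
    """Return SI = log10(IAP / Ksp) for calcite.

    IAP is the ion activity product [Ca2+][CO3^2-]; Ksp is the
    solubility product. SI > 0 suggests a scaling tendency.
    """
    iap = activity_ca * activity_co3  # ion activity product
    return math.log10(iap / ksp)

# Hypothetical activities (mol/L) and a rough Ksp for calcite near 25 C.
si = saturation_index(activity_ca=1e-3, activity_co3=1e-5, ksp=3.3e-9)
print(f"SI = {si:.2f}")  # positive value -> scale formation is favored
```

In a real flow-assurance model, the activities themselves depend strongly on temperature, pressure and ionic strength; capturing that dependence under downhole conditions is exactly what the Rice model aims to do.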

“This is all about flow assurance,” he said. “Companies want to maintain their ability to produce by knowing how to deal with potential obstacles — scale, asphaltenes, natural gas hydrates, wax — anything that could prevent them from being able to flow the fluids. Rice works in each of these areas, but the current project involves scale and corrosion.”

The degree of difficulty increases as explorers drill further out to sea. Ultra-deepwater wells are those with water depths greater than 7,500 feet. In these fossil fuel reservoirs, extreme temperatures up to 500 degrees Fahrenheit and pressures greater than 25,000 pounds per square inch can turn benign mixtures into pipe-eating acids or clogging solids.

“There are a lot of components,” said Cox, who spent 17 years as a research engineer at Shell before entering academia. “You have water, all kinds of salts and other species in a very complex mixture over many extreme combinations of temperature and pressure. But most of the data we have to base our models on, to calibrate the models against, are taken near room temperature and at atmospheric pressure.

“It’s kind of like sending someone to the moon, when you only know what you’ve experienced on Earth,” he said. “We mean to take this limited body of data and make the best possible use of it.”

“Our idea is to have a single model able to describe all the phases — for gases, water and aqueous solutions, and hydrocarbons — from very hot, high-pressure conditions down hole all the way through the platform and even through transmission and refining,” Chapman said. “We want to use one model to describe conditions along this entire path.”

Samples from the deep are difficult to analyze, even when they can be obtained, he said. “Even though sample cells can maintain pressure, and a very few will maintain temperature, by the time they come up to the surface, the samples are completely different,” he said. “So we have limited data from which to project a highly complex system.”

That system includes not only the oil and gas, briny water, sodium chloride, calcium carbonate, barium sulfate, carbon dioxide, hydrogen sulfide and other components found in the chemical stew in and beneath the sea, but also the hardware used to extract oil. Every pipe, every seal, every tank and every part of every pump has to be accounted for in the equation, to see how each might react. Pressure and temperature changes along the entire path affect how – or if – the product flows.

“The temperature’s high enough for water to dissolve sand,” Cox said. “That’s an interesting problem in itself, because at very high temperature it forms volatile silicic acid. The presence of this kind of thing is what we try to pin down.”

“We’re trying to get basic data into a fundamental model that, given the water chemistry and the conditions at any site, allows us to model what should be expected of that site,” Chapman said.

He said a good model would help companies plan their approach to a potential well before committing millions of dollars to set up a platform. “They want to know what they’re going to find. Their inhibition strategy (to limit corrosion), the materials they’re going to use and the separation equipment will all be based on their knowledge of the fluid properties down hole.”

The Rice team is working with project leader Brine Chemistry Solutions LLC, a Houston company founded by Tomson’s son, Rice alumnus Ross Tomson. Chapman is the William W. Akers Professor of Chemical and Biomolecular Engineering. Cox is a professor in the practice of chemical and biomolecular engineering. Tomson is a professor of civil and environmental engineering.

Funding for the projects is provided through the Ultra-Deepwater and Unconventional Natural Gas and Other Petroleum Resources Research and Development Program authorized by the Energy Policy Act of 2005. The program is funded from lease bonuses and royalties paid by industry to produce oil and gas on federal lands and is designed to assess and mitigate risk, enhancing the environmental sustainability of oil and gas exploration and production activities.

Scientists image deep magma beneath Pacific seafloor volcano

Since the plate tectonics revolution of the 1960s, scientists have known that new seafloor is created throughout the major ocean basins at linear chains of volcanoes known as mid-ocean ridges. But where exactly does the erupted magma come from?

Researchers at Scripps Institution of Oceanography at UC San Diego now have a better idea after capturing a unique image of a site deep in the earth where magma is generated.

Using electromagnetic technology developed and advanced at Scripps, the researchers mapped a large area beneath the seafloor off Central America at the northern East Pacific Rise, a seafloor volcano located on a section of the global mid-ocean ridges that together form the largest and most active chain of volcanoes in the solar system. For scale, the researchers say the cross-sectional area of the melting region they mapped rivals the size of San Diego County.

Details of the image and the methods used to capture it are published in the March 28 issue of the journal Nature.

“Our data show that mantle upwelling beneath the mid-ocean ridge creates a deeper and broader melting region than previously thought,” said Kerry Key, lead author of the study and an associate research geophysicist at Scripps. “This was the largest project of its kind, enabling us to image the mantle with a level of detail not possible with previous studies.”

The northern East Pacific Rise is an area where two of the planet’s tectonic plates are spreading apart from each other. Mantle rising between the plates melts to generate the magma that forms fresh seafloor when it erupts or freezes in the crust.

Data for the study was obtained during a 2004 field study conducted aboard the research vessel Roger Revelle, a ship operated by Scripps and owned by the U.S. Navy.

The marine electromagnetic technology behind the study was originally developed in the 1960s by Charles “Chip” Cox, an emeritus professor of oceanography at Scripps, and his student Jean Filloux. In recent years the technology was further advanced by Steven Constable and Key. Since 1995 Scripps researchers have been working with the energy industry to apply this technology to map offshore geology as an aid to exploring for oil and gas reservoirs.

“We have been working on developing our instruments and interpretation software for decades, and it is really exciting to see it all come together to provide insights into the fundamental processes of plate tectonics,” said Constable, a coauthor of the paper and a professor in the Cecil H. and Ida M. Green Institute of Geophysics and Planetary Physics at Scripps. “It was really a surprise to discover that melting started so deep in the mantle – much deeper than was expected.”

Key believes the insights that electromagnetics provides will continue to grow as the technology matures and data analysis techniques improve (last week Key and his colleagues announced the use of electromagnetics in discovering a magma lubricant for the planet’s tectonic plates: http://scrippsnews.ucsd.edu/Releases/?releaseID=1331).

“Electromagnetics is really coming of age as a tool for imaging the earth,” said Key. “Much of what we know about the crust and mantle is a result of using seismic techniques. Now electromagnetic technology is offering promise for further discoveries.”

Key also plans to apply electromagnetic technology to map subglacial lakes and groundwater in the polar regions.