Rewriting the history of volcanic forcing during the past 2,000 years

Locations of Antarctic ice core sites used for volcanic sulfate aerosol deposition reconstruction (right); a DRI scientist examines a freshly drilled ice core in the field before ice cores are analyzed in DRI’s ultra-trace ice core analytical laboratory. – M. Sigl

A team of scientists led by Michael Sigl and Joe McConnell of Nevada’s Desert Research Institute (DRI) has completed the most accurate and precise reconstruction to date of historic volcanic sulfate emissions in the Southern Hemisphere.

The new record, described in a manuscript published today in the online edition of Nature Climate Change, is derived from a large number of individual ice cores collected at various locations across Antarctica and is the first annually resolved record extending through the Common Era (the last 2,000 years of human history).

“This record provides the basis for a dramatic improvement in existing reconstructions of volcanic emissions during recent centuries and millennia,” said the report’s lead author Michael Sigl, a postdoctoral fellow and specialist in DRI’s unique ultra-trace ice core analytical laboratory, located on the Institute’s campus in Reno, Nevada.

These reconstructions are critical to accurate model simulations used to assess past natural and anthropogenic climate forcing. Such model simulations underpin environmental policy decisions including those aimed at regulating greenhouse gas and aerosol emissions to mitigate projected global warming.

Powerful volcanic eruptions are one of the most significant causes of climate variability in the past because of the large amounts of sulfur dioxide they emit, leading to formation of microscopic particles known as volcanic sulfate aerosols. These aerosols reflect more of the sun’s radiation back to space, cooling the Earth. Past volcanic events are measured through sulfate deposition records found in ice cores and have been linked to short-term global and regional cooling.

This effort brought together the most extensive array of ice core sulfate data in the world, including the West Antarctic Ice Sheet (WAIS) Divide ice core – arguably the most detailed record of volcanic sulfate in the Southern Hemisphere. In total, the study incorporated 26 precisely synchronized ice core records collected from an array of 19 sites across Antarctica.
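
How such an array is turned into a single Antarctic-wide deposition history is, at its core, a stacking exercise: the synchronized annual records are averaged and the composite is screened for years that stand out above the non-volcanic background. The Python sketch below illustrates that idea only; the array values, the simple mean, and the outlier threshold are assumptions for illustration, not the procedure or data used by Sigl and colleagues.

    import numpy as np

    # Each row is one synchronized ice-core sulfate record (hypothetical values,
    # e.g. kg of non-sea-salt sulfate per km^2 per year); columns are years on a
    # common annual time axis. NaN marks years missing from a given core.
    records = np.array([
        [12.0, 3.5, np.nan, 80.2],   # hypothetical core A
        [10.5, 4.0, 2.8,    75.9],   # hypothetical core B
        [14.2, 2.9, 3.1,    90.4],   # hypothetical core C
    ])

    # A simple mean across cores, ignoring missing values, gives a composite
    # deposition series for the array.
    composite = np.nanmean(records, axis=0)

    # Flag candidate volcanic years as values well above a robust background
    # (median plus two robust standard deviations) -- a crude stand-in for the
    # more careful event detection a real reconstruction would use.
    background = np.nanmedian(composite)
    spread = 1.4826 * np.nanmedian(np.abs(composite - background))
    volcanic_years = composite > background + 2.0 * spread
    print(composite, volcanic_years)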

“This work is the culmination of more than a decade of collaborative ice core collection and analysis in our lab here at DRI,” said Joe McConnell, a DRI research professor who developed the continuous-flow analysis system used to analyze the ice cores.

McConnell, a member of several research teams that collected the cores (including the 2007-2009 Norwegian-American Scientific Traverse of East Antarctica and the WAIS Divide project that reached a depth of 3,405 meters in 2011), added, “The new record identifies 116 individual volcanic events during the last 2000 years.”

“Our new record completes the period from years 1 to 500 AD, for which there were no reconstructions previously, and significantly improves the record for years 500 to 1500 AD,” Sigl added. This new record also builds on DRI’s previous work as part of the international Past Global Changes (PAGES) effort to help reconstruct an accurate 2,000-year-long global temperature for individual continents.

This study involved collaborating researchers from the United States, Japan, Germany, Norway, Australia, and Italy. International collaborators contributed ice core samples for analysis at DRI as well as ice core measurements and climate modeling.

According to Yuko Motizuki from RIKEN (Japan’s largest comprehensive research institution), “The collaboration between DRI, National Institute of Polar Research (NIPR), and RIKEN just started in the last year, and we were very happy to be able to use the two newly obtained ice core records taken from Dome Fuji, where the volcanic signals are clearly visible. This is because precipitation on the site mainly contains stratospheric components.” Dr. Motizuki analyzed the samples collected by the Japanese Antarctic Research Expedition.

Simulations of volcanic sulfate transport performed with a coupled aerosol-climate model were compared to the ice core observations and used to investigate spatial patterns of sulfate deposition to Antarctica.

“Both observations and model results show that not all eruptions lead to the same spatial pattern of sulfate deposition,” said Matthew Toohey from the German institute GEOMAR Helmholtz Centre for Ocean Research Kiel. He added, “Spatial variability in sulfate deposition means that the accuracy of volcanic sulfate reconstructions depends strongly on having a sufficient number of ice core records from as many different regions of Antarctica as possible.”

With such an accurately synchronized and robust array, Sigl and his colleagues were able to revise reconstructions of past volcanic aerosol loading that are widely used today in climate model simulations. Most notably, the research found that the two largest volcanic eruptions in recent Earth history (Samalas in 1257 and Kuwae in 1458) deposited 30 to 35 percent less sulfate in Antarctica, suggesting that these events had a weaker cooling effect on global climate than previously thought.

Gas-charged fluids creating seismicity associated with a Louisiana sinkhole

Natural earthquakes and nuclear explosions produce seismic waves that register on seismic monitoring networks around the globe, allowing the scientific community to pinpoint the location of the events. In order to distinguish seismic waves produced by a variety of activities – from traffic to mining to explosions – scientists study the seismic waves generated by as many types of events as possible.

In August 2012, the emergence of a very large sinkhole at the Napoleonville Salt Dome in Louisiana offered University of California, Berkeley scientists the opportunity to detect, locate and analyze a rich sequence of 62 seismic events that occurred one day prior to its discovery.

In June 2012, residents of Bayou Corne reported frequent tremors and unusual gas bubbling in local surface water. The U.S. Geological Survey installed a temporary network of seismic stations, and on August 3, a large sinkhole was discovered close to the western edge of the salt dome.

In this study, published in the Bulletin of the Seismological Society of America (BSSA), co-authors Douglas Dreger and Avinash Nayak evaluated the data recorded by the seismic network during the 24 hours prior to the discovery of the sinkhole. They implemented a waveform scanning approach to continuously detect, locate and analyze the sources of the seismic events, which lie near the edge of the salt dome, above and to the west of the cavern adjacent to the sinkhole.

The point-source equivalent force system describing the motions at the seismic source (called the moment tensor) showed similarities to seismic events produced by explosions and by activity in geothermal and volcanic environments. But at the sinkhole, an influx of natural gas rather than hot magma may be responsible for elevating the pore pressure enough to destabilize pre-existing zones of weakness, such as fractures or faults at the edge of the salt dome.
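
A moment tensor can be separated into an isotropic part, which records volume change, and a deviatoric part, which records shear; a strong isotropic component is what makes a source resemble an explosion or a fluid-driven cavity event rather than slip on a fault. The Python sketch below shows that decomposition for an invented tensor; the numbers are purely illustrative and are not results from the BSSA study.

    import numpy as np

    # Hypothetical symmetric moment tensor in N·m (illustrative values only).
    M = np.array([
        [1.8, 0.2, 0.1],
        [0.2, 1.5, 0.3],
        [0.1, 0.3, 1.2],
    ]) * 1e13

    # Isotropic part: one third of the trace placed on the diagonal.
    iso = (np.trace(M) / 3.0) * np.eye(3)
    # Deviatoric part: what remains after the volume-change component is removed.
    dev = M - iso

    # One crude measure of how "explosion-like" the source is: the isotropic
    # moment relative to the largest principal moment.
    iso_fraction = (np.trace(M) / 3.0) / np.max(np.abs(np.linalg.eigvalsh(M)))
    print("isotropic part:\n", iso)
    print("deviatoric part:\n", dev)
    print("isotropic fraction ~", round(float(iso_fraction), 2))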

Extinct undersea volcanoes squashed under Earth’s crust cause tsunami earthquakes, according to new research

New research has revealed the causes and warning signs of rare tsunami earthquakes, which may lead to improved detection measures.

Tsunami earthquakes happen at relatively shallow depths in the ocean and are small in terms of their magnitude. However, they create very large tsunamis, with some earthquakes that only measure 5.6 on the Richter scale generating waves that reach up to ten metres when they hit the shore.

A global network of seismometers enables researchers to detect even the smallest earthquakes. However, the challenge has been to determine which small magnitude events are likely to cause large tsunamis.

In 1992, a magnitude 7.2 tsunami earthquake occurred off the coast of Nicaragua in Central America, causing the deaths of 170 people. Six hundred and thirty-seven people died and 164 were reported missing following a tsunami earthquake off the coast of Java, Indonesia, in 2006, which also measured 7.2 on the Richter scale.

The new study, published in the journal Earth and Planetary Science Letters, reveals that tsunami earthquakes may be caused by extinct undersea volcanoes causing a “sticking point” between two sections of the Earth’s crust called tectonic plates, where one plate slides under another.

The researchers from Imperial College London and GNS Science in New Zealand used geophysical data collected for oil and gas exploration, together with historical accounts from eyewitnesses, relating to two tsunami earthquakes that happened off the coast of New Zealand’s North Island in 1947. Tsunami earthquakes were only identified by geologists around 35 years ago, so detailed studies of these events are rare.

The team located two extinct volcanoes off the coast of Poverty Bay and Tolaga Bay that have been squashed and sunk beneath the crust off the coast of New Zealand, in a process called subduction.

The researchers suggest that the volcanoes provided a “sticking point” where a part of the Earth’s crust called the Pacific plate was trying to slide underneath the New Zealand plate. This caused a build-up of energy, which was released in 1947 when the plates “unstuck”: the Pacific plate moved and the volcanoes were subsumed beneath New Zealand. The release of energy was unusually slow and occurred close to the seabed, causing large movements of the sea floor that led to the formation of very large tsunami waves.

The researchers say it is the combination of these factors that gives rise to tsunami earthquakes, and that the 1947 New Zealand events provide valuable insights into the geological conditions that cause them. They believe the information they’ve gathered could be used to locate similar zones around the world that could be at risk from tsunami earthquakes. Eyewitness accounts of these events also describe the type of ground movement that occurred, providing valuable clues about possible early warning signals for coastal communities.

Dr Rebecca Bell, from the Department of Earth Science and Engineering at Imperial College London, says: “Tsunami earthquakes don’t create massive tremors like more conventional earthquakes such as the one that hit Japan in 2011, so residents and authorities in the past haven’t had the same warning signals to evacuate. These types of earthquakes were only identified a few decades ago, so little information has been collected on them. Thanks to oil exploration data and eyewitness accounts from two tsunami earthquakes that happened in New Zealand more than 70 years ago, we are beginning to understand for the first time the factors that cause these events. This could ultimately save lives.”

By studying the data and reports, the researchers have built up a picture of what happened in New Zealand in 1947 when the tsunami earthquakes hit. In the March earthquake, eyewitnesses around Poverty Bay on the east coast of the country, close to the town of Gisborne, said that they didn’t feel violent tremors, which are characteristic of typical earthquakes. Instead, they felt the ground rolling, which lasted for minutes and brought on a sense of seasickness. Approximately 30 minutes later the bay was inundated by a ten-metre-high tsunami generated by a magnitude 5.9 offshore earthquake. In May, an earthquake measuring 5.6 on the Richter scale happened off the coast of Tolaga Bay, causing an approximately six-metre-high tsunami to hit the coast. No lives were lost in the New Zealand earthquakes as the areas were sparsely populated in 1947. However, more recent tsunami earthquakes elsewhere have devastated coastal communities.

The researchers are already working with colleagues in New Zealand to develop a better warning system for residents. In particular, new signage is being installed along coastal regions to alert people to the early warning signs that indicate a possible tsunami earthquake. In the future, the team hope to conduct new cutting-edge geophysical surveys over the sites of other sinking volcanoes to better understand their characteristics and the role they play in generating this unusual type of earthquake.

Team develops a geothermometer for methane formation

John Eiler (left) and Daniel Stolper (right) with the Caltech-led team’s prototype mass spectrometer — the Thermo IRMS 253 Ultra. This instrument is the first equipped to measure abundances of rare isotopic versions of complex molecules, even where combinations of isotopic substitutions result in closely similar masses. This machine enabled the first precise measurements of molecules of methane that contain two heavy isotopes — 13CH3D, which incorporates both a carbon-13 atom and a deuterium atom, and 12CH2D2, which includes two deuterium atoms. – Caltech

Methane is a simple molecule consisting of just one carbon atom bound to four hydrogen atoms. But that simplicity belies the complex role the molecule plays on Earth: it is an important greenhouse gas, is chemically active in the atmosphere, is used in many ecosystems as a kind of metabolic currency, and is the main component of natural gas, an energy source.

Methane also poses a complex scientific challenge: it forms through a number of different biological and nonbiological processes under a wide range of conditions. For example, microbes that live in cows’ stomachs make it; it forms by thermal breakdown of buried organic matter; and it is released by hot hydrothermal vents on the sea floor. And, unlike many other, more structurally complex molecules, simply knowing its chemical formula does not necessarily reveal how it formed. Therefore, it can be difficult to know where a sample of methane actually came from.

But now a team of scientists led by Caltech geochemist John M. Eiler has developed a new technique that can, for the first time, determine the temperature at which a natural methane sample formed. Since methane produced biologically in nature forms below about 80°C, and methane created through the thermal breakdown of more complex organic matter forms at higher temperatures (reaching 160°C or more, depending on the depth of formation), this determination can aid in figuring out how and where the gas formed.

A paper describing the new technique and its first applications as a geothermometer appears in a special section about natural gas in the current issue of the journal Science. Former Caltech graduate student Daniel A. Stolper (PhD ’14) is the lead author on the paper.

“Everyone who looks at methane sees problems, sees questions, and all of these will be answered through basic understanding of its formation, its storage, its chemical pathways,” says Eiler, the Robert P. Sharp Professor of Geology and professor of geochemistry at Caltech.

“The issue with many natural gas deposits is that where you find them – where you go into the ground and drill for the methane – is not where the gas was created. Many of the gases we’re dealing with have moved,” says Stolper. “In making these measurements of temperature, we are able to really, for the first time, say in an independent way, ‘We know the temperature, and thus the environment where this methane was formed.’”

Eiler’s group determines the sources and formation conditions of materials by looking at the distribution of heavy isotopes, species of atoms that have extra neutrons in their nuclei and therefore have different chemistry. For example, the most abundant form of carbon is carbon-12, which has six protons and six neutrons in its nucleus. However, about 1 percent of all carbon possesses an extra neutron, which makes carbon-13. Chemicals compete for these heavy isotopes because they slow molecular motions, making molecules more stable. But these isotopes are also very rare, so there is a chemical tug-of-war between molecules, which ends up concentrating the isotopes in the molecules that benefit most from their stabilizing effects. Similarly, the heavy isotopes like to bind, or “clump,” with each other, meaning that there will be an excess of molecules containing two or more of the isotopes compared to molecules containing just one. This clumping effect is strong at low temperatures and diminishes at higher temperatures. Therefore, determining how many of the molecules in a sample contain heavy isotopes clumped together can tell you something about the temperature at which the sample formed.
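
The temperature dependence can be made concrete with a toy calculation: the excess of “clumped” molecules over a purely random isotope distribution falls off roughly as the inverse square of absolute temperature, so measuring that excess lets you invert for formation temperature. In the Python sketch below, the 1/T² form and the constant are generic illustrations, not the calibration published by the Caltech group.

    import numpy as np

    A_TOY = 1.0e4  # invented calibration constant, for this sketch only

    def clumping_excess(temp_c):
        """Illustrative clumped-isotope excess (per mil) at a given temperature (°C)."""
        t_kelvin = temp_c + 273.15
        return A_TOY / t_kelvin**2

    def apparent_temperature(excess_permil):
        """Invert the same toy relation: a smaller excess implies a warmer origin."""
        return np.sqrt(A_TOY / excess_permil) - 273.15

    for temp_c in (40, 80, 160):
        excess = clumping_excess(temp_c)
        print(f"{temp_c:>4}°C -> excess {excess:.4f} per mil -> "
              f"recovered {apparent_temperature(excess):.0f}°C")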

Eiler’s group has previously used such a “clumped isotope” technique to determine the body temperatures of dinosaurs, ground temperatures in ancient East Africa, and surface temperatures of early Mars. Those analyses looked at the clumping of carbon-13 and oxygen-18 in various minerals. In the new work, Eiler and his colleagues were able to examine the clumping of carbon-13 and deuterium (hydrogen-2).

The key enabling technology was a new mass spectrometer that the team designed in collaboration with Thermo Fisher, mixing and matching existing technologies to piece together a new platform. The prototype spectrometer, the Thermo IRMS 253 Ultra, is equipped to analyze samples in a way that measures the abundances of several rare versions, or isotopologues, of the methane molecule, including two “clumped isotope” species: 13CH3D, which has both a carbon-13 atom and a deuterium atom, and 12CH2D2, which includes two deuterium atoms.

Using the new spectrometer, the researchers first tested gases they made in the laboratory to make sure the method returned the correct formation temperatures.

They then moved on to analyze samples taken from environments where much is known about the conditions under which methane likely formed. For example, sometimes when methane forms in shale, an impermeable rock, it is trapped and stored, so that it cannot migrate from its point of origin. In such cases, detailed knowledge of the temperature history of the rock constrains the possible formation temperature of methane in that rock. Eiler and Stolper analyzed samples of methane from the Haynesville Shale, located in parts of Arkansas, Texas, and Louisiana, where the shale is not thought to have moved much after methane generation. And indeed, the clumped isotope technique returned temperatures of about 169°C and higher, corresponding well with current reservoir temperatures of about 163°C and higher. The method was also spot-on for methane collected from gas that formed as a product of oil-eating bugs living on top of oil reserves in the Gulf of Mexico. It returned temperatures of 34°C and 48°C, plus or minus 8°C, for those samples, and the known temperatures of the sampling locations were 42°C and 48°C, respectively.

To validate further the new technique, the researchers next looked at methane from the Marcellus Shale, a formation beneath much of the Appalachian basin, where the gas-trapping rock is known to have formed at high temperature before being uplifted into a cooler environment. The scientists wanted to be sure that the methane did not reset to the colder temperature after formation. Using their clumped isotope technique, the researchers verified this, returning a high formation temperature.

“It must be that once the methane exists and is stable, it’s a fossil remnant of what its formation environment was like,” Eiler says. “It only remembers where it formed.”

An important application of the technique is suggested by the group’s measurements of methane from the Antrim Shale in Michigan, where groundwater contains both biologically and thermally produced methane. Clumped isotope temperatures returned for samples from the area clearly revealed the different origins of the gases, hitting about 40°C for a biologically produced sample and about 115°C for a sample involving a mix of biologically and thermally produced methane.

“There are many cases where it is unclear whether methane in a sample of groundwater is the product of subsurface biological communities or has leaked from petroleum-forming systems,” says Eiler. “Our results from the Antrim Shale indicate that this clumped isotope technique will be useful for distinguishing between these possible sources.”

One final example, from the Potiguar Basin in Brazil, demonstrates another way the new method will serve geologists. In this case the methane was dissolved in oil and had been free to migrate from its original location. The researchers initially thought there was a problem with their analysis because the temperature they returned was much higher than the known temperature of the oil. However, recent evidence from drill core rocks from the region shows that the deepest parts of the system actually got very hot millions of years ago. This has led to a new interpretation suggesting that the methane gas originated deep in the system at high temperatures and then percolated up and mixed into the oil.

“This shows that our new technique is not just a geothermometer for methane formation,” says Stolper. “It’s also something you can use to think about the geology of the system.”

Research provides new theory on cause of ice age 2.6 million years ago

New research published today (Friday 27th June 2014) in the journal Scientific Reports has provided a major new theory on the cause of the ice age that covered large parts of the Northern Hemisphere 2.6 million years ago.

The study, co-authored by Dr Thomas Stevens, from the Department of Geography at Royal Holloway, University of London, found a previously unknown mechanism by which the joining of North and South America changed the salinity of the Pacific Ocean and caused major ice sheet growth across the Northern Hemisphere.

The change in salinity encouraged sea ice to form which in turn created a change in wind patterns, leading to intensified monsoons. These provided moisture that caused an increase in snowfall and the growth of major ice sheets, some of which reached 3km thick.

The team of researchers analysed deposits of wind-blown dust called red clay that accumulated between six million and two and a half million years ago in north central China, adjacent to the Tibetan plateau, and used them to reconstruct changing monsoon precipitation and temperature.

“Until now, the cause of the Quaternary ice age had been a hotly debated topic”, said Dr Stevens. “Our findings suggest a significant link between ice sheet growth, the monsoon and the closing of the Panama Seaway, as North and South America drifted closer together. This provides us with a major new theory on the origins of the ice age, and ultimately our current climate system.”

Surprisingly, the researchers found that the monsoon strengthened during global cooling, even though intense monsoon rainfall is normally associated with warmer climates.

Dr Stevens added: “This led us to discover a previously unknown interaction between plate tectonic movements in the Americas and dramatic changes in global temperature. The intensified monsoons created a positive feedback cycle, promoting more global cooling, more sea ice and even stronger precipitation, culminating in the spread of huge glaciers across the Northern Hemisphere.”

Ancient ocean currents may have changed pace and intensity of ice ages

About 950,000 years ago, North Atlantic currents and northern hemisphere ice sheets underwent changes. – NASA

Climate scientists have long tried to explain why ice-age cycles became longer and more intense some 900,000 years ago, switching from 41,000-year cycles to 100,000-year cycles.

In a paper published this week in the journal Science, researchers report that the deep ocean currents that move heat around the globe stalled or even stopped at that time, possibly due to expanding ice cover in the Northern Hemisphere.

“The research is a breakthrough in understanding a major change in the rhythm of Earth’s climate, and shows that the ocean played a central role,” says Candace Major, program director in the National Science Foundation (NSF)’s Division of Ocean Sciences, which funded the research.

The slowing currents increased carbon dioxide (CO2) storage in the oceans, leaving less CO2 in the atmosphere. That kept temperatures cold and kicked the climate system into a new phase of colder, but less frequent, ice ages, the scientists believe.

“The oceans started storing more carbon dioxide for a longer period of time,” says Leopoldo Pena, the paper’s lead author and a paleoceanographer at Columbia University’s Lamont-Doherty Earth Observatory (LDEO). “Our evidence shows that the oceans played a major role in slowing the pace of the ice ages and making them more severe.”

The researchers reconstructed the past strength of Earth’s system of ocean currents by sampling deep-sea sediments off the coast of South Africa, where powerful currents originating in the North Atlantic Ocean pass on their way to Antarctica.

How vigorously those currents moved can be inferred by how much North Atlantic water made it that far, as measured by isotope ratios of the element neodymium bearing the signature of North Atlantic seawater.

Like tape recorders, the shells of ancient plankton incorporate these seawater signals through time, allowing scientists to approximate when currents grew stronger and when weaker.
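
In practice the strength signal comes down to a two-endmember mixing calculation on the neodymium isotope ratio (usually expressed as εNd): the closer the recorded value sits to the North Atlantic endmember, the larger the share of North Atlantic-sourced water at the site. The Python sketch below illustrates that arithmetic; the endmember values and the assumption of equal neodymium concentrations are simplifications for illustration, not values from the Science paper.

    # Two-endmember mixing in epsilon-Nd (endmember values are assumed for
    # illustration; real endmembers vary with time and location).
    EPS_NORTH_ATLANTIC = -13.5   # assumed North Atlantic-sourced deep water
    EPS_PACIFIC = -4.0           # assumed Pacific-sourced deep water

    def north_atlantic_fraction(eps_measured):
        """Fraction of North Atlantic-sourced water implied by a measured
        epsilon-Nd, assuming linear mixing and equal Nd concentrations."""
        f = (eps_measured - EPS_PACIFIC) / (EPS_NORTH_ATLANTIC - EPS_PACIFIC)
        return min(max(f, 0.0), 1.0)   # clamp to the physically meaningful range

    # A drift of the recorded value toward the Pacific endmember is read as a
    # weakening of the North Atlantic-sourced current reaching the core site.
    for eps in (-12.0, -9.0, -6.0):
        print(f"epsilon-Nd {eps}: ~{north_atlantic_fraction(eps):.0%} North Atlantic water")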

Over the last 1.2 million years, the conveyor-like currents strengthened during warm periods and lessened during ice ages, as previously thought.

But about 950,000 years ago, ocean circulation slowed significantly and stayed weak for 100,000 years.

During that period the planet skipped an interglacial–the warm interval between ice ages. When the system recovered, it entered a new phase of longer, 100,000-year ice age cycles.

After this turning point, deep ocean currents remained weak during ice ages, and ice ages themselves became colder.

“Our discovery of such a major breakdown in the ocean circulation system was a big surprise,” said paper co-author Steven Goldstein, a geochemist at LDEO. “It allowed the ice sheets to grow when they should have melted, triggering the first 100,000-year cycle.”

Ice ages come and go at predictable intervals based on the changing amount of sunlight that falls on the planet, due to variations in Earth’s orbit around the sun.

Orbital changes alone, however, are not enough to explain the sudden switch to longer ice age intervals.

According to one earlier hypothesis for the transition, advancing glaciers in North America stripped away soils in Canada, causing thicker, longer-lasting ice to build up on the remaining bedrock.

Building on that idea, the researchers believe that the advancing ice might have triggered the slowdown in deep ocean currents, leading the oceans to vent less carbon dioxide, which suppressed the interglacial that should have followed.

“The ice sheets must have reached a critical state that switched the ocean circulation system into a weaker mode,” said Goldstein.

Neodymium, a key component of cellphones, headphones, computers and wind turbines, also offers a good way of measuring the vigor of ancient ocean currents.

Goldstein and colleagues had used neodymium ratios in deep-sea sediment samples to show that ocean circulation slowed during past ice ages.

They used the same method to show that changes in climate preceded changes in ocean circulation.

A trace element in Earth’s crust, neodymium washes into the oceans through erosion of the continents, carrying an isotopic signature, set by natural radioactive decay, that is unique to the land mass from which it originated.

When Goldstein and Lamont colleague Sidney Hemming pioneered this method in the late 1990s, they rarely worried about surrounding neodymium contaminating their samples.

The rise of consumer electronics has changed that.

“I used to say you could do sample processing for neodymium analysis in a parking lot,” said Goldstein. “Not anymore.”

Fracking flowback could pollute groundwater with heavy metals

Partially wetted sand grains (grey) with colloids (red) are shown. – Cornell University

The chemical makeup of wastewater generated by “hydrofracking” could cause the release of tiny particles in soils that often strongly bind heavy metals and pollutants, exacerbating the environmental risks during accidental spills, Cornell University researchers have found.

Previous research has shown that 10 to 40 percent of the water and chemical solution mixture injected at high pressure into deep rock strata surges back to the surface during well development. Scientists at the College of Agriculture and Life Sciences studying the environmental impacts of this “flowback fluid” found that the same properties that make it so effective at extracting natural gas from shale can also displace tiny particles that are naturally bound to soil, causing associated pollutants such as heavy metals to leach out.

They described the mechanisms of this release and transport in a paper published in the American Chemical Society journal Environmental Science & Technology.

The particles they studied are colloids – larger than the size of a molecule but smaller than what can be seen with the naked eye – which cling to sand and soil due to their electric charge.

In their experiments, the researchers filled glass columns with sand and synthetic polystyrene colloids. They then flushed the columns with different fluids – deionized water as a control, and flowback fluid collected from a Marcellus Shale drilling site – at different flow rates and measured the amount of colloids that were mobilized.

Under a bright-field microscope, the polystyrene colloids were visible as red spheres between light-grey sand grains, which made their movement easy to track. The researchers also collected and analyzed the water flowing out of the column to quantify the colloid concentration leaching out.

They found that fewer than five percent of colloids were released when they flushed the columns with deionized water. That figure jumped to 32 to 36 percent when flushed with flowback fluid. Increasing the flow rate of the flowback fluid mobilized an additional 36 percent of colloids.
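
Those percentages follow from a simple mass balance on the column effluent: the colloid mass carried out during each flush, divided by the mass originally loaded into the column. The Python sketch below shows the arithmetic; every volume and concentration in it is invented for illustration and is not a measurement from the Cornell experiments.

    # Illustrative mass balance for colloid release from a sand column.
    colloids_loaded_ug = 500.0   # assumed total colloid mass placed in the column

    # (effluent volume in mL, colloid concentration in ug/mL) for each flush;
    # all numbers below are hypothetical.
    flushes = {
        "deionized water, 0.3 mL/min": [(50.0, 0.30), (50.0, 0.15)],
        "flowback fluid, 0.3 mL/min": [(50.0, 2.10), (50.0, 1.30)],
        "flowback fluid, 1.5 mL/min": [(50.0, 2.40), (50.0, 1.20)],
    }

    for label, samples in flushes.items():
        released_ug = sum(volume * conc for volume, conc in samples)
        fraction = released_ug / colloids_loaded_ug
        print(f"{label}: ~{fraction:.0%} of loaded colloids mobilized")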

They believe this is because the chemical composition of the flowback fluid reduced the strength of the forces that allow colloids to remain bound to the sand, causing the colloids to actually be repelled from the sand.

“This is a first step into discovering the effects of flowback fluid on colloid transport in soils,” said postdoctoral associate Cathelijne Stoof, a co-author on the paper.

The authors hope to conduct further experiments using naturally occurring colloids in more complex field soil systems, as well as different formulations of flowback fluid collected from other drilling sites.

Stoof said awareness of the phenomenon and an understanding of the mechanisms behind it can help identify risks and inform mitigation strategies.

“Sustainable development of any resource requires facts about its potential impacts, so legislators can make informed decisions about whether and where it can and cannot be allowed, and to develop guidelines in case it goes wrong,” Stoof said. “In the case of spills, you want to know what happens when the fluid moves through the soil.”




Video
This video visualizes the effects of hydrofracking flowback fluid on colloid mobilization in unsaturated sand. Included are the injection of the colloids into the sand column at the beginning of the experiment, the deionized water flush at 0.3 ml/min, the flowback water flush at 0.3 ml/min, and the flowback water flush at 1.5 ml/min. – Cornell University

Oklahoma quakes induced by wastewater injection, study finds

The dramatic increase in earthquakes in central Oklahoma since 2009 is likely attributable to subsurface wastewater injection at just a handful of disposal wells, finds a new study to be published in the journal Science on July 3, 2014.

The research team was led by Katie Keranen, professor of geophysics at Cornell University, who says Oklahoma earthquakes constitute nearly half of all central and eastern U.S. seismicity from 2008 to 2013, many occurring in areas of high-rate water disposal.

“Induced seismicity is one of the primary challenges for expanded shale gas and unconventional hydrocarbon development. Our results provide insight into the process by which the earthquakes are induced and suggest that adherence to standard best practices may substantially reduce the risk of inducing seismicity,” said Keranen. “The best practices include avoiding wastewater disposal near major faults and the use of appropriate monitoring and mitigation strategies.”

The study also concluded:

  • Four of the highest-volume disposal wells in Oklahoma (~0.05% of wells) are capable of triggering ~20% of recent central U.S. earthquakes in a swarm covering nearly 2,000 square kilometers, as shown by analysis of modeled pore pressure increase at relocated earthquake hypocenters (a simplified pressure-diffusion sketch follows this list).

  • Earthquakes are induced at distances over 30 km from the disposal wells. These distances are far beyond existing criteria of 5 km from the well for diagnosis of induced earthquakes.

  • The area of increased pressure related to these wells continually expands, increasing the probability of encountering a larger fault and thus increasing the risk of triggering a higher-magnitude earthquake.
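
The pressure argument in the first bullet can be sketched with the classic line-source (Theis-type) solution for constant-rate injection into a uniform, laterally extensive reservoir. In the Python sketch below, every reservoir parameter and the injection rate are rough assumptions chosen only to show that pressure changes of tens of kilopascals at 30 km are plausible; they are not values from the Science study.

    import math
    from scipy.special import exp1   # exponential integral E1

    def pressure_rise_pa(r_m, t_s, q_m3s, k_m2, mu_pa_s, phi, ct_per_pa, h_m):
        """Pressure increase at radius r and time t for constant-rate injection
        into a uniform, laterally infinite reservoir (line-source solution)."""
        diffusivity = k_m2 / (mu_pa_s * phi * ct_per_pa)        # m^2/s
        u = r_m**2 / (4.0 * diffusivity * t_s)
        return q_m3s * mu_pa_s / (4.0 * math.pi * k_m2 * h_m) * exp1(u)

    # ~30 km from a high-rate disposal well after roughly five years of injection;
    # permeability, porosity, compressibility, thickness and rate are all assumed.
    dp = pressure_rise_pa(r_m=30e3, t_s=5 * 3.15e7, q_m3s=0.1,
                          k_m2=1e-13, mu_pa_s=1e-3, phi=0.1,
                          ct_per_pa=1e-9, h_m=100.0)
    print(f"modeled pressure rise ~ {dp / 1e3:.0f} kPa")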

“Earthquake and subsurface pressure monitoring should be routinely conducted in regions of wastewater disposal and all data from those should be publicly accessible. This should also include detailed monitoring and reporting of pumping volumes and pressures,” said Keranen. “In many states the data are more difficult to obtain than for Oklahoma; databases should be standardized nationally. Independent quality assurance checks would increase confidence.”

Resolving apparent inconsistencies in optimality principles for flow processes in geosystems

Optimality principles have been used, in a holistic approach, to describe flow processes in several important geosystems. Optimality principles refer to the state of a physical system that is controlled by an optimal condition subject to physical and/or resource constraints.

While significant successes have been achieved in applying them, some principles appear to contradict each other.

For example, scientists have found that the formation of channel networks in a river basin follows the minimization of energy expenditure (MEE) rate, while the Earth-atmosphere system can be described by the maximum entropy production (MEP) principle.

Under isothermal conditions the energy expenditure rate is proportional to the entropy production rate; therefore, MEE and MEP do not appear to be consistent.
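
Written out, the proportionality follows from the standard relation between dissipation and entropy production at a single temperature:

    \dot{S}_{\mathrm{prod}} \;=\; \frac{\dot{E}_{\mathrm{diss}}}{T},
    \qquad T = \mathrm{const} \;\Rightarrow\; \dot{S}_{\mathrm{prod}} \propto \dot{E}_{\mathrm{diss}},

where \dot{E}_{\mathrm{diss}} is the rate of energy expenditure (dissipation), T is the absolute temperature, and \dot{S}_{\mathrm{prod}} is the entropy production rate. When T is uniform, minimizing the energy expenditure rate therefore also minimizes the entropy production rate, which is exactly why a minimization principle (MEE) and a maximization principle (MEP) appear to contradict each other.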

The physical origin of these optimality principles is an issue of active research. They cannot be directly deduced from existing thermodynamic laws, which deal largely with processes inside black-box systems and were not developed to describe the flow structures that arise within those boxes.

The apparent inconsistency between different optimality principles calls for the development of a more precise understanding of fundamental physical laws within the context of thermodynamics.

In a recent article published in the Chinese Science Bulletin, Hui-Hai Liu, a scientist in the Earth Sciences Division at the Lawrence Berkeley National Laboratory of the University of California, proposed a new thermodynamic hypothesis.

In order to resolve the seemingly inconsistent optimality principles for flow processes in geosystems, this hypothesis states that a nonlinear natural system that is not isolated and involves positive feedback mechanisms tends to minimize the resistance to the flow process through it that is imposed by its environment.

The key discovery of this research is that a system does not tend to provide minimum resistance to all the involved flow processes, but only to the driving process imposed by its environment. The optimality principle corresponding to minimizing flow resistance applies solely to the driving process. This is a significant refinement of traditional optimality principles that do not single out the driving process.

This hypothesis resolves the seeming inconsistency between minimization of energy expenditure for a river basin and the maximum entropy production principle for the Earth-atmosphere system.

Water flow is the driving process in forming the channel network of a river basin; without water flow, there would not be a soil erosion process to generate river patterns.

On the other hand, the Earth receives radiation from the hot Sun and transfers this heat into space. The atmosphere and oceans act as a fluid system that transports heat from hot regions to cold ones with general circulation, and the convection process is more efficient in transferring heat than the conduction process. In this system, the driving flow process is the heat flow, which is also the initiator for other flow processes.

Under steady-state flow conditions, the average heat flow rate is closely related to entropy production in the Earth-atmosphere system, and the MEP corresponds to the maximum convective heat transport. In this case, maximum entropy production happens to be a byproduct of this heat-flow optimization process.

Observed and understood this way, the maximum entropy production principle in the Earth-atmosphere system and the minimization of energy expenditure in a river basin are consistent and can be unified in terms of minimizing resistance to the “flow process imposed by its environment”, or the driving process.

This research also outlines the conditions under which the corresponding optimality principle can apply: the system must be nonlinear, must not be isolated, and must involve positive feedback mechanisms.

Examples in subsurface liquid flow processes were used to demonstrate that the minimization of flow resistance does not hold when these conditions are not met.

This new hypothesis has important applications in practice.

Hui-Hai Liu posits that this new understanding can serve as the physical basis for successfully developing subsurface flow laws in hydrogeology, including the base-case theory for modeling unsaturated flow and transport in the well-known Yucca Mountain Project related to the US high-level nuclear waste repository site.

“I can see some direct applications of the theory in areas including fingering flow in the subsurface, hydraulic fracturing process, and rock damage mechanics,” said Hui-Hai Liu.

Study links Greenland ice sheet collapse, sea level rise 400,000 years ago

A research team is hiking to sample the Greenland ice-sheet margin in south Greenland. – (Photo by Kelsey Winsor, courtesy Oregon State University)

A new study suggests that a warming period more than 400,000 years ago pushed the Greenland ice sheet past its stability threshold, resulting in a nearly complete deglaciation of southern Greenland and raising global sea levels some 4-6 meters.

The study is one of the first to zero in on how the vast Greenland ice sheet responded to warmer temperatures during that period, which were caused by changes in the Earth’s orbit around the sun.

Results of the study, which was funded by the National Science Foundation, are being published this week in the journal Nature.

“The climate 400,000 years ago was not that much different than what we see today, or at least what is predicted for the end of the century,” said Anders Carlson, an associate professor at Oregon State University and co-author on the study. “The forcing was different, but what is important is that the region crossed the threshold allowing the southern portion of the ice sheet to all but disappear.

“This may give us a better sense of what may happen in the future as temperatures continue rising,” Carlson added.

Few reliable models and little proxy data exist to document the extent of the Greenland ice sheet loss during a period known as Marine Isotope Stage 11. This was an exceptionally long warm period between ice ages that resulted in a global sea level rise of about 6-13 meters above present. However, scientists have been unsure of how much sea level rise could be attributed to Greenland, and how much may have resulted from the melting of Antarctic ice sheets or other causes.

To find the answer, the researchers examined sediment cores collected off the coast of Greenland from what is called the Eirik Drift. During several years of research, they sampled the chemistry of the glacial stream sediment on the island and discovered that different parts of Greenland have unique chemical features. During the presence of ice sheets, the sediments are scraped off and carried into the water where they are deposited in the Eirik Drift.

“Each terrain has a distinct fingerprint,” Carlson noted. “They also have different tectonic histories and so changes between the terrains allow us to predict how old the sediments are, as well as where they came from. The sediments are only deposited when there is significant ice to erode the terrain. The absence of terrestrial deposits in the sediment suggests the absence of ice.

“Not only can we estimate how much ice there was,” he added, “but the isotopic signature can tell us where ice was present, or from where it was missing.”

This first “ice sheet tracer” utilizes strontium, lead and neodymium isotopes to track the terrestrial chemistry.

The researchers’ analysis of the scope of the ice loss suggests that deglaciation in southern Greenland 400,000 years ago would have accounted for at least four meters – and possibly up to six meters – of global sea level rise. Other studies have shown, however, that sea levels during that period were at least six meters above present, and may have been as much as 13 meters higher.
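
The conversion from lost ice to global sea level is essentially ice volume, corrected for the density difference between ice and seawater, spread over the area of the world ocean. The Python sketch below runs that arithmetic for an assumed ice volume chosen only so the answer lands in the 4-6 meter range quoted above; the volume itself is hypothetical, not a figure from the study.

    # Back-of-envelope sea-level-equivalent calculation (illustrative numbers).
    OCEAN_AREA_M2 = 3.61e14   # approximate global ocean surface area
    RHO_ICE = 917.0           # kg/m^3
    RHO_SEAWATER = 1027.0     # kg/m^3

    def sea_level_rise_m(ice_volume_m3):
        """Global-mean sea level rise from melting a grounded ice volume,
        ignoring gravitational, rotational and isostatic adjustments."""
        meltwater_volume_m3 = ice_volume_m3 * RHO_ICE / RHO_SEAWATER
        return meltwater_volume_m3 / OCEAN_AREA_M2

    # An assumed loss of about 2 million cubic kilometers of ice (hypothetical):
    print(f"{sea_level_rise_m(2.0e15):.1f} m of global sea level rise")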

Carlson said the ice sheet loss likely went beyond the southern edges of Greenland, though not all the way to the center, which has not been ice-free for at least one million years.

In their Nature article, the researchers contrasted the events of Marine Isotope Stage 11 with another warming period that occurred about 125,000 years ago and resulted in a sea level rise of 5-10 meters. Their analysis of the sediment record suggests that not as much of the Greenland ice sheet was lost – in fact, only enough to contribute to a sea level rise of less than 2.5 meters.

“However, other studies have shown that Antarctica may have been unstable at the time and melting there may have made up the difference,” Carlson pointed out.

The researchers say the discovery of an ice sheet tracer that can be documented through sediment core analysis is a major step to understanding the history of ice sheets in Greenland – and their impact on global climate and sea level changes. They acknowledge the need for more widespread coring data and temperature reconstructions.

“This is the first step toward more complete knowledge of the ice history,” Carlson said, “but it is an important one.”