Magma in Earth’s mantle forms deeper than once thought

This shows magma generation seen from a cross-section of Earth’s interior beneath oceanic ridges. – Dasgupta Group/Rice University

Magma forms far deeper than geologists previously thought, according to new research results.

A team led by geologist Rajdeep Dasgupta of Rice University put very small samples of peridotite, rock derived from Earth’s mantle, under high pressures in a laboratory.

The scientists found that the rock can and does liquefy, at least in small amounts, at pressures equivalent to those found as deep as 250 kilometers down in the mantle beneath the ocean floor.

Dasgupta said that this answers several questions about Earth’s inner workings.

He is the lead author of a paper that appears today in the journal Nature. The research was funded by the National Science Foundation (NSF).

“The results show that in some parts of the Earth, melting, or magma formation, happens very deep beneath Earth’s surface,” said geologist Jennifer Wade, a program director in NSF’s Division of Earth Sciences.

“It also means that some carbon dioxide and water could come from different sources–and deeper within the Earth–than we believed.”

The mantle is the planet’s middle layer, a buffer of rock between the crust–the top five miles or so–and the Earth’s core.

If millions of years of observation of the mantle could be compressed into mere minutes, it would look like a rolling mass of rising and falling material.

This slow but constant churning convection brings materials from deep within the Earth to the surface, and higher, through volcanic eruptions.

The team focused on the mantle beneath the ocean because that’s where crust is created and where, Dasgupta said, “the connection between the interior and surface world is established.”

Magma rises with convective currents, then cools and spreads out to form ocean-floor crust.

The starting point for melting has long been thought to be at 70 kilometers beneath the seafloor.

That had confounded geologists who had suspected, but could not demonstrate, the existence of deeper magma, said Dasgupta.

For example, when scientists try to determine the mantle’s density, they do so by measuring the speed of a seismic wave after an earthquake, from its origin to other points on the planet.

Because such waves travel faster through solids (e.g., crust) than through liquids (e.g., magma), geologists had been surprised to detect waves slowing down, as though passing through liquid, in a zone that should be the mantle’s faster “express lane.”
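As a rough illustration of the principle, and not the team’s actual method, the apparent velocity along a path is simply the distance traveled divided by the travel time; a value noticeably below the speed expected for solid mantle rock hints that the wave passed through partially molten material. A minimal sketch with invented numbers:

```python
# Illustrative only: apparent seismic velocity from a travel-time measurement.
# The distance, travel time, and reference velocity below are invented numbers,
# not data from the study.

def apparent_velocity_km_s(distance_km: float, travel_time_s: float) -> float:
    """Average velocity along the wave path, in km/s."""
    return distance_km / travel_time_s

expected_solid_mantle_km_s = 4.5   # typical shear-wave speed in solid upper mantle (assumed)
observed_km_s = apparent_velocity_km_s(distance_km=1800.0, travel_time_s=420.0)  # ~4.29 km/s

if observed_km_s < expected_solid_mantle_km_s:
    print(f"Observed {observed_km_s:.2f} km/s is slower than the expected "
          f"{expected_solid_mantle_km_s:.1f} km/s, consistent with trace melt along the path.")
```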

“Seismologists have observed anomalies in velocity data as deep as 200 kilometers beneath the ocean floor,” Dasgupta said.

“It turns out that trace amounts of magma are generated at this depth, which would potentially explain that” slower velocity.

The research also offers clues to the electrical conductivity of the oceanic mantle.

“The magma at such depths has a high enough concentration of dissolved carbon dioxide that its conductivity is very high,” Dasgupta said.

But because scientists have not yet been able to sample the mantle directly, researchers have had to extrapolate from the properties of rocks carried up to the surface.

In a previous study, Dasgupta determined that melting in Earth’s deep upper mantle is caused by the presence of carbon dioxide.

The present study shows that carbon helps to generate silicate magma at significant depths. The researchers also found that carbonated rock melts at significantly lower temperatures than non-carbonated rock.

“This deep melting makes the silicate differentiation [changes in silicate distribution that range from the dense metallic core, to the less-dense silicate-rich mantle, to the thinner crust] of the planet much more efficient than previously thought,” Dasgupta said.

“Deep magma is the main agent that brings all the key ingredients for life–water and carbon–to the surface of the Earth.”

In Dasgupta’s high-pressure lab, volcanic rocks are windows to the planet’s interior. The researchers crush tiny rock samples that contain carbon dioxide to find out how deep magma forms.

“We have all the necessary tools to simulate very high pressures–to nearly 750,000 pounds per square inch–and temperatures,” he said. “We can subject small amounts of rock to these conditions to see what happens.”

The geologists use powerful hydraulic presses to partially melt rocks that contain tiny amounts of carbon, simulating what they believe is happening under equivalent pressures in the mantle.
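For a sense of scale, the quoted figure of nearly 750,000 pounds per square inch can be converted into the geophysicist’s usual unit, gigapascals, and then into a rough equivalent depth using the simple lithostatic relation P = ρgh. The density value and the uniform-overburden assumption below are back-of-the-envelope simplifications, not numbers from the paper:

```python
# Back-of-the-envelope conversion of the quoted laboratory pressure.
# Assumptions (not from the study): uniform overburden density of 3,300 kg/m^3,
# g = 9.81 m/s^2, and a simple lithostatic profile P = rho * g * h.

PSI_TO_PA = 6894.76                      # pascals per pound-per-square-inch

lab_pressure_pa = 750_000 * PSI_TO_PA    # ~5.2e9 Pa
lab_pressure_gpa = lab_pressure_pa / 1e9

rho_kg_m3 = 3300.0                       # rough upper-mantle density (assumed)
g_m_s2 = 9.81

equivalent_depth_km = lab_pressure_pa / (rho_kg_m3 * g_m_s2) / 1000.0

print(f"~{lab_pressure_gpa:.1f} GPa, roughly the pressure ~{equivalent_depth_km:.0f} km down")
# -> about 5.2 GPa, on the order of 160 km depth under these simplifications
```

Under the same rough assumptions, the roughly 250-kilometer depths discussed above correspond to pressures of about 8 gigapascals.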

“When rocks come from deep in the mantle to shallower depths, they cross . . . the solidus [boundary], where rocks begin to undergo partial melting and produce magmas,” Dasgupta said.

“Scientists knew the effect of a trace amount of carbon dioxide or water would lower this boundary, but our new estimation made it 150-180 kilometers deeper from the known depth of 70 kilometers,” he said.

“What we are now saying is that with just a trace of carbon dioxide in the mantle, melting can begin as deep as around 200 kilometers.

“When we incorporate the effect of trace water, the magma generation depth becomes at least 250 kilometers.”

The extent of magma generation is larger than previously thought, he said, and, as a consequence, has the capacity to affect the geophysical and geochemical properties of the entire planet.

Greenland ice cores reveal warm climate of the past

The NEEM ice core drilling project in northwest Greenland is an international project led by the Centre for Ice and Climate at the Niels Bohr Institute. For four years the researchers have drilled ice cores through the entire 2.5 kilometer thick ice sheet and obtained new groundbreaking knowledge about the past warm climate period called the Eemian. – Niels Bohr Institute

In the period between 130,000 and 115,000 years ago, Earth’s climate was warmer than today. But how much warmer was it, and what did the warming do to global sea levels? As we face global warming in the future, the answers to these questions are becoming increasingly important. New research from the NEEM ice core drilling project in Greenland shows that the period was warmer than previously thought. The international research project is led by researchers from the Niels Bohr Institute, and the results are published in the journal Nature.

Over the last million years, Earth’s climate has alternated between ice ages lasting about 100,000 years and interglacial periods of 10,000 to 15,000 years. The new results from the NEEM ice core drilling project in northwest Greenland, led by the Niels Bohr Institute at the University of Copenhagen, show that the climate in Greenland was around 8 degrees C warmer than today during the last interglacial period, the Eemian, 130,000 to 115,000 years ago.

“Even though the warm Eemian period was a period when the oceans were four to eight meters higher than today, the ice sheet in northwest Greenland was only a few hundred meters lower than the current level, which indicates that the contribution from the Greenland ice sheet was less than half the total sea-level rise during that period,” says Dorthe Dahl-Jensen, Professor at the Niels Bohr Institute, University of Copenhagen, and leader of the NEEM-project.

Past reveals knowledge about the climate

The North Greenland Eemian Ice Drilling project, or NEEM, led by the Niels Bohr Institute, is an international project with participants from 14 countries. After four years of deep drilling, the team has drilled ice cores through the more than 2.5-kilometer-thick ice sheet. The ice is a stack of layer upon layer of annual snowfall that never melts away; as the layers gradually sink, the snow is compressed into ice. This gives thousands of annual ice layers that, like tree rings, can tell us about variations in past climate from year to year.

The ice cores are examined in laboratories with a series of analyses that reveal past climate. The content of the heavy oxygen isotope O18 in the ice tells us about the temperature in the clouds when the snow fell, and thus about the climate of the past. The air bubbles in the ice are also examined: they are samples of the ancient atmosphere encased in the ice, and they provide knowledge about the composition of the air during past climates.
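To illustrate how an isotope measurement becomes a temperature estimate, researchers typically apply an empirical linear relation between δ18O and site temperature. The slope used below (about 0.67 per mil per degree C, a commonly quoted Greenland value) and the sample values are illustrative assumptions, not the NEEM project’s actual calibration:

```python
# Illustrative isotope "paleothermometer"; the slope and sample values are
# assumptions for illustration, not the NEEM calibration.

def delta_o18_permil(sample_ratio: float, standard_ratio: float = 2.0052e-3) -> float:
    """delta-18O in per mil relative to a reference ratio (VSMOW 18O/16O assumed)."""
    return (sample_ratio / standard_ratio - 1.0) * 1000.0

def temperature_difference_c(delta_warm: float, delta_cold: float,
                             slope_permil_per_c: float = 0.67) -> float:
    """Temperature difference implied by a change in delta-18O, using an assumed
    spatial slope of ~0.67 per mil per degree C for Greenland."""
    return (delta_warm - delta_cold) / slope_permil_per_c

# Example of the delta notation itself: a ratio 3% below the standard is -30 per mil.
print(f"{delta_o18_permil(0.97 * 2.0052e-3):.0f} per mil")  # -30

# Invented example: ice that is 5.4 per mil isotopically heavier than recent ice
# would imply roughly 8 degrees C of warming under this assumed slope.
print(f"{temperature_difference_c(delta_warm=-30.0, delta_cold=-35.4):.1f} C warmer")
```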

Past global warming

The researchers have obtained the first complete ice core record from the entire previous interglacial period, the Eemian, and with these detailed studies have been able to reconstruct annual temperatures almost 130,000 years back in time.

“It is a great achievement for science to collect and combine so many measurements on the ice core and reconstruct past climate history. The new findings show higher temperatures in northern Greenland during the Eemian than current climate models have estimated,” says Professor Dorthe Dahl-Jensen, Niels Bohr Institute.

Intense melting on the surface

During the warm Eemian period, there was intense surface melting that can be seen in the ice core as layers of refrozen meltwater: meltwater from the surface penetrated down into the underlying snow, where it froze again into ice. Such surface melting has occurred very rarely in the last 5,000 years, but the team observed just such an event during the summer of 2012, while they were in Greenland.

“We were completely shocked by the warm surface temperatures at the NEEM camp in July 2012,” says Professor Dorthe Dahl-Jensen. “It was even raining and, just like in the Eemian, the meltwater formed refrozen layers of ice under the surface. Although it was an extreme event, the current warming over Greenland makes surface melting more likely, and the warming that is predicted to occur over the next 50-100 years will potentially have Eemian-like climatic conditions,” she believes.

Good news and bad news

During the warm Eemian period there was increased melting at the edge of the ice sheet, and the dynamic flow of the entire ice mass caused the ice sheet to lose mass and decrease in height, at a rate of about 6 cm per year. But despite the warm temperatures, the ice sheet did not disappear, and the research team estimates that its volume was not reduced by more than 25 percent during the warmest 6,000 years of the Eemian.
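These figures are mutually consistent: a thinning rate of about 6 cm per year sustained over the warmest roughly 6,000 years adds up to a few hundred meters of surface lowering, in line with the elevation change quoted earlier. A quick check, treating the rate as constant (a simplification):

```python
# Quick consistency check: constant thinning rate vs. total surface lowering.
# Treating 6 cm/yr as constant over 6,000 years is a simplification.

thinning_rate_m_per_yr = 0.06
duration_yr = 6000

total_lowering_m = thinning_rate_m_per_yr * duration_yr
print(f"~{total_lowering_m:.0f} m of surface lowering")  # ~360 m, i.e. a few hundred meters
```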

“The good news from this study is that the Greenland ice sheet is not as sensitive to temperature increases, and to ice melting and running out to sea in warm climate periods like the Eemian, as we thought,” explains Dorthe Dahl-Jensen. She adds that the bad news is that if Greenland’s ice did not disappear during the Eemian, then Antarctica must be responsible for a significant portion of the 4-8 meter rise in sea levels that we know occurred during that period.

This new knowledge about past warm climates may help to clarify what is in store for us now that we are facing global warming.

Scientists underestimated potential for Tohoku quake. Now what?

The massive Tohoku, Japan, earthquake in 2011 and the Sumatra-Andaman superquake in 2004 stunned scientists because neither region was thought to be capable of producing a megathrust earthquake with a magnitude exceeding 8.4.

Now earthquake scientists are going back to the proverbial drawing board and admitting that existing predictive models looking at maximum earthquake size are no longer valid.

In a new analysis published in the journal Seismological Research Letters, a team of scientists led by Oregon State University’s Chris Goldfinger describes how past global estimates of earthquake potential were constrained by short historical records and even shorter instrumental records. To gain a better appreciation for earthquake potential, he says, scientists need to investigate longer paleoseismic records.

“Once you start examining the paleoseismic and geodetic records, it becomes apparent that there had been the kind of long-term plate deformation required by a giant earthquake such as the one that struck Japan in 2011,” Goldfinger said. “Paleoseismic work has confirmed several likely predecessors to Tohoku, at about 1,000-year intervals.”

The researchers also identified long-term “supercycles” of energy within plate boundary faults, which appear to store this energy like a battery for many thousands of years before yielding a giant earthquake and releasing the pressure. At the same time, smaller earthquakes occur that do not dissipate to any great extent the energy stored within the plates.

The newly published analysis acknowledges that scientists historically may have underestimated the number of regions capable of producing major earthquakes on a scale of Tohoku.

“Since the 1970s, scientists have divided the world into plate boundaries that can generate 9.0 earthquakes versus those that cannot,” said Goldfinger, a professor in OSU’s College of Earth, Ocean, and Atmospheric Sciences. “Those models were already being called into question when Sumatra drove one stake through their heart, and Tohoku drove the second one.

“Now we have no models that work,” he added, “and we may not have for decades. We have to assume, however, that the potential for 9.0 subduction zone earthquakes is much more widespread than originally thought.”

Both Tohoku and Sumatra were written off in the textbooks as not having the potential for a major earthquake, Goldfinger pointed out.

“Their plate age was too old, and they didn’t have a really large earthquake in their recent history,” Goldfinger said. “In fact, if you look at a northern Japan seismic risk map from several years ago, it looks quite benign – but this was an artifact of recent statistics.”

Paleoseismic evidence of subduction zone earthquakes is not yet plentiful in most cases, so little is known about the long-term earthquake potential of most major faults. Scientists can determine whether a fault has ruptured in the past – when and to what extent – but they cannot easily estimate how big a specific earthquake might have been. Most, Goldfinger says, fall into ranges – say, 8.4 to 8.7.

Nevertheless, that type of evidence can be more telling than historical records because it may take many thousands of years to capture the full range of earthquake behavior.

In their analysis, the researchers point to several subduction zone areas that previously had been discounted as potential 9.0 earthquake producers – but may be due for reconsideration. These include central Chile, Peru, New Zealand, the Kuriles fault between Japan and Russia, the western Aleutian Islands, the Philippines, Java, the Antilles Islands and Makran, Pakistan/Iran.

Onshore faults such as the Himalayan Front may also be hiding outsized earthquakes, the researchers add. Their work was supported by the National Science Foundation.

Goldfinger, who directs the Active Tectonics and Seafloor Mapping Laboratory at Oregon State, is a leading expert on the Cascadia Subduction Zone off the Pacific Northwest coast of North America. His comparative studies have taken him to the Indian Ocean, Japan and Chile, and in 2007, he led the first American research ship into Sumatra waters in nearly 30 years to study similarities between the Indian Ocean subduction zone and Cascadia.

Paleoseismic evidence abounds in the Cascadia Subduction Zone, Goldfinger pointed out. When a major offshore earthquake occurs, the disturbance causes mud and sand to begin streaming down the continental margins and into the undersea canyons. Coarse sediments called turbidites run out onto the abyssal plain; these sediments stand out distinctly from the fine particulate matter that accumulates on a regular basis between major tectonic events.

By dating the fine particles through carbon-14 analysis and other methods, Goldfinger and colleagues can estimate with a great deal of accuracy when major earthquakes have occurred. Over the past 10,000 years, there have been 19 earthquakes that extended along most of the Cascadia Subduction Zone margin, stretching from southern Vancouver Island to the Oregon-California border.
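As a brief aside on the dating step itself, a conventional radiocarbon age follows from the fraction of carbon-14 remaining in a sample, using the Libby mean life of 8,033 years; calibration against records such as tree rings then converts it to calendar years. A minimal sketch (the measured fraction is invented):

```python
import math

# Conventional radiocarbon age from the surviving fraction of C-14.
# Uses the Libby mean life (8,033 years) by convention; calibration to
# calendar years is a separate step not shown here.

LIBBY_MEAN_LIFE_YR = 8033.0

def radiocarbon_age_yr(fraction_modern: float) -> float:
    """Conventional radiocarbon age in years before present."""
    return -LIBBY_MEAN_LIFE_YR * math.log(fraction_modern)

# Invented example: a sample retaining ~70% of its original carbon-14.
print(f"~{radiocarbon_age_yr(0.70):.0f} radiocarbon years")  # ~2,865
```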

“These would typically be of a magnitude from about 8.7 to 9.2 – really huge earthquakes,” Goldfinger said. “We’ve also determined that there have been 22 additional earthquakes that involved just the southern end of the fault. We are assuming that these are slightly smaller – more like 8.0 – but not necessarily. They were still very large earthquakes that if they happened today could have a devastating impact.”

Researchers analyze ‘rock dissolving’ method of geoengineering

The benefits and side effects of dissolving mineral particles at the ocean’s surface to increase the marine uptake of carbon dioxide (CO2), and thereby reduce the excess amount of it in the atmosphere, have been analyzed in a new study published today.

The study, published today, 22 January, in IOP Publishing’s journal Environmental Research Letters, assesses the impact of dissolving the naturally occurring mineral olivine and calculates how effective this approach would be in reducing atmospheric CO2.

The researchers, from the Alfred Wegener Institute for Polar and Marine Research in Bremerhaven, Germany, calculate that if three gigatonnes of olivine were deposited into the oceans each year, it could compensate for only around nine per cent of present day anthropogenic CO2 emissions.

This long-discussed ‘quick fix’ method of geoengineering is not without environmental drawbacks; the particles would have to be ground down to very small sizes (around one micrometre) in order to be effective. The grinding process would consume energy and therefore emit varying amounts of CO2, depending on the sort of power plants used to provide the energy.

Lead author of the study Peter Köhler said: “Our literature-based estimates on the energy costs of grinding olivine to such a small size suggest that with present day technology, around 30 per cent of the CO2 taken out of the atmosphere and absorbed by the oceans would be re-emitted by the grinding process.”
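The headline numbers can be roughly reproduced from the weathering stoichiometry: one mole of olivine (forsterite, Mg2SiO4, about 140 g) can ultimately bind up to about four moles of CO2 as dissolved bicarbonate, or roughly 1.25 tonnes of CO2 per tonne of rock before any penalties. The sketch below is a back-of-the-envelope check under idealized assumptions (complete dissolution, a 4:1 molar ratio, global emissions of about 35 gigatonnes of CO2 per year), not the study’s model output:

```python
# Back-of-the-envelope check of the headline numbers from idealized stoichiometry.
# Assumptions (not the study's model output): pure forsterite Mg2SiO4, complete
# dissolution binding 4 mol CO2 per mol olivine as bicarbonate, global CO2
# emissions of ~35 Gt/yr, and the ~30% grinding penalty quoted above.

MOLAR_MASS_OLIVINE_G = 140.7     # forsterite, g/mol
MOLAR_MASS_CO2_G = 44.01         # g/mol
CO2_MOL_PER_OLIVINE_MOL = 4      # idealized bicarbonate end-member

co2_per_tonne_olivine = CO2_MOL_PER_OLIVINE_MOL * MOLAR_MASS_CO2_G / MOLAR_MASS_OLIVINE_G  # ~1.25

olivine_gt_per_yr = 3.0
gross_uptake_gt = olivine_gt_per_yr * co2_per_tonne_olivine       # ~3.8 Gt CO2
net_uptake_gt = gross_uptake_gt * (1.0 - 0.30)                    # ~2.6 Gt after grinding

emissions_gt_per_yr = 35.0
print(f"net offset ~{100 * net_uptake_gt / emissions_gt_per_yr:.0f}% of emissions")
# -> on the order of the ~9% the study reports for 3 Gt of olivine per year
```

Scaling the same arithmetic up also shows why the authors arrive at tens of gigatonnes of olivine per year to approach full compensation.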

The researchers used a computer model to assess the impact of six different olivine dissolution scenarios. Olivine is an abundant magnesium silicate found beneath Earth’s surface that weathers quickly when exposed to water and air; in its natural environment it is dissolved by carbonic acid, which forms from atmospheric CO2 and rainwater.

If olivine is distributed onto the ocean’s surface, it begins to dissolve and subsequently increases the alkalinity of the water. This raises the uptake capacity of the ocean for CO2, which is taken up via gas exchange from the atmosphere.

According to the study, 92 per cent of the CO2 taken up by the oceans would be caused by changes in the chemical make-up of the water, whilst the remaining uptake would be down to changes in marine life through a process known as ocean fertilisation.

Ocean fertilisation involves providing phytoplankton with essential nutrients to encourage their growth. The increased numbers of phytoplankton use CO2 to grow, and when they die they sink to the ocean floor, taking the CO2 with them.

“In our study we only examined the effects of silicate in olivine. Silicate is a limiting nutrient for diatoms – a specific class of phytoplankton. We simulated with our model that the added input of silicate would shift the species composition within phytoplankton towards diatoms.

“It is likely that iron and other trace metals will also impact marine life if olivine is used on a large scale. Therefore, this approach can also be considered as an ocean fertilisation experiment and these impacts should be taken into consideration when assessing the pros and cons of olivine dissolution,” continued Köhler.

The researchers also investigated whether the deposition of olivine could counteract the problem of ocean acidification, which continues to have a profound effect on marine life. They calculate that about 40 gigatonnes of olivine would need to be dissolved annually to fully counteract today’s anthropogenic CO2 emissions.

“If this method of geoengineering was deployed, we would need an industry the size of the present day coal industry to obtain the necessary amounts of olivine. To distribute this, we estimate that 100 dedicated large ships with a commitment to distribute one gigatonne of olivine per year would be needed.

“Taking all our conclusions together – mainly the energy costs of the processing line and the projected potential impact on marine biology – we assess this approach as rather inefficient. It certainly is not a simple solution against the global warming problem,” said Köhler.

Analysis of fracking wastewater yields some surprises

Hydraulically fractured natural gas wells are producing less wastewater per unit of gas recovered than conventional wells do. But the scale of fracking operations in the Marcellus shale region is so vast that the wastewater it produces threatens to overwhelm the region’s wastewater disposal capacity, according to a new analysis by researchers at Duke and Kent State universities.

Hydraulically fractured natural gas wells in the Marcellus shale region of Pennsylvania produce only about 35 percent as much wastewater per unit of gas recovered as conventional wells, according to the analysis, which appears in the journal Water Resources Research.

“We found that on average, shale gas wells produced about 10 times the amount of wastewater as conventional wells, but they also produced about 30 times more natural gas,” said Brian Lutz, assistant professor of biogeochemistry at Kent State, who led the analysis while he was a postdoctoral research associate at Duke. “That surprised us, given the popular perception that hydraulic fracturing creates disproportionate amounts of wastewater.”
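The roughly 35 percent figure follows directly from the two multipliers Lutz quotes: about 10 times the wastewater divided by about 30 times the gas. A trivial check, in which the absolute per-well volumes are invented placeholders and only the ratios come from the article:

```python
# Sanity check of the per-unit-gas comparison using only the quoted multipliers.
# Absolute per-well volumes are invented placeholders.

conventional_wastewater = 1.0    # arbitrary units per well
conventional_gas = 1.0           # arbitrary units per well

shale_wastewater = 10.0 * conventional_wastewater   # "about 10 times the wastewater"
shale_gas = 30.0 * conventional_gas                 # "about 30 times more gas"

ratio = (shale_wastewater / shale_gas) / (conventional_wastewater / conventional_gas)
print(f"shale wells: ~{100 * ratio:.0f}% as much wastewater per unit of gas")  # ~33%
# -> roughly consistent with the ~35 percent reported from the full well data
```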

However, the study shows the total amount of wastewater from natural gas production in the region has increased by about 570 percent since 2004 as a result of increased shale gas production there.

“It’s a double-edged sword,” Lutz said. “On one hand, shale gas production generates less wastewater per unit. On the other hand, because of the massive size of the Marcellus resource, the overall volume of water that now has to be transported and treated is immense. It threatens to overwhelm the region’s wastewater-disposal infrastructure capacity.”

“This is the reality of increasing domestic natural gas production,” said Martin Doyle, professor of river science at Duke’s Nicholas School of the Environment. “There are significant tradeoffs and environmental impacts whether you rely on conventional gas or shale gas.”

The researchers analyzed gas production and wastewater generation for 2,189 gas wells in Pennsylvania, using publicly available data reported by industry to the state’s Department of Environmental Protection, in compliance with state law.

In hydraulic fracturing, large volumes of water, sand and chemicals are injected deep underground into gas wells at high pressure to crack open shale rock and extract its embedded natural gas. As the pace of shale gas production grows, so too have concerns about groundwater contamination and what to do with all the wastewater.

Another surprise that emerged, Doyle said, was that well operators classified only about a third of the wastewater from Marcellus wells as flowback from hydraulic fracturing; most of it was classified as brine.

“A lot of attention, to date, has focused on chemicals in the flowback that comes out of a well following hydraulic fracturing,” he said. “However, the amount of brine produced – which contains high levels of salts and other natural pollutants from shale rock – has received less attention even though it is no less important.”

Brine can be generated by wells over much longer periods of time than flowback, he noted, and studies have shown that some of the pollutants in brine can be as difficult to treat as many of the chemicals used in hydraulic fracturing fluids.

“We need to come up with technological and logistical solutions to address these concerns, including better ways to recycle and treat the waste on site or move it to places where it can be safely disposed,” Doyle said. “Both of these are in fact developing rapidly.”

“Opponents have targeted hydraulic fracturing as posing heightened risks, but many of the same environmental challenges presented by shale gas production would exist if we were expanding conventional gas production,” Lutz added. “We have to accept the reality that any effort to substantially boost domestic energy production will present environmental costs.”

The Marcellus shale formation stretches from New York to Virginia and accounts for about 10 percent of all natural gas produced in the United States today. Much of the current production is in Pennsylvania. Prior to technological advances in horizontal well drilling and hydraulic fracturing that made the shale gas accessible, the region accounted for only about 2 percent of the nation’s output.
