Fracking’s environmental impacts scrutinized

Greenhouse gas emissions from the production and use of shale gas would be comparable to those from conventional natural gas, but the controversial energy source actually fared better than renewables on some environmental impacts, according to new research.

By some estimates, the UK holds enough shale gas to supply its entire gas demand for 470 years, promising to solve the country’s energy crisis and end its reliance on fossil-fuel imports from unstable markets. But for many, including climate scientists and environmental groups, shale gas exploitation is viewed as environmentally dangerous and would result in the UK reneging on its greenhouse gas reduction obligations under the Climate Change Act.

University of Manchester scientists have now conducted one of the most thorough examinations of the likely environmental impacts of shale gas exploitation in the UK in a bid to inform the debate. Their research has just been published in the leading academic journal Applied Energy and study lead author, Professor Adisa Azapagic, will outline the findings at the Labour Party Conference in Manchester, England, today (Monday, 22 September).

“While exploration is currently ongoing in the UK, commercial extraction of shale gas has not yet begun, yet its potential has stirred controversy over its environmental impacts, its safety and the difficulty of justifying its use to a nation conscious of climate change,” said Professor Azapagic.

“There are many unknowns in the debate surrounding shale gas, so we have attempted to address some of these unknowns by estimating its life cycle environmental impacts from ‘cradle to grave’. We looked at 11 different impacts from the extraction of shale gas using hydraulic fracturing – known as ‘fracking’ – as well as from its processing and use to generate electricity.”

The researchers compared shale gas to other fossil-fuel alternatives, such as conventional natural gas and coal, as well as low-carbon options, including nuclear, offshore wind and solar power (solar photovoltaics).

The results of the research suggest that the average emissions of greenhouse gases from shale gas over its entire life cycle are about 460 grams of carbon dioxide-equivalent per kilowatt-hour of electricity generated. This, the authors say, is comparable to the emissions from conventional natural gas. For most of the other life-cycle environmental impacts considered by the team, shale gas was also comparable to conventional natural gas.

But the study also found that shale gas was better than offshore wind and solar for four out of 11 impacts: depletion of natural resources, toxicity to humans, and ecotoxicity to freshwater and marine organisms. Additionally, shale gas was better than solar (but not wind) for ozone layer depletion and eutrophication (the effect of nutrients such as phosphates on natural ecosystems).

On the other hand, shale gas was worse than coal for three impacts: ozone layer depletion, summer smog and terrestrial eco-toxicity.

Professor Azapagic said: “Some of the impacts of solar power are actually relatively high, so it is not a complete surprise that shale gas is better in a few cases. This is mainly because manufacturing solar panels is very energy and resource-intensive, while their electrical output is quite low in a country like the UK, as we don’t have as much sunshine. However, our research shows that the environmental impacts of shale gas can vary widely, depending on the assumptions for various parameters, including the composition and volume of the fracking fluid used, disposal routes for the drilling waste and the amount of shale gas that can be recovered from a well.

“Assuming the worst case conditions, several of the environmental impacts from shale gas could be worse than from any other options considered in the research, including coal. But, under the best-case conditions, shale gas may be preferable to imported liquefied natural gas.”

The authors say their results highlight the need for tight regulation of shale gas exploration – weak regulation, they claim, may result in shale gas having higher impacts than coal power, resulting in a failure to meet climate change and sustainability imperatives and undermining the deployment of low-carbon technologies.

Professor Azapagic added: “Whether shale gas is an environmentally sound option depends on the perceived importance of different environmental impacts and the regulatory structure under which shale gas operates.

“From the government policy perspective – focusing mainly on economic growth and energy security – it appears likely that shale gas represents a good option for the UK energy sector, assuming that it can be extracted at reasonable cost.

“However, a wider view must also consider other aspects of widespread use of shale gas, including the impact on climate change, as well as many other environmental considerations addressed in our study. Ultimately, the environmental impacts from shale gas will depend on which options it is displacing and how tight the regulation is.”

Study co-author Dr Laurence Stamford, from Manchester’s School of Chemical Engineering and Analytical Science, said: “Appropriate regulation should introduce stringent controls on the emissions from shale gas extraction and disposal of drilling waste. It should also discourage extraction from sites where there is little shale gas in order to avoid the high emissions associated with a low-output well.”

He continued: “If shale gas is extracted under tight regulations and is reasonably cheap, there is no obvious reason, as yet, why it should not make some contribution to our energy mix. However, regulation should also ensure that investment in sustainable technologies is not reduced at the expense of shale gas.”

First eyewitness accounts of mystery volcanic eruption

This eruption occurred just before the 1815 Tambora volcanic eruption which is famous for its impact on climate worldwide, with 1816 given memorable names such as ‘Eighteen-Hundred-and-Froze-to-Death’, the ‘Year of the Beggar’ and the ‘Year Without a Summer’ because of unseasonal frosts, crop failure and famine across Europe and North America. The extraordinary conditions are considered to have inspired literary works such as Byron’s ‘Darkness’ and Mary Shelley’s Frankenstein.

However, the global deterioration of the 1810s into the coldest decade in the last 500 years started six years earlier, with another large eruption. In contrast to Tambora, this so-called ‘Unknown’ eruption seemingly occurred unnoticed, with both its location and date a mystery. In fact the ‘Unknown’ eruption was only recognised in the 1990s, from tell-tale markers in Greenland and Antarctic ice that record the rare events when volcanic aerosols are so violently erupted that they reach the Earth’s stratosphere.

Working in collaboration with colleagues from the School of Earth Sciences and PhD student Alvaro Guevara-Murua, Dr Caroline Williams, from the Department of Hispanic, Portuguese and Latin American Studies, began searching historical archives for references to the event.

Dr Williams said: “I spent months combing through the vast Spanish colonial archive, but it was a fruitless search – clearly the volcano wasn’t in Latin America. I then turned to the writings of Colombian scientist Francisco José de Caldas, who served as Director of the Astronomical Observatory of Bogotá between 1805 and 1810. Finding his precise description of the effects of an eruption was a ‘Eureka’ moment.”

In February 1809 Caldas wrote about a “mystery” that included a constant, stratospheric “transparent cloud that obstructs the sun’s brilliance” over Bogotá, starting on 11 December 1808 and seen across Colombia. He gave detailed observations, for example that the “natural fiery colour [of the sun] has changed to that of silver, so much so that many have mistaken it for the moon”; and that the weather was unusually cold, the fields covered with ice and the crops damaged by frost.

Unearthing a short account written by physician José Hipólito Unanue in Lima, Peru, describing sunset after-glows (a common atmospheric effect caused by volcanic aerosols in the stratosphere) at the same time as Caldas’ “vapours above the horizon”, enabled the researchers to verify that the atmospheric effects of the eruption were seen at the same time on both sides of the equator.

These two 19th century Latin American scientists provide the first direct observations that can be linked to the ‘Unknown’ eruption. More importantly, the accounts date the eruption to within a fortnight of 4 December 1808.

Dr Erica Hendy said: “There have to be more observations hidden away, for example in ship logs. Having a date for the eruption will now make it much easier to track these down, and maybe even pinpoint the volcano. Climate modelling of this fascinating decade will also now be more accurate because the season of the eruption determines how the aerosols disperse around the globe and where climatic effects are felt.”

Alvaro Guevara-Murua added: “This study has meant delving into many fields of research – obviously paleoclimatology and volcanology, but also 19th century meteorology and Spanish colonial history – and has also needed rigorous precision to correctly translate the words of two scientists writing 200 years ago. Giving them a voice in modern science has been a big responsibility.”

One further question remains: why are there so few historical accounts of what was clearly a significant event with wide-reaching consequences? Perhaps, Dr Williams suggests, the political environment on both sides of the Atlantic at the beginning of the nineteenth century played a part.

“The eruption coincided with the Napoleonic Wars in Europe, the Peninsular War in Spain, and with political developments in Latin America that would soon lead to the independence of almost all of Spain’s American colonies. It’s possible that, in Europe and Latin America at least, the attention of individuals who might otherwise have provided us with a record of unusual meteorological or atmospheric effects simply turned to military and political matters instead,” she said.

What set the Earth’s plates in motion?

The image shows a snapshot from the film after 45 million years of spreading. The pink is the region where the mantle underneath the early continent has melted, facilitating its spreading, and the initiation of the plate tectonic process. – Patrice Rey, Nicolas Flament and Nicolas Coltice.

The mystery of what kick-started the motion of our earth’s massive tectonic plates across its surface has been explained by researchers at the University of Sydney.

“Earth is the only planet in our solar system where the process of plate tectonics occurs,” said Professor Patrice Rey, from the University of Sydney’s School of Geosciences.

“The geological record suggests that until three billion years ago the earth’s crust was immobile, so what sparked this unique phenomenon has fascinated geoscientists for decades. We suggest it was triggered by the spreading of early continents and then eventually became a self-sustaining process.”

Professor Rey is lead author of an article on the findings published in Nature on Wednesday, 17 September.

The other authors on the paper are Nicolas Flament, also from the School of Geosciences, and Nicolas Coltice from the University of Lyon.

There are eight major tectonic plates that move above the earth’s mantle at rates up to 150 millimetres every year.

In simple terms the process involves plates being dragged into the mantle at certain points and moving away from each other at others, in what has been dubbed ‘the conveyor belt’.

Plate tectonics depends on the inverse relationship between the density of rocks and their temperature.

At mid-oceanic ridges, rocks are hot and their density is low, making them buoyant, or more able to float. As they move away from those ridges they cool down and their density increases until, once they become denser than the underlying hot mantle, they sink and are ‘dragged’ under.
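In rough terms (a textbook relation rather than anything taken from the paper, with a typical value assumed for the thermal expansion coefficient), the density of the cooling plate behaves as

\[ \rho(T) \approx \rho_0 \left[ 1 - \alpha \,(T - T_0) \right], \qquad \alpha \approx 3\times10^{-5}\ \mathrm{K^{-1}}, \]

so a plate that has cooled by several hundred degrees ends up roughly one per cent or so denser than the hot mantle beneath it, which is enough to let it sink at a subduction zone.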

But three to four billion years ago, the earth’s interior was hotter, volcanic activity was more prominent and tectonic plates did not become cold and dense enough to sink spontaneously.

“So the driving engine for plate tectonics didn’t exist,” said Professor Rey.

“Instead, thick and buoyant early continents erupted in the middle of immobile plates. Our modelling shows that these early continents could have placed major stress on the surrounding plates. Because they were buoyant they spread horizontally, forcing adjacent plates to be pushed under at their edges.”

“This spreading of the early continents could have produced intermittent episodes of plate tectonics until, as the earth’s interior cooled and its crust and plate mantle became heavier, plate tectonics became a self-sustaining process which has never ceased and has shaped the face of our modern planet.”

The new model also makes a number of predictions explaining features that have long puzzled the geoscience community.



Video
The movie tells an 87-million-year-long story. It shows an early buoyant continent (made of a residual mantle in green and continental crust in red) slowly spreading toward the adjacent immobile plate (blue). After 45 million years, a short-lived subduction zone, where the plate goes under, develops. This allows the continent to surge toward the ocean, leading to the detachment of a continental block, the starting step in the movement of the continental plates or plate tectonics. – Patrice Rey, Nicolas Flament and Nicolas Coltice

Meteorite that doomed the dinosaurs helped the forests bloom

Seen here is a Late Cretaceous specimen from the Hell Creek Formation, morphotype HC62, taxon “Rhamnus” cleburni. Specimens are housed at the Denver Museum of Nature and Science in Denver, Colorado. – Image credit: Benjamin Blonder.

Sixty-six million years ago, a 10-km diameter chunk of rock hit the Yucatán Peninsula near the site of the small town of Chicxulub with the force of 100 teratons of TNT. It left a crater more than 150 km across, and the resulting megatsunami, wildfires, global earthquakes and volcanism are widely accepted to have wiped out the dinosaurs and made way for the rise of the mammals. But what happened to the plants on which the dinosaurs fed?
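As a rough consistency check on that energy figure (using assumed round numbers for the impactor’s density and speed, not values from the study), the kinetic energy of a 10-km rocky body of density 2500 kg/m³ arriving at about 20 km/s is

\[ E = \tfrac{1}{2} m v^2 \approx \tfrac{1}{2}\left(\tfrac{4}{3}\pi\,(5000\ \mathrm{m})^3 \times 2500\ \mathrm{kg\,m^{-3}}\right)\left(2\times10^{4}\ \mathrm{m\,s^{-1}}\right)^2 \approx 3\times10^{23}\ \mathrm{J}, \]

which, at about 4.2 × 10⁹ J per ton of TNT, works out to several tens of teratons, the same order of magnitude as the figure quoted above.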

A new study led by researchers from the University of Arizona reveals that the meteorite impact that spelled doom for the dinosaurs also decimated the evergreen flowering plants to a much greater extent than their deciduous peers. They hypothesize that the properties of deciduous plants made them better able to respond rapidly to chaotically varying post-apocalyptic climate conditions. The results were published on September 16 in the open-access journal PLOS Biology.

Applying biomechanical formulae to a treasure trove of thousands of fossilized leaves of angiosperms – flowering plants, a group that excludes conifers – the team was able to reconstruct the ecology of a diverse plant community thriving during a 2.2 million-year period spanning the cataclysmic impact event, believed to have wiped out more than half of the plant species living at the time. The fossilized leaf samples span the last 1,400,000 years of the Cretaceous and the first 800,000 years of the Paleogene.

The researchers found evidence that after the impact, fast-growing, deciduous angiosperms had replaced their slow-growing, evergreen peers to a large extent. Living examples of evergreen angiosperms, such as holly and ivy, tend to prefer shade, don’t grow very fast and sport dark-colored leaves.

“When you look at forests around the world today, you don’t see many forests dominated by evergreen flowering plants,” said the study’s lead author, Benjamin Blonder. “Instead, they are dominated by deciduous species, plants that lose their leaves at some point during the year.”

Blonder and his colleagues studied a total of about 1,000 fossilized plant leaves collected from a location in southern North Dakota, embedded in rock layers known as the Hell Creek Formation, which at the end of the Cretaceous was a lowland floodplain crisscrossed by river channels. The collection consists of more than 10,000 identified plant fossils and is housed primarily at the Denver Museum of Nature and Science. “When you hold one of those leaves that is so exquisitely preserved in your hand knowing it’s 66 million years old, it’s a humbling feeling,” said Blonder.

“If you think about a mass extinction caused by a catastrophic event such as a meteorite impacting Earth, you might imagine all species are equally likely to die,” Blonder said. “Survival of the fittest doesn’t apply – the impact is like a reset button. The alternative hypothesis, however, is that some species had properties that enabled them to survive.

“Our study provides evidence of a dramatic shift from slow-growing plants to fast-growing species,” he said. “This tells us that the extinction was not random, and the way in which a plant acquires resources predicts how it can respond to a major disturbance. And potentially this also tells us why we find that modern forests are generally deciduous and not evergreen.”

Previously, other scientists found evidence of a dramatic drop in temperature caused by dust from the impact. “The hypothesis is that the impact winter introduced a very variable climate,” Blonder said. “That would have favored plants that grew quickly and could take advantage of changing conditions, such as deciduous plants.”

“We measured the mass of a given leaf in relation to its area, which tells us whether the leaf was a chunky, expensive one to make for the plant, or whether it was a more flimsy, cheap one,” Blonder explained. “In other words, how much carbon the plant had invested in the leaf.” In addition the researchers measured the density of the leaves’ vein networks, a measure of the amount of water a plant can transpire and the rate at which it can acquire carbon.
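As a minimal illustration of those two quantities (this is not the authors’ analysis code, and the measurements below are invented for the example), a short Python sketch:

    def leaf_mass_per_area(dry_mass_g, area_cm2):
        """Leaf mass per area (LMA): high for a 'chunky, expensive' leaf,
        low for a 'flimsy, cheap' one."""
        return dry_mass_g / area_cm2

    def vein_density(total_vein_length_mm, area_mm2):
        """Vein density: total vein length per unit leaf area, a proxy for how
        much water a leaf can transpire and how fast it can acquire carbon."""
        return total_vein_length_mm / area_mm2

    # Hypothetical measurements for two leaves
    print(leaf_mass_per_area(0.12, 15.0))  # 0.008 g/cm2: slow-growing, evergreen-like
    print(leaf_mass_per_area(0.05, 15.0))  # ~0.003 g/cm2: fast-growing, deciduous-like
    print(vein_density(180.0, 25.0))       # 7.2 mm/mm2: denser venation, faster strategy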

“There is a spectrum between fast- and slow-growing species,” said Blonder. “There is the ‘live fast, die young’ strategy and there is the ‘slow but steady’ strategy. You could compare it to financial strategies investing in stocks versus bonds.” The analyses revealed that while slow-growing evergreens dominated the plant assemblages before the extinction event, fast-growing flowering species had taken their places afterward.

How a change in slope affects lava flows

When exposed to the elements, flowing lava will form a crust at its surface. – Scott Rowland

As soon as lava flows from a volcano, exposure to air and wind causes it to start to cool and harden. Because heat is lost mainly at the surface rather than evenly throughout the flow, a crust forms on the lava’s outer edges, insulating the molten lava within. This hardened shell allows a lava flow to travel much further than it otherwise would, while cracks in the crust can cause the flow to stop short.

When there is a break in the terrain – a sharp change in slope, a valley, or a rock wall, for example – the smooth lava flow is disrupted. Pulses in flow volume or the formation of turbulent eddies caused by these topographic features can crack the hard lava shell. Using observations from historical eruptions and a simple mechanical model, Glaze et al. studied how changes in slope affect lava flows, in a study recently published in the Journal of Geophysical Research: Solid Earth.

The increase in flow velocity from a steepening slope is often quite minor, as most of the energy goes into vertical rotation of the lava, just as with a rock rolling down a hill. The authors’ model considers factors such as temperature, depth and flow velocity, along with the effect of lava viscosity, to calculate how a change in slope affects the formation of vertical eddies created by tumbling lava. The authors’ model allowed them to determine how far downstream the turbulence persists before the lava returns to a more streamlined flow.
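A back-of-the-envelope calculation (generic fluid mechanics with assumed values, not the authors’ model) helps show why such topographic triggers matter: for a basaltic flow with density around 2700 kg/m³, depth 1 m, speed 1 m/s and viscosity of order 1000 Pa·s, the Reynolds number is

\[ \mathrm{Re} = \frac{\rho\,u\,h}{\mu} \approx \frac{2700 \times 1 \times 1}{1000} \approx 3, \]

far below the values at which a flow becomes turbulent on its own, so eddies in a lava flow generally appear only where the terrain forces them.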

Early Earth less hellish than previously thought

Calvin Miller is shown at the Kerlingarfjoll volcano in central Iceland. Some geologists have proposed that the early Earth may have resembled regions like this. – Tamara Carley

Conditions on Earth for the first 500 million years after it formed may have been surprisingly similar to the present day, complete with oceans, continents and active crustal plates.

This alternate view of Earth’s first geologic eon, called the Hadean, has gained substantial new support from the first detailed comparison of zircon crystals that formed more than 4 billion years ago with those formed contemporaneously in Iceland, which has been proposed as a possible geological analog for early Earth.

The study was conducted by a team of geologists directed by Calvin Miller, the William R. Kenan Jr. Professor of Earth and Environmental Sciences at Vanderbilt University, and published online this weekend by the journal Earth and Planetary Science Letters in a paper titled, “Iceland is not a magmatic analog for the Hadean: Evidence from the zircon record.”

From the early 20th century up through the 1980s, geologists generally agreed that conditions during the Hadean were utterly hostile to life. Their inability to find rock formations from the eon led them to conclude that early Earth was hellishly hot, either entirely molten or subject to such intense asteroid bombardment that any rocks that formed were rapidly remelted. As a result, they pictured the surface of the Earth as covered by a giant “magma ocean.”

This perception began to change about 30 years ago when geologists discovered zircon crystals (a mineral typically associated with granite) with ages exceeding 4 billion years old preserved in younger sandstones. These ancient zircons opened the door for exploration of the Earth’s earliest crust. In addition to the radiometric dating techniques that revealed the ages of these ancient zircons, geologists used other analytical techniques to extract information about the environment in which the crystals formed, including the temperature and whether water was present.

Since then zircon studies have revealed that the Hadean Earth was not the uniformly hellish place previously imagined, but during some periods possessed an established crust cool enough so that surface water could form – possibly on the scale of oceans.

Accepting that the early Earth had a solid crust and liquid water (at least at times), scientists have continued to debate the nature of that crust and the processes that were active at that time: How similar was the Hadean Earth to what we see today?

Two schools of thought have emerged: One argues that Hadean Earth was surprisingly similar to the present day. The other maintains that, although it was less hostile than formerly believed, early Earth was nonetheless a foreign-seeming and formidable place, similar to the hottest, most extreme, geologic environments of today. A popular analog is Iceland, where substantial amounts of crust are forming from basaltic magma that is much hotter than the magmas that built most of Earth’s current continental crust.

“We reasoned that the only concrete evidence for what the Hadean was like came from the only known survivors: zircon crystals – and yet no one had investigated Icelandic zircon to compare their telltale compositions to those that are more than 4 billion years old, or with zircon from other modern environments,” said Miller.

In 2009, Vanderbilt doctoral student Tamara Carley, who has just accepted the position of assistant professor at Lafayette College, began collecting samples from volcanoes and sands derived from erosion of Icelandic volcanoes. She separated thousands of zircon crystals from the samples, which cover the island’s regional diversity and represent its 18-million-year history.

Working with Miller and doctoral student Abraham Padilla at Vanderbilt, Joe Wooden at Stanford University, Axel Schmitt and Rita Economos from UCLA, Ilya Bindeman at the University of Oregon and Brennan Jordan at the University of South Dakota, Carley analyzed about 1,000 zircon crystals for their age and elemental and isotopic compositions. She then searched the literature for all comparable analyses of Hadean zircon and for representative analyses of zircon from other modern environments.

“We discovered that Icelandic zircons are quite distinctive from crystals formed in other locations on modern Earth. We also found that they formed in magmas that are remarkably different from those in which the Hadean zircons grew,” said Carley.

Most importantly, their analysis found that Icelandic zircons grew from much hotter magmas than Hadean zircons. Although surface water played an important role in the generation of both Icelandic and Hadean crystals, in the Icelandic case the water was extremely hot when it interacted with the source rocks while the Hadean water-rock interactions were at significantly lower temperatures.

“Our conclusion is counterintuitive,” said Miller. “Hadean zircons grew from magmas rather similar to those formed in modern subduction zones, but apparently even ‘cooler’ and ‘wetter’ than those being produced today.”

M 9.0+ possible for subduction zones along ‘Ring of Fire,’ suggests new study

The magnitude of the 2011 Tohoku quake (M 9.0) caught many seismologists by surprise, prompting some to revisit the question of calculating the maximum magnitude earthquake possible for a particular fault. New research offers an alternate view that uses the concept of probable maximum magnitude events over a given period, providing the magnitude and the recurrence rate of extreme events in subduction zones for that period. Most circum-Pacific subduction zones can produce earthquakes of magnitude greater than 9.0, the study suggests.

The idea of identifying the maximum magnitude for a fault isn’t new, and its definition varies based on context. This study, published online by the Bulletin of the Seismological Society of America (BSSA), calculates the “probable maximum earthquake magnitude within a time period of interest,” estimating the probable magnitude of subduction zone earthquakes for various time periods, including 250, 500 and 10,000 years.

“Various professionals use the same terminology – maximum magnitude – to mean different things. The most interesting question for us was what was going to be the biggest magnitude earthquake over a given period of time?” said co-author Yufang Rong, a seismologist at the Center for Property Risk Solutions of FM Global, a commercial and industrial property insurer. “Can we know the exact, absolute maximum magnitude? The answer is no, however, we developed a simple methodology to estimate the probable largest magnitude within a specific time frame.”
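To make the idea of a probable maximum magnitude within a time window concrete, here is a minimal sketch that assumes a plain Gutenberg–Richter recurrence law; the a and b values are invented, and the published study uses its own, more careful formulation (real analyses typically taper or cap the distribution so magnitudes cannot grow without bound):

    import math

    def probable_max_magnitude(a, b, years):
        # Magnitude expected to be reached or exceeded about once within `years`,
        # for a zone whose annual rate of events of magnitude >= M follows the
        # Gutenberg-Richter relation N(M) = 10**(a - b*M).
        return (a + math.log10(years)) / b

    a, b = 6.1, 1.0  # invented parameters for a hypothetical subduction zone
    for t in (250, 500, 10_000):
        print(t, round(probable_max_magnitude(a, b, t), 1))
    # 250 -> 8.5, 500 -> 8.8, 10000 -> 10.1 (the untapered law keeps climbing
    # with the time window, which is why longer windows need more careful treatment)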

The study’s results indicated most of the subduction zones can generate M 8.5 or greater over a 250-year return period; M 8.8 or greater over 500 years; and M 9.0 or greater over 10,000 years.

“Just because a subduction zone hasn’t produced a magnitude 8.8 in 499 years, that doesn’t mean one will happen next year,” said Rong. “We are talking about probabilities.”

The instrumental and historical earthquake record is brief, complicating any attempt to confirm recurrence rates and estimate with confidence the maximum magnitude of an earthquake in a given period. The authors validated their methodology by comparing their findings to the seismic history of the Cascadia subduction zone, revealed through deposits of marine sediment along the Pacific Northwest coast. While some subduction zones have experienced large events during recent history, the Cascadia subduction zone has remained quiet. Turbidite and onshore paleoseismic studies have documented a rich seismic history, identifying 40 large events over the past 10,000 years.

“Magnitude limits of subduction zone earthquakes” is co-authored by Rong, David Jackson of UCLA, Harold Magistrale of FM Global, and Chris Goldfinger of Oregon State University. The paper will be published online Sept. 16 by BSSA as well as in its October print edition.

Wastewater injection is culprit for most quakes in southern Colorado and northern New Mexico

The deep injection of wastewater underground is responsible for the dramatic rise in the number of earthquakes in Colorado and New Mexico since 2001, according to a study to be published in the Bulletin of the Seismological Society of America (BSSA).

The Raton Basin, which stretches from southern Colorado into northern New Mexico, was seismically quiet until shortly after major fluid injection began in 1999. Since 2001, there have been 16 earthquakes of magnitude greater than 3.8 (including events of M 5.0 and 5.3), compared to only one (M 4.0) in the previous 30 years. The increase in earthquakes is limited to the area of industrial activity and within 5 kilometers (3.1 miles) of wastewater injection wells.

In 1994, energy companies began producing coal-bed methane in Colorado and expanded production to New Mexico in 1999. Along with the production of methane, there is the production of wastewater, which is injected underground in disposal wells and can raise the pore pressure in the surrounding area, inducing earthquakes. Several lines of evidence suggest the earthquakes in the area are directly related to the disposal of wastewater, a by-product of extracting methane, and not to hydraulic fracturing occurring in the area.

Beginning in 2001, the production of methane expanded, with the number of high-volume wastewater disposal wells increasing (21 presently in Colorado and 7 in New Mexico) along with the injection rate. Since mid-2000, the total injection rate across the basin has ranged from 1.5 to 3.6 million barrels per month.

The authors, all scientists with the U.S. Geological Survey, detail several lines of evidence directly linking the injection wells to the seismicity. The timing and location of seismicity correspond to the documented pattern of injected wastewater. Detailed investigations of two seismic sequences (2001 and 2011) place them in proximity to high-volume, high-injection-rate wells, and both sequences occurred after a nearby increase in the rate of injection. A comparison between seismicity and wastewater injection in Colorado and New Mexico reveals similar patterns, suggesting seismicity is initiated shortly after an increase in injection rates.

Contaminated water in 2 states linked to faulty shale gas wells

Faulty well integrity, not hydraulic fracturing deep underground, is the primary cause of drinking water contamination from shale gas extraction in parts of Pennsylvania and Texas, according to a new study by researchers from five universities.

The scientists from Duke, Ohio State, Stanford, Dartmouth and the University of Rochester published their peer-reviewed study Sept. 15 in the Proceedings of the National Academy of Sciences. Using noble gas and hydrocarbon tracers, they analyzed the gas content of more than 130 drinking water wells in the two states.

“We found eight clusters of wells — seven in Pennsylvania and one in Texas — with contamination, including increased levels of natural gas from the Marcellus shale in Pennsylvania and from shallower, intermediate layers in both states,” said Thomas H. Darrah, assistant professor of earth science at Ohio State, who led the study while he was a research scientist at Duke.

“Our data clearly show that the contamination in these clusters stems from well-integrity problems such as poor casing and cementing,” Darrah said.

“These results appear to rule out the possibility that methane has migrated up into drinking water aquifers because of horizontal drilling or hydraulic fracturing, as some people feared,” said Avner Vengosh, professor of geochemistry and water quality at Duke.

In four of the affected clusters, the team’s noble gas analysis shows that methane from drill sites escaped into drinking water wells from shallower depths through faulty or insufficient rings of cement surrounding a gas well’s shaft. In three clusters, the tests suggest the methane leaked through faulty well casings. In one cluster, it was linked to an underground well failure.

“People’s water has been harmed by drilling,” said Robert B. Jackson, professor of environmental and earth sciences at Stanford and Duke. “In Texas, we even saw two homes go from clean to contaminated after our sampling began.”

“The good news is that most of the issues we have identified can potentially be avoided by future improvements in well integrity,” Darrah stressed.

Using both noble gas and hydrocarbon tracers — a novel combination that enabled the researchers to identify and distinguish between the signatures of naturally occurring methane and stray gas contamination from shale gas drill sites — the team analyzed gas content in 113 drinking-water wells and one natural methane seep overlying the Marcellus shale in Pennsylvania, and in 20 wells overlying the Barnett shale in Texas. Sampling was conducted in 2012 and 2013. Sampling sites included wells where contamination had been debated previously; wells known to have naturally high levels of methane and salts, which tend to co-occur in areas overlying shale gas deposits; and wells located both within and beyond a one-kilometer distance from drill sites.

Noble gases such as helium, neon or argon are useful for tracing fugitive methane because although they mix with natural gas and can be transported with it, they are inert and are not altered by microbial activity or oxidation. By measuring changes in ratios in these tag-along noble gases, researchers can determine the source of fugitive methane and the mechanism by which it was transported into drinking water aquifers — whether it migrated there as a free gas or was dissolved in water.

“This is the first study to provide a comprehensive analysis of noble gases and their isotopes in groundwater near shale gas wells,” said Darrah, who is continuing the analysis in his lab at Ohio State. “Using these tracers, combined with the isotopic and chemical fingerprints of hydrocarbons in the water and its salt content, we can pinpoint the sources and pathways of methane contamination, and determine if it is natural or not.”

Gas leaks from faulty wells linked to contamination in some groundwater

A study has pinpointed the likely source of most natural gas contamination in drinking-water wells associated with hydraulic fracturing, and it’s not the source many people may have feared.

What’s more, the problem may be fixable: improved construction standards for cement well linings and casings at hydraulic fracturing sites.

A team led by a researcher at The Ohio State University and composed of researchers at Duke, Stanford, Dartmouth, and the University of Rochester devised a new method of geochemical forensics to trace how methane migrates under the earth. The study identified eight clusters of contaminated drinking-water wells in Pennsylvania and Texas.

Most important among their findings, published this week in the Proceedings of the National Academy of Sciences, is that neither horizontal drilling nor hydraulic fracturing of shale deposits seems to have caused any of the natural gas contamination.

“There is no question that in many instances elevated levels of natural gas are naturally occurring, but in a subset of cases, there is also clear evidence that there were human causes for the contamination,” said study leader Thomas Darrah, assistant professor of earth sciences at Ohio State. “However, our data suggests that where contamination occurs, it was caused by poor casing and cementing in the wells,” Darrah said.

In hydraulic fracturing, water is pumped underground to break up shale at a depth far below the water table, he explained. The long vertical pipes that carry the resulting gas upward are encircled in cement to keep the natural gas from leaking out along the well. The study suggests that natural gas that has leaked into aquifers is the result of failures in the cement used in the well.

“Many of the leaks probably occur when natural gas travels up the outside of the borehole, potentially even thousands of feet, and is released directly into drinking-water aquifers,” said Robert Poreda, professor of geochemistry at the University of Rochester.

“These results appear to rule out the migration of methane up into drinking water aquifers from depth because of horizontal drilling or hydraulic fracturing, as some people feared,” said Avner Vengosh, professor of geochemistry and water quality at Duke.

“This is relatively good news because it means that most of the issues we have identified can potentially be avoided by future improvements in well integrity,” Darrah said.

“In some cases homeowners’ water has been harmed by drilling,” said Robert B. Jackson, professor of environmental and earth sciences at Stanford and Duke. “In Texas, we even saw two homes go from clean to contaminated after our sampling began.”

The method that the researchers used to track the source of methane contamination relies on the basic physics of the noble gases (which happen to leak out along with the methane). Noble gases such as helium and neon are so called because they don’t react much with other chemicals, although they mix with natural gas and can be transported with it.

That means that when they are released underground, they can flow long distances without getting waylaid by microbial activity or chemical reactions along the way. The only important variable is the atomic mass, which determines how the ratios of noble gases change as they tag along with migrating natural gas. These properties allow the researchers to determine the source of fugitive methane and the mechanism by which it was transported into drinking water aquifers.

The researchers were able to distinguish between the signatures of naturally occurring methane and stray gas contamination from shale gas drill sites overlying the Marcellus shale in Pennsylvania and the Barnett shale in Texas.

The researchers sampled water from the sites in 2012 and 2013. Sampling sites included wells where contamination had been debated previously; wells known to have naturally high levels of methane and salts, which tend to co-occur in areas overlying shale gas deposits; and wells located both within and beyond a one-kilometer distance from drill sites.

As hydraulic fracturing starts to develop around the globe, including in countries such as South Africa, Argentina, China, Poland, Scotland, and Ireland, Darrah and his colleagues are continuing their work in the United States and internationally. And because the method relies on the basic physics of the noble gases, it can be applied anywhere. Their hope is that their findings can help highlight the need to improve well integrity.