Enormous aquifer discovered under Greenland ice sheet

Glaciologist Lora Koenig (left) operates a video recorder that has been lowered into the bore hole to observe the ice structure of the aquifer in April 2013. – University of Utah/Clément Miège

Buried underneath compacted snow and ice in Greenland lies a large liquid water reservoir that has now been mapped by researchers using data from NASA’s Operation IceBridge airborne campaign.

A team of glaciologists serendipitously found the aquifer while drilling in southeast Greenland in 2011 to study snow accumulation. Two of their ice cores were dripping water when the scientists lifted them to the surface, despite air temperatures of minus 4 F (minus 20 C). The researchers later used NASA’s Operation IceBridge radar data to determine the extent of the water reservoir, which spreads over 27,000 square miles (69,930 square km) – an area larger than the state of West Virginia. The water in the aquifer has the potential to raise global sea level by 0.016 inches (0.4 mm).

“When I heard about the aquifer, I had almost the same reaction as when we discovered Lake Vostok [in Antarctica]: it blew my mind that something like that is possible,” said Michael Studinger, project scientist for Operation IceBridge, a NASA airborne campaign studying changes in ice at the poles. “It turned my view of the Greenland ice sheet upside down – I don’t think anyone had expected that this layer of liquid water could survive the cold winter temperatures without being refrozen.”

Southeast Greenland is a region of high snow accumulation. Researchers now believe that the thick snow cover insulates the aquifer from cold winter surface temperatures, allowing it to remain liquid throughout the year. The aquifer is fed by meltwater that percolates from the surface during the summer.

The new research is being presented in two papers: one led by University of Utah’s Rick Forster that was published on Dec. 22 in the journal Nature Geoscience and one led by NASA’s Lora Koenig that has been accepted for publication in the journal Geophysical Research Letters. The findings will significantly advance the understanding of how melt water flows through the ice sheet and contributes to sea level rise.

When a team led by Forster accidentally drilled into water in 2011, they weren’t able to continue studying the aquifer because their tools were not suited to work in an aquatic environment. Afterward, Forster’s team determined the extent of the aquifer by studying radar data from Operation IceBridge together with ground-based radar data. The top of the water layer clearly showed in the radar data as a return signal brighter than the ice layers.

Koenig, a glaciologist with NASA’s Goddard Space Flight Center in Greenbelt, Md., co-led another expedition to southeast Greenland with Forster in April 2013, designed specifically to study the physical characteristics of the newly discovered water reservoir. Koenig’s team extracted two cores of firn (aged snow) that were saturated with water. They used a water-resistant thermoelectric drill to study the density of the ice, lowered strings packed with temperature sensors down the boreholes, and found that the temperature of the aquifer hovers around 32 F (0 C), warmer than they had expected.

Koenig and her team measured the top of the aquifer at around 39 feet (12 meters) below the surface – the depth at which the boreholes filled with water after the ice cores were extracted. They then determined the amount of water in the water-saturated firn cores by comparing them with dry cores extracted nearby. The researchers also determined the depth at which the pores in the firn close, trapping the water inside – at this point, there is a measurable change in the density of the ice. This depth, about 121 feet (37 meters), corresponds to the bottom of the aquifer. Once Koenig’s team had the density, depth and spatial extent of the aquifer, they were able to come up with an estimated water volume of about 154 billion tons (140 metric gigatons). If this water were to suddenly discharge into the ocean, it would correspond to 0.016 inches (0.4 mm) of sea level rise.
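For readers who want to check the arithmetic, here is a minimal back-of-envelope sketch in Python. The roughly 8 percent volumetric water content of the saturated firn is an assumption chosen so that the published area and layer thickness reproduce the reported 140-gigaton volume; it is not a figure taken from the papers.

```python
# Back-of-envelope check of the aquifer figures quoted above. The water fraction is
# an assumption, not a value reported in the study.

AREA_KM2 = 69_930            # mapped extent of the aquifer
TOP_M, BOTTOM_M = 12, 37     # depth of the water table and of pore close-off
WATER_FRACTION = 0.08        # assumed volumetric water content of the saturated firn
OCEAN_AREA_M2 = 3.61e14      # global ocean surface area
RHO_WATER = 1000.0           # kg per cubic meter

firn_volume_m3 = AREA_KM2 * 1e6 * (BOTTOM_M - TOP_M)
water_mass_kg = firn_volume_m3 * WATER_FRACTION * RHO_WATER
sea_level_rise_mm = water_mass_kg / (RHO_WATER * OCEAN_AREA_M2) * 1000

print(f"water mass: {water_mass_kg / 1e12:.0f} Gt")          # ~140 Gt
print(f"sea-level equivalent: {sea_level_rise_mm:.2f} mm")   # ~0.4 mm
```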

Researchers think that the perennial aquifer acts as a heat reservoir for the ice sheet in two ways: melt water carries heat as it percolates from the surface down through the ice to the aquifer, and if the trapped water were to refreeze, it would release latent heat. Altogether, this makes the ice in the vicinity of the aquifer warmer, and warmer ice flows faster toward the sea.

“Our next big task is to understand how this aquifer is filling and how it’s discharging,” said Koenig. “The aquifer could offset some sea level rise if it’s storing water for long periods of time. For example after the 2012 extreme surface melt across Greenland, it appears that the aquifer filled a little bit. The question now is how does that water leave the aquifer on its way to the ocean and whether it will leave this year or a hundred years from now.”

Study faults a ‘runaway’ mechanism in intermediate-depth earthquakes

Nearly 25 percent of earthquakes occur more than 50 kilometers below the Earth’s surface, where one tectonic plate slides below another, in a region called the lithosphere. Scientists have thought that these rumblings from the deep arise from a different process than shallower, more destructive quakes. But limited seismic data, and difficulty in reproducing these quakes in the laboratory, have prevented researchers from pinpointing the cause of intermediate and deep earthquakes.

Now a team from MIT and Stanford University has identified a mechanism that helps these deeper quakes spread. By analyzing seismic data from a region in Colombia with a high concentration of intermediate-depth earthquakes, the researchers identified a “runaway process” in which the sliding of rocks at great depths causes surrounding temperatures to spike. This influx of heat, in turn, encourages more sliding – a feedback mechanism that propagates through the lithosphere, generating an earthquake.

German Prieto, an assistant professor of geophysics in MIT’s Department of Earth, Atmospheric and Planetary Sciences, says that once thermal runaway starts, the surrounding rocks can heat up and slide more easily, raising the temperature very quickly.

“What we predict is for medium-sized earthquakes, with magnitude 4 to 5, temperature can rise up to 1,000 degrees Centigrade, or about 1,800 degrees Fahrenheit, in a matter of one second,” Prieto says. “It’s a huge amount. You’re basically allowing rupture to run away because of this large temperature increase.”

Prieto says that understanding deeper earthquakes may help local communities anticipate how much shaking they may experience, given the seismic history of their regions.

He and his colleagues have published their results in the journal Geophysical Research Letters.

Water versus heat: two competing theories


The majority of Earth’s seismic activity occurs at relatively shallow depths, and the mechanics of such quakes is well understood: Over time, abutting plates in the crust build up tension as they shift against each other. This tension ultimately reaches a breaking point, creating a sudden rupture that splinters through the crust.

However, scientists have determined that this process is not feasible for quakes that occur far below the surface. Essentially, higher temperatures and pressures at these depths would make rocks behave differently than they would closer to the surface, gliding past rather than breaking against each other.

By way of explanation, Prieto draws an analogy to glass: If you try to bend a glass tube at room temperature, with enough force, it will eventually shatter. But with heating, the tube will become much more malleable, and bend without breaking.

So how do deeper earthquakes occur? Scientists have proposed two theories: The first, called dehydration embrittlement, is based on the small amounts of water in rocks’ mineral composition. At high pressure and heat, rocks release water, which lubricates surrounding faults, creating fractures that ultimately set off a quake.

The second theory is thermal runaway: Increasing temperatures weaken rocks, promoting slippage that spreads through the lithosphere, further increasing temperatures and causing more rocks to slip, resulting in an earthquake.

Probing the nest


Prieto and his colleagues found new evidence in support of the second theory by analyzing seismic data from a region of Colombia that experiences large numbers of intermediate-depth earthquakes – quakes whose epicenters are 50 to 300 kilometers below the surface. This region, known as the Bucaramanga Nest, hosts the highest concentration of intermediate-depth quakes in the world: Since 1993, more than 80,000 earthquakes have been recorded in the area, making it, in Prieto’s view, an “ideal natural laboratory” for studying deeper quakes.

The researchers analyzed seismic waves recorded by nearby surface seismometers and calculated two parameters: the stress drop, a measure of the total energy released by an earthquake, and the radiated seismic energy, the portion of that energy that reaches the surface as seismic waves – the energy manifested in the shaking of the ground.

The stronger a quake is, the more energy, or heat, it generates. Interestingly, the MIT group found that only 2 percent of a deeper quake’s total energy is felt at the surface. Prieto reasoned that much of the other 98 percent may be released locally as heat, creating an enormous temperature increase that pushes a quake to spread.
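A rough sense of how a mostly unradiated energy budget translates into heat can be had from a standard frictional-heating estimate. The shear stress, slip and shear-zone thickness below are illustrative assumptions, not values from the Prieto et al. study; only the 2 percent radiated fraction comes from the article above.

```python
# Illustrative frictional-heating estimate for a thin shear zone, in the spirit of
# the thermal-runaway argument. All mechanical parameters are assumptions.

TAU_PA = 1.0e8              # assumed shear stress on the fault (100 MPa)
SLIP_M = 0.05               # assumed slip for a magnitude ~4-5 event
WIDTH_M = 2e-3              # assumed thickness of the slipping zone (2 mm)
RHO = 3300.0                # rock density, kg/m^3
HEAT_CAPACITY = 1000.0      # specific heat, J/(kg K)
RADIATED_FRACTION = 0.02    # share of energy radiated as seismic waves (from the study)

# Heat deposited per unit fault area, assuming the non-radiated energy stays local.
heat_per_area = (1.0 - RADIATED_FRACTION) * TAU_PA * SLIP_M   # J/m^2
delta_T = heat_per_area / (RHO * HEAT_CAPACITY * WIDTH_M)     # temperature rise, K

print(f"temperature rise ~ {delta_T:.0f} K")   # hundreds to ~1,000 K for these values
```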

Prieto says the study provides strong evidence for thermal runaway as the likely mechanism for intermediate-depth earthquakes. Such knowledge, he says, may be useful for communities around Bucaramanga in predicting the severity of future quakes.

“Usually people in Bucaramanga feel a magnitude 4 quake every month or so, and every year they experience a larger one that can shake significantly,” Prieto says. “If you’re in a region where you have intermediate-depth quakes and you know the size of the region, you can make a prediction of the type of magnitudes of quakes that you can have, and what kind of shaking you would expect.”

Prieto, a native Colombian, plans to deploy seismic stations above the Bucaramanga Nest to better understand the activity of deeper quakes.

Scientists anticipated size and location of 2012 Costa Rica earthquake

Andrew Newman, an associate professor in the School of Earth and Atmospheric Sciences at the Georgia Institute of Technology, performs a GPS survey in Costa Rica’s Nicoya Peninsula in 2010. – Lujia Feng

Scientists using GPS to study changes in the Earth’s shape accurately forecasted the size and location of the magnitude 7.6 Nicoya earthquake that occurred in 2012 in Costa Rica.

The Nicoya Peninsula in Costa Rica is one of the few places where land sits atop the portion of a subduction zone where the Earth’s greatest earthquakes take place. Costa Rica’s location therefore makes it the perfect spot for learning how large earthquakes rupture. Because earthquakes greater than about magnitude 7.5 have occurred in this region roughly every 50 years, with the previous event striking in 1950, scientists have been preparing for this earthquake through a number of geophysical studies. The most recent study used GPS to map out the area along the fault storing energy for release in a large earthquake.

“This is the first place where we’ve been able to map out the likely extent of an earthquake rupture along the subduction megathrust beforehand,” said Andrew Newman, an associate professor in the School of Earth and Atmospheric Sciences at the Georgia Institute of Technology.

The study was published online Dec. 22, 2013, in the journal Nature Geoscience. The research was supported by the National Science Foundation and was a collaboration of researchers from Georgia Tech, the Costa Rica Volcanological and Seismological Observatory (OVSICORI) at Universidad Nacional, the University of California, Santa Cruz, and the University of South Florida.

Subduction zones are locations where one tectonic plate is forced under another one. The collision of tectonic plates during this process can unleash devastating earthquakes, and sometimes devastating tsunamis. The magnitude 9.0 earthquake off the coast of Japan in 2011 was just such a subduction zone earthquake. The Cascadia subduction zone in the Pacific Northwest is capable of unleashing a similarly sized quake. Damage from the Nicoya earthquake was not as bad as might be expected from a magnitude 7.6 quake.

“Fortunately there was very little damage considering the earthquake’s size,” said Marino Protti of OVSICORI and the study’s lead author. “The historical pattern of earthquakes not only allowed us to get our instruments ready, it also allowed Costa Ricans to upgrade their buildings to be earthquake safe.”

Plate tectonics are the driving force for subduction zones. As tectonic plates converge, strain temporarily accumulates across the plate boundary when portions of the interface between these tectonic plates, called a megathrust, become locked together. The strain can accumulate to dangerous levels before eventually being released as a massive earthquake.

“The Nicoya Peninsula is an ideal natural lab for studying these events, because the coastline geometry uniquely allows us to get our equipment close to the zone of active strain accumulation,” said Susan Schwartz, professor of earth sciences at the University of California, Santa Cruz, and a co-author of the study.

Through a series of studies starting in the early 1990s using land-based tools, the researchers mapped regions where tectonic plates were completely locked along the subduction interface. Detailed geophysical observations of the region allowed the researchers to create an image of where the faults had locked.

The researchers published a study a few months before the earthquake describing the particular locked patch with the clearest potential for the next large earthquake in the region. The team projected the total amount of energy that could have accumulated across that region and forecast that, if the patch had remained locked since the last major earthquake in 1950, there was enough stored energy for an earthquake on the order of magnitude 7.8.
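The logic of such a forecast can be sketched with a simple moment-deficit calculation: multiply the accumulated slip deficit by the locked area and the rock rigidity to get a seismic moment, then convert that to a moment magnitude. The convergence rate, rigidity and patch area below are illustrative assumptions, not the values used in the published study.

```python
# Hedged sketch of a moment-deficit estimate for a locked subduction patch.
import math

RIGIDITY_PA = 3.0e10          # shear modulus of crustal rock (~30 GPa), assumed
CONVERGENCE_M_PER_YR = 0.08   # assumed plate convergence rate (~8 cm/yr)
YEARS_LOCKED = 2012 - 1950    # time since the previous large Nicoya earthquake
LOCKED_AREA_M2 = 4.0e9        # assumed fully locked patch (~4,000 square km)

slip_deficit_m = CONVERGENCE_M_PER_YR * YEARS_LOCKED
seismic_moment = RIGIDITY_PA * LOCKED_AREA_M2 * slip_deficit_m      # N*m
moment_magnitude = (2.0 / 3.0) * (math.log10(seismic_moment) - 9.1)

print(f"accumulated slip deficit: {slip_deficit_m:.1f} m")
print(f"equivalent magnitude: Mw {moment_magnitude:.1f}")   # ~7.8 under these assumptions
```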

Because of limits in technology and scientific understanding about processes controlling fault locking and release, scientists cannot say much about precisely where or when earthquakes will occur. However, earthquakes in Nicoya have occurred about every 50 years, so seismologists had been anticipating another one around 2000, give or take 20 years, Newman said. The earthquake occurred in September of 2012 as a magnitude 7.6 quake.

“It occurred right in the area we determined to be locked and it had almost the size we expected,” Newman said.

The researchers hope to apply what they’ve learned in Costa Rica to other environments. Virtually every damaging subduction zone earthquake occurs far offshore.

“Nicoya is the only place on Earth where we’ve actually been able to get a very accurate image of the locked patch because it occurs directly under land,” Newman said. “If we really want to understand the seismic potential for most of the world, we have to go offshore.”

Scientists have been able to reasonably map portions of these locked areas offshore using data on land, but the resolution is poor, particularly in the regions that are most responsible for generating tsunamis, Newman said. He hopes that his group’s work in Nicoya will be a driver for geodetic studies on the seafloor to observe such Earth deformation. These seafloor geodetic studies are rare and expensive today.

“If we want to understand the potential for large earthquakes, then we really need to start doing more seafloor observations,” Newman said. “It’s a growing push in our community and this study highlights the type of results that one might be able to obtain for most other dangerous environments, including offshore the Pacific Northwest.”

The analogue of a tsunami for telecommunication

Progress in electronics and communication requires hardware capable of ever greater precision, ergonomics and throughput. For communication and GPS-navigation satellites, it is especially important to reduce payload mass and to ensure signal stability. Last year, researchers from Moscow State University (MSU), together with their Swiss colleagues at EPFL, performed a study that could lead to improvements in this direction. The scientists demonstrated, in a paper published in Nature Photonics, that the primary source of noise in microresonator-based optical frequency combs (broad spectra composed of a large number of equidistant narrow emission lines) arises from non-linear harmonic generation mechanisms rather than from fundamental physical limitations, and is therefore in principle reducible.

On December 22, a new publication appeared in Nature Photonics in which they extend these results. Michael Gorodetsky, one of the paper’s co-authors, a professor in the Faculty of Physics at MSU who is also affiliated with the Russian Quantum Centre in Skolkovo, says the study contains at least three important results: the scientists found a technique for generating stable femtosecond pulses (with durations on the order of 10⁻¹⁵ seconds), optical combs and microwave signals.

The physicists used a microresonator – in this case a millimeter-scale magnesium fluoride disk in which whispering-gallery electromagnetic oscillations can be excited, propagating along the circumference of the resonator – to convert continuous laser emission into periodic pulses of extremely short duration. The best-known analogous devices are mode-locked lasers, which generate femtosecond, high-intensity pulses. Applications of such lasers range from the analysis of chemical reactions on ultra-short timescales to eye surgery.

“Mode-locked femtosecond lasers normally require complex optical devices, media and special mirrors. We, however, succeeded in obtaining stable pulses simply in a passive optical resonator, using its own non-linearity,” Gorodetsky says. In the future, this could drastically reduce the size of such devices.

The short pulses generated in the microresonator are in fact optical solitons (a soliton is a stable, shape-conserving, localized wave packet that propagates through a non-linear medium like a quasiparticle; a tsunami wave is an example of a soliton in nature). “One can generate a single stable soliton circulating inside a microresonator. In the output optical fiber, one then obtains a periodic series of pulses with a period corresponding to the round-trip time of the soliton,” Gorodetsky explains.
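The round-trip time Gorodetsky mentions fixes the pulse repetition rate, and hence the frequency of the microwave signal discussed below. Here is a minimal sketch, assuming an illustrative disk diameter and the approximate refractive index of magnesium fluoride; neither number is taken from the paper.

```python
# Estimate of the soliton repetition rate set by the resonator round-trip time.
import math

C = 2.998e8          # speed of light, m/s
N_MGF2 = 1.37        # approximate refractive index of magnesium fluoride, assumed
DIAMETER_M = 5e-3    # assumed disk diameter (5 mm)

circumference = math.pi * DIAMETER_M
round_trip_time = N_MGF2 * circumference / C   # time for the soliton to circle once
repetition_rate_ghz = 1.0 / round_trip_time / 1e9

print(f"round-trip time: {round_trip_time * 1e12:.1f} ps")
print(f"repetition rate: {repetition_rate_ghz:.1f} GHz")   # ~14 GHz for these values
```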

Such pulses last 100-200 femtoseconds, but the authors are confident that much shorter solitons are achievable. They suggest that their discovery will allow the construction of a new generation of compact, stable and cheap optical pulse generators operating in regimes unachievable with other techniques. In the spectral domain, these pulses correspond to the so-called optical frequency “combs” that revolutionized metrology and spectroscopy and earned their developers the 2005 Nobel Prize in physics (the American John Hall and the German Theodor Haensch received the prize “for their contributions to the development of laser-based precision spectroscopy, including the optical frequency comb technique”). Existing comb generators are much larger and more massive.

At the same time, the scientists show that the signal such a comb generates on a photodetector is a high-frequency microwave signal with a very low phase-noise level. Ultra-low-noise microwave generators are extremely important in modern technology; they are used in metrology, radar and telecommunication hardware, including satellite communications. The authors note that their results are also important for applications such as broadband spectroscopy, telecommunications, and astronomy.

Natural gas saves water, even when factoring in water lost to hydraulic fracturing

For every gallon of water used to produce natural gas through hydraulic fracturing, Texas saved 33 gallons of water by generating electricity with that natural gas instead of coal (in 2011). – University of Texas at Austin

A new study finds that in Texas, the U.S. state that annually generates the most electricity, the transition from coal to natural gas for electricity generation is saving water and making the state less vulnerable to drought.

Even though exploration for natural gas through hydraulic fracturing requires significant water consumption in Texas, the new consumption is easily offset by the overall water efficiencies of shifting electricity generation from coal to natural gas. The researchers estimate that water saved by shifting a power plant from coal to natural gas is 25 to 50 times as great as the amount of water used in hydraulic fracturing to extract the natural gas. Natural gas also enhances drought resilience by providing so-called peaking plants to complement increasing wind generation, which doesn’t consume water.

The results of The University of Texas at Austin study are published this week in the journal Environmental Research Letters.

The researchers estimate that in 2011 alone, Texas would have consumed an additional 32 billion gallons of water – enough to supply 870,000 average residents – if all its natural gas-fired power plants were instead coal-fired plants, even after factoring in the additional consumption of water for hydraulic fracturing to extract the natural gas.
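A quick sanity check of these figures, using only the numbers quoted in this article plus an assumed residential benchmark of roughly 100 gallons per person per day:

```python
# Sanity check of the water-savings figures quoted above. The residential-use
# benchmark implied by the output is an illustration, not a value from the study.

GALLONS_SAVED = 32e9          # estimated statewide savings in 2011
RESIDENTS_SUPPLIED = 870_000  # residents that volume could supply, per the study
SAVINGS_RATIO = 33            # gallons saved per gallon used in fracturing (2011 figure)

per_person_per_day = GALLONS_SAVED / RESIDENTS_SUPPLIED / 365
fracking_water_gal = GALLONS_SAVED / SAVINGS_RATIO

print(f"implied residential use: {per_person_per_day:.0f} gal/person/day")        # ~100
print(f"implied fracturing consumption: {fracking_water_gal / 1e9:.2f} billion gal")
```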

Hydraulic fracturing is a process in which water, sand and chemicals are pumped at high pressure into a well to fracture surrounding rocks and allow oil or gas to more easily flow. Hydraulic fracturing and horizontal drilling are the main drivers behind the current boom in U.S. natural gas production.

Environmentalists and others have raised concerns about the amount of water that is consumed. In Texas, concerns are heightened because the use of hydraulic fracturing is expanding rapidly while water supplies are dwindling as the third year of a devastating drought grinds on. Because most electric power plants rely on water for cooling, the electric power supply might be particularly vulnerable to drought.

“The bottom line is that hydraulic fracturing, by boosting natural gas production and moving the state from water-intensive coal technologies, makes our electric power system more drought resilient,” says Bridget Scanlon, senior research scientist at the university’s Bureau of Economic Geology, who led the study.

To study the drought resilience of Texas power plants, Scanlon and her colleagues collected water use data for all 423 of the state’s power plants from the Energy Information Administration and from state agencies including the Texas Commission on Environmental Quality and the Texas Water Development Board, as well as other data.

Since the 1990s, the primary type of power plant built in Texas has been the natural gas combined cycle (NGCC) plant with cooling towers, which uses fuel and cooling water more efficiently than older steam turbine technologies. About a third of Texas power plants are NGCC. NGCC plants consume about a third as much water as coal steam turbine (CST) plants.

The other major type of natural gas plant in the state is a natural gas combustion turbine (NGCT) plant. NGCT plants can also help reduce the state’s water consumption for electricity generation by providing “peaking power” to support expansion of wind energy. Wind turbines don’t require water for cooling; yet wind doesn’t always blow when you need electricity. NGCT generators can be brought online in a matter of seconds to smooth out swings in electricity demand. By combining NGCT generation with wind generation, total water use can be lowered even further compared with coal-fired power generation.

The study focused exclusively on Texas, but the authors believe the results should be applicable to other regions of the U.S., where water consumption rates for the key technologies evaluated – hydraulic fracturing, NGCC plants with cooling towers and traditional coal steam turbine plants – are generally the same.

The Electric Reliability Council of Texas, manager of the state’s electricity grid, projects that if current market conditions continue through 2029, 65 percent of new power generation in the state will come from NGCC plants and 35 percent from natural gas combustion turbine plants, which use no water for cooling, but are less energy efficient than NGCC plants.

“Statewide, we’re on track to continue reducing our water intensity of electricity generation,” says Scanlon.

Hydraulic fracturing accounts for less than 1 percent of the water consumed in Texas. But in some areas where its use is heavily concentrated, it strains local water supplies, as documented in a 2011 study by Jean-Philippe Nicot of the Bureau of Economic Geology. Because natural gas is often used far from where it is originally produced, water savings from shifting to natural gas for electricity generation might not benefit the areas that use more water for hydraulic fracturing.

An earthquake or a snow avalanche has its own shape

Earthquakes (the picture shows the San Andreas fault) and snow avalanches (an avalanche on Mount Everest is shown in the lower left corner) are examples of systems exhibiting bursty avalanche dynamics. Individual bursts have a highly irregular, complex structure (upper left corner). However, they also have a typical, well-defined average shape, which depends on certain fundamental properties of the system, i.e. its universality class in the language of physics (upper right corner). – Aalto University

However, it is crucial what exactly one observes – the fracture of paper, or the avalanching of snow. The results were just published in the journal Nature Communications.

Snow avalanches and earthquakes can be described in ways other than the well-known Gutenberg-Richter law, which predicts how likely a large avalanche or event is. Each avalanche or burst also has its own typical shape, which tells, for instance, when most of the snow is sliding after the avalanche has started. This shape can be predicted from mathematical models, or, conversely, the right model can be identified by looking at the measured shape.

“We studied results from computer simulations and found different kinds of event shapes. We then analyzed them with pen and paper, together with our experimental collaborators, and concluded that our predictions for the avalanche shapes were correct,” Mikko Alava explains.

The results allow experiments to be compared with simplified model systems in much greater depth. The full shape of an avalanche holds far more information than, say, the Gutenberg-Richter exponent, even when that is combined with a few other so-called critical exponents.
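The “average shape” analysis described above can be illustrated with a short sketch: rescale each burst to unit duration and amplitude, interpolate onto a common grid, and average. This is a generic illustration of the procedure, not the authors’ own code.

```python
# Minimal sketch of extracting an average avalanche shape from burst time series.
import numpy as np

def average_shape(bursts, n_points=100):
    """bursts: list of 1-D arrays, each the signal of one avalanche versus time."""
    grid = np.linspace(0.0, 1.0, n_points)
    rescaled = []
    for burst in bursts:
        t = np.linspace(0.0, 1.0, len(burst))                     # rescale duration to [0, 1]
        rescaled.append(np.interp(grid, t, burst / burst.max()))  # normalize amplitude
    return grid, np.mean(rescaled, axis=0)

# Example with synthetic, roughly parabolic bursts of random duration plus noise.
rng = np.random.default_rng(0)
bursts = []
for _ in range(500):
    n = int(rng.integers(20, 200))
    t = np.linspace(0.0, 1.0, n)
    bursts.append(t * (1.0 - t) + 0.02 * rng.standard_normal(n))

grid, shape = average_shape(bursts)
print(shape[:5])   # averaged shape; close to parabolic for these synthetic bursts
```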

Diamonds in Earth’s oldest zircons are nothing but laboratory contamination

This image explains how synthetic diamond can be distinguished from natural diamond. – Dobrzhinetskaya Lab, UC Riverside.

As is well known, the Earth is about 4.6 billion years old. No rocks exist, however, that are older than about 3.8 billion years. A sedimentary rock section in the Jack Hills of Western Australia, more than 3 billion years old, contains within it zircons that were eroded from rocks as old as about 4.3 billion years, making these zircons, called Jack Hills zircons, the oldest recorded geological material on the planet.

In 2007 and 2008, two research papers reported in the journal Nature that a suite of zircons from the Jack Hills included diamonds, requiring a radical revision of early Earth history. The papers posited that the diamonds formed, somehow, before the oldest zircons – that is, before 4.3 billion years ago – and then were recycled repeatedly over a period of 1.2 billion years during which they were periodically incorporated into the zircons by an unidentified process.

Now a team of three researchers, two of whom are at the University of California, Riverside, has discovered using electron microscopy that the diamonds in question are not diamonds at all but broken fragments of a diamond-polishing compound that got embedded when the zircon specimen was prepared for analysis by the authors of the Nature papers.

“The diamonds are not indigenous to the zircons,” said Harry Green, a research geophysicist and a distinguished professor of the Graduate Division at UC Riverside, who was involved in the research. “They are contamination. This, combined with the lack of diamonds in any other samples of Jack Hills zircons, strongly suggests that there are no indigenous diamonds in the Jack Hills zircons.”

Study results appear online this week in the journal Earth and Planetary Science Letters.

“It occurred to us that a long-term history of diamond recycling with intermittent trapping into zircons would likely leave some sort of microstructural record at the interface between the diamonds and zircon,” said Larissa Dobrzhinetskaya, a professional researcher in the Department of Earth Sciences at UCR and the first author of the research paper. “We reasoned that high-resolution electron microscopy of the material should be able to distinguish whether the diamonds are indeed what they have been believed to be.”

Using an intensive search with high-resolution secondary-electron imaging and transmission electron microscopy, the research team confirmed the presence of diamonds in the Jack Hills zircon samples they examined but could readily identify them as broken fragments of the diamond paste that the original authors had used to polish the zircons for examination. They also observed quartz, graphite, apatite, rutile, iron oxides, feldspars and other low-pressure minerals commonly found as inclusions in zircon from granitic rocks.

“In other words, they are contamination from polishing with diamond paste that was mechanically injected into silicate inclusions during polishing,” Green said.

The research was supported by a grant from the National Science Foundation.

Green and Dobrzhinetskaya were joined in the research by Richard Wirth at the Helmholtz Centre Potsdam, Germany.

Dobrzhinetskaya and Green planned the research project; Dobrzhinetskaya led the project; she and Wirth did the electron microscopy.

Oil- and metal-munching microbes dominate deep sandstone formations

Halomonas bacteria are well-known for consuming the metal parts of the Titanic. Researchers now have found Halomonas in sandstone formations deep underground. – NOAA

Halomonas are a hardy breed of bacteria. They can withstand heat, high salinity, low oxygen, utter darkness and pressures that would kill most other organisms. These traits enable these microbes to eke out a living in deep sandstone formations that also happen to be useful for hydrocarbon extraction and carbon sequestration, researchers report in a new study.

The analysis, the first unobstructed view of the microbial life of sandstone formations more than a mile below the surface, appears in the journal Environmental Microbiology.

“We are using new DNA technologies to understand the distribution of life in extreme natural environments,” said study leader Bruce Fouke, a professor of geology and of microbiology at the University of Illinois at Urbana-Champaign. Fouke also is an investigator with the Energy Biosciences Institute, which funded the research, and an affiliate of the Institute for Genomic Biology at Illinois.

Underground microbes are at least as diverse as their surface-dwelling counterparts, Fouke said, and that diversity has gone largely unstudied.

“Astonishingly little is known of this vast subsurface reservoir of biodiversity, despite our civilization’s regular access to and exploitation of subterranean environments,” he said.

To address this gap in knowledge, Fouke and his colleagues collected microbial samples from a sandstone reservoir 1.8 kilometers (1.1 miles) below the surface.

The team used a probe developed by the oilfield services company Schlumberger that reduces or eliminates contamination from mud and microbes at intermediate depths. The researchers sampled sandstone deposits of the Illinois Basin, a vast, subterranean bowl underlying much of Illinois and parts of Indiana, Kentucky and Tennessee, and a rich source of coal and oil.

A genomic study and analysis of the microbes the team recovered revealed “a low-diversity microbial community dominated by Halomonas sulfidaeris-like bacteria that have evolved several strategies to cope with and survive the high-pressure, high-temperature and nutrient deprived deep subsurface environment,” Fouke said.

An analysis of the microbes’ metabolism found that these bacteria are able to utilize iron and nitrogen from their surroundings and recycle scarce nutrients to meet their metabolic needs. (Another member of the same group, Halomonas titanicae, is so named because it is consuming the iron superstructure of the Titanic.)

Perhaps most importantly, the team found that the microbes living in the deep sandstone deposits of the Illinois Basin were capable of metabolizing aromatic compounds, a common component of petroleum.

“This means that these indigenous microbes would have the adaptive edge if hydrocarbon migration eventually does occur,” Fouke said.

A better understanding of the microbial life of the subterranean world will “enhance our ability to explore for and recover oil and gas, and to make more environmentally sound choices for subsurface gas storage,” he said.

Tailored methane measurement services are to be developed for shale gas extraction, municipal waste

Climate-KIC, Europe’s largest public-private innovation partnership working to address the challenge of climate change, has awarded €1.266 million to FuME (Fugitive Methane Emissions), a new project that will help to identify fugitive methane emissions.

Fugitive methane emissions are of great importance to climate change, and to governments’ and industry’s response to it, because of methane’s high global warming impact. Capturing fugitive methane emissions can also deliver a profitable return by directly producing saleable gas. Many methane abatement options therefore yield a net profit, and even those that do not can be relatively cheap to deploy while offering large climate change mitigation benefits.

Better detection and quantification of fugitive methane emissions will contribute substantially to climate change mitigation, as methane represents 16% of total global greenhouse gas emissions and, because of its high global warming impact, more than a third of anthropogenic warming. As well as mitigation opportunities, this creates potentially huge opportunities for innovation and economic growth through the provision of new products and services for the sectors in which fugitive methane can be captured.

The project will develop methane measurement services made up of a number of different products, including modelling tools, a laser-based open-path methane detection spectrometer and sensor networks, with the services adapted to user requirements depending on the sector and the complexity of the site in each case.

The project will see the Centre for Carbon Measurement at NPL working with ARIA Technologies, CEREA and LSCE to adapt instrumentation, measurement techniques and methodologies for the target sectors. Industry representatives Cuadrilla Resources, Veolia Environnement and National Grid will provide sites and operational expertise to the project.

Publications from the project will include a set of per-industry guidelines (municipal waste water treatment, transmission grid, shale gas extraction) on best practice for measuring fugitive methane emissions; a collection of reports summarizing the project results; scientific papers on different methods for quantifying fugitive methane emissions and their comparative accuracy; lessons from the project on emission factors for municipal waste water treatment and on the use of inverse modelling, in conjunction with measurements, to estimate fugitive emissions; and a comparison of different dispersion models.

The findings of this work are expected to contribute to standards and guideline documents for industry, including, for example, Best Available Technology guidelines on how to monitor sites and capture fugitive losses.

Jane Burston, head of the Centre for Carbon Measurement at the National Physical Laboratory, said:

“Methane plays a big role in global warming. The IPCC recently updated their estimate of methane’s global warming potential from 72 times that of carbon dioxide to 86 times over a 20 year time period. So it’s a critical area to tackle for climate change mitigation. At the same time many opportunities to reduce fugitive losses are profit-making or cost neutral, so it’s a potential business opportunity too.”
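The arithmetic behind the revision Burston cites is straightforward; here is a minimal sketch, using an arbitrary 100-tonne leak as the example quantity.

```python
# CO2-equivalent impact of a methane leak under the old and updated 20-year GWPs.

OLD_GWP20, NEW_GWP20 = 72, 86   # 20-year global warming potentials cited above
leak_tonnes_ch4 = 100.0         # arbitrary illustrative leak size

old_co2e = leak_tonnes_ch4 * OLD_GWP20
new_co2e = leak_tonnes_ch4 * NEW_GWP20
increase_pct = (NEW_GWP20 / OLD_GWP20 - 1) * 100

print(f"old estimate: {old_co2e:.0f} t CO2e; updated: {new_co2e:.0f} t CO2e "
      f"(+{increase_pct:.0f}%)")
```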

Mary Ritter, Chief Executive Officer, Climate-KIC said:

“Methane is a significant driver of climate change and a valuable resource. Fugitive methane emissions measurement services will help a wide range of operators to better manage their processes and increase their profitability. Climate-KIC is proud to fund the project and collaborate with FuME’s consortium to fight climate change by stimulating clean innovation and growth in Europe.”

“We are delighted to partner with Climate-KIC on this important quest to validate a new generation of measurement technologies,” said Francis Egan, CEO of Cuadrilla Resources.

Neil Dawson, Environmental Engineering Manager, National Grid, said: “We want to make absolutely sure that our gas transmission business plays a part in tackling climate change. That’s why we are bringing our expertise to the table to help develop methane measurement services to reduce fugitive methane emissions.”

Open-vent volcanoes and the maturation of volcanic hazards study

This is the cover of GSA Special Paper 498: ‘Understanding Open-Vent Volcanism and Related Hazards.’ – Photos by Nick Varley (top) and Vinicio Bejarnao (bottom).

Understanding and mitigating volcanic hazards is evolving and is increasingly being managed by scientists and engineers in their home countries. Nevertheless, scientists from countries where volcanic hazards are not as immediate are eager to work with them, especially when introducing new technology, which supports infrastructure development. The lure of working at sites of diverse volcanic activity is strong, and participation in international collaborative work during real volcanic crises is especially valuable to young scientists.

This new Special Paper from The Geological Society of America is the third this decade to focus mainly on Central American volcanic hazards, and its 12 chapters demonstrate the continued maturation of international hazards work. Heavily illustrated with color photos and graphics, the volume covers Guatemala, El Salvador, Costa Rica, Nicaragua, and Panama, and also includes chapters on volcanics in Chile, Mexico, and Italy.

Editors William I. Rose of Michigan Technological University, Jos√© Luis Palma of the Universidad de Concepci√≥n in Chile, Hugo Delgado Granados of UNAM, and Nick Varley of the Universidad de Colima explain that the studies in this GSA Special Paper focus on “open-vent volcanoes” because they “offer an apparent direct connection with active eruptive processes.” Between eruptions, they note, “open-vent volcanoes are characterized by persistent gas emissions,” and this, combined with their relative quiescence, make them ideal sites for collaborative study.