Cold case: Siberian hot springs reveal ancient ecology

Albert Colman, assistant professor in geophysical sciences, researches a class of bacteria that consumes the carbon monoxide both produced by other microbes and derived from the volcanic gases bubbling up in the hot springs in eastern Siberia’s Kamchatka Peninsula. – Courtesy of Albert Colman

Exotic bacteria that do not rely on oxygen may have played an important role in determining the composition of Earth’s early atmosphere, according to a theory that UChicago researcher Albert Colman is testing in the scalding hot springs of a volcanic crater in Siberia.

He has found that bacteria at the site produce as well as consume carbon monoxide, a surprising twist that scientists must take into account as they attempt to reconstruct the evolution of Earth’s early atmosphere.

Colman, an assistant professor in geophysical sciences, joined an American-Russian team in 2005 working in the Uzon Caldera of eastern Siberia’s Kamchatka Peninsula to study the microbiology and geochemistry of the region’s hot springs.

Colman and his colleagues focused on anaerobic carboxydotrophs – microbes with a physiology as exotic as their name. They use carbon monoxide mostly for energy, but also as a source of carbon for the production of new cellular material.

This carbon monoxide-based physiology results in the microbial production of hydrogen, a component of certain alternative fuels. The research team thus also sought to probe biotechnological applications for cleaning carbon monoxide from certain industrial waste gases and for biohydrogen production.

“We targeted geothermal fields,” Colman says, “believing that such environments would prove to be prime habitat for carboxydotrophs due to the venting of chemically reduced, or in other words, oxygen-free and methane-, hydrogen-, and carbon dioxide-rich volcanic gases in the springs.”

The team did discover a wide range of carboxydotrophs. Paradoxically, Colman found that much of the carbon monoxide at the Kamchatka site was not bubbling up with the volcanic gases; instead “it was being produced by the microbial community in these springs,” he says. His team began considering the implications of a strong microbial source of carbon monoxide, not only in the local springs but also on the early Earth.

The Great Oxidation Event


Earth’s early atmosphere contained hardly any oxygen but relatively large amounts of carbon dioxide and possibly methane, experts believe. Then during the so-called Great Oxidation Event about 2.3 to 2.5 billion years ago, oxygen levels in the atmosphere rose from vanishingly small amounts to modestly low concentrations.

“This important transition enabled a widespread diversification and proliferation of metabolic strategies and paved the way for a much later climb in oxygen to levels that were high enough to support animal life,” Colman says.

The processing of carbon monoxide by the microbial community could have influenced atmospheric chemistry and climate during the Archean, an interval of Earth’s history that preceded the Great Oxidation Event.

Previous computer simulations have relied on a primitive biosphere as the sole means of removing the near-surface carbon monoxide produced when the sun’s ultraviolet rays split carbon dioxide molecules. This theoretical biospheric sink would have prevented substantial accumulation of carbon monoxide in the atmosphere.

“But our work is showing that you can’t consider microbial communities as a one-way sink for carbon monoxide,” Colman says. The communities both produce and consume carbon monoxide. “It’s a dynamic cycle.”

Colman’s calculations suggest that atmospheric carbon monoxide may have approached concentrations of 1 percent, tens of thousands of times higher than today’s levels. This in turn would have influenced the concentration of atmospheric methane, a powerful greenhouse gas, with consequences for global temperatures.

Toxic concentrations


Furthermore, such high carbon monoxide concentrations would have been toxic for many microorganisms, placing evolutionary pressure on the early biosphere.

“A much larger fraction of the microbial community would’ve been exposed to higher carbon monoxide concentrations and would’ve had to develop strategies for coping with the high concentrations because of their toxicity,” Colman says.

Colman and UChicago graduate student Bo He have conducted fieldwork in both Uzon and California’s Lassen Volcanic National Park. Colman most recently returned to Kamchatka for additional fieldwork in 2007 and 2010.

“This fantastic field site has a wide variety of hot springs,” he says. “Different colors, temperatures, chemistries, different types of micro-organisms living in them. It’s a lot like Yellowstone in certain respects.”

Lassen’s springs have a narrower range of acidic chemistries, yet microbial production of carbon monoxide appears to be widespread in both settings.

Collaborator Frank Robb of the University of Maryland, Baltimore, lauds Colman for his “boundless enthusiasm” and for his “meticulous preparation,” much-needed qualities to ensure the safe transport of delicate instruments into the field.

Some of the microbial life within the caldera’s complex hydrothermal system may survive in even more extreme settings than scientists have observed at the surface, Colman says.

“One thing we really don’t know very well is the extent to which microbial communities beneath the surface influence what we see at the surface, but that’s possible as well,” Colman says.

“We know from culturing deep-sea vent microbes that they can live at temperatures that exceed the temperatures we’re observing right at the surface, and some of them turn out to metabolize carbon monoxide.”

Novel ash analysis validates volcano no-fly zones

Air safety authorities essentially had to fly blind when the ash cloud from Eyjafjallajökull caused them to close the airspace over Europe last year.
Now a team of nanoscientists from the University of Copenhagen has developed a way to provide the necessary information within hours. – Photo: Mikal Schlosser/University of Copenhagen

Planes were grounded all over Europe when the Eyjafjallajökull volcano erupted in Iceland last year. But no one knew if the no-fly zone was really necessary. And the only way to find out would have been to fly a plane through the ash cloud – a potentially fatal experiment.

Now a team of researchers from the University of Copenhagen and the University of Iceland have developed a protocol for rapidly providing air traffic authorities with the data they need for deciding whether or not to ground planes next time ash threatens airspace safety.

A study by the teams of Professors Susan Stipp from the Nano-Science Centre of the University of Copenhagen and Sigurdur Gislason from the University of Iceland is reported this week in the internationally recognized journal PNAS (Proceedings of the National Academy of Sciences, USA).

Volcanic ash could crash planes if the particles are small enough to travel high and far, if they are sharp enough to sandblast the windows and bodies of airplanes, or if they melt inside jet engines. The ash from the Eyjafjallajökull eruption was dangerous on all counts, so the authorities certainly made the right decision in April 2010. That’s one conclusion from the Copenhagen/Iceland paper, but Professor Stipp thinks the team’s most important contribution is a method for quickly assessing future ash.

“I was surprised to find nothing in the scientific literature or on the web about characterizing ash to provide information for aviation authorities. So we decided to do something about it,” explains Stipp.

Some 10 million travelers were affected by the ash plume, which cost an estimated two and a half billion Euros.

“Aviation authorities were sitting on a knife-edge at the center of a huge dilemma. If they closed airspace unnecessarily, people, families, businesses and the economy would suffer, but if they allowed air travel, people and planes could be put at risk, perhaps with tragic consequences,” says Professor Stipp.

So Susan Stipp phoned her colleague and friend in Reykjavik, Siggi Gislason. While the explosive eruptions were at their worst, he and a student donned protective clothing, collected ash as it fell and sent samples to Denmark. “In the Nano-Science Centre at the University of Copenhagen, we have analytical facilities and a research team that are unique in the world for characterizing natural nanoparticles and their reaction with air, water and oil,” explains Stipp.

The newly developed protocol for assessing future ash can provide information for safety assessment in less than 24 hours. Within an hour of receiving the samples, scientists can tell how poisonous the ash is for the animals and people living closest to the eruption. Half a day lets them predict the danger of sandblasting on aircraft and assess the risk of fouling jet engines. Within a day they can determine the size of the particles, providing data for predicting where and how far the ash cloud will spread.

Stipp hopes that, thanks to the analysis protocol, aviation authorities will not face such an impossible dilemma the next time fine-grained ash threatens passenger safety. “Some of the analytical instruments needed are standard equipment in Earth science departments and some are commonly used by materials scientists, so with our protocol, aviation authorities ought to be able to get fast, reliable answers,” concludes Professor Stipp.

Team studies Earth’s recovery from prehistoric global warming

The Earth may be able to recover from rising carbon dioxide emissions faster than previously thought, according to evidence from a prehistoric event analyzed by a Purdue University-led team.

When faced with high levels of atmospheric carbon dioxide and rising temperatures 56 million years ago, the Earth increased its ability to pull carbon from the air. This led to a recovery that was quicker than anticipated by many models of the carbon cycle – though still on the order of tens of thousands of years, said Gabriel Bowen, an associate professor of earth and atmospheric sciences who led the study.

“We found that more than half of the added carbon dioxide was pulled from the atmosphere within 30,000 to 40,000 years, which is one-third of the time span previously thought,” said Bowen, who also is a member of the Purdue Climate Change Research Center. “We still don’t know exactly where this carbon went, but the evidence suggests it was a much more dynamic response than traditional models represent.”

Bowen worked with James Zachos, a professor of earth and planetary sciences at the University of California, Santa Cruz, to study the end of the Palaeocene-Eocene Thermal Maximum, an approximately 170,000-year-long period of global warming that has many features in common with the world’s current situation, he said.

“During this prehistoric event, billions of tons of carbon were released into the ocean, atmosphere and biosphere, causing warming of about 5 degrees Celsius,” Bowen said. “This is a good analog for the carbon being released from fossil fuels today.”

Scientists have known of this prehistoric event for 20 years, but how the system recovered and returned to normal atmospheric levels has remained a mystery.

Bowen and Zachos examined samples of marine and terrestrial sediments deposited throughout the event. The team measured the levels of two different types of carbon atoms, the isotopes carbon-12 and carbon-13. The ratio of these isotopes changes as carbon dioxide is drawn from or added to the atmosphere during the growth or decay of organic matter.

Plants prefer carbon-12 during photosynthesis, and when they accelerate their uptake of carbon dioxide it shifts the carbon isotope ratio in the atmosphere. This shift is then reflected in the carbon isotopes present in rock minerals formed by reactions involving atmospheric carbon dioxide, Bowen said.

“The rate of the carbon isotope change in rock minerals tells us how rapidly the carbon dioxide was pulled from the atmosphere,” he said. “We can see the fluxes of carbon dioxide into and out of the atmosphere. At the beginning of the event we see a shift indicating that a lot of organic-derived carbon dioxide had been added to the atmosphere, and at the end of the event we see a shift indicating that a lot of carbon dioxide was taken up as organic carbon and thus removed from the atmosphere.”
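
Carbon isotope composition is conventionally reported in delta notation relative to a reference standard, with more negative values indicating relatively more carbon-12. The short sketch below is a generic illustration of that bookkeeping rather than the team’s actual workflow; the sample ratio is a made-up number.

```python
# Illustrative delta-13C calculation (per mil, relative to the VPDB standard).
# The sample ratio below is hypothetical; real values come from mass spectrometry.

R_VPDB = 0.011237  # 13C/12C ratio of the Vienna Pee Dee Belemnite reference standard

def delta13C(r_sample: float, r_standard: float = R_VPDB) -> float:
    """Return delta-13C in per mil for a measured 13C/12C ratio."""
    return (r_sample / r_standard - 1.0) * 1000.0

# Example: a hypothetical carbonate sample slightly depleted in carbon-13,
# the kind of shift expected after a large release of organic-derived carbon.
print(round(delta13C(0.011180), 2))  # about -5.07 per mil
```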

A paper detailing the team’s National Science Foundation-funded work was published in Nature Geoscience.

It had been thought that a slow and fairly constant recovery began soon after excess carbon entered the atmosphere and that the weathering of rocks, called silicate weathering, dictated the timing of the response.

Atmospheric carbon dioxide that reacts with silicon-based minerals in rocks is pulled from the air and captured in the end product of the reaction. This mechanism has a fairly direct correlation with the amount of carbon dioxide in the atmosphere and occurs relatively slowly, Bowen said.
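
The reaction behind this slow sink is often summarized as a Urey-type weathering equation, written here with wollastonite standing in for silicate minerals in general; carbon dioxide drawn from the atmosphere ends up locked in carbonate rock:

```latex
% Schematic silicate-weathering (Urey-type) reaction, wollastonite as a generic silicate:
\mathrm{CaSiO_3} + \mathrm{CO_2} \longrightarrow \mathrm{CaCO_3} + \mathrm{SiO_2}
```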

The changes Bowen and Zachos found during the Palaeocene-Eocene Thermal Maximum went beyond the effects expected from silicate weathering, he said.

“It seems there was actually a long period of higher levels of atmospheric carbon dioxide followed by a short and rapid recovery to normal levels,” he said. “During the recovery, the rate at which carbon was pulled from the atmosphere was an order of magnitude greater than the slow drawdown of carbon expected from silicate weathering alone.”

A rapid growth of the biosphere, with a spread of forests, plants and carbon-rich soils to take in the excess carbon dioxide, could explain the quick recovery, Bowen said.

“Expansion of the biosphere is one plausible mechanism for the rapid recovery, but in order to take up this much carbon in forests and soils there must have first been a massive depletion of these carbon stocks,” he said. “We don’t currently know where all the carbon that caused this event came from, and our results suggest the troubling possibility that widespread decay or burning of large parts of the continental biosphere may have been involved.”

Release from a different source, such as volcanoes or sea floor sediments, may have started the event, he said.

“The release of carbon from the biosphere may have occurred as a positive feedback to the warming,” Bowen said. “The forests may have dried out, which can lead to die off and forest fires. If we take the Earth’s future climate to a place where that feedback starts to happen we could see accelerated rates of climate change.”

The team continues to work on new models of the carbon cycle and is also investigating changes in the water cycle during the Palaeocene-Eocene Thermal Maximum.

“We need to figure out where the carbon went all those years ago to know where it could go in the future,” he said. “These findings show that the Earth’s response is much more dynamic than we thought and highlight the importance of feedback loops in the carbon cycle.”

Melting ice on Arctic islands a major player in sea level rise

This is summer sea ice off the coast of Devon Island in Nunavut, Canada in August 2008. – Alex Gardner

Melting glaciers and ice caps on Canadian Arctic islands play a much greater role in sea level rise than scientists previously thought, according to a new study led by a University of Michigan researcher.

The 550,000-square-mile Canadian Arctic Archipelago contains some 30,000 islands. Between 2004 and 2009, the region lost the equivalent of three-quarters of the water in Lake Erie, the study found. Warmer-than-usual temperatures in those years caused a rapid increase in the melting of glacier ice and snow, said Alex Gardner, a research fellow in the Department of Atmospheric, Oceanic and Space Sciences who led the project. The study is published online in Nature on April 20.

“This is a region that we previously didn’t think was contributing much to sea level rise,” Gardner said. “Now we realize that outside of Antarctica and Greenland, it was the largest contributor for the years 2007 through 2009. This area is highly sensitive and if temperatures continue to increase, we will see much more melting.”

Ninety-nine percent of all the world’s land ice is trapped in the massive ice sheets of Antarctica and Greenland. Despite their size, they currently account for only about half of the land ice being lost to the oceans. This is partly because they are cold enough that ice melts only at their edges.

The other half of the ice melt adding to sea-level rise comes from smaller mountain glaciers and ice caps such as those in the Canadian Arctic, Alaska, and Patagonia. This study underscores the importance of these many smaller, often overlooked regions, Gardner said.

During the first three years of this study, from 2004 through 2006, the region lost an average of 7 cubic miles of water per year. That increased dramatically to 22 cubic miles of water—roughly 24 trillion gallons—per year during the latter part of the study. Over the entire six years, this added a total of 1 millimeter to the height of the world’s oceans. While that might not sound like much, Gardner says that small amounts can make big differences.
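
As a rough plausibility check (not a calculation from the paper), the reported volumes can be converted to sea-level equivalent using a standard value for the global ocean surface area; the ice-versus-water distinction and other corrections are ignored here.

```python
# Back-of-envelope check: do the reported ice-loss volumes add up to ~1 mm of sea level?
# Assumptions (not from the study): ocean area of 3.61e8 km^2; volumes treated as water.

CUBIC_MILE_KM3 = 4.168       # one cubic mile in cubic kilometres
OCEAN_AREA_KM2 = 3.61e8      # approximate global ocean surface area
KM3_PER_MM_SLR = OCEAN_AREA_KM2 * 1e-6   # 1 mm of sea level ~ 361 km^3 of water

loss_cubic_miles = 3 * 7 + 3 * 22        # 2004-2006 at ~7 mi^3/yr, 2007-2009 at ~22 mi^3/yr
loss_km3 = loss_cubic_miles * CUBIC_MILE_KM3

print(round(loss_km3, 1))                    # ~362.6 km^3 over six years
print(round(loss_km3 / KM3_PER_MM_SLR, 2))   # ~1.0 mm of sea-level rise

# Cross-check the "24 trillion gallons" figure for 22 cubic miles per year:
gallons = 22 * CUBIC_MILE_KM3 * 1e12 / 3.785   # km^3 -> litres -> US gallons
print(f"{gallons:.2e}")                      # ~2.4e13, i.e. roughly 24 trillion gallons
```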

In this study, a one-degree increase in average air temperature resulted in 15 cubic miles of additional melting.

Because the study took place over just six years, however, the results don’t signify a trend.

“This is a big response to a small change in climate,” Gardner said. “If the warming continues and we start to see similar responses in other glaciated regions, I would say it’s worrisome, but right now we just don’t know if it will continue.”

The United Nations projects that the oceans will rise by a full meter by the end of the century. This could have ramifications for tens of millions of people who live in coastal cities and low-lying areas across the globe. Future tsunamis and storm surges, for example, would more easily overtop ocean barriers.

To conduct the study, researchers from an international array of institutions performed numerical simulations and then used two different satellite-based techniques to independently validate their model results. Through laser altimetry, they measured changes in the region’s elevation over time. And through a technique called “gravimetry,” they measured changes in the Earth’s gravitational field, which signified a redistribution of mass—a loss of mass for glaciers and ice caps.

Arctic coasts on the retreat

The coastline in Arctic regions reacts to climate change with increased erosion, retreating by half a metre per year on average. This means substantial changes for Arctic ecosystems near the coast and for the population living there. A consortium of more than thirty scientists from ten countries, including researchers from the Alfred Wegener Institute for Polar and Marine Research in the Helmholtz Association and from the Helmholtz Centre in Geesthacht, comes to this conclusion in two studies published in Estuaries and Coasts and online at www.arcticcoasts.org. The scientists jointly investigated more than 100,000 kilometres of coastline – a quarter of all Arctic coasts – and their results have now been published for the first time.

The changes are particularly dramatic in the Laptev, East Siberian and Beaufort Seas, where coastal erosion rates reach more than 8 metres a year in some cases. Since around a third of the world’s coasts lie in Arctic permafrost, coastal erosion may affect enormous areas in the future. In general, Arctic coasts react more sensitively to global warming than coasts in the mid-latitudes. Until now they have been protected against the eroding force of the waves by large areas of sea ice. With the continuing decline in sea ice, this protection is jeopardized, and rapid changes are to be expected in a situation that has remained stable for millennia.

Two thirds of the Arctic coasts consist not of rock but of frozen soft substrate (permafrost), and it is precisely these coasts that are hit hardest by erosion. Arctic regions are, as a rule, quite thinly populated. However, as nearly everywhere in the world, the coasts of the far north are important axes for economic and social life. The growing demand for global energy resources, as well as increasing tourism and freight transport, further intensifies the anthropogenic influence on the coastal regions of the Arctic. For wild animal populations, such as the great caribou herds of the north, and for the widespread freshwater lakes near the coast, progressive erosion brings about significant changes in ecological conditions.

Successful cooperation

More than thirty scientists from ten countries were involved in preparing the 170-page status report entitled “State of the Arctic Coast 2010”. The study was initiated and coordinated by the International Arctic Science Committee (IASC), the international joint project Land-Ocean Interactions in the Coastal Zone (LOICZ), the International Permafrost Association (IPA) and the Arctic Monitoring and Assessment Programme (AMAP) working group of the Arctic Council.

“This international and interdisciplinary report documents in particular the interest and expertise of German scientists in the field of Arctic coastal research,” says Dr. Volker Rachold, Executive Secretary of the IASC. “Three of the international organizations involved in the report are based in Germany. The secretariats of the IASC and IPA are located at the Potsdam Research Unit of the Alfred Wegener Institute for Polar and Marine Research in the Helmholtz Association (AWI). The international coordination office of the LOICZ project is funded by the Helmholtz Centre in Geesthacht (HZG) and is based there at the Institute for Coastal Research.” Among other things, researchers see the current study as an international and national contribution to “Polar Regions and Coasts in a Changing Earth System” (PACES), the joint research program of the Helmholtz Association supported by the Alfred Wegener Institute and the Helmholtz Centre in Geesthacht.

“When systematic data acquisition began in 2000, detailed information was available for barely 0.5% of the Arctic coasts,” says Dr. Hugues Lantuit of the Alfred Wegener Institute (AWI). The geologist, based at AWI’s Potsdam Research Unit, heads the international secretariat of the IPA and is one of the coordinators of the study. “After over ten years of intensive work we have now gained a comprehensive overview of the state and risk of erosion in these areas,” he says. “The Arctic is developing more and more into a mirror of various drivers of global change and into a focal point of national and worldwide economic interest,” says Dr. Hartwig Kremer, head of the LOICZ project office.

Report cites ‘liquefaction’ as key to much of Japanese earthquake damage

Liquefaction in the recent subduction zone earthquake in Japan caused entire buildings to sink several feet lower than they had been previously. (Photo by Scott Ashford, courtesy of Oregon State University)

The massive subduction zone earthquake in Japan caused a significant level of soil “liquefaction” that has surprised researchers with its widespread severity, a new analysis shows.

The findings also raise questions about whether existing building codes and engineering technologies are adequately accounting for this phenomenon in other vulnerable locations, which in the U.S. include Portland, Ore., parts of the Willamette Valley and other areas of Oregon, Washington and California.

A preliminary report on some of the damage in Japan has just been completed by the Geotechnical Extreme Events Reconnaissance (GEER) advance team, in work supported by the National Science Foundation.

The broad geographic extent of the liquefaction over hundreds of miles was daunting even to experienced engineers accustomed to seeing disaster sites, including those of the recent earthquakes in Chile and New Zealand.

“We’ve seen localized examples of soil liquefaction as extreme as this before, but the distance and extent of damage in Japan were unusually severe,” said Scott Ashford, a professor of geotechnical engineering at Oregon State University and a member of this research team.

“Entire structures were tilted and sinking into the sediments, even while they remained intact,” Ashford said. “The shifts in soil destroyed water, sewer and gas pipelines, crippling the utilities and infrastructure these communities need to function. We saw some places that sank as much as four feet.”

Some degree of soil liquefaction is common in almost any major earthquake. It’s a phenomenon in which saturated soils, particularly recent sediments, sand, gravel or fill, can lose much of their strength and flow during an earthquake. This can allow structures to shift or sink and significantly magnify the structural damage produced by the shaking itself.
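
Engineers commonly screen for liquefaction potential by comparing the cyclic stress ratio (CSR) imposed by the shaking with the soil’s cyclic resistance. The sketch below uses the widely taught Seed-Idriss simplified expression purely as an illustration; the peak acceleration, stresses and depth-reduction factor are invented example values, not data from the GEER report.

```python
# Illustrative liquefaction screening using the Seed-Idriss simplified method.
# All numbers below are hypothetical example values, not measurements from Japan.

def cyclic_stress_ratio(a_max_g: float, sigma_v: float, sigma_v_eff: float,
                        r_d: float) -> float:
    """Cyclic stress ratio CSR = 0.65 * (a_max/g) * (sigma_v / sigma_v') * r_d."""
    return 0.65 * a_max_g * (sigma_v / sigma_v_eff) * r_d

# Example: a saturated sand layer about 6 m deep.
a_max_g = 0.30        # peak ground acceleration as a fraction of g (assumed)
sigma_v = 110.0       # total vertical stress at that depth, kPa (assumed)
sigma_v_eff = 70.0    # effective vertical stress, kPa (assumed)
r_d = 0.95            # depth-dependent stress reduction factor (assumed)

csr = cyclic_stress_ratio(a_max_g, sigma_v, sigma_v_eff, r_d)
print(round(csr, 2))  # ~0.29; compared against the soil's cyclic resistance ratio (CRR)
```

If the CSR exceeds the soil’s cyclic resistance, the layer is flagged as potentially liquefiable; duration effects are usually folded in through magnitude-dependent scaling factors, which is one reason minutes-long shaking is so much more demanding than a typical 30-second event.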

But most earthquakes are much shorter than the recent event in Japan, Ashford said. The length of the Japanese earthquake, as much as five minutes, may force researchers to reconsider the extent of liquefaction damage possible in situations such as this.

“With such a long-lasting earthquake, we saw how structures that might have been okay after 30 seconds just continued to sink and tilt as the shaking continued for several more minutes,” he said. “And it was clear that younger sediments, and especially areas built on recently filled ground, are much more vulnerable.”

The data provided by analyzing the Japanese earthquake, researchers said, should make it possible to improve the understanding of this soil phenomenon and better prepare for it in the future. Ashford said it was critical for the team to collect the information quickly, before damage was removed in the recovery efforts.

“There’s no doubt that we’ll learn things from what happened in Japan that will help us to mitigate risks in other similar events,” Ashford said. “Future construction in some places may make more use of techniques known to reduce liquefaction, such as better compaction to make soils dense, or use of reinforcing stone columns.”

The massive subduction zone earthquakes capable of this type of shaking, which are the most powerful in the world, don’t happen everywhere, even in other regions such as Southern California that face seismic risks. But an event almost exactly like that is expected in the Pacific Northwest from the Cascadia Subduction Zone, and the new findings make it clear that liquefaction will be a critical issue there.

Many parts of that region, from northern California to British Columbia, have younger soils vulnerable to liquefaction – on the coast, near river deposits or in areas with filled ground. The “young” sediments, in geologic terms, may be those deposited within roughly the past 10,000 years. In Oregon, for instance, that describes much of downtown Portland, the Portland International Airport, nearby industrial facilities and other cities and parts of the Willamette Valley.

Anything near a river or on an old flood plain is suspect, and the Oregon Department of Transportation has already concluded that 1,100 bridges in the state are at risk from an earthquake on the Cascadia Subduction Zone. Fewer than 15 percent of them have been retrofitted to prevent collapse.

“Buildings that are built on soils vulnerable to liquefaction not only tend to sink or tilt during an earthquake, but slide downhill if there’s any slope, like towards a nearby river,” Ashford said. “This is called lateral spreading. In Portland we might expect this sideways sliding of more than four feet in some cases, more than enough to tear apart buildings and buried pipelines.”

Some damage may be reduced or prevented by different construction techniques or retrofitting, Ashford said. But another reasonable goal is to at least anticipate the damage – to know what will probably be destroyed, make contingency plans for what will be needed to implement repairs, and design ways to help protect and care for residents until services can be restored.

Small armies of utility crews are already at work in Japan on such tasks, Ashford said. There have been estimates of $300 billion in damage.

The recent survey in Japan identified areas as far away as Tokyo Bay that had liquefaction-induced ground failures. The magnitude of settlement and tilt was “larger than previously observed for such light structures,” the researchers wrote in their report.

Impacts and deformation were erratic, often varying significantly from one street to the next. Port facilities along the coast faced major liquefaction damage. Strong Japanese construction standards helped prevent many buildings from collapse – even as they tilted and sank into the ground.

Researchers use GPS data to model effects of tidal loads on Earth’s surface

The outline of each ellipse represents the motion the ground makes as the earth flexes in response to the time and space dependent tides. There is an ellipse for each of the GPS sites used in the study, and the color indicates the amplitude of the tidal response movement. – California Institute of Technology

For many people, Global Positioning System (GPS) satellite technology is little more than a high-tech version of a traditional paper map. Used in automobile navigation systems and smart phones, GPS helps folks find their way around a new neighborhood or locate a nearby restaurant. But GPS is doing much, much more for researchers at the California Institute of Technology (Caltech): it’s helping them find their way to a more complete understanding of Earth’s interior structure.

Up until now, the best way to explore Earth’s internal structures – to measure geological properties such as density and elasticity – has been through seismology and laboratory experiments. “At its most fundamental level, seismology is sensitive to specific combinations of these properties, which control the speed of seismic waves,” says Mark Simons, professor of geophysics at Caltech’s Seismological Laboratory, part of the Division of Geological and Planetary Sciences. “However, it is difficult using seismology alone to separate the effects that variations in density have from those associated with variations in elastic properties.”

Now Simons and Takeo Ito, visiting associate at the Seismological Laboratory and assistant professor of earth and planetary dynamics at Nagoya University in Japan, are using data from GPS satellite systems in an entirely new way: to measure the solid earth’s response to the movements of ocean tides – which place a large stress on Earth’s surface – and to separately estimate the effects of Earth’s density and of the properties that control its response to an applied force (known as elastic moduli).

Their work was published in this week’s issue of Science Express.

By using measurements of Earth’s movement taken from high-precision, continuously recording permanent GPS receivers installed across the western United States by the Plate Boundary Observatory (PBO), the researchers were able to observe tide-induced displacements – movements of Earth’s surface – of as little as one millimeter. PBO is a component of EarthScope, a program that seeks to understand the processes controlling earthquakes and volcanoes by exploring the structure and evolution of the North American continent.

The team focused on understanding the properties of the asthenosphere, a layer of weak and viscous upper mantle that lies below Earth’s crust, and used those measurements to build one-dimensional models of Earth’s response to the diurnal tides in the western United States.

“The asthenosphere plays an important role in plate tectonics, as it lies directly under the plates,” explains Ito. “The results of our study give us a better understanding of the asthenosphere, which in turn can help us understand how the plates move.”

The models provided a look at the variations in density from Earth’s surface down to a depth of about 400 kilometers. The researchers found that the density of the asthenosphere under the western United States and the eastern Pacific Ocean is abnormally low relative to the global average.

“Variations in density can either be caused by variations in the chemical makeup of the material, the presence of melt, or due to the effects of thermal expansion, whereby a given material will decrease in density as its temperature increases,” explains Simons. “In this study, we interpret the observed density anomaly to be due to the effects of elevated temperatures in the asthenosphere below the western United States and neighboring offshore areas. The required peak temperature anomaly would be about 300 degrees Celsius higher than the global average at those depths.”

This type of data provides keys to understanding the chemical and mechanical dynamics of the planet, such as how heat flows through the mantle and how tectonic plates on Earth’s surface are evolving.

“It is amazing that by measuring the twice-a-day centimeter-scale cyclic movement of Earth’s surface with a GPS receiver, we can infer the variation of density 220 kilometers below the surface,” says Simons.

Now that the researchers know it is possible to use GPS to derive measurements of internal Earth structures, they anticipate several new directions for this research.

“We hope to extend the observations to be global in scope, which may require temporary deployments of GPS in important areas that are typically tectonically bland – in other words, devoid of significant earthquakes and volcanoes – and thus do not have existing dense continuous GPS arrays already in place,” says Simons. Next steps may also include going beyond the current one-dimensional depth-dependent models to build 3-D models, and combining the GPS approach with more conventional seismic approaches.

“The method we developed for gathering data from GPS devices has significant potential for improving 3-D images of Earth’s internal structure,” says Ito.

Research digs deep into the fracking controversy

The turmoil in oil-producing nations is triggering turmoil at home, as rising oil prices force Americans to pay more at the pump. Meanwhile, there’s a growing industry that’s promising jobs and access to cheaper energy resources on American soil, but it’s not without its controversy. – Provided by Deborah Kittner

The turmoil in oil-producing nations is triggering turmoil at home, as rising oil prices force Americans to pay more at the pump. Meanwhile, there’s a growing industry that’s promising jobs and access to cheaper energy resources on American soil, but it’s not without its controversy. Deborah Kittner, a University of Cincinnati doctoral student in geography, presents, “What’s the Fracking Problem? Extraction Industry’s Neglect of the Locals in the Pennsylvania Marcellus Region,” at the annual meeting of the Association of American Geographers. Kittner will be presenting April 14 at the meeting in Seattle.

Fracking involves using millions of gallons of water, sand and a chemical cocktail to break up organic-rich shale to release natural gas resources. Kittner’s research examined the industry in Pennsylvania, known as the “sweet spot” for this resource, because of the abundance of natural gas. Pittsburgh has now outlawed fracking in its city limits as has Buffalo, N.Y., amid concerns that chemical leaks could contaminate groundwater, wells and other water resources.

The EPA is now conducting an additional study of the relationship between hydraulic fracturing and drinking water and groundwater, after Congress stated its concern about the potential adverse impact the process may have on water quality and public health. Kittner attended an EPA hearing and also interviewed people in the hydraulic fracturing industry. She says billions of dollars from domestic as well as international sources have been invested in the industry.

The chemical component of the fracking fluid is actually relatively small: the mixture is about 95 percent water and nearly 5 percent sand, with chemicals making up the rest. Yet, Kittner says, some of those chemicals are known toxins and carcinogens – hence the “not in my backyard” backlash from communities that are prospects for drilling. The flow-back water from drilling is naturally a very salty brine, prone to bacterial growth and potentially contaminated with heavy metals, Kittner says. In addition, there is the question of how to properly dispose of millions of gallons of contaminated water, as well as concerns about trucking it on winding, rural back roads.
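
Even a fraction of a percent adds up at these volumes. As a rough illustration only (the per-well water volume and the additive fractions below are assumed round numbers, not figures from Kittner’s research):

```python
# Rough scale illustration of fracking-fluid composition. The per-well volume
# and the component fractions are assumed example values, not study data.

fluid_gallons_per_well = 4_000_000   # assumed: "millions of gallons" per fracked well
fraction_sand = 0.045                # assumed: a bit under 5 percent sand
fraction_chemicals = 0.005           # assumed: the remaining fraction of a percent

print(int(fluid_gallons_per_well * fraction_sand))       # ~180,000 gallons-equivalent of sand
print(int(fluid_gallons_per_well * fraction_chemicals))  # ~20,000 gallons of chemical additives
```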

Based on her research, Kittner says that overall, the industry is “working to be environmentally responsible, and it becomes frustrated at companies that do otherwise.”

“I think that the study that the EPA is doing is going to be really helpful, and the industry – however reluctant to new regulations – is working with the EPA on this,” Kittner says.

Electric Yellowstone

This image, based on variations in electrical conductivity of underground rock, shows the volcanic plume of partly molten rock that feeds the Yellowstone supervolcano. Yellow and red indicate higher conductivity, green and blue indicate lower conductivity. Made by University of Utah geophysicists and computer scientists, this is the first large-scale ‘geoelectric’ image of the Yellowstone hotspot. – University of Utah.

University of Utah geophysicists made the first large-scale picture of the electrical conductivity of the gigantic underground plume of hot and partly molten rock that feeds the Yellowstone supervolcano. The image suggests the plume is even bigger than it appears in earlier images made with earthquake waves.

“It’s like comparing ultrasound and MRI in the human body; they are different imaging technologies,” says geophysics Professor Michael Zhdanov, principal author of the new study and an expert on measuring magnetic and electrical fields on Earth’s surface to find oil, gas, minerals and geologic structures underground.

“It’s a totally new and different way of imaging and looking at the volcanic roots of Yellowstone,” says study co-author Robert B. Smith, professor emeritus and research professor of geophysics and a coordinating scientist of the Yellowstone Volcano Observatory.

The new University of Utah study has been accepted for publication in Geophysical Research Letters, which plans to publish it within the next few weeks.

In a December 2009 study, Smith used seismic waves from earthquakes to make the most detailed seismic images yet of the “hotspot” plumbing that feeds the Yellowstone volcano. Seismic waves move faster through cold rock and slower through hot rock. Measurements of seismic-wave speeds were used to make a three-dimensional picture, much as X-rays are combined to make a medical CT scan.

The 2009 images showed the plume of hot and molten rock dips downward from Yellowstone at an angle of 60 degrees and extends 150 miles west-northwest to a point at least 410 miles under the Montana-Idaho border – as far as seismic imaging could “see.”

In the new study, images of the Yellowstone plume’s electrical conductivity – generated by molten silicate rocks and hot briny water mixed in partly molten rock – show the conductive part of the plume dipping more gently, at an angle of perhaps 40 degrees to the west, and extending perhaps 400 miles from east to west. The geoelectric image can “see” only 200 miles deep.

Two Views of the Yellowstone Volcanic Plume

Smith says the geoelectric and seismic images of the Yellowstone plume look somewhat different because “we are imaging slightly different things.” Seismic images highlight materials such as molten or partly molten rock that slow seismic waves, while the geoelectric image is sensitive to briny fluids that conduct electricity.

“It [the plume] is very conductive compared with the rock around it,” Zhdanov says. “It’s close to seawater in conductivity.”

The lesser tilt of the geoelectric plume image raises the possibility that the seismically imaged plume, shaped somewhat like a tilted tornado, may be enveloped by a broader, underground sheath of partly molten rock and liquids, Zhdanov and Smith say.

“It’s a bigger size” in the geoelectric picture, says Smith. “We can infer there are more fluids” than shown by seismic images.

Despite differences, he says, “this body that conducts electricity is in about the same location with similar geometry as the seismically imaged Yellowstone plume.”

Zhdanov says that last year, other researchers presented preliminary findings at a meeting comparing electrical and seismic features under the Yellowstone area, but only to shallow depths and over a smaller area.

The study was conducted by Zhdanov, Smith, two members of Zhdanov’s lab – research geophysicist Alexander Gribenko and geophysics Ph.D. student Marie Green – and computer scientist Martin Cuma of the University of Utah’s Center for High Performance Computing. Funding came from the National Science Foundation (NSF) and the Consortium for Electromagnetic Modeling and Inversion, which Zhdanov heads.

The Yellowstone Hotspot at a Glance

The new study says nothing about the chances of another cataclysmic caldera (giant crater) eruption at Yellowstone, which has produced three such catastrophes in the past 2 million years.

Almost 17 million years ago, the plume of hot and partly molten rock known as the Yellowstone hotspot first erupted near what is now the Oregon-Idaho-Nevada border. As North America drifted slowly southwest over the hotspot, there were more than 140 gargantuan caldera eruptions – the largest kind of eruption known on Earth – along a northeast-trending path that is now Idaho’s Snake River Plain.

The hotspot finally reached Yellowstone about 2 million years ago, yielding three huge caldera eruptions about 2 million, 1.3 million and 642,000 years ago. Two of the eruptions blanketed half of North America with volcanic ash, producing 2,500 times and 1,000 times more ash, respectively, than the 1980 eruption of Mount St. Helens in Washington state. Smaller eruptions occurred at Yellowstone in between the big blasts and as recently as 70,000 years ago.

Seismic and ground-deformation studies previously showed the top of the rising volcanic plume flattens out like a 300-mile-wide pancake 50 miles beneath Yellowstone. There, giant blobs of hot and partly molten rock break off the top of the plume and slowly rise to feed the magma chamber – a spongy, banana-shaped body of molten and partly molten rock located about 4 miles to 10 miles beneath the ground at Yellowstone.

Computing a Geoelectrical Image of Yellowstone’s Hotspot Plume

Zhdanov and colleagues used data collected by EarthScope, an NSF-funded effort to collect seismic, magnetotelluric and geodetic (ground deformation) data to study the structure and evolution of North America. Using the data to image the Yellowstone plume was a computing challenge because so much data was involved.

Inversion is a formal mathematical method used to “extract information about the deep geological structures of the Earth from the magnetic and electrical fields recorded on the ground surface,” Zhdanov says. Inversion also is used to convert measurements of seismic waves at the surface into underground images.

Magnetotelluric measurements record very low frequencies of electromagnetic radiation – about 0.0001 to 0.0664 Hertz – far below the frequencies of radio or TV signals or even electric power lines. This low-frequency, long-wavelength electromagnetic field penetrates a couple hundred miles into the Earth. By comparison, TV and radio waves penetrate only a fraction of an inch.
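
The depth such signals reach is usually estimated with the electromagnetic skin depth, which grows as frequency falls and as rock resistivity rises. The sketch below is a generic textbook calculation, not the study’s method; the resistivity is an assumed round value.

```python
# Electromagnetic skin depth: delta (m) ~ 503 * sqrt(resistivity / frequency).
# The rock resistivity here is an assumed illustrative value, not a measured one.
import math

def skin_depth_km(resistivity_ohm_m: float, frequency_hz: float) -> float:
    """Approximate skin depth in kilometres for a uniform half-space."""
    return 503.3 * math.sqrt(resistivity_ohm_m / frequency_hz) / 1000.0

# At the lowest magnetotelluric frequencies used (about 0.0001 Hz), assuming
# crust/mantle resistivity on the order of 100 ohm-m:
print(round(skin_depth_km(100.0, 0.0001)))   # ~503 km, i.e. a few hundred miles
# At the high end of the band (about 0.0664 Hz) the signal probes far shallower:
print(round(skin_depth_km(100.0, 0.0664)))   # ~20 km
```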

The EarthScope data were collected by 115 stations in Wyoming, Montana and Idaho – the three states straddled by Yellowstone National Park. The stations, which include electric and magnetic field sensors, are operated by Oregon State University for the Incorporated Research Institutions for Seismology, a consortium of universities.

In a supercomputer, a simulation predicts expected electric and magnetic measurements at the surface based on known underground structures. That allows the real surface measurements to be “inverted” to make an image of underground structure.
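
Inversion here means fitting a model of subsurface properties to the surface measurements. As a minimal illustration of the idea only (the actual study solves a large, nonlinear 3-D magnetotelluric problem on a supercomputer), the sketch below inverts a small linear system with Tikhonov regularization; the forward operator and data are randomly generated stand-ins.

```python
# Toy linear inversion with Tikhonov (damped least-squares) regularization:
# recover model m from data d = G @ m + noise. Purely illustrative; the real
# magnetotelluric problem is nonlinear, three-dimensional and vastly larger.
import numpy as np

rng = np.random.default_rng(0)
G = rng.normal(size=(50, 20))            # stand-in forward operator (surface response per model cell)
m_true = rng.normal(size=20)             # "true" subsurface model (unknown in practice)
d = G @ m_true + 0.05 * rng.normal(size=50)   # synthetic surface measurements with noise

lam = 0.1                                 # regularization weight, chosen by hand here
m_est = np.linalg.solve(G.T @ G + lam * np.eye(20), G.T @ d)

# Relative misfit between recovered and true model (small for this easy toy case):
print(round(float(np.linalg.norm(m_est - m_true) / np.linalg.norm(m_true)), 3))
```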

Zhdanov says it took about 18 hours of supercomputer time to do all the calculations needed to produce the geoelectric plume picture. The supercomputer was the Ember cluster at the University of Utah’s Center for High Performance Computing, says Cuma, the computer scientist.

Ember has 260 nodes, each with 12 CPU (central processing unit) cores, compared with the two to four cores commonly found in a personal computer, Cuma says. Of the 260 nodes, 64 were used for the Yellowstone study, which he adds is “roughly equivalent to 200 common PCs.”

To create the geoelectric image of Yellowstone’s plume required 2 million pixels, or picture elements.

Ancient fossils hold clues for predicting future climate change, scientists report

By studying fossilized mollusks from some 3.5 million years ago, UCLA geoscientists and colleagues have been able to construct an ancient climate record that holds clues about the long-term effects of Earth’s current levels of atmospheric carbon dioxide, a key contributor to global climate change.

Two novel geochemical techniques used to determine the temperature at which the mollusk shells were formed suggest that summertime Arctic temperatures during the early Pliocene epoch (3.5 million to 4 million years ago) may have been a staggering 18 to 28 degrees Fahrenheit warmer than today. And these ancient fossils, harvested from deep within the Arctic Circle, may have once lived in an environment in which the polar ice cap melted completely during the summer months.

“Our data from the early Pliocene, when carbon dioxide levels remained close to modern levels for thousands of years, may indicate how warm the planet will eventually become if carbon dioxide levels are stabilized at the current value of 400 parts per million,” said Aradhna Tripati, a UCLA assistant professor in the department of Earth and space sciences and the department of atmospheric and oceanic sciences.

The results of this study lend support to assertions made by climate modelers that summertime sea ice may be eliminated in the next 50 to 100 years, which would have far-reaching consequences for Earth’s climate, she said.

The research, federally funded by the National Science Foundation, is scheduled to be published in the April 15 print issue of Earth and Planetary Science Letters, a leading journal in geoscience, and is currently available online.

“The Intergovernmental Panel on Climate Change identifies the early Pliocene as the best geological analog for climate change in the 21st century and beyond,” said Tripati, who is also a researcher with UCLA’s Institute of the Environment and Sustainability and Institute of Geophysics and Planetary Physics. “The climate-modeling community hopes to use the early Pliocene as a benchmark for testing models used for forecasting future climate change.”

The poles are exhibiting the most warming of any place on the planet, and the effect is most severe in the Arctic, Tripati said. The poles are the first regions on Earth to respond to any global climate change; in some sense, the Arctic serves as the proverbial canary in the coal mine, the first warning sign of fast-approaching danger.

Ice sheets and sea ice in polar regions reflect incoming solar radiation to cool the Earth – a phenomenon that makes the poles incredibly sensitive to variations in climate, she said. An increase in Arctic temperatures would not only cause the ice sheets to melt but would also result in the exposed land and ocean absorbing significantly more incoming solar energy and further heating the planet.

Without a permanent ice cap in the Arctic, global temperatures in the early Pliocene were 2 to 5 degrees Fahrenheit higher than the current global average. This suggests that the carbon dioxide threshold for maintaining year-round Arctic ice may be well below modern levels, Tripati said.

What fossilized shells can tell us about climate

The research was conducted on mollusk fossils collected from Beaver Pond, located in Strathcona Fiord on Ellesmere Island at the northernmost point of Canada, well within the Arctic Circle. Named for the numerous branches discovered there bearing beaver teeth marks that have lasted for millions of years, Beaver Pond has proven to be a treasure trove of fossilized plant and animal specimens that remain remarkably well preserved within a peat layer encased in ice, Tripati said.

Climate scientists typically determine ancient temperatures by analyzing the composition of core samples drilled miles into the ice sheets of Greenland or Antarctica.

“Ice cores are a remarkable archive of past climate change because they can give us direct insights into how the poles have responded to variations in past greenhouse gas levels,” Tripati said. “However, ice core data is available for only the past 800,000 years, during which carbon dioxide levels were never above 280 to 300 parts per million. To understand environmental change for earlier time periods in Earth’s history when carbon dioxide levels were near 400 parts per million, we have to rely on other archives.”

By measuring the isotopic content of oxygen in a combination of fossilized mollusk and plant samples, it is possible to determine the temperature at which the specimens originally formed, Tripati said. While this method enables climate reconstructions dating back millions of years without the need for ice core samples, it is uncommon to find a site that contains both plant and shell specimens from the same time and place.
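
Oxygen-isotope palaeothermometry works because the partitioning of oxygen-18 between water and carbonate depends on temperature. The sketch below applies one classic empirical carbonate-water calibration purely for illustration; it is not the calibration or data used in this study, and the delta values are invented.

```python
# Illustrative oxygen-isotope palaeotemperature estimate using a classic
# Epstein-type carbonate-water calibration. The calibration choice and the
# delta values below are assumptions for demonstration, not the study's data.

def carbonate_temp_c(d18o_carbonate: float, d18o_water: float) -> float:
    """T (deg C) ~ 16.5 - 4.3*(dc - dw) + 0.14*(dc - dw)**2 (one common calibration)."""
    diff = d18o_carbonate - d18o_water
    return 16.5 - 4.3 * diff + 0.14 * diff ** 2

# Hypothetical shell and ambient-water compositions (per mil):
print(round(carbonate_temp_c(d18o_carbonate=-14.0, d18o_water=-15.0), 1))  # ~12.3 deg C
```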

Additionally, Tripati and her co-authors have pioneered a new method for measuring past temperature using only the calcium carbonate found in fossilized shells. Determining how much of the rarest isotopes of carbon and oxygen are present in the mollusk sample yields results consistent with the original method, which required an associated plant specimen.

Conclusions drawn from the two techniques used in this study also agree with three entirely different approaches used in a recently published study by several of the co-authors to determine the average temperatures at the same site. Given the consistency among these distinct approaches, the new method can be considered a reliable technique for use on samples from a variety of time periods and locations, Tripati said.

Samples were collected from Beaver Pond by co-author Natalia Rybczynski, a paleobiologist at the Canadian Museum of Nature and adjunct research professor at Carleton University.