Study shows deepwater oil plume in Gulf degraded by microbes

In the aftermath of the explosion of BP’s Deepwater Horizon drilling rig in the Gulf of Mexico, a dispersed oil plume formed at depths between 3,600 and 4,000 feet, extending some 10 miles out from the wellhead. An intensive study by scientists at Lawrence Berkeley National Laboratory (Berkeley Lab) found that microbial activity, spearheaded by a new and as-yet-unclassified species, degraded the oil much faster than anticipated. This degradation appears to have taken place without a significant level of oxygen depletion.

“Our findings show that the influx of oil profoundly altered the microbial community by significantly stimulating deep-sea psychrophilic (cold temperature) gamma-proteobacteria that are closely related to known petroleum-degrading microbes,” says Terry Hazen, a microbial ecologist with Berkeley Lab’s Earth Sciences Division and principal investigator with the Energy Biosciences Institute, who led this study. “This enrichment of psychrophilic petroleum degraders with their rapid oil biodegradation rates appears to be one of the major mechanisms behind the rapid decline of the deepwater dispersed oil plume that has been observed.”

The uncontrolled oil blowout in the Gulf of Mexico from BP’s deepwater well was the deepest and one of the largest oil leaks in history. The extreme depths in the water column and the magnitude of this event posed a great many questions. In addition, to prevent large amounts of the highly flammable Gulf light crude from reaching the surface, BP deployed an unprecedented quantity of the commercial oil dispersant COREXIT 9500 at the wellhead, creating a plume of micron-sized petroleum particles. Although the environmental effects of COREXIT have been studied in surface water applications for more than a decade, its potential impact and effectiveness in the deep waters of the Gulf marine ecosystem were unknown.

Analysis by Hazen and his colleagues of microbial genes in the dispersed oil plume revealed a variety of hydrocarbon-degraders, some of which were strongly correlated with the concentration changes of various oil contaminants. Analysis of changes in the oil composition as the plume extended from the wellhead pointed to faster than expected biodegradation rates with the half-life of alkanes ranging from 1.2 to 6.1 days.
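
To put those half-lives in perspective, first-order decay means the remaining fraction falls by half over each half-life. The following sketch (with an arbitrary 10-day window; the 1.2- and 6.1-day values are simply the endpoints of the range reported above) illustrates the arithmetic:

    # Sketch: fraction of an alkane remaining after t days under first-order decay.
    # The half-lives are the endpoints of the 1.2-6.1 day range cited above;
    # the 10-day window is an arbitrary illustration, not a value from the study.
    def fraction_remaining(t_days, half_life_days):
        return 0.5 ** (t_days / half_life_days)

    for half_life in (1.2, 6.1):
        left = fraction_remaining(10.0, half_life)
        print(f"half-life {half_life} d: {left:.1%} remains after 10 days")
    # Prints roughly 0.3% for the 1.2-day half-life and 32% for the 6.1-day half-life.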

“Our findings, which provide the first data ever on microbial activity from a deepwater dispersed oil plume, suggest that a great potential for intrinsic bioremediation of oil plumes exists in the deep-sea,” Hazen says. “These findings also show that psychrophilic oil-degrading microbial populations and their associated microbial communities play a significant role in controlling the ultimate fates and consequences of deep-sea oil plumes in the Gulf of Mexico.”

The results of this research were reported in the journal Science (August 26, 2010 on-line) in a paper titled “Deep-sea oil plume enriches Indigenous oil-degrading bacteria.” Co-authoring the paper with Hazen were Eric Dubinsky, Todd DeSantis, Gary Andersen, Yvette Piceno, Navjeet Singh, Janet Jansson, Alexander Probst, Sharon Borglin, Julian Fortney, William Stringfellow, Markus Bill, Mark Conrad, Lauren Tom, Krystle Chavarria, Thana Alusi, Regina Lamendella, Dominique Joyner, Chelsea Spier, Jacob Baelum, Manfred Auer, Marcin Zemla, Romy Chakraborty, Eric Sonnenthal, Patrik D’haeseleer, Hoi-Ying Holman, Shariff Osman, Zhenmei Lu, Joy Van Nostrand, Ye Deng, Jizhong Zhou and Olivia Mason.

Hazen and his colleagues began their study on May 25, 2010.

At that time, the deep reaches of the Gulf of Mexico were a relatively unexplored microbial habitat, where temperatures hover around 5 degrees Celsius, the pressure is enormous, and there is normally little carbon present.

“We deployed on two ships to determine the physical, chemical and microbiological properties of the deepwater oil plume,” Hazen says. “The oil escaping from the damaged wellhead represented an enormous carbon input to the water column ecosystem and while we suspected that hydrocarbon components in the oil could potentially serve as a carbon substrate for deep-sea microbes, scientific data was needed for informed decisions.”

Hazen, who has studied numerous oil-spill sites in the past, is the leader of the Ecology Department and Center for Environmental Biotechnology at Berkeley Lab’s Earth Sciences Division. He conducted this research under an existing grant he holds with the Energy Biosciences Institute (EBI) to study microbial enhanced hydrocarbon recovery. EBI is a partnership led by the University of California (UC) Berkeley and including Berkeley Lab and the University of Illinois that is funded by a $500 million, 10-year grant from BP.

Results in the Science paper are based on the analysis of more than 200 samples collected from 17 deepwater sites between May 25 and June 2, 2010. Sample analysis was boosted by the use of the latest edition of the award-winning Berkeley Lab PhyloChip – a unique credit-card-sized DNA-based microarray that can be used to quickly, accurately and comprehensively detect the presence of up to 50,000 different species of bacteria and archaea in a single sample from any environmental source, without the need for culturing. Use of the PhyloChip enabled Hazen and his colleagues to determine that the dominant microbe in the oil plume is a new species, closely related to members of the Oceanospirillales, particularly Oleispira antarctica and Oceaniserpentilla haliotis.

Hazen and his colleagues attribute the faster than expected rates of oil biodegradation at the 5 degrees Celsius temperature in part to the nature of Gulf light crude, which contains a large volatile component that is more biodegradable. The use of the COREXIT dispersant may have also accelerated biodegradation because of the small size of the oil particles and the low overall concentrations of oil in the plume. In addition, frequent episodic oil leaks from natural seeps in the Gulf seabed may have led to adaptations over long periods of time by the deep-sea microbial community that speed up hydrocarbon degradation rates.

One of the concerns raised about microbial degradation of the oil in a deepwater plume is that the microbes would also be consuming large portions of oxygen in the plume, creating so-called “dead zones” in the water column where life cannot be sustained. In their study, the Berkeley Lab researchers found that oxygen saturation outside the plume was 67 percent, while within the plume it was 59 percent.

“The low concentrations of iron in seawater may have prevented oxygen concentrations dropping more precipitously from biodegradation demand on the petroleum, since many hydrocarbon-degrading enzymes have iron as a component,” Hazen says. “There’s not enough iron to form more of these enzymes, which would degrade the carbon faster but also consume more oxygen.”

Researchers find a ‘great fizz’ of carbon dioxide at the end of the last ice age

Imagine loosening the screw-top of a soda bottle and hearing the carbon dioxide begin to escape. Then imagine taking the cap off quickly, and seeing the beverage foam and fizz out of the bottle. Then, imagine the pressure equalizing and the beverage being ready to drink.

Rutgers marine scientist Elisabeth Sikes and her colleagues say that something very similar happened on a grand scale over a 1,000-year period after the end of the last ice age – or glaciation, as scientists call it.

According to a paper published recently in the journal Nature, the last ice age featured a decrease in the amount of atmospheric carbon dioxide and an increase in atmospheric carbon 14, the radioactive isotope that scientists use to date everything from shells to trees.

In recent years, other researchers have suggested that some of that carbon dioxide flowed back into the northern hemisphere rather than being entirely released into the atmosphere in the southern hemisphere.

Sikes and her colleagues disagree. Their data, taken from cores of ocean sediment pulled up from 600 meters to 1,200 meters below the South Pacific and Southern Ocean, suggest that this “de-gassing” was regional, not global. This has important implications for understanding what controls where and how CO2 comes out of the ocean, and how fast – or, to put it another way, what tightens or loosens the bottle cap.

Carbon dioxide and carbon 14 in the atmosphere and ocean are on opposite ends of an environmental pulley. When the level of carbon dioxide in the atmosphere increases, the level of carbon 14 drops, and vice versa. That’s chemistry and ocean circulation. Biology also helps, because photosynthesizing organisms use carbon dioxide, then die and take it with them to the bottom. During the last ice age, the level of carbon dioxide in the atmosphere was lower because much of it was trapped in the bottom of the oceans.

The ventilation of the deep Southern Ocean – the circulation of oxygen through the deep waters – slowed considerably during the last ice age, causing carbon dioxide to build up. Sikes and her co-authors report that, as the ice began to melt, the oceanic bottle cap began to loosen, and the carbon dioxide began to leak back into the atmosphere. Then, as warming intensified, the cap came off, and the carbon dioxide escaped so quickly, and so thoroughly, that Sikes and her colleagues could find very little trace of it in the carbon 14 they examined in their samples.

Eventually, just like the carbonated drink in a bottle, equilibrium was established between the carbon dioxide in the atmosphere and the carbon dioxide in the ocean. The carbon dioxide level in the atmosphere has been rising for the past 200 years, pushing the levels in the ocean up. Human activity is responsible for that rise and the rise of other “greenhouse gases.” Some people have suggested we can pull carbon dioxide out of the atmosphere and force it back down to the bottom of the oceans by manipulating the biology – growing algae, for instance, which would increase photosynthesis and send carbon dioxide to the bottom when the organisms die. But Sikes’ results suggest that global warming could eventually result in another great fizz.

Geo-engineering and sea-level rise over the 21st century

Newly published findings by an international group of scientists from England, China and Denmark suggest that sea level will likely be 30-70 centimetres higher by 2100 than at the start of the century, even if all but the most aggressive geo-engineering schemes are undertaken to mitigate the effects of global warming and greenhouse gas emissions are stringently controlled.

“Rising sea levels caused by global warming are likely to affect around 150 million people living in low-lying coastal areas, including some of the world’s largest cities,” explained Dr Svetlana Jevrejeva of the National Oceanography Centre.

Most scientists agree that anthropogenic carbon dioxide emissions contribute greatly to global warming, and that these emissions need to be controlled if damaging future impacts such as sea-level rise are to be averted. But if we fail to do so, is there a ‘Plan B’?

Scientists have proposed ways of ‘geo-engineering’ the Earth system to tackle global warming, thereby reducing its impact on both of the main contributors to sea-level rise: thermal expansion of ocean water and melting of glaciers and ice sheets. Jevrejeva and her colleagues have modelled sea level over the 21st century under various geo-engineering schemes and carbon dioxide emission scenarios.

“We used 300 years of tide gauge measurements to reconstruct how sea level responded historically to changes in the amount of heat reaching the Earth from the Sun, the cooling effects of volcanic eruptions, and past human activities,” said Jevrejeva. “We then used this information to simulate sea level under geo-engineering schemes over the next 100 years.”
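
The paper’s statistical model is not spelled out here, but the general semi-empirical approach it describes – calibrate how sea level has historically responded to climate forcing, then integrate that response forward under a scenario – can be sketched as follows. The model form (rate of rise proportional to warming above an equilibrium temperature) and every number in the sketch are illustrative assumptions, not values from the study:

    # Minimal sketch of a semi-empirical sea-level projection: calibrate a response,
    # then integrate it forward under an assumed warming scenario. The relation
    # dH/dt = a * (T - T0) and all coefficients here are illustrative assumptions,
    # not the model or parameters used by Jevrejeva and colleagues.
    import numpy as np

    a = 3.4        # assumed sensitivity, mm of rise per year per degree C
    T0 = -0.5      # assumed equilibrium temperature anomaly, degrees C

    years = np.arange(2000, 2101)
    temps = np.linspace(0.6, 2.0, years.size)   # assumed warming path, degrees C

    sea_level_mm = np.cumsum(a * (temps - T0))  # yearly Euler integration of dH/dt
    print(f"Projected rise 2000-2100: {sea_level_mm[-1] / 10:.0f} cm")  # ~62 cm

With these made-up numbers the sketch lands inside the 30-70 centimetre range quoted above; the projection is driven almost entirely by the assumed warming path, which is the lever the geo-engineering scenarios act on.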

Changes in temperature predicted to result from increased atmospheric carbon dioxide or geo-engineering are large compared with those caused by volcanism over the last 100,000 years or by changes in the amount of the Sun’s energy reaching the Earth over the last 8000 years.

“Natural sea-level variations caused by extreme events such as severe volcanic eruptions over the past several thousand years were generally much smaller than those caused by anthropogenic carbon dioxide emissions or predicted under effective geo-engineering schemes,” said Jevrejeva.

The researchers’ simulations show that injections of sulfur dioxide particles into the upper atmosphere, equivalent to a major volcanic eruption such as that of Mt Pinatubo every 18 months, would reduce temperatures and delay sea-level rise by 40-80 years. Maintaining such an aerosol cloak could keep sea level close to what it was in 1990.

However, use of sulfur dioxide injection would be costly and also risky because its effects on ecosystems and the climate system are poorly understood.

“We simply do not know how the Earth system would deal with such large-scale geo-engineering action,” said Jevrejeva.

Large mirrors orbiting the Earth could deflect more of the Sun’s energy back out to space, reducing temperatures and helping to control sea level, but the logistics and engineering challenges of such a scheme are daunting.

The researchers argue that perhaps the least risky and most desirable way of limiting sea-level rise is bioenergy with carbon storage (BECS). Biofuel crops could be grown on a large scale, and carbon dioxide released during their combustion or fermentation could be captured, and the carbon stored as biochar in the soil or in geological storage sites.

BECS has some advantages over chemical capture of carbon dioxide from the atmosphere, which requires an energy source, although both approaches could eventually reduce atmospheric carbon dioxide to pre-industrial levels, according to the new simulations.

“Substituting geo-engineering for greenhouse emission control would be to burden future generations with enormous risk,” said Jevrejeva.

Ancient microbes responsible for breathing life into ocean ‘deserts’

The orange cells in this microscope image are Synechococcus, a unicellular cyanobacterium only about 1 µm in size. Organisms like Synechococcus were responsible for pumping oxygen into the environment 2.5 billion years ago. – Susanne Neuer/Amy Hansen

More than two and a half billion years ago, Earth differed greatly from our modern environment, specifically in respect to the composition of gases in the atmosphere and the nature of the life forms inhabiting its surface. While today’s atmosphere consists of about 21 percent oxygen, the ancient atmosphere contained almost no oxygen. Life was limited to unicellular organisms. The complex eukaryotic life we are familiar with – animals, including humans – was not possible in an environment devoid of oxygen.

The life-supporting atmosphere Earth’s inhabitants currently enjoy did not develop overnight. On the most basic level, biological activity in the ocean has shaped the oxygen concentrations in the atmosphere over the last few billion years. In a paper published today by Nature Geoscience online, Arizona State University researchers Brian Kendall and Ariel Anbar, together with colleagues at other institutions, show that “oxygen oases” in the surface ocean were sites of significant oxygen production long before the breathing gas began to accumulate in the atmosphere.

By the close of this period, Earth witnessed the emergence of microbes known as cyanobacteria. These organisms captured sunlight to produce energy. In the process, they altered Earth’s atmosphere through the production of oxygen – a waste product to them, but essential to later life. This oxygen entered into the seawater, and from there some of it escaped into the atmosphere.

“Our research shows that oxygen accumulation on Earth first began to occur in surface ocean regions near the continents where the nutrient supply would have been the highest,” explains Kendall, a postdoctoral research associate at the School of Earth and Space Exploration in ASU’s College of Liberal Arts and Sciences. “The evidence suggests that oxygen production in the oceans was vigorous in some locations at least 100 million years before it accumulated in the atmosphere. Photosynthetic production of oxygen by cyanobacteria is the simplest explanation.”

The idea of “oxygen oases,” or regions of initial oxygen accumulation in the surface ocean, was hypothesized decades ago. However, it is only in the past few years that compelling geochemical evidence has been presented for the presence of dissolved oxygen in the surface ocean 2.5 billion years ago, prior to the first major accumulation of oxygen in the atmosphere (known as the Great Oxidation Event).

Kendall’s work is the latest in a series of recent studies by a collaborative team of researchers from ASU; University of California, Riverside; and University of Maryland that point to the early rise of oxygen in the oceans. Together with colleagues from the University of Washington and the University of Alberta, this team first presented evidence for the presence of dissolved oxygen in these oceans in a series of four Science papers over the past few years. These papers focused on a geologic formation called the Mt. McRae Shale from Western Australia. One of these papers, led by the ASU team, presented geochemical profiles showing an abundance of two redox-sensitive elements – rhenium (Re) and molybdenum (Mo) – implying that small amounts of oxygen mobilized these metals from the crust on land or in the ocean and transported them through an oxic surface ocean to deeper anoxic waters, where the metals were buried in organic-rich sediments. Kendall participated in this research while a postdoctoral fellow at the University of Alberta.

Kendall’s goal in the new project was to look for evidence of dissolved oxygen in another location. He wanted to see if the geochemical evidence from the Mt. McRae Shale in Western Australia would be found in similarly-aged rocks from South Africa. Those rocks were obtained in a project supported by the Agouron Institute. Kendall’s research was supported by grants from NASA and the National Science Foundation.

What Kendall discovered was a unique relationship of high rhenium and low molybdenum enrichments in the samples from South Africa, pointing to the presence of dissolved oxygen on the seafloor itself.

“In South Africa, samples from the continental slope beneath the shallower platform were thought to be deposited at water depths too deep for photosynthesis. So it was a big surprise that we found evidence of dissolved oxygen on the seafloor at these depths. This discovery suggests that oxygen was produced at the surface in large enough quantities that some oxygen survived as it was mixed to greater depths. That implies a significantly larger amount of oxygen production and accumulation in ‘oxygen oases’ than was previously realized.”

A key contribution to this study came from Christopher Reinhard and Timothy Lyons, collaborators at the University of California, Riverside, and Simon Poulton at Newcastle University, who found that the chemistry of iron (Fe) in the same shales is also consistent with the presence of dissolved oxygen.

“It was especially satisfying to see two different geochemical methods – rhenium and molybdenum abundances and Fe chemistry – independently tell the same story,” Kendall noted.

Evidence that the atmosphere contained at most minute amounts of oxygen came from measurements of the relative abundances of sulfur (S) isotopes. These measurements were performed by Alan Kaufman, a collaborator at the University of Maryland.

“Research like Brian’s on the co-evolution of Earth’s atmosphere, oceans and biosphere is not only important for unraveling key events in Earth history, it also has broad relevance to our search for life on other planets,” explains Professor Ariel Anbar, director of the Astrobiology Program at ASU and Kendall’s postdoctoral mentor. “One of the ways we will look for life on planets orbiting other stars is to look for oxygen in their atmospheres. So we want to know how the rise of oxygen relates to the emergence of photosynthesis.”

On a more practical level, Anbar observes that the research also connects to emerging concerns about our own planet. “Recent research in the modern oceans reveals that the amount of oxygen is decreasing in some places,” he explains. “Some suspect this decrease is tied to global warming. One of the ways we might figure that out is to reconstruct ocean oxygen content on the slopes of the seafloor in recent history. So the same techniques that Brian is advancing and applying to billion-year-old rocks might be used to understand how humans are changing the environment today.”

Is the ice in the Arctic Ocean getting thinner and thinner?

Polar 5 tows a probe for sea ice thickness measurements — the so-called EM-Bird — on a test flight. – Christian Haas, University of Alberta / Alfred Wegener Institute

The extent of the sea ice in the Arctic will reach its annual minimum in September. Forecasts indicate that it will not be as low as in 2007, the year of the smallest area covered by sea ice since satellites started recording such data. Nevertheless, sea ice physicists at the Alfred Wegener Institute are concerned about the long-term equilibrium in the Arctic Ocean.

They have indications that the mass of sea ice is dwindling because its thickness is declining. To substantiate this, they are currently measuring the ice thickness north and east of Greenland using the research aircraft Polar 5. The objective of the roughly one-week campaign is to determine the export of sea ice from the Arctic. Around a third to half of the freshwater export from the Arctic Ocean takes place in this way – a major driving factor in the global ocean current system.

The question of when the Arctic will be ice-free in the summer has been preoccupying the sea ice researchers headed by Prof. Dr. Rüdiger Gerdes from the Alfred Wegener Institute for Polar and Marine Research in the Helmholtz Association for a long time now. Satellites have been recording the extent of the Arctic ice for more than 30 years. In addition to the area covered, the thickness of the ice is a decisive factor in assessing how much sea ice there is. However, the thickness can only be determined locally, for example by means of the so-called EM-Bird, an electromagnetic measuring device which helicopters or planes tow over the ice. For Gerdes this is a very special job because he usually models his forecasts on his home computer. The campaign with the research aircraft Polar 5 of the Alfred Wegener Institute now takes him on an expedition in the Arctic for the first time. “I’m very keen on seeing the results of the sea ice thickness measurements,” says Gerdes. “Only when we know the distribution of ice of varying thickness can we calculate how much freshwater is carried out of the Arctic Ocean via ice.”

About 3000 cubic kilometres of ice drift out of the Arctic Ocean every year, corresponding to around 2700 billion tons. The ice exports freshwater that reaches the Arctic Ocean via rivers and precipitation. This maintains its salt concentration, which has been constant over the long term. The temperature rise observed worldwide is especially pronounced in the Arctic latitudes. Researchers have been observing that the ice is getting thinner and thinner for several years now. As a result, it stores and exports less freshwater and the salt concentration (also referred to as salinity) of the Arctic Ocean declines. On the one hand, this influences all living things that have adapted to the local conditions. On the other hand, changes in salinity also have an impact on current patterns of global ocean circulation and thus on meridional heat transport. In the TIFAX (Thick Ice Feeding Arctic Export) measurement campaign the researchers are primarily interested in ice that is several years old, several metres thick and occurs predominantly on the northern coast of Greenland. “Taking off on the measurement flights from Station Nord here is a special adventure,” reports Gerdes from one of the northernmost measuring stations in the world. “Flying through virtually unsettled regions of the Arctic in the high-tech research aircraft is a stark contrast to my modelling work on the computer.”
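
The figure of roughly 2700 billion tons quoted above follows directly from the ice volume and a typical sea-ice density (about 900 kilograms per cubic metre, an assumed round value):

    # Sketch: converting the annual Arctic ice export from volume to mass.
    # 900 kg/m^3 is an assumed typical density for sea ice.
    ICE_DENSITY_KG_PER_M3 = 900.0

    volume_km3 = 3000.0                      # annual export cited above
    volume_m3 = volume_km3 * 1e9             # 1 km^3 = 1e9 m^3
    mass_tonnes = volume_m3 * ICE_DENSITY_KG_PER_M3 / 1000.0   # kg -> metric tons

    print(f"{mass_tonnes:.1e} metric tons")  # 2.7e+12, i.e. about 2700 billion tons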

Surfing for earthquakes

A better understanding of the ground beneath our feet will result from research by seismologists and Rapid, a group of computer scientists at the University of Edinburgh. The Earth’s structure controls how earthquakes travel and the damage they can cause. A clear picture of this structure would be extremely valuable to earthquake planners, but it requires the analysis of huge amounts of data. The Rapid team has developed a system that performs the seismologists’ data-crunching, and has made it easy to use by relying on an interface familiar to all scientists – a web browser.

Seismologists measure vibrations in the Earth at hundreds of observatories across Europe, which allows them to study earthquakes as they travel across countries and continents. By measuring the speed and strength of the vibrations at different sites, deductions can be made about the type of ground they have traveled through. From this information, the structure of the Earth can be reconstructed. The problem with earthquakes is that they don’t occur when and where you need them.

Earthquakes aren’t the only things that cause vibrations: road traffic, waves pounding on the beach and even wind and thunder can cause detectable vibrations. These vibrations – known as noise – may lack the strength of earthquakes, but they compensate by being available in huge numbers. If enough noise is analyzed, it is possible to build up information about the Earth’s structure. The analysis is not without problems. “You can use noise to analyze the Earth’s structure, but you need to analyze huge amounts of data and that’s nearly impossible on standard [computers],” explained Andreas Rietbrock, who helped develop the new system with the Rapid team and is Professor of Seismology at the University of Liverpool.
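
The principle behind using noise – not a description of Rapid’s actual processing chain – is that cross-correlating long recordings from two stations picks out the travel time of waves between them, as if one station were a source and the other a receiver. A toy illustration with synthetic data:

    # Toy illustration of the noise cross-correlation idea using synthetic data:
    # the cross-correlation of recordings at two stations peaks at the travel-time
    # offset between them. This is not the Rapid/Orfeus processing chain.
    import numpy as np

    rng = np.random.default_rng(0)
    fs = 50                                   # assumed sampling rate, Hz
    n = fs * 60                               # one minute of synthetic "noise"
    source = rng.standard_normal(n)

    delay = 120                               # samples: station B sees the wavefield 2.4 s after A
    station_a = source + 0.5 * rng.standard_normal(n)
    station_b = np.roll(source, delay) + 0.5 * rng.standard_normal(n)

    xcorr = np.correlate(station_b, station_a, mode="full")
    lag = np.argmax(xcorr) - (n - 1)          # lag (in samples) of the correlation peak
    print(f"estimated travel-time offset: {lag / fs:.2f} s")   # ~2.40 s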

The Orfeus foundation collects seismic data from around Europe and makes it available for analysis through websites like the Earthquake Data Portal (www.seismicportal.eu). Only a few organizations have the resources and technical know-how needed to process this vast store of data. Orfeus asked the Rapid team to develop a system that would allow any seismologist to analyze seismic data using powerful computers located around Europe. “We don’t want [seismologists] to have to study how to access [remote] computer power and data,” said Torild van Eck, Secretary General of Orfeus. “Rapid is, for us, a tool to hide the tricky part of getting, steering and manipulating data.”

The Rapid team has developed a reputation for helping scientists use their data, and has worked with everyone from chemists to medics and biologists to engineers. “It’s been great working with the seismologists, because as a community they’re very open to trying out new ways of working. And they have really pushed the boundaries of our technology,” said Jano van Hemert, leader of the Rapid team. For Orfeus, the team developed a web portal. This takes all the complex computing needed for seismic analysis and hides it behind a standard web browser. By presenting all of the analysis tools in such a familiar environment, any seismologist – even the most technophobic – can use the system. One of the first applications for the Rapid web portal is to allow seismologists to study noise for the analysis of the Earth’s structure. Rapid will build on this work with help from a grant from the UK’s Natural Environment Research Council, which has provided funding to explore whether it is possible to predict earthquakes and volcanic eruptions.

The Rapid web portal allows even the smallest seismology groups to perform the kind of analysis that was previously limited to organizations that could afford their own supercomputers. By making this analysis easy, Rapid and Orfeus have brought complex research programs into the hands of many more seismologists. More seismologists working together means that results are produced faster, and that means we could soon benefit from a better understanding of the ground beneath our feet.

Big quakes more frequent than thought on San Andreas fault

Earthquakes have rocked the powerful San Andreas fault that splits California far more often than previously thought, according to UC Irvine and Arizona State University researchers who have charted temblors there stretching back 700 years.

The findings, to be published in the Sept. 1 issue of Geology, conclude that large ruptures have occurred on the Carrizo Plain portion of the fault – about 100 miles northwest of Los Angeles – as often as every 45 to 144 years. But the last big quake was in 1857, more than 150 years ago.

UCI researchers said that while it’s possible the fault is experiencing a natural lull, they think it’s more likely a major quake could happen soon.

“If you’re waiting for somebody to tell you when we’re close to the next San Andreas earthquake, just look at the data,” said UCI seismologist Lisa Grant Ludwig, principal investigator on the study.

An associate professor of public health, she hopes the findings will serve as a wake-up call to Californians who’ve grown complacent about the risk of major earthquakes. She said the new data “puts the exclamation point” on the need for state residents and policymakers to be prepared.

For individuals, that means having ample water and other supplies on hand, safeguarding possessions in advance, and establishing family emergency plans. For regulators, Ludwig advocates new policies requiring earthquake risk signs on unsafe buildings and forcing inspectors in home-sale transactions to disclose degrees of risk.

Sinan Akciz, UCI assistant project scientist and the study’s lead author, was part of a team that collected charcoal samples from carefully dug trenches in the Carrizo Plain, along with earlier samples that Ludwig had stored for decades in her garage. The charcoal forms naturally after wildfires, then is washed into the plain by rains, building up over the centuries in layers that are fragmented during earthquakes. Akciz dated the samples via recently developed radiocarbon techniques to determine time frames for six major earthquakes, the earliest occurring about 1300 A.D.
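
Radiocarbon dating in general works by measuring how much carbon-14 remains in a sample relative to the modern atmospheric level; a conventional radiocarbon age (before calibration to calendar years) follows from the Libby decay constant. The sketch below shows only that generic calculation – the measured fraction is made up, and this is not the specific technique Akciz used:

    # Generic sketch of the conventional radiocarbon-age calculation (pre-calibration).
    # By convention the age uses the Libby mean life of 8033 years (5568-year
    # half-life / ln 2). The 0.92 fraction is an illustrative made-up measurement.
    import math

    LIBBY_MEAN_LIFE_YEARS = 8033.0

    def conventional_age(fraction_modern):
        """Years before present (BP) from the measured 14C fraction relative to modern."""
        return -LIBBY_MEAN_LIFE_YEARS * math.log(fraction_modern)

    print(f"{conventional_age(0.92):.0f} yr BP")   # ~670 yr BP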

The field data confirmed what Ludwig had long suspected: The widely accepted belief that a major earthquake happened on the fault every 250 to 400 years was inaccurate. Not all quakes were as strong as originally thought, either; but they all packed a wallop, ranging between magnitude 6.5 and 7.9.

“What we know is for the last 700 years, earthquakes on the southern San Andreas fault have been much more frequent than everyone thought,” said Akciz. “Data presented here contradict previously published reports.”

“We’ve learned that earthquake recurrence along the San Andreas fault is complex,” agreed co-author Ramon Arrowsmith, a geology professor at Arizona State. “While earthquakes may be more frequent, they may also be smaller. That’s a bit of good news to offset the bad.”

Ken Hudnut, a geophysicist with the U.S. Geological Survey, said the research is significant because it revises long-standing concepts about well-spaced, extremely strong quakes on the 810-mile fault.

“I believe they’ve done a really careful job,” he said, adding that the work was rigorously field-checked by many scientists. “When people come up with new results challenging old notions, others need to see the evidence for themselves.”

Upending previous San Andreas fault modeling is part of a broader shift in seismic research. Experts are increasingly tracking webs of trigger points, smaller faults and more frequent quakes rather than focusing on large, single faults where they assumed there would be well-spaced shakers.

As for the 153-year hiatus since the magnitude 7.8 Fort Tejon quake, Ludwig said: “People should not stick their heads in the ground. There are storm clouds gathered on the horizon. Does that mean it’s definitely going to rain? No, but when you have that many clouds, you think, ‘I’m going to take my umbrella with me today.’ That’s what this research does: It gives us a chance to prepare.”

Geologists revisit the Great Oxygenation Event

The new understanding of early ocean chemistry solves some mysteries about the evolution of early lifeforms, such as the nature of the spiny fossils found in the Doushantuo formation in southern China. The Doushantuo formation, laid down during the Ediacaran period, contains some of the earliest known fossils of multicellular animals. – Stephen Dornbos/University of Wisconsin-Milwaukee

In “The Sign of the Four” Sherlock Holmes tells Watson he has written a monograph on 140 forms of cigar-, cigarette-, and pipe-tobacco, “with colored plates illustrating the difference in the ash.” He finds the ash invaluable for the identification of miscreants who happen to smoke during the commission of a crime.

But Sherlock Holmes and his cigarette ash and pipe dottle don’t have a patch on geologists and the “redox proxies” from which they deduce chemical conditions early in Earth’s history.

Redox proxies, such as the ratio of chromium isotopes in banded iron formations or the ratio of isotopes in sulfide particles trapped in diamonds, tell geologists indirectly whether the Earth’s atmosphere and oceans were reducing (inclined to give away electrons to other atoms) or oxidizing (inclined to glom onto them).

It makes all the difference: the bacterium that causes botulism and the methanogens that make swamp gas are anaerobes and thrive in reducing conditions. Badgers and butterflies, on the other hand, are aerobes and require oxygen to keep going.

In the July issue of Nature Geoscience Washington University in St. Louis geochemist David Fike gives an unusually candid account of the difficulties his community faces in correctly interpreting redox proxies, issuing a call for denser sampling and more judicious reading of rock samples.

The world ocean


Fike, assistant professor of earth and planetary sciences in Arts & Sciences, focuses on the dramatic change from anoxic to oxygenated conditions in the world’s oceans that preceded the Ediacaran period (from 635 to 542 million years ago) when the first multicellular animals appeared.

If you look in a textbook, you’ll find a story that goes something like this: Four billion years ago the earth’s atmosphere was a deadly mixture of gases spewed forth by volcanoes, such as nitrogen and its oxides, carbon dioxide, methane, ammonia, sulfur dioxide and hydrogen sulfide.

The oceans that formed from condensing water vapor (or incoming comets) were reservoirs of dissolved iron, pumped through hydrothermal vents on the ocean floor.

Then about 2.7 billion years ago, cyanobacteria, which have been called the most self-sufficient organisms on the planet because they can both photosynthesize and fix nitrogen, began bubbling oxygen into the atmosphere and shallow waters.

At first oxygen built up gradually in the atmosphere, but about 2.5 billion years ago there was a sudden spike upward, traditionally called the Great Oxygenation Event.

The oxygen killed off anaerobes that didn’t find refuge in sediments, the deep ocean and other airless environments, and led to the evolution of aerobes that could use oxygen to spark their metabolism.

At roughly the same time iron began to precipitate out of the oceans, forming rocks peculiar to this period called banded iron formations that consist of alternating layers of gray and red rock.

Banded iron formations were created episodically from about 3 billion years ago until 1.8 billion years ago and almost never again.

The usual story is that iron was being swept from the oceans by increasing levels of dissolved oxygen.

And then, another two billion years after the Great Oxygenation Event, multicellular lifeforms finally put in an appearance. The first metazoans, as they are called, were the bizarre Ediacaran fauna, sometimes unflatteringly compared to sacks of mud and quilted mattresses.

The assumption was oxygen levels were now high enough to support something more than a single cell in lonely solitude.

Of course, this story has holes you could drive a truck through.

Why did oxygen levels spike 2.5 billion years ago, and how much oxygen was there in the atmosphere really? Why are banded iron formations made of layers only a few centimeters thick, and why did they stop forming so abruptly? If the oceans were oxygenated 2.5 billion years ago, why did multicellular life delay its appearance for another 2 billion years? And did all these changes really take place at pretty much the same time everywhere on Earth?

The problems arise, says Fike, because scientists don’t have dense enough data to recognize spatial variations in Earth’s geochemical past and because the geochemical proxies are so devilishly hard to interpret.

The world beach
The story started to fall apart in 1998, says Fike, when Don Canfield of Odense University in Denmark suggested that sulfur compounds had also played a role in the transformation of Earth’s chemistry.

Canfield argued that the Great Oxygenation Event actually took place in two steps and that it was sulfides rather than oxygen that removed the iron from deep ocean water.

The first rise in oxygen caused oxidative weathering of rocks on land that delivered sulfates to the ocean through rivers and streams. In the ocean, sulfate-reducing bacteria converted the sulfates to sulfide to gain the energy they needed for daily housekeeping. The dissolved iron combined with the sulfides to form iron sulfide minerals such as pyrite that dropped out of solution.

During the second, much later stage, enough oxygen was generated to sweep the deep ocean of the toxic sulfides, ushering in the era of biological innovation, a.k.a. the mud sacks and quilted mattresses.

These transitions were still discussed as changes in bulk ocean chemistry – just from one anoxic chemistry to another anoxic chemistry.

However, in the July issue of Nature Geoscience, Simon Poulton of the University of Newcastle in England showed that sulfidic water protruded into the ocean only in a narrow wedge along the shorelines of ancient continents. This meant that the water column, instead of being homogeneous, was stratified, with different chemistries in different layers.

So much for the world ocean.

It’s Complicated
“Recent geochemical evidence indicates that, at least locally, ferruginous (iron rich) or even sulphidic (sulfur rich) conditions persisted through the Ediacaran period, long after the Great Oxygenation Event,” Fike says.

“Things are much more complicated than we had supposed.”

“As a community, we don’t have a good sense of the spatial variation of these zones within different bodies of water,” says Fike.

“What’s more, different assessments can arise from the interpretation of different geochemical proxies, from physical separation between different ocean basins, or from the reworking of sediments after deposition,” he continues.

The underlying problem is a low sampling rate. “As we try to unravel these changes in Earth’s history,” Fike says, “we often don’t have 100 different places where we can measure rocks of the same age. We’re stuck with a few samples, and the natural tendency is to take your rocks and extrapolate.”

The only way “to wring order from the chaos,” Fike says, is to develop a full three-dimensional model of the Earth that has enough spatial resolution to wash out bad data.

Mystery of the vanishing acritarchs
“If you map out redox proxies in enough spatial detail, you can tell a beautiful, consistent story that relates environmental change to the paleontological record,” Fike says.

To illustrate, he tells the story of a group of spiny acritarchs, microfossils found in one of the oldest fossil beds on Earth, the Doushantuo formation in south China.

Nobody was really sure what the acritarchs were. Some people thought they were green algae. Others thought they might be dinoflagellates that had evolved spines to avoid predation by animals.

“Scientists looking at the Doushantuo thought they understood what they were seeing,” Fike says. “Oxygen is appearing, the acritarchs are evolving, and this is the start of the big rise in evolution associated with the final oxygen event.”

“But then they noticed that after the big rise in spiny cysts and just when we see evidence for oxygen in the rock record, the acritarchs disappear. And that really doesn’t make sense if you’re evolving new groups because of the increase in oxygen.”

“In 2009 a group of scientists led by Phoebe Cohen of Harvard University inspected acritarchs with transmission electron microscopes and concluded that they are not algae but rather animals, encased in protective cysts that animals form when conditions are not favorable to life,” says Fike.

At the same time a group of scientists (including Fike) led by Chao Li of the University of California Riverside measured redox proxies in several different sections through the formation.

These measurements showed that the Nanhua Basin had had a layered chemical structure with deep iron-rich waters, near-shore wedges of sulfur-rich water and an oxygenated surface.

Both the sulfur- and the iron-rich waters would have been lethal to oxygen-loving species.

A Cautionary Tale


At the same time, Fike acknowledges that spatial variability in redox proxies may make many geologists feel ill at ease, because it might reflect an unusual depositional context or the reworking of the proxy after deposition rather than a significant change in geochemistry.

By way of illustration, he describes a study of Amazonian mud belts, published this year by Robert Aller of Stony Brook University and colleagues in Geochimica et Cosmochimica Acta.

“The Amazon dumps mud rich in organic material into the Atlantic,” Fike says. “The mud is deposited and the oxygen in it is consumed by biological activity, but then a storm churns it up, it gets reoxygenated, and redeposited. And this process happens over and over again.”

By the time the muds become sediments, their chemistry is very different from what it was when they were first deposited.

“The redox indicators for the Amazonian sediments suggest that they were deposited under anoxic, sulfate-poor conditions, but we know they were deposited in well-oxygenated, sulfate-rich marine waters,” Fike writes.

It is as if the murderer had deliberately removed cigar ash and substituted cigarette ash at the scene of the crime.

“Much work remains ahead of us before we can have a true sense of the three-dimensional redox structure of the oceans and how it varied through time,” Fike concludes.

Deep plumes of oil could cause dead zones in the Gulf

A new simulation of oil and methane leaked into the Gulf of Mexico suggests that deep hypoxic zones or “dead zones” could form near the source of the pollution. The research investigates five scenarios of oil and methane plumes at different depths and incorporates an estimated rate of flow from the Deepwater Horizon spill, which released oil and methane gas into the Gulf from April to mid July of this year.

A scientific paper on the research has been accepted for publication by Geophysical Research Letters, a journal of the American Geophysical Union.

Scientists at the National Oceanic and Atmospheric Administration (NOAA) and Princeton University conducted the research. Based on their simulations, they conclude that the ocean hypoxia or toxic concentrations of dissolved oil arising from the Deepwater Horizon blowout are likely to be “locally significant but regionally confined to the northern Gulf of Mexico.”

A hypoxic or “dead” zone is a region of ocean where oxygen levels have dropped too low to support most forms of life, typically because microbes consuming a glut of nutrients in the water use up the local oxygen as they consume the material.

“According to our simulations, these hypoxic areas will be peaking in October,” says study coauthor Robert Hallberg of the NOAA Geophysical Fluid Dynamics Laboratory in Princeton, N.J. “Oxygen drawdown will go away slowly, as the tainted water is mixed with Gulf waters that weren’t affected. We’re estimating a couple of years” before the dead zone has dissipated, he adds.

Although the Princeton-NOAA study was carried out when the flow rate from the Deepwater Horizon spill was still underestimated, the simulated leak lasted longer than did the actual spill. Consequently, says Alistair Adcroft of Princeton University and the NOAA Geophysical Fluid Dynamics Laboratory, another study coauthor, “the overall impact on oxygen turns out to be about the same” as would be expected from the Deepwater Horizon spill.

A seismic triple whammy

Keith Koper, director of the University of Utah Seismograph Stations, helped conduct a study in the journal Nature revealing that a magnitude-8.1 earthquake near Samoa and Tonga in 2009 was one of three powerful quakes that struck within a two-minute period. The quakes triggered tsunamis that killed 192 people. – Remi Barron, University of Utah.

A magnitude-8.1 earthquake and tsunami that killed 192 people last year in Samoa, American Samoa and Tonga actually was a triple whammy: The 8.1 “great earthquake” concealed and triggered two major quakes of magnitude 7.8, seismologists report in the Thursday, Aug. 19, issue of the journal Nature.

“At first, we thought it was one earthquake,” says study co-author Keith Koper, director of the University of Utah Seismograph Stations. “When we looked at the data, it turned out it wasn’t just one great earthquake, but three large earthquakes that happened within two minutes of one another. The two quakes that were hidden by the first quake ended up being responsible for some of the damage and tsunami waves.”

In terms of energy release, the two magnitude-7.8 quakes combined “represent the energy release of another magnitude-8 quake,” says Koper, a seismologist and associate professor of geology and geophysics at the University of Utah. “It was essentially a great earthquake that was triggered. It was not some silly little aftershock.”
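
The statement that two magnitude-7.8 quakes together release roughly the energy of a single magnitude-8 event is consistent with the standard Gutenberg-Richter energy-magnitude relation, log10 E = 1.5 M + 4.8 (E in joules); a quick sketch of that check:

    # Check of "two 7.8s add up to about a magnitude 8" using the Gutenberg-Richter
    # energy-magnitude relation log10(E) = 1.5 * M + 4.8, with E in joules.
    import math

    def energy_joules(magnitude):
        return 10 ** (1.5 * magnitude + 4.8)

    combined = 2 * energy_joules(7.8)
    equivalent_magnitude = (math.log10(combined) - 4.8) / 1.5
    print(f"equivalent magnitude: {equivalent_magnitude:.1f}")   # 8.0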

Another study in the same issue of Nature reportedly refers to the two 7.8 quakes as a single quake. “I realize it looks inconsistent, but sometimes two events that occur quite close in time and space are considered a doublet, or two pieces of one earthquake,” says Koper, who came to Utah this year from St. Louis University.

The quakes on Sept. 29, 2009, generated tsunami waves that varied in height depending on where they struck, but in some places the water reached more than 49 feet above sea level. The disaster killed at least 149 people in Samoa, 34 people in American Samoa and nine on Niuatoputapu, an island in the northern part of Tonga.

Quake Pattern Never Seen Previously

The most important scientific aspect of the quakes was their unprecedented pattern, Koper says. In technical terms, it is the first known case of a large “normal” fault earthquake (the 8.1) occurring on a plate of Earth’s crust beneath the ocean, and then triggering major “thrust” quakes (the 7.8s) in the “subduction zone,” where the oceanic plate is diving or “subducting” beneath a continental plate of Earth’s crust.

Usually the opposite occurs: big “megathrust” quakes on the subduction zone boundary between two plates trigger other quakes on the oceanic plate that is diving or “subducting” under the continental plate.

Thrust quakes are those in which ground is pushed together along a fault, forcing the ground on one side of the fault either under or over ground on the other side. In the southwest Pacific Ocean, the Pacific Plate is moving westward and is thrust under the Tonga block, a “microplate” on the northeast edge of the Australian plate.

During normal quakes, ground is pulled apart along a fault. The magnitude-8.1 quake occurred when the Pacific plate broke at the “outer rise” where it begins to dive westward beneath the Tonga block. “This is the first time a large normal-faulting quake has been shown to trigger large thrust-faulting earthquakes,” says Koper.

By showing that outer-rise normal quakes can trigger subduction-zone quakes, “this study will affect the way earthquake and tsunami hazards are calculated, not just in this region but potentially in other places around the world,” Koper says.

He says all three quakes “contributed to the tsunami, but major components in the tsunami were these 7.8 thrust events.”

All three quakes began 9 to 12 miles deep. The magnitude-8.1 quake lasted 60 seconds. The first magnitude-7.8 quake started sometime between 49 and 89 seconds after the 8.1 quake. The second 7.8 began 90 to 130 seconds after the first quake started.

The National Science Foundation and the U.S. Geological Survey funded the study, which was led by seismologist Thorne Lay of the University of California, Santa Cruz. In addition to Utah’s Koper, other co-authors are seismologists Charles Ammon of Pennsylvania State University, Hiroo Kanamori of the California Institute of Technology, Luis Rivera of the University of Strasbourg in France and Alexander Hutko of the Incorporated Research Institutions for Seismology’s Seattle data center.

A Seismic Detective Story

Scientists became suspicious that the Samoa-Tonga quake wasn’t a single quake when they noticed a discrepancy in so-called “beach balls,” which are graphical depictions of fault motions during a quake.

“It was a real interesting detective story,” says Koper. “When we first looked at this, we knew there were some inconsistencies. We just couldn’t explain the seismograms with one earthquake, so we knew there was a problem. It took us several months to figure it out. We had to do subtle technical modeling of the seismograms.”

A single quake at the location of the magnitude-8.1 quake could not explain the pattern of tsunami waves and how they varied in height in various areas, says Koper.

Also, “almost all the aftershocks were not where the main shock occurred,” he adds. “That’s very uncommon. That was a red flag when I saw that.”

Koper says the first person to suggest more than one quake was Chen Ji, of the University of California, Santa Barbara, who argued at a scientific meeting last December that the Samoa quake hid a separate quake. That prompted the new study.

Koper says the researchers did extensive “waveform modeling” to analyze the properties of quake waves, and concluded the Sept. 29 “quake” really was three quakes.

Not Your Mother’s Subduction Zone Earthquake

The Samoa-Tonga region sits on a plate boundary. The Pacific plate beneath the ocean pushes westward, colliding with and diving beneath the Tongan block. The magnitude-8.1 quake occurred when part of the diving Pacific plate pulled apart and broke as it dived beneath the Tonga block.

“The plate itself broke,” Koper says. “It wasn’t the rubbing of one plate against another. The bending stress [as the Pacific plate dives] got so big that it broke.”

Scientists know of only three previous cases of great earthquakes – those measuring magnitude 8 or more – that happened due to pull-apart or normal faulting within a diving seafloor plate. They were the 1933 Sanriku, Japan, quake (about magnitude 8.4), which killed more than 3,000 people; the 1977 Sumba, Indonesia, quake (8.3), which claimed 189 lives; and the 2007 Kuril Islands, Russia, quake (magnitude 8.1).

Koper says the 2009 Samoa-Tonga quake sequence was “the first time a large normal-faulting quake has been shown to have triggered large thrust-faulting earthquakes on a plate boundary. We didn’t realize these thrust earthquakes could be triggered by a normal earthquake. We’ve had seismometers only 100 years and good observations only the last 50 years, so not enough earthquake cycles have been observed to see this before.”

“The shaking from the 8.1 triggered these two other large [7.8] earthquakes that happened in the normal place: the interface between the subducting Pacific plate and the overriding Tongan block,” he says.

The Tonga subduction zone doesn’t have an extensive history of great earthquakes like the subduction zones where the Pacific plate dives beneath Alaska and Chile. Scientists believe the Pacific plate usually slides under the Tonga block with most of the stress being relieved by moderate quakes and gradual creeping motion – known as aseismic slip – rather than producing great quakes, Koper says.

The 8.1 quake on the subducting Pacific plate may have occurred because the slowly diving rock pulled the rock behind it. Another factor could be an east-west “tear” in the Pacific plate north of Tonga and southwest of Samoa, where the Pacific plate moves west and is not subducting, as it is just to the south.