Six years after the tsunami disaster

Six years after the tsunami disaster of 26 December 2004, the set-up of the German-Indonesian Tsunami Early Warning System for the Indian Ocean (GITEWS) has been completed. The project ends on 31 March 2011; after that, Indonesia will assume sole responsibility for the overall system.

“The innovative technical approach of GITEWS is based on a combination of different sensors, whose central element is the fast and precise detection and analysis of earthquakes, supported by GPS measurements,” says Professor Reinhard Hüttl, Scientific Director of the GFZ German Research Centre for Geosciences. “The seismological evaluation developed at the GFZ, based on the SeisComP3 system, proved so fast and reliable that it has now been installed in over 40 countries.”

A tsunami warning is issued no more than five minutes after a submarine earthquake, based on all the available information from the roughly 300 stations built throughout Indonesia over the past six years. These include seismometers, GPS stations, tide gauges and buoy systems. A tsunami-simulation system converts this information into a situation map that assigns the appropriate warning levels to the affected stretches of coastline. A key outcome of the GITEWS project, however, is that the buoy systems do not contribute to this assessment during the first few minutes. There are therefore plans to move the GITEWS buoys further out into the open ocean and to use them to verify an ocean-wide tsunami that could threaten other countries bordering the Indian Ocean.

The Mentawai quake of 25 October this year, however, also showed the limits of any tsunami warning. The tsunami caused by the earthquake struck the offshore Pagai Islands in the Sunda Arc with full force. The first waves arrived at around the same time as the tsunami alert, which was issued 4 minutes 46 seconds after the quake, and claimed some 500 lives. Several teams of tsunami experts from Japan, Indonesia, Germany and the USA noted in a follow-up analysis that the warning had reached the islands, but that there had been no time to react. For the main island of Sumatra, with the larger cities of Padang and Bengkulu, about 40 minutes elapsed between the warning and the arrival of the first waves; in this case, however, the Pagai Islands acted as an effective shield, preventing the tsunami from reaching the coast of Sumatra.

The important conclusion is that, even with the extremely short warning times possible off Indonesia, the GITEWS system has proven to be technically and organizationally functional. Since September 2007, four tsunami events have been detected and warnings issued for each. The inhabitants of the offshore islands in particular, however, need intensified and improved training on how to act when threatened. This includes not only the correct response to a tsunami alert, but also the correct behaviour before, during and after earthquakes.

Immediately after the disaster of 26 December 2004, the Federal Government of Germany commissioned the Helmholtz Association, represented by the Helmholtz Centre Potsdam – GFZ German Research Centre for Geosciences, to develop and implement an early warning system for tsunamis in the Indian Ocean. The funding of 45 million euros was contributed by the Federal Government from its aid package for flood victims.

A natural phenomenon like the tsunami of 2004 cannot be prevented, and such disasters will continue to claim victims, even with a perfectly working alarm system. But the repercussions of such a natural disaster can be minimized with an early warning system. This is the aim of GITEWS.

Cornstarch might have ended the Gulf spill agony sooner

On May 25th, 2010, the online arm of Upstream, a newspaper for the international oil and gas industry, reported that British Petroleum had started top-kill procedures on the Macondo well in the Gulf of Mexico.

“The company said that the operation, which will pump heavy mud down the wellbore in an attempt to gain control of the oil flow and ultimately kill the well, began at 1 pm CST,” Upstream reported.

The article continued: “Earlier BP Chief Tony Hayward gave the top-kill procedure a 60 percent to 70 percent chance of success.”

Physicists watching the situation with interest were skeptical. One of them, Jonathan Katz, PhD, professor of physics in Arts & Sciences at Washington University in St. Louis, had earlier suggested a simple fix, a change to the mud recipe, that might have altered the odds.

His plan was not adopted and the top kill failed.

After it failed, Katz and colleagues at Lawrence Livermore National Laboratory ran experiments with a scale model of the oil well to test his idea.

Their analysis, in press at Physical Review Letters, a peer-reviewed scientific journal that focuses on the rapid dissemination of significant, or notable, results, shows that had Katz’s recipe been followed, the top kill might have worked, plugging the gushing well in May instead of two months later.

Making waves

When the top kill was proposed, Katz was serving on the science panel Secretary of Energy Steven Chu had organized to advise the Obama administration about the oil well disaster. Katz and Richard Garwin, an eminent physicist who was also a panelist, discussed the kill plan and realized they had misgivings.

“We were worried that a phenomenon known as a Kelvin-Helmholtz instability would disperse the dense mud into tiny droplets that would be carried out of the well by the leaking crude oil,” says Katz.

A Kelvin-Helmholtz instability can occur whenever two fluids move past one another at different speeds. When wind blows over water, for example, the instability manifests itself in the form of waves that rise gently and then curl into chaotic turbulence.

The two physicists made some calculations that showed the interface between the descending kill mud and the ascending crude oil would be similarly unstable.
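To make that concern concrete, here is a minimal sketch of the textbook inviscid Kelvin-Helmholtz criterion for two uniform streams separated by a sharp interface, evaluated with rough, assumed numbers for dense kill mud sliding past rising crude. The densities, relative speed and interfacial tension below are illustrative guesses, not figures from Katz and Garwin's analysis.

```python
import numpy as np

# Illustrative, hypothetical values -- not from Katz & Garwin's calculations.
rho_oil = 850.0      # kg/m^3, ascending crude (lower fluid)
rho_mud = 2000.0     # kg/m^3, descending kill mud (upper fluid)
dU      = 3.0        # m/s, assumed relative speed of the two streams
sigma   = 0.02       # N/m, assumed mud/oil interfacial tension
g       = 9.81       # m/s^2

def growth_rate(k):
    """Im(omega) from the textbook inviscid dispersion relation for two
    uniform streams separated by a sharp interface; 0 means stable."""
    r1, r2 = rho_oil, rho_mud          # r1 = lower fluid, r2 = upper fluid
    disc = (g / k) * (r1 - r2) / (r1 + r2) \
         + sigma * k / (r1 + r2) \
         - r1 * r2 * dU**2 / (r1 + r2)**2
    return k * np.sqrt(-disc) if disc < 0 else 0.0

wavelengths = np.logspace(-5, 0, 300)              # 10 micrometers to 1 meter
rates = [growth_rate(2 * np.pi / lam) for lam in wavelengths]
fastest = wavelengths[int(np.argmax(rates))]
print(f"fastest-growing wavelength ~ {fastest * 1e3:.2f} mm, "
      f"growth rate ~ {max(rates):.0f} 1/s")
```

In this idealized, inviscid picture the fastest-growing wavelengths are tiny, consistent with the worry that the mud would be dispersed into small droplets; real viscosity would damp the very smallest scales.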

Katz then realized that a novel mud might suppress the instability. Something had to be added to the mud that would change its dynamic properties.

Ketchup vs. quicksand

To work for the top kill, the mud would need to behave less like ketchup and more like quicksand.

Ketchup is what is known as a shear-thinning fluid (or a Bingham plastic, after the scientist who first mathematically characterized such fluids). Initially it resists flowing. It begins to flow only when the pressure of your fingers on the bottle produces a stress on the condiment that is greater than what is called the yield stress. But after that, it flows freely.

Or, as they used to say in the ketchup commercial, “Anticipation is making me wait.”

Other examples of shear-thinning fluids are toothpaste, mayonnaise, mustard, and – crucially – drilling mud, which is typically a slurry, or watery mixture, of clay and other minerals.

To suppress the instability, the mud needed to be a shear-thickening rather than a shear-thinning fluid – like quicksand. As every reader of the Worst-Case Scenario Survival Handbook knows, when you fall into quicksand it is important to move slowly. The faster you move, the more the quicksand resists your movement.
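The distinction can be written down with the simple power-law (Ostwald-de Waele) model, in which apparent viscosity varies as a power of the shear rate. The consistency and exponent values below are arbitrary illustrations, not measured properties of drilling mud or cornstarch slurry.

```python
import numpy as np

def apparent_viscosity(shear_rate, K, n):
    """Power-law fluid: stress = K * rate**n, so the apparent viscosity
    is stress / rate = K * rate**(n - 1).  (A Bingham plastic would add
    a yield stress that must be exceeded before flow starts at all.)"""
    return K * shear_rate**(n - 1)

rates = np.array([0.1, 1.0, 10.0, 100.0])               # shear rates, 1/s
# Illustrative parameters (hypothetical, chosen only to show the shape):
thinning   = apparent_viscosity(rates, K=10.0, n=0.4)   # n < 1: ketchup-like
thickening = apparent_viscosity(rates, K=10.0, n=1.8)   # n > 1: oobleck-like

for r, mu_thin, mu_thick in zip(rates, thinning, thickening):
    print(f"shear rate {r:6.1f} 1/s   "
          f"shear-thinning visc. {mu_thin:8.2f} Pa.s   "
          f"shear-thickening visc. {mu_thick:8.2f} Pa.s")
```

Run it and the shear-thinning column drops as the shear rate climbs, while the shear-thickening column grows, which is exactly the quicksand-like stiffening Katz wanted in the kill mud.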

The additive Katz suggested wasn’t esoteric or expensive. It was the kitchen staple cornstarch.

If you mix cornstarch and water, pour it in a cookie pan and slap it with your open hand, it doesn’t spatter. You can let your hand sink into it, but you can’t easily jerk it out. Children play with it, and recipes for cornstarch “oobleck” can be found on the web.

“It can flow slowly as a liquid, but turns stiff and elastic when flow is rapid,” Katz says. “If an instability were to occur, this stiffness would suppress it, and the novel mud would sink in the well, accumulating at the bottom until its pressure became sufficient to stop the leak.”

A foregone conclusion

But the oil industry is conservative and BP stuck with variations on methods that had worked in the past, even though Katz and Garwin predicted failure.

In late May, crews pumped more than 30,000 barrels of heavy mud down the well in a top-kill attempt. As the physicists had feared, the well spat out the mud and crude oil like a toddler spewing strained peas.

The leak was not stopped until mid-July when the well was finally capped.

After the failed top-kill attempt, Katz couldn’t help but wonder whether his suggestion would have worked. Together with collaborators at the Lawrence Livermore National Laboratory, he constructed a model oil well consisting of a six-foot length of transparent plastic tube filled with a clear oil.

“We poured cornstarch ‘mud’ into the top of the oil column and observed that, as predicted, the instability was suppressed. The surrogate ‘mud’ sank rapidly through the oil to the bottom of the tube,” Katz says.

Based on this experiment, the addition of a shear-thickening polymer like cornstarch to a dense top-kill mud might have allowed slugs of mud to descend against the upwelling oil instead of being ripped up and spat out of the well. Eventually, the column of mud would have prevented any further infiltration from the oil reservoir, killing the well.

Katz hopes there will never be an opportunity to repeat the experiment at full scale and under field conditions, but recommends the Boy Scout motto: ‘Be prepared.’

Drilling in the Holy Land

About 50 miles from Bethlehem, a drilling project is probing the climate history and earthquake activity of the Holy Land. Scientists from eight nations are examining the ground below the Dead Sea by drilling a borehole into this deepest basin in the world. The International Continental Scientific Drilling Program (ICDP) brings together research teams from Israel, Japan, Norway, Switzerland, the USA and Germany. Particularly noteworthy: researchers from Jordan and Palestine are also involved.

Scientists and technicians of the GFZ German Research Centre for Geosciences have now completed a geophysical measurement campaign in the hole and helped with the initial examination of the cores in a field laboratory. “We have drilled through about half a million years of sedimentary deposits,” estimates Dr. Ulrich Harms from the ICDP’s operational support group at the GFZ. “From this, we can deduce not only the climate history, but also the earthquake activity in this seismically very active region.” The direction and inclination of the well were determined with high precision beneath the lake, which is around 300 meters deep at this point, and the physical properties of the rocks were measured down to the bottom of the 460-meter-deep borehole.

These unique measurements provide a continuous record of the deposits in the Dead Sea that can be compared with the recovered cores. Although scientific drilling aims to recover cores over the entire length of a hole, this is not always possible, and the borehole measurements are used to cover the gaps. In addition, a second series of cores is being obtained from a second well in order to verify and secure the data.

“If everything goes perfectly, we may soon be able to provide information about past climate and environmental changes in the Bethlehem area,” says Ulrich Harms. His colleague Professor Achim Brauer, a paleo-climatologist at the GFZ, is one of the initiators of the ICDP project. He and his team will analyze the drill cores. They are not just interested in the climate at the time of Jesus’ birth but in the climate of the whole history of mankind. The region of the Holy Land is considered a land bridge across which early man migrated in several waves from Africa to the north. The climate history of the land of the Bible is therefore closely connected with the history of mankind.

Sea-level study brings good and bad news to Chesapeake Bay

Global mean sea level trends in millimeters per year as determined by satellite altimetry. Note that the Chesapeake Bay region is experiencing a relatively minor sea-level rise of about 1.8 mm per year. – Data provided by NOAA Laboratory for Satellite Altimetry

A new study of local sea-level trends by researchers at the Virginia Institute of Marine Science (VIMS) brings both good and bad news to localities concerned with coastal inundation and flooding along the shores of Chesapeake Bay.

Dr. John Boon, the study’s lead author, says the good news is that “absolute sea level in Chesapeake Bay is rising only about half as fast as the global average rise rate.” The bad news, says Boon, is that “local subsidence more than makes up for it.”

Boon has previously warned of the long-term impacts of sea-level rise in Chesapeake Bay, particularly in light of the increased likelihood of coastal flooding during hurricanes and nor’easters.

In their report, Boon and his co-authors, VIMS professor John Brubaker and assistant research scientist David Forrest, stress the distinction between absolute sea level (a measure of the volume and mass of ocean water) and relative sea level (the level of the ocean surface measured relative to the land, or more specifically to a tide gauge).

The authors note that for Chesapeake Bay, relatively moderate rates of absolute sea-level rise, when combined with locally high rates of land subsidence and an increasing coastal population, add up to a significant and growing threat. They call for continued operation of the local tide gauge network and addition of new mapping tools such as LIDAR to aid in smarter coastal planning and improved emergency-response measures.

The VIMS study was funded by the Norfolk District of the U.S. Army Corps of Engineers and reviewed by officials with the National Oceanic and Atmospheric Administration (NOAA) and the Maryland Geological Survey. It is available as a PDF file from the Hargis Library at VIMS.

The Good News

Data from NOAA satellites and tide gauges show that absolute sea level is rising at a rate of about 1.8 millimeters per year in Chesapeake Bay. That’s only about half of the globally averaged rate of 3.1 mm per year of absolute sea-level rise reported by the Intergovernmental Panel on Climate Change (3.1 mm is about an eighth of an inch).

Rates of change in absolute sea level vary widely around the globe. In the Indo-Pacific, sea level is rising as fast as 10 mm per year. In the Gulf of Alaska and other areas, sea level is falling. These regional differences reflect differences in water temperature (warmer water is less dense and takes up more volume than colder water), current patterns, local addition of meltwater from ice sheets and glaciers, and other factors.

The Bad News

The VIMS study, based on a detailed analysis of simultaneous 35-year records from 10 tide gauges between Norfolk and Baltimore, shows that relative rates of sea-level rise in Chesapeake Bay range from 2.91 to 5.80 millimeters per year. That’s the bad news, as even the lowest of these values is higher than the highest rise rates observed in many other localities along the U.S. East Coast. A rise rate of even 2.91 mm per year, sustained over a century, adds up to nearly a foot of sea-level rise (a 5.8 mm annual rise rate adds up to almost two feet).
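As a back-of-the-envelope check of those figures (simple unit conversion, not part of the VIMS analysis):

```python
MM_PER_FOOT = 304.8

for rate_mm_per_yr in (2.91, 5.80):
    total_mm = rate_mm_per_yr * 100          # accumulated over a century
    print(f"{rate_mm_per_yr} mm/yr over 100 years = "
          f"{total_mm:.0f} mm = {total_mm / MM_PER_FOOT:.2f} ft")
```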

The difference between these local rates of relative sea-level rise and the regional absolute average reflects sinking of the land surface. For instance, the measured rate of relative sea-level rise at Sewell’s Point in Norfolk was 4.52 mm per year between 1976 and 2007. Because absolute sea-level rise contributed only 1.8 mm per year to the water level, the additional apparent rise of 2.72 mm per year (4.52 mm − 1.8 mm) must instead be due to land subsidence. The authors write that “on average, about 50% of the relative sea level rise measured at Bay water level stations is due to local subsidence.”
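The decomposition amounts to simple arithmetic on the quoted rates; here is a sketch using the Sewell's Point numbers above (illustrative only, not the authors' code):

```python
# Split relative sea-level rise at a tide gauge into its two components.
relative_rise = 4.52   # mm/yr, measured at the Sewell's Point tide gauge
absolute_rise = 1.80   # mm/yr, regional absolute sea-level rise

subsidence = relative_rise - absolute_rise
print(f"implied land subsidence: {subsidence:.2f} mm/yr "
      f"({100 * subsidence / relative_rise:.0f}% of the relative rise at this station)")
```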

The mid-Atlantic region is slowly sinking in response to land movements associated with melting of the polar ice caps following the last Ice Age, faulting associated with the Chesapeake Bay Impact Crater, local groundwater withdrawals, and other factors. There is no evidence to suggest that local subsidence rates are likely to change significantly in the coming decades.

The Uncertain

Another goal of the VIMS study was to determine whether local rates of sea-level rise have increased during the last few decades. As Boon puts it, “We know that sea level is rising, and wanted to find out if it is rising faster now than it did before.”

An acceleration in sea-level rise with global warming is predicted by many climate models, and has been observed on a global basis. The IPCC reports that the globally averaged rate of sea-level rise increased from 1.8 mm per year between 1961 and 2003 to 3.1 mm per year between 1993 and 2003. The more recent period coincides with the deployment of satellites that allow for an accurate global picture of absolute sea-level rise.

The results from this part of the VIMS study were inconclusive, largely because available tide-gauge records aren’t long enough to allow the researchers to tease a change in the long-term trend apart from year-to-year variability caused by phenomena such as the North Atlantic Oscillation, a see-saw in atmospheric pressure and wind fields between Iceland and the Azores.

The authors conclude “While relative sea level continues to rise at some of the highest rates found along the U.S. Atlantic Coast, there is presently no evidence of a statistically significant increase marking an acceleration in relative sea-level rise” at any of the five Bay stations measured. Four of the five stations studied showed an increase in the rate of relative sea-level rise between 1944–1975 and 1976–2007, but none of the increases are statistically significant.

The authors caution that “small but steady increases in relative sea-level rise rate with time are still a possibility.” They estimate that an increase “on the order of 0.5 mm per year may be required for a statistically significant acceleration to be confirmed in the years ahead.”

550 million years ago, rise in oxygen drove evolution of animal life

Researchers funded by the Biotechnology and Biological Sciences Research Council (BBSRC) at the University of Oxford have uncovered a clue that may help to explain why the earliest evidence of complex multicellular animal life appears around 550 million years ago, when atmospheric oxygen levels on the planet rose sharply from 3% to their modern-day level of 21%.

The team, led by Professor Chris Schofield, has found that humans share a method of sensing oxygen with the world’s simplest known living animal – Trichoplax adhaerens – suggesting the method has been around since the first animals emerged around 550 million years ago.

This discovery, published today (17 December) in the January 2011 edition of EMBO Reports, throws light on how humans sense oxygen and how oxygen levels drove the very earliest stages of animal evolution.

Professor Schofield said: “It’s absolutely necessary for any multicellular organism to have a sufficient supply of oxygen to almost every cell, and so the atmospheric rise in oxygen made it possible for multicellular organisms to exist.

“But there was still a very different physiological challenge for these organisms than for the more evolutionarily ancient single-celled organisms such as bacteria. Being multicellular means oxygen has to get to cells that are not on the surface of the organism. We think this is what drove the ancestors of Trichoplax adhaerens to develop a system to sense a lack of oxygen in any cell and then do something about it.”

The oxygen-sensing process enables animals to survive better at low oxygen levels, or ‘hypoxia’. In humans this system responds to hypoxia, such as that caused by high altitude or physical exertion, and is very important for the prevention of stroke and heart attacks as well as some types of cancer.

Trichoplax adhaerens is a tiny seawater organism that lacks any organs and has only five types of cells, giving it the appearance of an amoeba. By analysing how Trichoplax reacts to a lack of oxygen, Oxford researcher Dr Christoph Loenarz found that it uses the same mechanism as humans – in fact, when the key enzyme from Trichoplax was put into a human cell, it worked just as well as the human enzyme usually would.

They also looked at the genomes of several other species and found that this mechanism is present in multicellular animals, but not in the single-celled organisms that were the precursors of animals, suggesting that the mechanism evolved at the same time as the earliest multicellular animals.

Defects in the most important human oxygen-sensing enzyme can cause polycythemia, an increase in red blood cells. The work published today could also open up new approaches to developing therapies for this disorder.

Professor Douglas Kell, Chief Executive of BBSRC, said: “Understanding how animals – and ultimately humans – evolved is essential to our ability to pick apart the workings of our cells. Knowledge of normal biological processes underpins new developments that can improve quality of life for everyone. The more skillful we become in studying the evolution of some of our most essential cell biology, the better our chances of ensuring long-term health and wellbeing to match the increase in average lifespan in the UK and beyond.”

Ancient raindrops reveal a wave of mountains sent south by sinking Farallon plate

Hari Mix, a doctoral candidate in Environmental Earth System Science, analyzed samples taken from dozens of basins around the western United States.

Fifty million years ago, mountains began popping up in southern British Columbia. Over the next 22 million years, a wave of mountain building swept (geologically speaking) down western North America as far south as Mexico and as far east as Nebraska, according to Stanford geochemists. Their findings help put to rest the idea that the mountains mostly developed from a vast, Tibet-like plateau that rose up across most of the western U.S. roughly simultaneously and subsequently collapsed and eroded into what we see today.

The data providing the insight into the mountains – so popularly renowned for durability – came from one of the most ephemeral of sources: raindrops. Or more specifically, the isotopic residue – fingerprints, effectively – of ancient precipitation that rained down upon the American west between 65 and 28 million years ago.

Atoms of the same element that have different numbers of neutrons in their nuclei are called isotopes. More neutrons make for a heavier atom, and as a cloud rises, the water molecules that contain the heavier isotopes of hydrogen and oxygen tend to fall out first. By measuring the ratio of heavy to light isotopes in the long-ago rainwater, researchers can infer the elevation of the land when the raindrops fell.
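One common way to turn that idea into numbers is to work in delta notation and apply an empirical isotopic lapse rate. The sketch below assumes a frequently quoted global average of roughly −2.8 per mil of δ18O per kilometer of elevation gain, purely to illustrate the logic rather than to reproduce the study's calibration.

```python
# Toy illustration of the paleoaltimetry logic (not the study's actual method).
# delta-18O of precipitation tends to decrease as air masses rise; an often-
# quoted global-average lapse rate is about -2.8 per mil per km of elevation.
LAPSE_PER_MIL_PER_KM = -2.8   # assumed value; it varies from region to region

def elevation_gain_km(delta_low_elev, delta_high_elev):
    """Infer an elevation difference from the shift in delta-18O (per mil)."""
    return (delta_high_elev - delta_low_elev) / LAPSE_PER_MIL_PER_KM

# Hypothetical example: precipitation values recorded before and after uplift.
print(f"{elevation_gain_km(-8.0, -14.0):.1f} km of inferred uplift")
# -> a ~6 per mil drop implies roughly 2 km of uplift under these assumptions
```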

The water becomes incorporated into clays and carbonate minerals on the surface, or in volcanic glass, which are then preserved for the ages in the sediments.

Hari Mix, a PhD candidate in Environmental Earth System Science at Stanford, worked with the analyses of about 2,800 samples – several hundred that he and his colleagues collected, the rest from published studies – and used the isotopic ratios to calculate the composition of the ancient rain. Most of the samples were from carbonate deposits in ancient soils and lake sediments, taken from dozens of basins around the western U.S.

Using the elevation trends revealed in the data, Mix was able to decipher the history of the mountains. “Where we got a huge jump in isotopic ratios, we interpret that as a big uplift,” he said.

“We saw a major isotopic shift at around 49 million years ago, in southwest Montana,” he said. “And another one at 39 mya, in northern Nevada” as the uplift moved southward. Previous work by Chamberlain’s group had found evidence for these shifts in data from two basins, but Mix’s work with the larger data set demonstrated that the pattern of uplift held across the entire western U.S.

The uplift is generally agreed to have begun when the Farallon plate, a tectonic plate that was being shoved under the North American plate, slowly began peeling away from the underside of the continent.

“The peeling plate looked sort of like a tongue curling down,” said Page Chamberlain, a professor in environmental Earth system science who is Mix’s advisor.

As hot material from the underlying mantle flowed into the gap between the peeling plates, the heat and buoyancy of the material caused the overlying land to rise in elevation. The peeling tongue continued to fall off, and hot mantle continued to flow in behind it, sending a slow-motion wave of mountain-building coursing southward.

“We knew that the Farallon plate fell away, but the geometry of how that happened and the topographic response to it is what has been debated,” Mix said.

Mix and Chamberlain estimate that the topographic wave would have been at least one to two kilometers higher than the landscape it rolled across and would have produced mountains with elevations up to a little over 4 kilometers (about 14,000 feet), comparable to the elevations existing today.

Mix said their isotopic data corresponds well with other types of evidence that have been documented.

“There was a big north to south sweep of volcanism through the western U.S. at the exact same time,” he said.

There was also a simultaneous extension of the Earth’s crust, which results when the crust is heated from below, as it would have been by the flow of hot magma under the North American plate.

“The pattern of topographic uplift we found matches what has been documented by other people in terms of the volcanology and extension,” Mix said.

“Those three things together, those patterns, all point to something going on with the Farallon plate as being responsible for the construction of the western mountain ranges, the Cordillera.”

Chamberlain said that while there was certainly elevated ground, it was not like Tibet.

“It was not an average elevation of 15,000 feet. It was something much more subdued,” he said.

“The main implication of this work is that it was not a plateau that collapsed, but rather something that happened in the mantle, that was causing this mountain growth,” Chamberlain said.

First measurement of magnetic field in Earth’s core

A cross-section of the earth’s interior shows the outer crust, the hot gooey mantle, the liquid outer core and the solid, frozen inner core (gray). (Calvin J. Hamilton graphic)

A University of California, Berkeley, geophysicist has made the first-ever measurement of the strength of the magnetic field inside Earth’s core, 1,800 miles underground.

The magnetic field strength is 25 Gauss, or 50 times stronger than the magnetic field at the surface that makes compass needles align north-south. Though this number is in the middle of the range geophysicists predict, it puts constraints on the identity of the heat sources in the core that keep the internal dynamo running to maintain this magnetic field.
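For readers who think in SI units, restating those numbers is a one-line conversion (1 gauss = 10⁻⁴ tesla); the snippet below simply re-expresses the figures quoted above.

```python
GAUSS_TO_TESLA = 1e-4

core_field_gauss = 25.0                         # field inside the outer core
surface_field_gauss = core_field_gauss / 50.0   # "50 times stronger" than at the surface

print(f"core field:    {core_field_gauss * GAUSS_TO_TESLA * 1e3:.1f} millitesla")
print(f"surface field: {surface_field_gauss * GAUSS_TO_TESLA * 1e6:.0f} microtesla")
```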

“This is the first really good number we’ve had based on observations, not inference,” said author Bruce A. Buffett, professor of earth and planetary science at UC Berkeley. “The result is not controversial, but it does rule out a very weak magnetic field and argues against a very strong field.”

The results are published in the Dec. 16 issue of the journal Nature.

A strong magnetic field inside the outer core means there is a lot of convection and thus a lot of heat being produced, which scientists would need to account for, Buffett said. The presumed sources of energy are the residual heat from 4 billion years ago when the planet was hot and molten, release of gravitational energy as heavy elements sink to the bottom of the liquid core, and radioactive decay of long-lived elements such as potassium, uranium and thorium.

A weak field – 5 Gauss, for example – would imply that little heat is being supplied by radioactive decay, while a strong field, on the order of 100 Gauss, would imply a large contribution from radioactive decay.

“A measurement of the magnetic field tells us what the energy requirements are and what the sources of heat are,” Buffett said.

About 60 percent of the power generated inside the earth likely comes from the exclusion of light elements from the solid inner core as it freezes and grows, he said. This constantly builds up crud in the outer core.

The Earth’s magnetic field is produced in the outer two-thirds of the planet’s iron/nickel core. This outer core, about 1,400 miles thick, is liquid, while the inner core is a frozen iron and nickel wrecking ball with a radius of about 800 miles – roughly the size of the moon. The core is surrounded by a hot, gooey mantle and a rigid surface crust.

The cooling Earth originally captured its magnetic field from the planetary disk in which the solar system formed. That field would have disappeared within 10,000 years if not for the planet’s internal dynamo, which regenerates the field thanks to heat produced inside the planet. The heat makes the liquid outer core boil, or “convect,” and as the conducting metals rise and then sink through the existing magnetic field, they create electrical currents that maintain the magnetic field. This roiling dynamo produces a slowly shifting magnetic field at the surface.

“You get changes in the surface magnetic field that look a lot like gyres and flows in the oceans and the atmosphere, but these are being driven by fluid flow in the outer core,” Buffett said.

Buffett is a theoretician who uses observations to improve computer models of the earth’s internal dynamo. Now at work on a second generation model, he admits that a lack of information about conditions in the earth’s interior has been a big hindrance to making accurate models.

He realized, however, that the tug of the moon on the tilt of the earth’s spin axis could provide information about the magnetic field inside. This tug would make the inner core precess – that is, make the spin axis slowly rotate in the opposite direction – which would produce magnetic changes in the outer core that damp the precession. Radio observations of distant quasars – extremely bright, active galaxies – provide very precise measurements of the changes in the earth’s rotation axis needed to calculate this damping.

“The moon is continually forcing the rotation axis of the core to precess, and we’re looking at the response of the fluid outer core to the precession of the inner core,” he said.

By calculating the effect of the moon on the spinning inner core, Buffett discovered that the precession makes the slightly out-of-round inner core generate shear waves in the liquid outer core. These waves of molten iron and nickel move within a tight cone only 30 to 40 meters thick, interacting with the magnetic field to produce an electric current that heats the liquid. This serves to damp the precession of the rotation axis. The damping causes the precession to lag behind the moon as it orbits the earth. A measurement of the lag allowed Buffett to calculate the magnitude of the damping and thus of the magnetic field inside the outer core.

Buffett noted that the calculated field – 25 Gauss – is an average over the entire outer core. The field is expected to vary with position.

“I still find it remarkable that we can look to distant quasars to get insights into the deep interior of our planet,” Buffett said.

Tiny 3-D images shed light on origin of Earth’s core

Silicate material mixed with iron shown at low pressure, with iron forming small, discrete spheres (lighter-colored areas) inside the silicate.

To answer the big questions, it often helps to look at the smallest details. That is the approach Stanford mineral physicist Wendy Mao is taking to understanding a major event in Earth’s inner history. Using a new technique to scrutinize how minute amounts of iron and silicate minerals interact at ultra-high pressures and temperatures, she is gaining insight into the biggest transformation Earth has ever undergone – the separation of its rocky mantle from its iron-rich core approximately 4.5 billion years ago.

The technique, called high-pressure nanoscale X-ray computed tomography, is being developed at SLAC National Accelerator Laboratory. With it, Mao is getting unprecedented detail – in three-dimensional images – of changes in the texture and shape of molten iron and solid silicate minerals as they respond to the same intense pressures and temperatures found deep in the Earth.

Mao will present the results of the first few experiments with the technique at the annual meeting of the American Geophysical Union in San Francisco on Thursday, Dec. 16.

Tomography refers to the process that creates a three-dimensional image by combining a series of two-dimensional images, or cross-sections, through an object. A computer program interpolates between the images to flesh out a recreation of the object.
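In the simplest terms, that stacking-and-interpolation step might look like the following sketch; it is a toy illustration only, since real X-ray computed tomography first reconstructs each cross-section from many projection angles.

```python
import numpy as np
from scipy.ndimage import zoom

# Assume the 2-D cross-sections already exist as a stack of images.
n_slices, height, width = 8, 64, 64
slices = np.random.rand(n_slices, height, width)   # stand-in cross-sections

# Interpolate between slices to build a finer 3-D volume (4x more slices).
volume = zoom(slices, zoom=(4, 1, 1), order=1)     # order=1: linear interpolation
print(volume.shape)                                # -> (32, 64, 64)
```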

Researchers at SLAC have developed a way to combine a diamond anvil cell, which compresses tiny samples between the tips of two diamonds, with nanoscale X-ray computed tomography to capture images of material at high pressure. The pressures deep in the Earth are so high – millions of times atmospheric pressure – that only diamonds can exert the needed pressure without breaking under the force.

At present, the SLAC researchers and their collaborators from HPSync, the High Pressure Synergetic Consortium at the Advanced Photon Source at Argonne National Laboratory, are the only group using this technique.

“It is pretty exciting, being able to measure the interactions of iron and silicate materials at very high pressures and temperatures, which you could not do before,” said Mao, an assistant professor of geological and environmental sciences and of photon science. “No one has ever imaged these sorts of changes at these very high pressures.”

It is generally agreed that the initially homogeneous ball of material that was the very early Earth had to be very hot in order to differentiate into the layered sphere we live on today. Since the crust and the layer underneath it, the mantle, are silicate-rich, rocky layers, while the core is iron-rich, it’s clear that silicate and iron went in different directions at some point. But how they separated out and squeezed past each other is not clear. Silicate minerals, which contain silica, make up about 90 percent of the crust of the Earth.

If the planet got hot enough to melt both materials, it would have been easy enough for the difference in density to send iron to the bottom and silicates to the top.

If the temperature was not hot enough to melt silicates, it has been proposed that molten iron might have been able to move along the boundaries between grains of the solid silicate minerals.

“To prove that, though, you need to know whether the molten iron would tend to form small spheres or whether it would form channels,” Mao said. “That would depend on the surface energy between the iron and silicate.”

Previous experimental work has shown that at low pressure, iron forms isolated spheres, similar to the way water beads up on a waxed surface, Mao said, and spheres could not percolate through solid silicate material.

Mao said the results of her first high-pressure experiments using the tomography apparatus suggest that at high pressure, since the silicate transforms into a different structure, the interaction between the iron and silicate could be different than at low pressure.

“At high pressure, the iron takes a more elongate, platelet-like form,” she said. That means the iron would spread out on the surface of the silicate minerals, connecting to form channels instead of remaining in isolated spheres.

“So it looks like you could get some percolation of iron at high pressure,” Mao said. “If iron could do that, that would tell you something really significant about the thermal history of the Earth.”

But she cautioned that she only has data from the initial experiments.

“We have some interesting results, but it is the kind of measurement that you need to repeat a couple times to make sure,” Mao said.

Geologist develops improved seismic model for monitoring nuclear explosions in Middle East

Geologists from the University of Rhode Island and Princeton University, in collaboration with Lawrence Livermore National Laboratory, have taken an important step toward helping the United States government monitor nuclear explosions by improving a 3-dimensional model originally developed at Harvard University. The improvements make the model more accurate at detecting the location, source and depth of seismic activity.

The results of their research were presented today at a meeting of the American Geophysical Union in San Francisco.

The National Nuclear Security Administration uses numerous seismic models in its efforts to monitor the globe for underground nuclear explosions detonated by nations that seek to keep their nuclear activities undetected. But not only is it difficult to identify exactly where an explosion takes place, it is especially challenging to differentiate the seismic waves generated by nuclear explosions from those generated by earthquakes, volcanic activity and mine collapses.

“The goal is to build a model of the Earth that will locate seismic events and characterize those events precisely while reducing potential errors,” said Brian Savage, URI assistant professor of geosciences.

The model spans the politically sensitive region from Turkey to India, including Iran, Iraq and Afghanistan, a region Savage describes as “tectonically complex.”

Savage and his colleagues analyzed data from 200 earthquakes collected by 150 seismic stations in the region between 1990 and 2007. They compared the data with that from simulated earthquakes to identify deficiencies in the model, then propagated the simulated earthquakes in reverse to determine where to improve and update the model.

Different types of seismic waves travel in different ways and at different speeds. P-waves, for instance, are the first waves recorded from an earthquake or explosion, and they behave similarly to sound waves. S-waves are secondary waves that travel in a snake-like, side-to-side fashion. Surface waves are a combination of the two, traveling much more slowly but with much larger amplitude.

“Depending on the material the waves travel through, it may slow down or speed up the waves,” said Savage, who notes that the model requires a great deal of computer power to run. “So when you look at the relative timing of the waves, you can tell what the material is that it’s traveling through.”
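A textbook example of what relative timing can reveal, separate from the URI model itself: the delay between the P- and S-wave arrivals gives a rough distance to the source, because both phases travel the same path at different speeds. The crustal velocities below are typical assumed values, used only for illustration.

```python
# Estimate epicentral distance from the S-minus-P arrival-time delay.
# Assumed average crustal velocities (illustrative only):
VP = 6.0   # km/s, P-wave speed
VS = 3.5   # km/s, S-wave speed

def distance_km(sp_delay_s):
    """Both phases cover the same distance d, so d/VS - d/VP = delay."""
    return sp_delay_s / (1.0 / VS - 1.0 / VP)

print(f"{distance_km(10.0):.0f} km")   # a 10 s S-P delay -> roughly 84 km away
```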

The improvements the researchers made to the model focused on long period surface waves and identifying the magnitude of a seismic event.

“The amplitude ratios of different wave types is a key factor in discriminating whether an event is manmade or not,” Savage said.

The improved model is expected to be complete by next summer. The research was funded by the National Nuclear Security Administration and the Air Force Research Laboratory.

Hot stuff: Magma at shallow depth under Hawaii

Lava erupting from the Puʻu ʻŌʻō vent

Ohio State University researchers have found a new way to gauge the depth of the magma chamber that forms the Hawaiian Island volcanic chain, and determined that the magma lies much closer to the surface than previously thought.

The finding could help scientists predict when Hawaiian volcanoes are going to erupt. It also suggests that Hawaii holds great potential for geothermal energy.

Julie Ditkof, an honors undergraduate student in earth sciences at Ohio State, described the study at the American Geophysical Union Meeting in San Francisco on Tuesday, December 14.

For her honors thesis, Ditkof took a technique that her advisor Michael Barton, professor of earth sciences, developed to study magma in Iceland, and applied it to Hawaii.

She discovered that magma lies an average of 3 to 4 kilometers (about 1.9 to 2.5 miles) beneath the surface of Hawaii.

“Hawaii was already unique among volcanic systems, because it has such an extensive plumbing system, and the magma that erupts has a unique and variable chemical composition,” Ditkof explained. “Now we know the chamber is at a shallow depth not seen anywhere else in the world.”

For example, Barton determined that magma chambers beneath Iceland lie at an average depth of 20 kilometers.

While that means the crust beneath Hawaii is much thinner than the crust beneath Iceland, Hawaiians have nothing to fear.

“The crust in Hawaii has been solidifying from eruptions for more than 300,000 years now. The crust doesn’t get consumed by the magma chamber. It floats on top,” Ditkof explained.

The results could help settle two scientific debates, however.

Researchers have wondered whether more than one magma chamber was responsible for the varying chemical compositions, even though seismological studies indicated only one chamber was present.

Meanwhile, those same seismological studies pegged the depth as shallow, while petrologic studies – studies of rock composition – pegged it deeper.

There has never been a way to prove who was right, until now.

“We suspected that the depth was actually shallow, but we wanted to confirm or deny all those other studies with hard data,” Barton said.

He and Ditkof determined that there is one large magma chamber just beneath the entire island chain that feeds the Hawaiian volcanoes through many different conduits.

They came to this conclusion after Ditkof analyzed the chemical composition of nearly 1,000 magma samples. From the ratio of some elements to others – aluminum to calcium, for example, or calcium to magnesium – she was able to calculate the pressure at which the magma had crystallized.

For his studies of Iceland, Barton created a methodology for converting those pressure calculations to depth. When Ditkof applied that methodology, she obtained an average depth of 3 to 4 kilometers.
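The simplest form of such a pressure-to-depth conversion treats the overburden as a uniform column of rock; the sketch below uses an assumed crustal density and is only a rough stand-in for Barton's calibrated methodology.

```python
# Lithostatic pressure-to-depth conversion: P = rho * g * depth.
RHO_CRUST = 2900.0   # kg/m^3, assumed average density of the volcanic pile
G = 9.81             # m/s^2

def depth_km(pressure_mpa):
    """Depth of burial implied by a crystallization pressure in MPa."""
    return pressure_mpa * 1e6 / (RHO_CRUST * G) / 1000.0

# A crystallization pressure around 100 MPa (~1 kbar) maps to roughly 3.5 km,
# in line with the 3 to 4 km average depth quoted above.
print(f"{depth_km(100.0):.1f} km")
```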

Researchers could use this technique to regularly monitor pressures inside the chamber and make more precise estimates of when eruptions are going to occur.

Barton said that, ultimately, the finding might be more important in terms of energy.

“Hawaii has huge geothermal resources that haven’t been tapped fully,” he said, and quickly added that scientists would have to determine whether tapping that energy was practical – or safe.

“You’d have to drill some test bore holes. That’s dangerous on an active volcano, because then the lava could flow down and wipe out your drilling rig.”