Magnetic pole reversal happens all the (geologic) time

A schematic diagram of Earth’s interior and the movement of magnetic north from 1900 to 1996. The outer core is the source of the geomagnetic field. – Dixon Rohr

Scientists understand that Earth’s magnetic field has flipped its polarity many times over the millennia. In other words, if you were alive about 800,000 years ago, and facing what we call north with a magnetic compass in your hand, the needle would point to ‘south.’ This is because a magnetic compass is calibrated based on Earth’s poles. The N-S markings of a compass would be 180 degrees wrong if the polarity of today’s magnetic field were reversed. Many doomsday theorists have tried to take this natural geological occurrence and suggest it could lead to Earth’s destruction. But would there be any dramatic effects? The answer, from the geologic and fossil records we have from hundreds of past magnetic polarity reversals, seems to be ‘no.’

Reversals are the rule, not the exception. Earth has settled in the last 20 million years into a pattern of a pole reversal about every 200,000 to 300,000 years, although it has been more than twice that long since the last reversal. A reversal happens over hundreds or thousands of years, and it is not exactly a clean back flip. Magnetic fields morph and push and pull at one another, with multiple poles emerging at odd latitudes throughout the process. Scientists estimate reversals have happened at least hundreds of times over the past three billion years. And while reversals have happened more frequently in “recent” years, when dinosaurs walked Earth a reversal was more likely to happen only about every one million years.
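A quick arithmetic check of the interval claim above, using the ~780,000-year date for the last full reversal given later in the article:

```python
# Sanity check: time since the last full reversal (the Brunhes-Matuyama
# reversal, ~780,000 years ago) compared with the upper end of the
# recent average interval of 200,000-300,000 years.
years_since_last_reversal = 780_000
typical_interval_max = 300_000

ratio = years_since_last_reversal / typical_interval_max
print(f"{ratio:.1f}x the upper end of the typical interval")  # 2.6x
```

Consistent with the text: we are indeed more than twice the typical interval past the last reversal.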

Sediment cores taken from deep ocean floors can tell scientists about magnetic polarity shifts, providing a direct link between magnetic field activity and the fossil record. Earth’s magnetic field determines the magnetization of lava as it is laid down on the ocean floor on either side of the Mid-Atlantic Ridge, where the North American and Eurasian plates are spreading apart. As the lava solidifies, it creates a record of the orientation of past magnetic fields, much like a tape recorder records sound. The last time Earth’s poles flipped in a major reversal was about 780,000 years ago, in what scientists call the Brunhes-Matuyama reversal. The fossil record shows no drastic changes in plant or animal life across that boundary. Deep ocean sediment cores from this period also indicate no changes in glacial activity, based on the amount of oxygen isotopes in the cores. This is also evidence that a polarity reversal would not affect Earth’s rotation axis: the tilt of the rotation axis has a significant effect on climate and glaciation, so any change would be evident in the glacial record.

Earth’s polarity is not a constant. Unlike a classic bar magnet, or the decorative magnets on your refrigerator, the matter governing Earth’s magnetic field moves around. Geophysicists are pretty sure that Earth has a magnetic field because its solid iron core is surrounded by a fluid ocean of hot, liquid metal. The flow of liquid iron in Earth’s core creates electric currents, which in turn create the magnetic field; this process can also be modeled with supercomputers. Ours is, without hyperbole, a dynamic planet. So while parts of Earth’s outer core are too deep for scientists to measure directly, we can infer movement in the core by observing changes in the magnetic field. The magnetic north pole has been creeping northward – by more than 600 miles (1,100 km) – since the early 19th century, when explorers first located it precisely. It is moving faster now: scientists estimate the pole is migrating northward about 40 miles (64 km) per year, as opposed to about 10 miles (16 km) per year in the early 20th century.
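The drift figures above imply the pole's motion has accelerated sharply, which a little arithmetic makes concrete (1831, when James Clark Ross first located the magnetic north pole, is used as the starting date; the drift and rate numbers are the ones quoted in the text):

```python
# Back-of-the-envelope check of the pole-drift figures quoted above.
total_drift_miles = 600
years_elapsed = 1996 - 1831  # since Ross first located the pole in 1831

long_term_average = total_drift_miles / years_elapsed
print(f"long-term average drift: {long_term_average:.1f} miles/year")  # 3.6

# The quoted instantaneous rates sit well above that average,
# showing the drift has sped up:
early_rate, recent_rate = 10, 40
print(f"speed-up over the 20th century: {recent_rate // early_rate}x")  # 4x
```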

Another doomsday hypothesis about a geomagnetic flip plays up fears about incoming solar activity. This suggestion mistakenly assumes that a pole reversal would momentarily leave Earth without the magnetic field that protects us from solar flares and coronal mass ejections from the sun. But, while Earth’s magnetic field can indeed weaken and strengthen over time, there is no indication that it has ever disappeared completely. A weaker field would certainly lead to a small increase in solar radiation on Earth – as well as a beautiful display of aurora at lower latitudes — but nothing deadly. Moreover, even with a weakened magnetic field, Earth’s thick atmosphere also offers protection against the sun’s incoming particles.

The science shows that magnetic pole reversal is – in terms of geologic time scales – a common occurrence that happens gradually over millennia. While the conditions that cause polarity reversals are not entirely predictable – the north pole’s movement could subtly change direction, for instance – there is nothing in the millions of years of geologic record to suggest that any of the 2012 doomsday scenarios connected to a pole reversal should be taken seriously. A reversal might, however, be good business for magnetic compass manufacturers.

Setting the stage for life: Scientists make key discovery about the atmosphere of early Earth

Scientists in the New York Center for Astrobiology at Rensselaer Polytechnic Institute have used the oldest minerals on Earth to reconstruct the atmospheric conditions present on Earth very soon after its birth. The findings, which appear in the Dec. 1 edition of the journal Nature, are the first direct evidence of what the ancient atmosphere of the planet was like soon after its formation and directly challenge years of research on the type of atmosphere out of which life arose on the planet.

The scientists show that the atmosphere of Earth just 500 million years after its creation was not a methane-filled wasteland as previously proposed, but instead was much closer to the conditions of our current atmosphere. The findings, in a paper titled “The oxidation state of Hadean magmas and implications for early Earth’s atmosphere,” have implications for our understanding of how and when life began on this planet and could begin elsewhere in the universe. The research was funded by NASA.

For decades, scientists believed that the atmosphere of early Earth was highly reduced, meaning that oxygen was greatly limited. Such oxygen-poor conditions would have resulted in an atmosphere filled with noxious methane, carbon monoxide, hydrogen sulfide, and ammonia. To date, there remain widely held theories and studies of how life on Earth may have been built out of this deadly atmosphere cocktail.

Now, scientists at Rensselaer are turning these atmospheric assumptions on their heads with findings that prove the conditions on early Earth were simply not conducive to the formation of this type of atmosphere, but rather to an atmosphere dominated by the more oxygen-rich compounds found within our current atmosphere – including water, carbon dioxide, and sulfur dioxide.

“We can now say with some certainty that many scientists studying the origins of life on Earth simply picked the wrong atmosphere,” said Bruce Watson, Institute Professor of Science at Rensselaer.

The findings rest on the widely held theory that Earth’s atmosphere was formed by gases released from volcanic activity on its surface. Today, as during the earliest days of the Earth, magma flowing from deep in the Earth contains dissolved gases. When that magma nears the surface, those gases are released into the surrounding air.

“Most scientists would argue that this outgassing from magma was the main input to the atmosphere,” Watson said. “To understand the nature of the atmosphere ‘in the beginning,’ we needed to determine what gas species were in the magmas supplying the atmosphere.”

As magma approaches the Earth’s surface, it either erupts or stalls in the crust, where it interacts with surrounding rocks, cools, and crystallizes into solid rock. These frozen magmas and the elements they contain can be literal milestones in the history of Earth.

One important milestone is zircon. Unlike other materials that are destroyed over time by erosion and subduction, certain zircons are nearly as old as the Earth itself. As such, zircons can literally tell the entire history of the planet – if you know the right questions to ask.

The scientists sought to determine the oxidation levels of the magmas that formed these ancient zircons to quantify, for the first time ever, how oxidized were the gases being released early in Earth’s history. Understanding the level of oxidation could spell the difference between nasty swamp gas and the mixture of water vapor and carbon dioxide we are currently so accustomed to, according to study lead author Dustin Trail, a postdoctoral researcher in the Center for Astrobiology.

“By determining the oxidation state of the magmas that created zircon, we could then determine the types of gases that would eventually make their way into the atmosphere,” said Trail.

To do this, Trail, Watson, and their colleague, postdoctoral researcher Nicholas Tailby, recreated the formation of zircons in the laboratory at different oxidation levels. They literally created lava in the lab. This procedure yielded an oxidation gauge that could then be compared with the natural zircons.

During this process they looked for concentrations of a rare earth metal called cerium in the zircons. Cerium is an important oxidation gauge because it can be found in two oxidation states, with one more oxidized than the other. The higher the concentration of the more oxidized form of cerium in a zircon, the more oxidized the atmosphere likely was at the time of its formation.
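The lab-grown zircons effectively define a calibration curve, and natural samples are read off it. The sketch below shows the general idea of such a calibration lookup; every number in it is hypothetical, invented purely for illustration, and the real study's calibration is more involved.

```python
# Sketch of using a calibration curve like the one described above:
# synthetic zircons grown at known oxidation states (expressed here as
# log10 oxygen fugacity, "log_fO2") relate a cerium signal to oxidation
# state; a natural zircon's signal is then read off that curve.
# All values are hypothetical, for illustration only.

def interpolate(x, xs, ys):
    """Piecewise-linear lookup of x on the calibration curve (xs, ys)."""
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    raise ValueError("outside calibration range")

ce_signal_lab = [0.2, 0.9, 2.5, 6.0]       # hypothetical Ce gauge, lab zircons
log_fO2_lab = [-14.0, -12.0, -10.0, -8.0]  # oxidation states they were grown at

natural_signal = 1.7                       # hypothetical natural zircon
print(interpolate(natural_signal, ce_signal_lab, log_fO2_lab))  # ~ -11
```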

The calibrations reveal an atmosphere with an oxidation state closer to present-day conditions. The findings provide an important starting point for future research on the origins of life on Earth.

“Our planet is the stage on which all of life has played out,” Watson said. “We can’t even begin to talk about life on Earth until we know what that stage is. And oxygen conditions were vitally important because of how they affect the types of organic molecules that can be formed.”

Despite being the atmosphere in which life currently lives and thrives, our oxidized atmosphere is not understood to be a great starting point for life. Methane and its oxygen-poor counterparts have much more biologic potential to jump from inorganic compounds to life-supporting amino acids and DNA. As such, Watson thinks his group’s discovery may reinvigorate theories that those building blocks for life were not created on Earth, but delivered from elsewhere in the galaxy.

The results do not, however, run contrary to existing theories on life’s journey from anaerobic to aerobic organisms. The results quantify the nature of gas molecules containing carbon, hydrogen, and sulfur in the earliest atmosphere, but they shed no light on the much later rise of free oxygen in the air. There was still a significant amount of time for oxygen to build up in the atmosphere through biologic mechanisms, according to Trail.

Penn and Brown researchers demonstrate earthquake friction effect at the nanoscale

Earthquakes are some of the most daunting natural disasters that scientists try to analyze. Though Earth’s major fault lines are well known, there is little scientists can do to predict when an earthquake will occur or how strong it will be. And, though earthquakes involve millions of tons of rock, a team of University of Pennsylvania and Brown University researchers has helped discover an aspect of friction on the nanoscale that may lead to a better understanding of the disasters.

Robert Carpick, a professor who chairs the Department of Mechanical Engineering and Applied Mechanics in Penn’s School of Engineering and Applied Science, led the research in collaboration with Terry Tullis and David Goldsby, professors of geological science at Brown. The experimental and modeling work was conducted by first author Qunyang Li, a postdoctoral researcher in Carpick’s group, who has recently been appointed an associate professor in the School of Aerospace at Tsinghua University, China.

Their work will be published in the journal Nature.

The team’s research was spurred by an unusual phenomenon that has been observed in both natural and laboratory-simulated faults: materials become more resistant to sliding the longer they are in contact with one another. This trait is actually fundamental to why earthquakes happen at all. The longer materials are in contact, the stronger the resistance between them and the more violent and unstable the subsequent sliding is. Energy is stored over the time the materials are in contact and is then catastrophically released as an earthquake.

While geologists, physicists and mechanics researchers have studied this phenomenon for decades, the mechanism behind this increase of friction over time has only been hypothesized. There are two main theories as to why this “frictional aging” occurs.

“One hypothesis is that points of contact deform and grow over time – that contact quantity increases,” Carpick said. “The other is that bonding at the points of contact strengthens over time – that contact quality increases.”

The difficulty in proving that either theory holds true lies in the fact that points of contact are necessarily embedded at the juncture of two materials and are therefore hard to observe. One of the original breakthrough experiments on these theories projected light through transparent materials held together to measure the growth of apparent contact points. While this lent credence to the contact quantity theory, there was not yet a way to assess the bond strengths at those individual points of contact or to be sure that the observations were of single points of contact rather than clusters of even smaller nanoscale contacts.

It was not until Carpick and Tullis met at a conference designed to bring physicists and mechanics researchers together with geologists that they realized that the tools of the former group could resolve the latter group’s contact quality theory. The solution came from moving from the massive scale of earthquakes to the smallest scales imaginable.

“We want to simplify the case,” Li said. “So in our experiment we look at only one point of contact: the tip of an atomic force microscope.”

An atomic force microscope is an ideal tool for investigating bonding strength where two surfaces meet. Instead of using light, atomic force microscopes measure nanoscale details using an extremely sharp probe tip that is sensitive to the push and pull of individual atoms.

The researchers simulated rock-on-rock contact with silica, a major component in most geological materials. They pressed a silica tip against a silica surface for different lengths of time and then dragged it to measure the amount of friction it experienced. They repeated these experiments with surfaces made out of different materials: diamond and graphite. Critically, both diamond and graphite are chemically inert. As they don’t easily form chemical bonds with silica, any frictional aging that occurred with them would necessarily be due to changing contact area and not increased bond strength.

The results showed a stark difference in the frictional aging between the materials.

“We saw a huge amount of aging with silica on silica. But with silica on diamond or graphite, even though the tip is experiencing about the same stress levels, we see almost no aging,” Li said. “If the increasing contact area was responsible for the increase in frictional aging, you would see similar amounts in these cases. You might even see more aging with diamond because it is stiffer, leading to a slightly higher stress level in the silica, and this would cause more deformation on the tip.”

The frictional aging seen in the silica-on-silica experiment was so intense that the researchers had another mystery on their hands: how to reconcile strong aging on the nanoscale with the weaker level seen on the macroscale where earthquakes actually occur.

The solution to that puzzle stems from the fact that not all contact points are created equal. Two different contact points on the same surface that are close to one another will sense each other’s presence. This “elastic coupling,” as it is known, means that only a few of the contact points within an area will be resisting the sliding motion at their full capacity; some will have started to slide earlier, and others will slide later. It is too difficult to make them all slide at once.

So, the overall level of resistance relies not only on the maximum resistance any contact point can provide, but also on the small fraction of contact points able to provide this resistance.

“When you take a lot of contact points,” Carpick said, “all of them could have this large amount of aging. But when you try to shear them, you see only a small fraction reach that very high value of friction at any given time. So, you need a very large effect on the level of a single contact point to get even a very modest effect on the macroscopic scale.”
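Carpick's scale-bridging argument can be illustrated with a toy calculation (this is not the authors' model, and all the numbers are arbitrary): if elastic coupling means only a small fraction of contacts resist at their fully aged peak at any instant, a tenfold single-contact effect shrinks to a modest macroscopic one.

```python
# Toy illustration of the argument above: average resistance across many
# contacts when only a few are at their aged peak at any given moment.
peak_friction = 10.0     # fully aged single-contact resistance (arbitrary units)
sliding_friction = 1.0   # resistance of a contact that is already sliding
fraction_at_peak = 0.05  # elastically coupled contacts rarely peak together

macroscopic = fraction_at_peak * peak_friction \
    + (1 - fraction_at_peak) * sliding_friction
print(f"effective macroscopic friction: {macroscopic:.2f}")  # 1.45, far below 10
```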

While showing that nanoscale experiments can provide useful data for these kinds of applications was in itself an important finding for the research team, the ability to reconcile the laboratory data with geologists’ observations will have a lasting effect on the field.

“If we can understand the fundamental physics,” Tullis said, “then theories and equations based on that physics would have the capability of being extrapolated beyond the laboratory scale. Therefore we could use them with more confidence in all the earthquake modeling that’s already being done.”

“We’re not ruling out the quantity argument, we’re just ruling in the quality argument,” Carpick said. “Future research will go to higher stress levels, where maybe contact quantity could start to come into play. We’d also like to look at different temperatures, which matter in the geological context, and do experiments where we can actually watch the contact in real time, using an electron microscope.”

Earthquakes: Water as a lubricant

Geophysicists from Potsdam have established a mode of action that can explain the irregular distribution of strong earthquakes at the San Andreas Fault in California. As the science magazine Nature reports in its latest issue, the scientists examined the electrical conductivity of the rocks at great depths, which is closely related to the water content within the rocks. From the pattern of electrical conductivity and seismic activity they were able to deduce that rock water acts as a lubricant.

Los Angeles moves toward San Francisco at a pace of about six centimeters per year, because the Pacific plate carrying Los Angeles is moving northward, parallel to the North American plate, which hosts San Francisco. But this is only the average value. In some areas, movement along the fault is almost continuous, while other segments are locked until they shift abruptly several meters against each other, releasing energy in strong earthquakes. In the San Francisco earthquake of 1906, the plates shifted by six meters.
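The figures above are mutually consistent, as a quick division shows: at the long-term plate rate, a locked segment takes roughly a century to store the slip released in 1906.

```python
# Consistency check of the figures above: ~6 cm/year of relative plate
# motion versus the ~6 m of slip released in the 1906 earthquake.
slip_rate_m_per_year = 0.06
slip_released_m = 6.0

loading_time_years = slip_released_m / slip_rate_m_per_year
print(f"~{loading_time_years:.0f} years of accumulated motion")  # ~100 years
```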

The San Andreas Fault acts like a seam of the earth, reaching through the entire crust and into the mantle. Geophysicists from the GFZ German Research Centre for Geosciences have succeeded in imaging this interface to great depths and in establishing a connection between processes at depth and events at the surface. “When examining the image of the electrical conductivity, it becomes clear that rock water from depths of the upper mantle, i.e. between 20 to 40 km, can penetrate the shallow areas of the creeping section of the fault, while these fluids are detained in other areas beneath an impermeable layer,” says Dr. Oliver Ritter of the GFZ. “A sliding of the plates is supported where fluids can rise.”

These results suggest that significant differences exist in the mechanical and material properties along the fault at depth. The so-called tremor signals, for instance, appear to be linked to areas underneath the San Andreas Fault where fluids are trapped. Tremors are low-frequency vibrations that are not associated with the rupture processes typical of normal earthquakes. These observations support the idea that fluids play an important role in the onset of earthquakes.

$2 million grant could make early earthquake warning a reality in the Northwest

The GPS component of an advanced seismometer sits atop Radar Ridge outside Astoria, Ore. The installation is part of the Pacific Northwest Seismic Network. – Pacific Northwest Seismic Network

When a magnitude 9 earthquake devastated Japan in March some residents got a warning, ranging from a few seconds to a minute or more, that severe shaking was on the way.

Now, with a $2 million grant from the Gordon and Betty Moore Foundation to the University of Washington, a similar warning system could be operational in the Pacific Northwest in as little as three years.

One-quarter of the grant money will go to placing 24 sensors that combine strong-motion detection and GPS readings along the coast to record the first signals from a major earthquake on the Cascadia subduction zone, which is just off the Pacific Coast from northern California to southern British Columbia.

“The main point is to spot a big earthquake at the time it starts. The main motivation for these stations is Puget Sound,” said John Vidale, a UW professor of Earth and space sciences and director of the Pacific Northwest Seismic Network based at UW.

The cities of Portland, Ore., and Vancouver, B.C., also would benefit from the system, but they are not believed to be as vulnerable as Seattle and the surrounding area, which is closer to the subduction zone. In addition, much of Seattle is built on a softer basin more susceptible to shaking in a huge quake.

The system is designed to provide warning for very large coastal earthquakes. Smaller earthquakes might be more dangerous locally – if they happen, for example, in the immediate Puget Sound region – but it is more difficult and costlier to provide warning for them.

A warning that strong shaking is coming from a coastal quake could, for example, allow a doctor to halt a surgery. Trains could be stopped before they reach vulnerable bridges and sensitive equipment could be shut down before suffering significant damage.

Inexpensive and very simplified systems that send alarms when shaking is detected – such as those that close gates on the Alaskan Way Viaduct in Seattle – are currently in operation, but Vidale noted that they provide much less lead time and much less accurate warning.

The San Francisco-based Moore Foundation also is making $2 million grants to the University of California, Berkeley, and the California Institute of Technology to build on a prototype earthquake early warning system already in development in California. The three universities will collaborate with the U.S. Geological Survey on the project.

It is estimated that a comprehensive earthquake early warning system along the West Coast would cost $150 million over five years, about $70 million of that in the Northwest.

The work in the Northwest will build on work already being done in California, Vidale said, though the seismic characteristics of the two regions are different. California already makes warnings available to some emergency managers, a capability still several years away in the Northwest.

The new monitors will send data on strong shaking associated with an earthquake, which will help seismologists determine the size of the quake. But they also will provide GPS data, monitored at Central Washington University, to show how far the ground is moving. That can be a key piece of information in determining quickly whether an earthquake is occurring in the subduction zone, where it could grow to a magnitude 9 and trigger a Pacific-wide tsunami. In such a quake, the ground can move from several inches to several feet.

Some of the monitors could be placed along the Washington coast, though more likely they will be deployed along the northern California-Oregon coast, Vidale said. There already are some monitors along the Washington coast and there is much less data available farther south.

In the easiest scenario, Vidale said, the system could detect a magnitude 7 or 7.5 earthquake within the first 30 seconds. A quake of that intensity could grow to a magnitude 9 as the rupture spreads along the fault line.

Vidale said geologic evidence indicates that, historically, perhaps half of the Cascadia subduction zone earthquakes that achieved a magnitude of 7 or 7.5 grew to the range of magnitude 9. Scientists have shown that the last major quake on the subduction zone, in January 1700, was likely a magnitude 9 that set off a tsunami across the Pacific and caused land along the Washington coast to drop substantially.

Detecting a magnitude 7 or 7.5 quake at the southern end of the fault, off northern California or southern Oregon, could provide as much as five minutes warning to the Seattle area, he said. A rupture of that magnitude off the Washington coast might provide only 30 seconds of warning to the Seattle area, but Portland and Vancouver would still receive warning.
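The lead times Vidale describes follow from simple travel-time arithmetic: damaging S waves take minutes to cross hundreds of kilometers, and whatever is left after detection and alerting latency is the warning. The wave speed and latency below are illustrative assumptions, not the network's actual parameters.

```python
# Rough sketch of where early-warning lead time comes from.
S_WAVE_SPEED_KM_S = 3.5    # assumed typical crustal S-wave speed
DETECTION_LATENCY_S = 30   # time to recognize the quake, per the article

def lead_time_seconds(epicentral_distance_km: float) -> float:
    """Warning time remaining once the alert goes out."""
    travel = epicentral_distance_km / S_WAVE_SPEED_KM_S
    return max(0.0, travel - DETECTION_LATENCY_S)

# A rupture starting far down the coast vs. one just offshore:
print(round(lead_time_seconds(1000)))  # distant start: minutes of warning
print(round(lead_time_seconds(200)))   # nearby start: tens of seconds
```

This matches the pattern in the article: a rupture beginning off northern California buys Seattle minutes, while one starting off the Washington coast leaves far less time.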

In the early stages of the system’s operation, Vidale said, data will be shared only with a few companies on a test basis because there will not be enough confidence in the information.

“We have to learn what we’re doing before we tell the public about it,” he said. “I think at the end of three years we could have enough confidence to share the information with the public. But we have to have confidence and we have to have a delivery system.”

Implementing delivery will be up to emergency managers in three states and one Canadian province, he noted, and so will require a great deal of coordination and cooperation. The Moore Foundation grant is for three years, so additional funding would be needed after that.

Rise of atmospheric oxygen more complicated than previously thought

These are rock drill cores removed from the drill hole at the FAR DEEP site in northwestern Russia. – FAR DEEP

The appearance of oxygen in the Earth’s atmosphere probably did not occur as a single event, but as a long series of starts and stops, according to an international team of researchers who investigated rock cores from the FAR DEEP project.

The Fennoscandia Arctic Russia – Drilling Early Earth Project — FAR DEEP — took place during the summer of 2007 near Murmansk in the northwest region of Russia. The project, part of the International Continental Scientific Drilling Program, drilled a series of shallow, two-inch diameter cores and, by overlapping them, created a record of rock deposited during the Proterozoic Eon — 2,500 million to 542 million years ago.

“We’ve always thought that oxygen came into the atmosphere really quickly during an event,” said Lee R. Kump, professor and head of geosciences, Penn State. “We are no longer looking for an event. Now we are looking for when and why oxygen became a stable part of the Earth’s atmosphere.”

The researchers report in today’s (Dec. 1) issue of Science Express that evaluation of these cores, and comparison with cores from Gabon previously analyzed by others, supports the conclusion that the Great Oxidation Event played out over hundreds of millions of years. Oxygen levels gradually crossed the low atmospheric threshold for oxidation of pyrite (an iron sulfide mineral) by 2,500 million years ago, and the threshold marking the loss of any mass-independently fractionated sulfur by 2,400 million years ago. Then oxygen levels rose at an ever-increasing rate through the Paleoproterozoic, achieving about 1 percent of the present atmospheric level.

“The definition of when an oxygen atmosphere occurred depends on which threshold you are looking for,” said Kump. “It could be when pyrite becomes oxidized, when sulfur MIF disappears or when deep crustal oxidation occurs.”

When the mass-independent fractionated sulfur disappeared, the air on Earth was still not breathable by animal standards. When red rocks containing iron oxides appeared 2,300 million years ago, the air was still unbreathable.

“At about 1 percent oxygen, the groundwater became strongly oxidized, making it possible for groundwater seeping through rocks to oxidize organic materials,” said Kump.

Initially, any oxygen in the atmosphere, produced by the photosynthesis of single-celled organisms, was used up when sulfur, iron and other elements oxidized. When sufficient oxygen accumulated in the atmosphere, it permeated the groundwater and began oxidizing buried organic material, oxidizing carbon to create carbon dioxide.

The cores from the FAR-DEEP project were compared with the Francevillian samples from Gabon using the ratio of carbon isotopes 13 and 12 to see if the evidence for high rates of oxygen accumulation existed worldwide. Both the FAR-DEEP project’s cores and the Francevillian cores show large deposits of carbon in the form of fossilized petroleum. Both sets of cores also show similar changes in carbon 13 through time, indicating that the changes in carbon isotopes occurred worldwide and oxygen levels throughout the atmosphere were high.
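Comparisons like the one above are conventionally expressed as delta-13-C: the per-mil deviation of a sample's carbon-13/carbon-12 ratio from a reference standard. The sketch below uses the standard formula with the approximate VPDB reference ratio; the sample ratio is made up for illustration.

```python
# delta-13-C: per-mil deviation of a sample's 13C/12C ratio from the
# VPDB reference standard (ratio value is approximate).
VPDB_13C_12C = 0.011237

def delta13C_permil(sample_ratio: float) -> float:
    return (sample_ratio / VPDB_13C_12C - 1.0) * 1000.0

# A hypothetical 13C-enriched sample, of the kind seen during large
# positive carbon isotope excursions:
print(round(delta13C_permil(0.011350), 1))  # ~ +10 per mil
```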

“Although others have documented huge carbon isotope variations at later times in Earth history associated with stepwise increases in atmospheric oxygen, our results are less equivocal because we have many lines of data all pointing to the same thing,” said Kump. “These indications include not only carbon-13 isotope profiles in organic matter from two widely separated locations, but also supporting profiles in limestones and no indication that processes occurring since that time have altered the signal.”

Lava fingerprinting reveals differences between Hawaii’s twin volcanoes

Hawaii’s main volcano chains — the Loa and Kea trends — have distinct sources of magma and unique plumbing systems connecting them to the Earth’s deep mantle, according to UBC research conducted with scientists at the universities of Hawaii and Massachusetts and published this week in Nature Geoscience.

This study is the first to conclusively relate geochemical differences in surface lava rocks from both chains to differences in their deep mantle sources, 2,800 kilometres below the Earth’s surface, at the core-mantle boundary.

“We now know that by studying oceanic island lavas we can approach the composition of the Earth’s mantle, which represents 80 per cent of the Earth’s volume and is obviously not directly accessible,” says Dominique Weis, Canada Research Chair in the Geochemistry of the Earth’s Mantle and Director of UBC’s Pacific Centre for Isotopic and Geochemical Research.

“It also implies that mantle plumes indeed bring material from the deep mantle to the surface and are a crucial means of heat and material transport to the surface.”

The results of this study also suggest that a recent dramatic increase in Hawaiian volcanism, as expressed by the existence of the Hawaiian islands and the giant Mauna Loa and Mauna Kea volcanoes (which are higher than Mount Everest when measured from their underwater base), is related to a shift in the composition and structure of the source region of the Hawaiian mantle plume. Thus, this work shows, for the first time, that the chemistry of hotspot lavas is a novel and elegant probe of deep Earth evolution.

Weis and UBC colleagues Mark Jellinek and James Scoates made the connection by fingerprinting samples of Hawaiian island lavas — generated over the course of five million years — by isotopic analyses. The research included collecting 120 new samples from Mauna Loa — “the largest volcano on Earth” emphasizes co-author and University of Massachusetts professor Michael Rhodes.

“Hawaiian volcanoes are the best studied in the world and yet we are continuing to make fundamental discoveries about how they work,” according to co-author and University of Hawaii volcanologist Michael Garcia.

The next steps for the researchers will be to study the entire length of the Hawaiian chain (which provides lava samples ranging in age from five to 42 million years old) as well as other key oceanic islands to assess if the two trends can be traced further back in time and to strengthen the relationship between lavas and the composition of the deep mantle.

Earth’s past gives clues to future changes

Scientists are a step closer to predicting when and where earthquakes will occur after taking a fresh look at the formation of the Andes, which began 45 million years ago.

Published today in Nature, research led by Dr Fabio Capitanio of Monash University’s School of Geosciences describes a new approach to plate tectonics. It is the first model to go beyond illustrating how plates move, and explain why.

Dr Capitanio said that although the theory had been applied only to one plate boundary so far, it had broader application.

Understanding the forces driving tectonic plates will allow researchers to predict shifts and their consequences, including the formation of mountain ranges, opening and closing of oceans, and earthquakes.

Dr Capitanio said existing theories of plate tectonics had failed to explain several features of the development of the Andes, motivating him to take a different approach.

“We knew that the Andes resulted from the subduction of one plate under another; however, a lot was unexplained. For example, the subduction began 125 million years ago, but the mountains only began to form 45 million years ago. This lag was not understood,” Dr Capitanio said.

“The model we developed explains the timing of the Andes formation and unique features such as the curvature of the mountain chain.”

Dr Capitanio said the traditional approach to plate tectonics, working backwards from data, resulted in models with strong descriptive but no predictive power.

“Existing models allow you to describe the movement of the plates as it is happening, but you can’t say when they will stop, or whether they will speed up, and so on.

“I developed a three-dimensional, physical model – I used physics to predict the behaviour of tectonic plates. Then, I applied data tracing the Andes back 60 million years. It matched.”
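The contrast Dr Capitanio draws between descriptive and predictive models can be illustrated with a classic back-of-envelope forward calculation: the terminal (Stokes) sinking velocity of a dense slab through viscous mantle. The sketch below is a toy scaling, not the model published in Nature; the density contrast, blob size, and viscosity are generic textbook values, chosen only to show that plate-scale speeds of a few centimetres per year fall out of the physics alone.

```python
# Back-of-envelope forward model: Stokes terminal velocity of a dense
# slab "blob" sinking through viscous mantle. Illustrative values only,
# not those of the published model.
DRHO = 70.0   # slab-mantle density contrast, kg/m^3 (typical estimate)
G = 9.8       # gravitational acceleration, m/s^2
R = 1.0e5     # effective blob radius, m (100 km)
ETA = 1.0e21  # mantle dynamic viscosity, Pa*s (standard order of magnitude)

# Stokes law for a sphere: v = (2/9) * drho * g * r^2 / eta
v = (2.0 / 9.0) * DRHO * G * R**2 / ETA   # m/s
v_cm_per_yr = v * 100 * 3.156e7           # convert m/s -> cm/yr

print(f"{v_cm_per_yr:.1f} cm/yr")         # a few cm/yr, like real plates
```

That such a simple force balance already lands in the observed range of plate speeds captures the spirit of the predictive approach: the physics sets the velocity, and the data then tests it.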

Collaborators on the project were Dr Claudio Faccenna of Universita Roma Tre, Dr Sergio Zlotnik of UPC-Barcelona Tech, and Dr David R Stegman of University of California San Diego. The researchers will continue to develop the model by applying it to other subduction zones.

Ancient environment found to drive marine biodiversity

Much of our knowledge about past life has come from the fossil record – but how accurately does that reflect the true history and drivers of biodiversity on Earth?

“It’s a question that goes back a long way to the time of Darwin, who looked at the fossil record and tried to understand what it tells us about the history of life,” says Shanan Peters, an assistant professor of geoscience at the University of Wisconsin-Madison.

In fact, the fossil record can tell us a great deal, he says in a new study. In a report published Friday, Nov. 25 in Science magazine, he and colleague Bjarte Hannisdal, of the University of Bergen in Norway, show that the evolution of marine life over the past 500 million years has been robustly and independently driven by both ocean chemistry and sea level changes.

The time period studied covered most of the Phanerozoic eon, which extends to the present and includes the evolution of most plant and animal life.

Hannisdal and Peters analyzed fossil data from the Paleobiology Database, along with paleoenvironmental proxy records and data on the rock record that link to ancient global climates, tectonic movement, continental flooding, and changes in biogeochemistry, particularly with respect to oxygen, carbon, and sulfur cycles. They used a method called information transfer that allowed them to identify causal relationships – not just general associations – between diversity and environmental proxy records.
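Information transfer (commonly implemented as transfer entropy) asks whether the past of one series improves prediction of another beyond what that series’ own past provides, which is how it separates directional influence from mere correlation. A minimal sketch of the idea, using hypothetical stand-in series rather than the study’s actual diversity and proxy records:

```python
# Toy transfer-entropy estimate, illustrating the information-transfer
# idea in general terms; this is not the authors' code or data.
import numpy as np
from collections import Counter

def transfer_entropy(source, target, bins=2):
    """Estimate TE(source -> target) in bits for 1-D series."""
    # Discretize each series into equal-width bins.
    s = np.digitize(source, np.histogram_bin_edges(source, bins)[1:-1])
    t = np.digitize(target, np.histogram_bin_edges(target, bins)[1:-1])
    # Joint counts over (target future, target past, source past).
    triples = Counter(zip(t[1:], t[:-1], s[:-1]))
    pairs = Counter(zip(t[1:], t[:-1]))        # (future, past)
    past_joint = Counter(zip(t[:-1], s[:-1]))  # (target past, source past)
    past = Counter(t[:-1])
    n = len(t) - 1
    te = 0.0
    for (tf, tp, sp), c in triples.items():
        p_joint = c / n                          # p(tf, tp, sp)
        p_cond_full = c / past_joint[(tp, sp)]   # p(tf | tp, sp)
        p_cond_self = pairs[(tf, tp)] / past[tp] # p(tf | tp)
        te += p_joint * np.log2(p_cond_full / p_cond_self)
    return te

# Hypothetical example: y drives x with a one-step lag, so information
# flows y -> x and the estimate is strongly asymmetric.
rng = np.random.default_rng(0)
y = rng.normal(size=2000)
x = np.roll(y, 1) + 0.1 * rng.normal(size=2000)
print(transfer_entropy(y, x), transfer_entropy(x, y))
```

The asymmetry between TE(y→x) and TE(x→y) is what signals a directional link; Hannisdal and Peters applied this style of analysis to diversity and geochemical proxy series through geologic time.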

“We find an interesting web of connections between these different systems that combine to drive what we see in the fossil record,” Peters says. “Genus diversity carries a very direct and strong signal of the sulfur isotopic signal. Similarly, the signal from sea level, how much the continents are covered by shallow seas, independently propagates into the history of marine animal diversity.”

The dramatic changes in biodiversity seen in the fossil record at many different timescales – including both proliferations and mass extinctions as marine animals diversified, evolved, and moved onto land – likely arose through biological responses to changes in the global carbon and sulfur cycles and sea level through geologic time.

The strength of the interactions also shows that the fossil record, despite its incompleteness and the influence of sampling, is a good representation of marine biodiversity over the past half-billion years.

“These results show that the number of species in the oceans through time has been influenced by the amount and availability of carbon, oxygen and sulfur, and by sea level,” says Lisa Boush, program director in the National Science Foundation’s Division of Earth Sciences, which funded the research. “The study allows us to better understand how modern changes in the environment might affect biodiversity today and in the future.”

Peters says the findings also emphasize the interconnectedness of physical, chemical, and biological processes on Earth.

“Earth systems are all connected. It’s important to realize that because when we perturb one thing, we’re not just affecting that one thing. There are consequences throughout the whole Earth system,” he says. “The challenge is understanding how perturbation of one thing – for example, the carbon cycle – will eventually affect the future biodiversity of the planet.”