Counting on bigger mining returns

Mining companies need to make long-term strategic plans about how and when to produce raw materials and metals from the ground, despite uncertainty about the mineral deposits to be found there.

Thanks to computational advances over the last decade, McGill Professor Roussos Dimitrakopoulos, Canada Research Chair in Mining Engineering, has developed new mathematical modeling techniques for mine planning and production forecasting that take into account uncertainty in the supply of minerals. The result of this research has been not only a much higher return on investment for the mining companies but also more metal production from the same asset.

Now, thanks to a Collaborative Research and Development Grant of $2.7 million spread over five years from the Natural Sciences and Engineering Research Council, and co-funded by six major global mining companies, Dimitrakopoulos will build upon earlier research to produce global mine-optimization models that factor in uncertainty in all aspects of mine management in order to determine the best production schedules. The companies involved are AngloGold Ashanti, Barrick Gold, BHP Billiton, De Beers, Newmont and Vale.

The models will be able to take into account multiple mines and material types; multiple ore/waste processing streams; and both stockpiles and products, while at the same time taking into account uncertainty in demand and hence in the commodity prices for minerals.
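The article does not spell out the mathematics, but the core idea, evaluating a production schedule against many possible futures rather than a single forecast, can be illustrated with a toy calculation. The Python sketch below compares two hypothetical schedules by probability-weighted net present value across commodity-price scenarios; every number, schedule and scenario in it is invented for illustration and none of it comes from Dimitrakopoulos’s models.

```python
# Toy illustration of schedule selection under price uncertainty.
# Everything here (scenarios, schedules, costs) is invented; it is not
# the McGill group's model, only the general idea of weighting outcomes
# by scenario probabilities instead of relying on a single forecast.

scenarios = [
    {"prob": 0.3, "price": 1200.0},  # low metal price ($/oz, hypothetical)
    {"prob": 0.5, "price": 1500.0},  # base case
    {"prob": 0.2, "price": 1900.0},  # high price
]

# Two candidate schedules: ounces produced per year and yearly cost in $M.
schedules = {
    "mine high-grade first": {"ounces": [60_000, 50_000, 40_000],
                              "cost": [55.0, 50.0, 45.0]},
    "steady blended feed":   {"ounces": [50_000, 50_000, 50_000],
                              "cost": [48.0, 48.0, 48.0]},
}

DISCOUNT = 0.10  # assumed discount rate per year

def expected_npv(plan):
    """Probability-weighted net present value ($M) across price scenarios."""
    total = 0.0
    for s in scenarios:
        npv = 0.0
        for year, (oz, cost) in enumerate(zip(plan["ounces"], plan["cost"]), start=1):
            cash = oz * s["price"] / 1e6 - cost      # net cash flow in $M
            npv += cash / (1 + DISCOUNT) ** year     # discount back to today
        total += s["prob"] * npv
    return total

for name, plan in schedules.items():
    print(f"{name}: expected NPV = ${expected_npv(plan):.1f}M")
```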

The new uncertainty models of mine management will promote more sustainable development and use of mineral resources, while managing and reducing risks and maximizing the return on investment.

The unique and long-standing partnership of Prof. Dimitrakopoulos and his laboratory with the six mining companies, which together represent about 75 per cent of all mining activity on the globe, underscores the importance of the research undertaken not only for Canada, but also for the global community as a whole. Indeed, one of the conditions of the NSERC award is that all partners have agreed that the developments will be placed in the public domain without restrictions within a year, so that the outcomes of the grant research will become accessible to both academics and practitioners around the world.

Japan earthquake appears to increase quake risk elsewhere in the country

Japan’s recent magnitude 9.0 earthquake, which triggered a devastating tsunami, relieved stress along part of the quake fault but also contributed to the buildup of stress in other areas, putting parts of the country at risk of years of sizeable aftershocks and perhaps new main shocks, scientists say.

After studying data from Japan’s extensive seismic network, researchers from the Woods Hole Oceanographic Institution (WHOI), Kyoto University and the U.S. Geological Survey (USGS) have identified several areas at risk from the quake, Japan’s largest ever, which already has triggered a large number of aftershocks.

Data from the magnitude 9.0 Tohoku earthquake on March 11 has brought scientists a small but perceptible step closer to a better assessment of future seismic risk in specific regions, said Shinji Toda of Kyoto University, a lead author of the study. “Even though we cannot forecast precisely, we can explain the mechanisms involved in such quakes to the public,” he said. Still, he added, the findings do bring scientists “a little bit closer” to being able to forecast aftershocks.

“Research over the past two decades has shown that earthquakes interact in ways never before imagined,” Toda, Jian Lin of WHOI and Ross S. Stein of USGS write in a summary of their paper in press for publication in the Tohoku Earthquake Special Issue of the journal Earth, Planets and Space. “A major shock does relieve stress – and thus the likelihood of a second major tremor – but only in some areas. The probability of a succeeding earthquake adjacent to the section of the fault that ruptured or on a nearby but different fault can jump” significantly.

The Tohoku earthquake, centered off northern Honshu Island, provided an “unprecedented” opportunity to utilize Japan’s “superb monitoring networks” to gather data on the quake, the scientists said. The Tohoku quake, the fourth largest earthquake ever recorded, was “the best-recorded [large quake] the world has ever known.”

This made the quake a “special” one in terms of scientific investigation, Lin said. “We felt we might be able to find something we didn’t see before” in previous quakes, he said.

The magnitude 9 quake appears to have influenced large portions of Honshu Island, Toda said. At particular risk, he said, are the Tokyo area, Mount Fuji and central Honshu including Nagano.

The Kanto fragment, which is close to Tokyo, also experienced an increase in stress. Previous government estimates have put Tokyo at a 70 percent risk for a magnitude 7 earthquake over the next 30 years. The new data from the Tohoku quake increase those odds to “more than 70 percent,” Toda said. “That is really high.”
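As an aside on the probability arithmetic: a figure such as a 70 percent chance over 30 years is often related to an annual event rate through a simple memoryless (Poisson) model. The sketch below illustrates that conversion only; it is not the method behind the Japanese government estimate, and the 20 percent rate increase in it is purely hypothetical.

```python
import math

# How a "70 percent chance in 30 years" figure relates to an annual rate,
# assuming a simple memoryless (Poisson) model of event occurrence. This is
# only an illustration of the arithmetic, not the method behind the Japanese
# government estimate; the 20 percent rate increase below is hypothetical.

p_30yr = 0.70   # stated probability of a magnitude-7 event within 30 years
T = 30.0        # window length in years

rate = -math.log(1.0 - p_30yr) / T   # implied events per year
print(f"implied annual rate: {rate:.4f} per year")

# If stress changes raised that rate by 20 percent (a made-up figure),
# the 30-year probability would climb accordingly:
p_new = 1.0 - math.exp(-1.2 * rate * T)
print(f"30-year probability at 1.2x the rate: {p_new:.2f}")
```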

Using a model known as Coulomb stress triggering, Lin and his colleagues found measurable increases in stress along faults to the north at Sanriku-Hokobu, south at Off Boso and at the Outer Trench Slope normal faults east of the quake’s epicenter off the Japan coast near the city of Sendai.
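Coulomb stress triggering rests on a simple bookkeeping rule: a nearby fault is pushed toward failure when the mainshock increases the shear stress in its slip direction or unclamps it. A minimal sketch of that rule follows; the stress values and the effective friction coefficient are assumed for illustration, and computing real values would require an elastic dislocation model of the mainshock slip, which is not attempted here.

```python
# Minimal sketch of the Coulomb failure stress change used in stress-triggering
# studies: dCFS = d_tau + mu_eff * d_sigma_n, where d_tau is the change in shear
# stress resolved in the receiver fault's slip direction and d_sigma_n is the
# change in normal stress (positive = unclamping). The stress values below are
# invented; obtaining real ones requires an elastic dislocation model of the
# mainshock slip, which this sketch does not attempt.

MU_EFF = 0.4  # effective friction coefficient, a commonly assumed value

def coulomb_stress_change(d_tau_mpa, d_sigma_n_mpa, mu_eff=MU_EFF):
    """Change in Coulomb failure stress (MPa) on a receiver fault."""
    return d_tau_mpa + mu_eff * d_sigma_n_mpa

# Hypothetical receiver faults; stress changes in MPa (0.1 MPa = 1 bar).
receivers = {
    "fault brought closer to failure": (0.08, 0.05),    # more shear, unclamped
    "fault in a stress shadow":        (-0.30, -0.10),  # relaxed by the mainshock
}

for name, (d_tau, d_sn) in receivers.items():
    print(f"{name}: dCFS = {coulomb_stress_change(d_tau, d_sn):+.3f} MPa")
```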

“Based on our other studies, these stress increases are large enough to increase the likelihood of triggering significant aftershocks or subsequent mainshocks,” the researchers said.

Stein of the USGS emphasized the ongoing risk to parts of Japan. “There remains a lot of real estate in Japan–on shore and off–that could host large, late aftershocks of the Tohoku quake,” he said.

“In addition to the megathrust surface to the north or south of the March 11 rupture, we calculate that several fault systems closer to Tokyo have been brought closer to failure, and some of these have lit up in small earthquakes since March 11. So, in our judgment, Central Japan, and Tokyo in particular, is headed for a long vigil that will not end anytime soon.”

Lin added that aftershocks, as well as new mainshocks, could continue for “weeks, months, years.”

Toda explained that the magnitude of future quakes depends on the length of the fault involved: the longer the fault, the larger the quake it can produce.
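In standard seismological terms, magnitude is computed from the seismic moment, the product of rock rigidity, rupture area and average slip, which is why a longer fault can host a larger quake. The sketch below shows that scaling with invented rupture dimensions; it is not Toda’s calculation.

```python
import math

# Why fault length limits earthquake size: moment magnitude Mw comes from the
# seismic moment M0 = rigidity x rupture area x average slip (standard
# definitions), so a longer rupture, other things being equal, yields a larger
# magnitude. The rupture dimensions and slips below are illustrative only.

RIGIDITY = 3.0e10  # Pa, a typical crustal value (assumed)

def moment_magnitude(length_km, width_km, slip_m, mu=RIGIDITY):
    """Moment magnitude from rupture length, down-dip width and average slip."""
    area_m2 = (length_km * 1e3) * (width_km * 1e3)
    m0 = mu * area_m2 * slip_m                    # seismic moment, N*m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# Hypothetical ruptures of increasing length (slip scaled roughly with length).
for length, width, slip in [(30, 15, 1.0), (100, 30, 3.0), (400, 150, 15.0)]:
    mw = moment_magnitude(length, width, slip)
    print(f"L={length:>3} km, W={width:>3} km, slip={slip:>4.1f} m -> Mw ~ {mw:.1f}")
```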

In a separate paper submitted to Geophysical Research Letters, the researchers “report on a broad and unprecedented increase in seismicity rate for microearthquakes over a broad (360 by 120 mile) area across inland Japan, parts of the Japan Sea and the Izu islands, following the 9.0 Tohoku mainshock.”

“The crust on the land was turned on – far away from a fault,” Lin said. Most of these are relatively small quakes – magnitude 2 to 4 – “but a lot of them,” Lin said. “This is surprising; we’ve never seen this before,” he said. “Such small events … may have happened following major quakes in other places but may have been missed due to poor seismic networks.”

“The 9.0 Tohoku quake caught many people including scientists by surprise,” Lin said. “It had been thought that a large quake in this area would go up to about 8.2, not 9.0.” That estimate was significantly influenced by historical data. “The Tohoku quake reminded us that considering only the historical earthquakes is inadequate, even in a country of relatively long written records like Japan and China,” he said.

“Historical records, and especially the instrumental records, are indeed too short to provide a full picture of the potential of large earthquakes in a region. Thus we must encourage many more studies to find geological evidence (for example, through analyzing sediment cores extracted on land and undersea) that might provide clues of large earthquake and tsunami events that occurred hundreds to thousands of years ago.”

“We must recognize that because our knowledge is incomplete, our estimation of seismic hazard is likely to be underestimated in many cases. Thus we must prepare for potential hazard that might be worse than we already know,” Lin said.

The finding that a quake such as this one can increase stresses elsewhere “means that new quakes could occur in the region,” Lin said. “We must factor in this new information on stresses into earthquake preparedness.”

Experts quantify melting glaciers’ effect on ocean currents

A team of scientists from the University of Sheffield and Bangor University has used a computer climate model to study how freshwater entering the oceans at the end of the penultimate Ice Age, 140,000 years ago, affected the parts of the ocean circulation that control climate.

A paper based on the research, co-authored by Professor Grant Bigg, Head of the University of Sheffield’s Department of Geography, his PhD student Clare Green, and Dr Mattias Green, a Senior Research Fellow at Bangor University’s School of Ocean Sciences, is currently featured as an Editor’s Highlight in the top US journal Paleoceanography. The study is the first of its kind for this time period.

The research found that freshwater entering the ocean from melting ice sheets can weaken the climate controlling part of the large-scale ocean circulation, with dramatic climate change as a consequence. During the period of the study, the experts noted that the global temperature dropped by up to two degrees over a few centuries, but changes were not uniform over the planet, and it took a long time for the climate to recover after the ice sheets had melted completely.

The team argues that it is not only the volume of freshwater being released from the melting ice sheet which is important but also the form it takes: icebergs act to reduce the ocean circulation less than meltwater, but the effects of icebergs last for longer periods of time. The effect is similar to the difference between adding very cold water to a drink and adding an ice cube or two.

The study also shows that at the end of the more recent Ice Age 20,000 years ago, the ocean circulation was more sensitive to ice sheet collapses than during the earlier period.

Professor Grant Bigg, Head of the University of Sheffield’s Department of Geography, said: “An important component of the work is that it shows that the impact of freshwater releases from past, or future, ice masses depends critically on the form – whether fresh water or icebergs – and the location of the release.

“The Arctic has been surrounded by extensive glaciations several times in the past and this study has shown that large-scale changes in such Arctic ice sheets could affect the climate in places far from the release site. Our work also suggests that the Pacific Ocean may have been more sensitive to major changes in past glaciations than previously realised. We plan to investigate this possibility more in the future.”

Dr Mattias Green from Bangor University added: “With meltwater – similar to adding water to your drink – the water spreads out quickly and has an immediate effect, but it is also absorbed quickly into the rest of the ocean. In a similar way to your ice cube, the icebergs drift along and melt more slowly. This means the immediate impact is weaker, but they are there for a longer time and distribute the water over a larger area.

“Our results lead us to conclude that a future ice sheet collapse, that might happen in Antarctica or Greenland, would have climatic consequences, but the exact impact needs to be evaluated in each case.”

Team debunks theory on end of ‘Snowball Earth’ ice age

Crystals of highly carbon-13-depleted carbonate are observed using a light microscope. – Thomas Bristow

There’s a theory about how the Marinoan ice age – also known as the “Snowball Earth” ice age because of its extremely low temperatures – came to an abrupt end some 600 million years ago. It has to do with large amounts of methane, a strong greenhouse gas, bubbling up through ocean sediments and from beneath the permafrost and heating the atmosphere.

The main physical evidence behind this theory has been samples of cap dolostone from south China, which were known to have a lot less of the carbon-13 isotope than is normally found in these types of carbonate rocks. (Dolostone is a type of sedimentary rock composed of the carbonate mineral dolomite; it’s called cap dolostone when it overlies a glacial deposit.) The idea was that these rocks formed when Earth-warming methane bubbled up from below and was oxidized – “eaten” – by microbes, with its carbon wastes being incorporated into the dolostone, thereby leaving a signal of what had happened to end the ice age. The idea made sense, because methane also tends to be low in carbon-13; if carbon-13-depleted methane had been made into rock, that rock would indeed also be low in carbon-13. But the idea was controversial, too, since there had been no previous isotopic evidence in carbonate rock of methane-munching microbes that early in Earth’s history.

And, as a team of scientists led by researchers from the California Institute of Technology (Caltech) report in this week’s issue of the journal Nature, it was also wrong – at least as far as the geologic evidence they looked at goes. Their testing shows that the rocks on which much of that ice-age-ending theory was based were formed millions of years after the ice age ended, and were formed at temperatures so high there could have been no living creatures associated with them.

“Our findings show that what happened in these rocks happened at very high temperatures, and abiologically,” says John Eiler, the Robert P. Sharp Professor of Geology and professor of geochemistry at Caltech, and one of the paper’s authors. “There is no evidence here that microbes ate methane as food. The story you see in this rock is not a story about ice ages.”

To tell the rocks’ story, the team used a technique Eiler developed at Caltech that looks at the way in which rare isotopes (like the carbon-13 in the dolostone) group, or “clump,” together in crystalline structures like bone or rock. This clumping, it turns out, is highly dependent upon the temperature of the immediate environment in which the crystals form. Hot temperatures mean less clumping; low temperatures mean more.
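A rough sense of how such a thermometer works can be had from a toy version of the relation: the clumping excess falls off roughly with the inverse square of absolute temperature. In the sketch below the calibration constants are placeholders of plausible magnitude, not published values, and the code is not the Caltech group’s data reduction.

```python
import math

# Toy version of a clumped-isotope thermometer: the clumping excess in a
# carbonate (commonly reported as Delta_47, in per mil) falls roughly with the
# inverse square of absolute temperature, Delta_47 = a / T**2 + b. The
# constants a and b below are placeholders of plausible magnitude, not a
# published calibration, and this is not the Caltech group's data reduction.

A = 5.9e4   # per mil * K^2 (placeholder)
B = -0.02   # per mil (placeholder)

def delta47_from_temperature(t_celsius):
    """Predicted clumping excess (per mil) at a given crystallization temperature."""
    t_kelvin = t_celsius + 273.15
    return A / t_kelvin**2 + B

def temperature_from_delta47(delta47):
    """Invert the relation: crystallization temperature (deg C) from Delta_47."""
    return math.sqrt(A / (delta47 - B)) - 273.15

for t in (25, 100, 250):   # cool surface water vs. hot hydrothermal fluid
    d47 = delta47_from_temperature(t)
    print(f"{t:>3} C -> Delta_47 ~ {d47:.3f} per mil "
          f"(inverts back to {temperature_from_delta47(d47):.0f} C)")
```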

“The rocks that we analyzed for this study have been worked on before,” says Thomas Bristow, the paper’s first author and a former postdoc at Caltech who is now at NASA Ames Research Center, “but the unique advance available and developed at Caltech is the technique of using carbonate clumped-isotopic thermometry to study the temperature of crystallization of the samples. It was primarily this technique that brought new insights regarding the geological history of the rocks.”

What the team’s thermometer made very clear, says Eiler, is that “the carbon source was not oxidized and turned into carbonate at Earth’s surface. This was happening in a very hot hydrothermal environment, underground.”

In addition, he says, “We know it happened at least millions of years after the ice age ended, and probably tens of millions. Which means that whatever the source of carbon was, it wasn’t related to the end of the ice age.”

Since this rock had been the only carbon-isotopic evidence of a Precambrian methane seep, these findings bring up a number of questions – questions not just about how the Marinoan ice age ended, but about Earth’s budget of methane and the biogeochemistry of the ocean.

“The next stage of the research is to delve deeper into the question of why carbon-13-depleted carbonate rocks that formed at methane seeps seem to only be found during the later 400 million years of Earth history,” says John Grotzinger, the Fletcher Jones Professor of Geology at Caltech and the principal investigator on the work described. “It is an interesting fact of the geologic record that, despite a well-preserved record of carbonates beginning 3.5 billion years ago, the first 3 billion years of Earth history does not record evidence of methane oxidation. This is a curious absence. We think it might be linked to changes in ocean chemistry through time, but more work needs to be done to explore that.”

Unusual earthquake gave Japan tsunami extra punch, say Stanford scientists

This diagram shows the March 11 fault motion sequence. 1. Rupture of the fault plane begins at the epicenter. 2. Rupture travels westward, down the fault plane towards Honshu. The island suffers violent shaking for 40 seconds. 3. The upward sloping east side of the fault plane begins to rupture, continuing for 30 to 35 seconds. The sediments overlying the east side expand up the fault plane in response to the force of the rupture. 4. The water above the sediments is pushed into an unstable dome that then flows out in all directions as a tsunami. – Anna Cobb, Stanford News Service

The magnitude 9 earthquake and resulting tsunami that struck Japan on March 11 were like a one-two punch – first violently shaking, then swamping the islands – causing tens of thousands of deaths and hundreds of billions of dollars in damage. Now Stanford researchers have discovered the catastrophe was caused by a sequence of unusual geologic events never before seen so clearly.

“It was not appreciated before this earthquake that this size of earthquake was possible on this plate boundary,” said Stanford geophysicist Greg Beroza. “It was thought that typical earthquakes were much smaller.”

The earthquake occurred in a subduction zone, where one great tectonic plate is being forced down under another tectonic plate and into the Earth’s interior along an active fault.

The fault on which the Tohoku-Oki earthquake took place slopes down from the ocean floor toward the west. It first ruptured mainly westward from its epicenter – 32 kilometers (about 20 miles) below the seafloor – toward Japan, shaking the island of Honshu violently for 40 seconds.

Surprisingly, the fault then ruptured eastward from the epicenter, up toward the ocean floor along the sloping fault plane for about 30 or 35 seconds.

As the rupture neared the seafloor, the movement of the fault grew rapidly, violently deforming the seafloor sediments sitting on top of the fault plane, punching the overlying water upward and triggering the tsunami.

“When the rupture approached the seafloor, it exploded into tremendously large slip,” said Beroza. “It displaced the seafloor dramatically.

“This amplification of slip near the surface was predicted in computer simulations of earthquake rupture, but this is the first time we have clearly seen it occur in a real earthquake.

“The depth of the water column there is also greater than elsewhere,” Beroza said. “That, together with the slip being greatest where the fault meets the ocean floor, led to the tsunami being outlandishly big.”

Beroza is one of the authors of a paper detailing the research, published online last week in Science Express.

“Now that this slip amplification has been observed in the Tohoku-Oki earthquake, what we need to figure out is whether similar earthquakes – and large tsunamis – could happen in other subduction zones around the world,” he said.

Beroza said the sort of “two-faced” rupture seen in the Tohoku-Oki earthquake has not been seen in other subduction zones, but that could be a function of the limited amount of data available for analyzing other earthquakes.

There is a denser network of seismometers in Japan than any other place in the world, he said. The sensors provided researchers with much more detailed data than is normally available after an earthquake, enabling them to discern the different phases of the March 11 temblor with much greater resolution than usual.

Prior to the Tohoku-Oki earthquake, Beroza and Shuo Ma, who is now an assistant professor at San Diego State University, had been working on computer simulations of what might happen during an earthquake in just such a setting. Their simulations had generated similar “overshoot” of sediments overlying the upper part of the fault plane.

Following the Japanese earthquake, aftershocks as large as magnitude 6.5 slipped in the opposite direction to the main shock. This is a symptom of what is called “extreme dynamic overshoot” of the upper fault plane, Beroza said, with the overextended sediments on top of the fault plane slipping during the aftershocks back in the direction they came from.

“We didn’t really expect this to happen because we believe there is friction acting on the fault” that would prevent any rebound, he said. “Our interpretation is that it slipped so much that it sort of overdid it. And in adjusting during the aftershock sequence, it went back a bit.

“We don’t see these bizarre aftershocks on parts of the fault where the slip is less,” he said.

The damage from the March 11 earthquake was so extensive in part simply because the earthquake was so large. But the way it ruptured on the fault plane, in two stages, made the devastation greater than it might have been otherwise, Beroza said.

The deeper part of the fault plane, which sloped downward to the west, was bounded by dense, hard rock on each side. The rock transmitted the seismic waves very efficiently, maximizing the amount of shaking felt on the island of Honshu.

The shallower part of the fault surface, which slopes upward to the east and surfaces at the Japan Trench – where the overlying plate is warped downward by the motion of the descending plate – had massive slip. Unfortunately, this slip was ideally situated to efficiently generate the gigantic tsunami, with devastating consequences.

Earth: Waves of disaster: Lessons from Japan and New Zealand

On Feb. 22, a magnitude-6.1 earthquake struck Christchurch, New Zealand, killing nearly 200 people and causing $12 billion in damage. About three weeks later, a massive magnitude-9.0 earthquake struck northern Honshu, Japan. The quake and tsunami killed about 30,000 people and caused an estimated $310 billion in damage. Both events are stark reminders of human vulnerability to natural disasters and provide a harsh reality check: Even technologically advanced countries with modern building codes are not immune from earthquake disasters.

Both events also offer lessons to be learned, as EARTH explores in the June features “Don’t Forget About the Christchurch Earthquake” and “Japan’s Megaquake and Killer Tsunamis.” What could have been done to prevent or mitigate the damage in both countries? And what can similar locations around the world learn? Furthermore, how did the March temblor and tsunami off the coast of Japan complicate the picture of foreshocks and aftershocks?

Discover what these events are teaching scientists about earthquakes, and read other stories on topics such as what scientists are doing to try to get ahead of the mysterious disease that’s killing bats in droves, what legacy can still be found in the sands of the D-Day beaches, and how the Japanese disaster may change the face of nuclear energy worldwide, all in the June issue. Plus, don’t miss the story about the new biofuel made from grass.

2 Greenland glaciers lose enough ice to fill Lake Erie

A new study aimed at refining the way scientists measure ice loss in Greenland is providing a “high-definition picture” of climate-caused changes on the island.

And the picture isn’t pretty.

In the last decade, two of the three largest glaciers draining that frozen landscape have lost enough ice that, if melted, it could have filled Lake Erie.

The three glaciers – Helheim, Kangerdlugssuaq and Jakobshavn Isbrae – are responsible for as much as one-fifth of the ice flowing out from Greenland into the ocean.

“Jakobshavn alone drains somewhere between 15 and 20 percent of all the ice flowing outward from inland to the sea,” explained Ian Howat, an assistant professor of earth sciences at Ohio State University. His study appears in the current issue of the journal Geophysical Research Letters.

As the second largest holder of ice on the planet, and the site of hundreds of glaciers, Greenland is a natural laboratory for studying how climate change has affected these ice fields.

Researchers focus on the “mass balance” of glaciers, the rate of new ice being formed as snow falls versus the flow of ice out into the sea.

The new study suggests that, in the last decade, Jakobshavn Isbrae has lost enough ice to equal 11 years’ worth of normal snow accumulation, approximately 300 gigatons (300 billion tons) of ice.
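The headline comparison can be checked with simple arithmetic: a gigaton of meltwater occupies roughly a cubic kilometre, and Lake Erie holds on the order of 480 cubic kilometres (an approximate figure, not taken from the study). The sketch below runs that arithmetic; the Kangerdlugssuaq tonnage in it is hypothetical, since the article does not give one.

```python
# Back-of-the-envelope check on the ice-loss comparison. Assumptions: one
# gigaton of meltwater occupies roughly one cubic kilometre, and Lake Erie
# holds on the order of 480 km^3 (an approximate figure, not from the study).
# The Kangerdlugssuaq tonnage is not given in the article, so a hypothetical
# value is used purely to illustrate the arithmetic.

KM3_PER_GIGATON = 1.0        # 1 Gt of water is about 1 km^3
LAKE_ERIE_KM3 = 480.0        # approximate volume of Lake Erie

jakobshavn_gt = 300.0        # stated in the article
kangerdlugssuaq_gt = 150.0   # hypothetical, for illustration only

total_km3 = (jakobshavn_gt + kangerdlugssuaq_gt) * KM3_PER_GIGATON
print(f"combined meltwater volume: ~{total_km3:.0f} km^3")
print(f"fraction of Lake Erie: ~{total_km3 / LAKE_ERIE_KM3:.0%}")
```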

“Kangerdlugssuaq would have to stop flowing and accumulate snowfall for seven years to regain the ice it has lost,” said Howat, also a member of the Byrd Polar Research Center at Ohio State.

Surprisingly, the researchers found that the third glacier, Helheim, had actually gained a small amount of mass over the same period. It gained approximately one-fifteenth of what Jakobshavn had lost, Howat said.

The real value of the research, however, is the confirmation that the new techniques Howat and his colleagues developed will provide scientists a more accurate idea of exactly how much ice is being lost.

“These glaciers change pretty quickly. They speed up and then slow down. There’s a pulsing in the flow of ice,” Howat said. “There’s variability, a seasonal cycle and lots of different changes in the rate that ice is flowing through these glaciers.”

Past estimates, he said, have been merely snapshots of what was going on at these glaciers in terms of mass loss. “We really need to sample them very frequently or else we won’t really know how much change has occurred.

“This new research pumps up the resolution and gives us a kind of high-definition picture of ice loss,” he said.

To get this longer-timeframe image, Howat and colleagues drew on data sets provided by at least seven orbiting satellites and airplanes, as well as other sources.

“To get a good picture of what’s going on, we need different tools and each one of these satellites plays an important role and adds more information,” Howat said.

The researchers’ next step is to look at the next-largest glaciers in Greenland and work their way down through smaller and smaller ice flows.

“Currently, the missing piece is ice thickness data for all of the glaciers, but a NASA aircraft is up there getting it. When that’s available, we’ll be able to apply this technique to the entire Greenland ice sheet and get a monthly total mass balance for the last 10 years or so,” he said.

Researchers release first large observational study of 9.0 Tohoku-Oki earthquake

The image represents an overhead model of the estimated fault slip due to the 9.0 Tohoku-Oki earthquake. The fault responsible for this earthquake dips under Japan, starting at the Japan Trench (indicated by the barbed line), which is the point of contact between the subducting Pacific Plate and the overriding Okhotsk Plate. The magnitude of fault slip is indicated both by the color and the contours, which are at 8-meter intervals. The question mark indicates the general region where researchers currently lack information about future seismic potential. – Mark Simons/Caltech Seismological Laboratory

When the magnitude 9.0 Tohoku-Oki earthquake and resulting tsunami struck off the northeast coast of Japan on March 11, they caused widespread destruction and death. Using observations from a dense regional geodetic network (allowing measurements of earth movement to be gathered from GPS satellite data), globally distributed broadband seismographic networks, and open-ocean tsunami data, researchers have begun to construct numerous models that describe how the earth moved that day.

Now, a study led by researchers at the California Institute of Technology (Caltech), published online in the May 19 issue of Science Express, explains the first large set of observational data from this rare megathrust event.

“This event is the best recorded great earthquake ever,” says Mark Simons, professor of geophysics at Caltech’s Seismological Laboratory and lead author of the study. For scientists working to improve infrastructure and prevent loss of life through better application of seismological data, observations from the event will help inform future research priorities.

Simons says one of the most interesting findings of the data analysis was the spatial compactness of the event. The megathrust earthquake occurred at a subduction zone where the Pacific Plate dips below Japan. The length of fault that experienced significant slip during the Tohoku-Oki earthquake was about 250 kilometers, about half of what would be conventionally expected for an event of this magnitude.

Furthermore, the area where the fault slipped the most – 30 meters or more – lay within a 50- to 100-kilometer-long segment. “This is not something we have documented before,” says Simons. “I’m sure it has happened in the past, but technology has advanced only in the past 10 to 15 years to the point where we can measure these slips much more accurately through GPS and other data.”
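The compactness matters because, for a magnitude 9.0 event, the seismic moment is essentially fixed, so squeezing the rupture into a smaller area forces the average slip up. The rough consistency check below uses an assumed rigidity and down-dip width, not values reported by the Caltech team.

```python
# Rough consistency check (not from the paper): for a magnitude 9.0 event the
# seismic moment is essentially fixed, so confining the rupture to a compact
# area forces the average slip to be large. Rigidity and down-dip width are
# assumed round numbers, not values reported by the Caltech team.

MW = 9.0
RIGIDITY = 4.0e10                      # Pa, assumed
LENGTH_KM, WIDTH_KM = 250.0, 200.0     # length from the article; width assumed

m0 = 10 ** (1.5 * MW + 9.1)            # seismic moment, N*m
area_m2 = LENGTH_KM * 1e3 * WIDTH_KM * 1e3
avg_slip_m = m0 / (RIGIDITY * area_m2)

print(f"seismic moment: {m0:.2e} N*m")
print(f"implied average slip over {LENGTH_KM:.0f} x {WIDTH_KM:.0f} km: ~{avg_slip_m:.0f} m")
```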

For Jean Paul Ampuero, assistant professor of seismology at Caltech’s Seismological Laboratory who studies earthquake dynamics, the most significant finding was that high- and low-frequency seismic waves can come from different areas of a fault. “The high-frequency seismic waves in the Tohoku earthquake were generated much closer to the coast, away from the area of the slip where we saw low-frequency waves,” he says.

Simons says two factors control this behavior. One is that the largest stresses (which generate the highest-frequency waves) were found at the edges of the slip, not near its center, where the fault began to break. He compares the finding to what happens when you rip a piece of paper in half. “The highest amounts of stress aren’t found where the paper has just ripped, but rather right where the paper has not yet been torn,” he explains. “We had previously thought high-frequency energy was an indicator of fault slippage, but it didn’t correlate in our models of this event.” Equally important is how the fault reacts to these stress concentrations; it appears that only the deeper segments of the fault respond to these stresses by producing high-frequency energy.

Ampuero says the implications of these observations of the mechanical properties of tectonic faults need to be further explored and integrated in physical models of earthquakes, which will help scientists better quantify earthquake hazards.

“We learn from each significant earthquake, especially if the earthquake is large and recorded by many sensors,” says Ampuero. “The Tohoku earthquake was recorded by upwards of 10 times more sensors at near-fault distances than any other earthquake. This will provide a sharper and more robust view of earthquake rupture processes and their effects.”

For seismologist Hiroo Kanamori, Caltech’s Smits Professor of Geophysics, Emeritus, who was in Japan at the time of the earthquake and has been studying the region for many years, the most significant finding was that a large slip occurred near the Japan Trench. While smaller earthquakes have happened in the area, it was believed that the relatively soft material of the seafloor would not support a large amount of stress. “The amount of strain associated with this large displacement is nearly five to 10 times larger than we normally see in large megathrust earthquakes,” he notes. “It has been generally thought that rocks near the Japan Trench could not accommodate such a large elastic strain.”

The researchers are still unsure why such a large strain was able to accumulate in this area. One possibility is that either the subducting seafloor or the upper plate (or both) have some unusual structures – such as regions that were formerly underwater mountain ranges on the Pacific Plate – that have now been consumed by the subduction zone and cause the plates to get stuck and build up stress.

“Because of this local strengthening – whatever its cause – the Pacific Plate and the Okhotsk Plate had been pinned together for a long time, probably 500 to 1000 years, and finally failed in this magnitude 9.0 event,” says Kanamori. “Hopefully, detailed geophysical studies of seafloor structures will eventually clarify the mechanism of local strengthening in this area.”

Simons says researchers knew very little about the area where the earthquake occurred because of limited historical data.

“Instead of saying a large earthquake probably wouldn’t happen there, we should have said that we didn’t know,” he says. Similarly, he says the area just south of where the fault slipped is in a similar position; researchers don’t yet know what it might do in the future.

“It is important to note that we are not predicting an earthquake here,” emphasizes Simons. “However, we do not have data on the area, and therefore should focus attention there, given its proximity to Tokyo.”

He says that the relatively new Japanese seafloor observation systems will prove very useful in scientists’ attempts to learn more about the area.

“Our study is only the first foray into what is an enormous quantity of available data,” says Simons. “There will be a lot more information coming out of this event, all of which will help us learn more in order to help inform infrastructure and safety procedures.”

Scientists find odd twist in slow ‘earthquakes’: Tremor running backwards

Earthquake scientists trying to unravel the mysteries of an unfelt, weeks-long seismic phenomenon called episodic tremor and slip have discovered a strange twist. The tremor can suddenly reverse direction and travel back through areas of the fault that it had ruptured in preceding days, and do so 20 to 40 times faster than the original fault rupture.

“Regular tremor and slip goes through an area fairly slowly, breaking it. Then once it’s broken and weakened an area of the fault, it can propagate back across that area much faster,” said Heidi Houston, a University of Washington professor of Earth and space sciences and lead author of a paper documenting the findings, published in Nature Geoscience.

Episodic tremor and slip, also referred to as slow slip, was documented in the Pacific Northwest a decade ago and individual events have been observed in Washington and British Columbia on a regular basis, every 12 to 15 months on average.

Slow-slip events tend to start in the southern Puget Sound region, from the Tacoma area to as far north as Bremerton, and move gradually to the northwest on the Olympic Peninsula, following the interface between the North American and Juan de Fuca tectonic plates toward Vancouver Island in Canada. The events typically last three to four weeks and release as much energy as a magnitude 6.8 earthquake, though they are not felt and cause no damage.
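The energy comparison can be made concrete with the standard energy–magnitude relation, log10(E) = 1.5M + 4.8 with E in joules: the same energy spread over weeks instead of seconds implies a vastly lower release rate, which is why the events go unfelt. The durations in the sketch below are assumed round numbers, not measurements from the study.

```python
# Energy comparison using the standard Gutenberg-Richter energy-magnitude
# relation, log10(E) = 1.5*M + 4.8 with E in joules. The durations below are
# assumed round numbers, not measurements from the UW study.

M = 6.8
energy_j = 10 ** (1.5 * M + 4.8)

slow_slip_days = 25.0        # roughly three to four weeks
fast_quake_seconds = 30.0    # assumed duration of an ordinary M6.8 rupture

slow_power_w = energy_j / (slow_slip_days * 86400.0)
fast_power_w = energy_j / fast_quake_seconds

print(f"energy of a M{M} event: ~{energy_j:.1e} J")
print(f"released over {slow_slip_days:.0f} days: ~{slow_power_w:.1e} W")
print(f"released in {fast_quake_seconds:.0f} seconds: ~{fast_power_w:.1e} W")
```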

In a normal earthquake a rupture travels along the fault at great speed, producing potentially damaging ground shaking. In episodic tremor and slip, the rupture moves much more slowly along the fault but it maintains a steady pace, Houston said.

“There’s not a good understanding yet of why it’s so slow, what keeps it from picking up speed and becoming a full earthquake,” she said.

Houston and her co-authors – Brent Delbridge, a UW physics undergraduate; Aaron Wech, a former UW graduate student now at Victoria University of Wellington, New Zealand; and Kenneth Creager, a UW Earth and space sciences professor – analyzed data collected from tremor events in July 2004, September 2005, January 2007, May 2008 and May 2009 (the 2004 and 2005 events took place only toward the north end of the Olympic Peninsula). The five events provided about 110 days’ worth of data representing some 16,000 distinct locations.

The scientists found a distinct signal for clusters of tremor moving rapidly backwards from the leading edge of the tremor, through an area of the fault that had already experienced tremor.

They also noted that rapid tremor reversal appears to happen more readily near the Strait of Juan de Fuca, suggesting that stress from tides could play a role in generating the reversal because the interface appears to be more sensitive just after having been ruptured by the initial tremor event.

Houston noted that episodic tremor and slip occurs at a depth of 22 to 34 miles, where high temperatures have made the tectonic plates more pliable and thus more slippery. At a substantially shallower depth, perhaps 12 miles, the plates are not slippery and so are tightly locked together.

In the locked zone, the tectonic plates can hold the buildup of stress for hundreds of years, rather than just 15 months, but when the interface ruptures it can unleash a great megathrust earthquake such as the one that struck off the coast of Japan in March. Such earthquakes occur in the Cascadia subduction zone every 500 years, on average, and the last one – estimated at around magnitude 9.0 – happened in January 1700. Houston noted that the region is within the large time window when another megathrust earthquake could occur.

One key question still to be answered, she said, is what is happening on the plate interface between the locked zone and the depth where tremor occurs. Scientists hope to get a better understanding of the interplay between tremor events and subduction zone earthquakes, including whether the interval between tremor events changes as the end of the 500-year subduction zone earthquake cycle gets nearer.

“Various aspects of the tremor signal may change as the seismic cycle matures,” Houston said. “It’s also possible that the noise our seismometers detect from tremor events might get louder just before a big earthquake.”

Young graphite in old rocks challenges the earliest signs of life

Boston College assistant professor of Earth and environmental sciences Dominic Papineau and colleagues report in the journal Nature Geoscience that carbon laced within ancient rock formations may be millions of years younger than the rock itself, raising questions about the evidence of the earliest signs of life. – Lee Pellegrini, Boston College

Carbon found within ancient rocks has played a crucial role in developing a timeline for the emergence of biological life on the planet billions of years ago. But applying cutting-edge technology to samples of ancient rocks from northern Canada has revealed that the carbon-based minerals may be much younger than the rock they inhabit, a team of researchers report in the latest edition of the journal Nature Geoscience.

The team – which includes researchers from Boston College, the Carnegie Institution of Washington, NASA’s Johnson Space Center and the Naval Research Laboratory – says new evidence from Canada’s Hudson Bay region shows carbonaceous particles are millions of years younger than the rock in which they’re found, pointing to the likelihood that the carbon was mixed in with the metamorphic rock later than the rock’s earliest formation – estimated to be 3.8 to 4.2 billion years ago.

The samples come from the Nuvvuagittuq Supracrustal Belt, a sedimentary banded iron formation located in the Archean Superior craton, one of the earth’s ancient continental shields. Samples were subjected to a range of high-tech tests in an effort to more clearly characterize the carbon in the rock.

Traditional techniques have involved collecting samples, crushing them into powder and determining the bulk characteristics of the carbon minerals. The new approach relies upon a variety of microscopy and spectroscopy methods to characterize intact micro-fabricated cross-sections of crystalline graphite removed from the rock samples. The results showed that the carbon is far younger than the rocks, which are among the oldest samples ever unearthed.

“The characteristics of the poorly crystalline graphite within the samples are not consistent with the metamorphic history of the rock,” said Boston College Assistant Professor of Earth and Environmental Sciences Dominic Papineau, a co-author of the report. “The carbon in the graphite is not as old as the rock. That can only ring a bell and require us to ask if we need to reconsider earlier studies.”

Samples from Greenland nearly 4,000 million years old have been used to develop the dominant timeline for the emergence of the earliest biosphere. The recent findings suggest the biosphere may have emerged millions of years later, a hypothesis that now demands rigorous study, said Papineau.

“It could be that researchers in the field need to go back to Greenland to restudy these rocks and determine if the carbonaceous materials are in fact as old as the metamorphosed rock itself,” Papineau said.

As the planet evolved, rock and other matter were subjected to a range of temperatures that left telltale signatures scientists can now study. The team’s examination found that the rock samples were subjected to high-grade metamorphism. Yet the crystalline structure of the graphite present in the samples was not, leading the scientists to conclude that the graphite infiltrated the rock at a later stage, though the exact timing is not yet clear.

The presence of carbon and the specific characteristics of its source material are crucial to understanding the evolution of the early microbial biosphere. Already the subject of much debate within scientific circles, the use of carbon to date milestones in the earth’s evolution may now require a new set of assumptions.

“We can no longer assume that carbon is indigenous in the oldest metamorphosed sedimentary rock,” said Papineau. “In very old rocks, the fundamental questions are now whether the carbon is biological in origin and if it is indigenous to the rocks.”