Satellite research reveals smaller volcanoes could cool climate

A 2011 eruption of Eritrea’s Nabro volcano caused the largest stratospheric aerosol load ever recorded by OSIRIS on Sweden’s Odin satellite in its more than 10 years of flight. – Image: Swedish Space Corporation

A University of Saskatchewan-led international research team has discovered that aerosols from relatively small volcanic eruptions can be boosted into the high atmosphere by weather systems such as monsoons, where they can affect global temperatures. The research appears in the July 6 issue of the journal Science.

Adam Bourassa, from the U of S Institute of Space and Atmospheric Studies, led the research. He explains that until now it was thought that a massively energetic eruption was needed to inject aerosols past the troposphere, the turbulent atmospheric layer closest to the earth, into the stable layers of the stratosphere higher up.

“If an aerosol is in the lower atmosphere, it’s affected by the weather and it precipitates back down right away,” Bourassa says. “Once it reaches the stratosphere, it can persist for years, and with that kind of a sustained lifetime, it can really have a lasting effect.” That effect is the scattering of incoming sunlight and the potential to cool the Earth’s surface.

For example, the massive eruption of Mount Pinatubo in the Philippines in 1991 temporarily dropped temperatures by half a degree Celsius world-wide.
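
To give a rough sense of scale, the cooling can be sketched from two widely used approximations: stratospheric aerosol optical depth (AOD) maps to a radiative forcing of roughly -25 W/m² per unit AOD, and the transient surface response is a fraction of a degree per W/m². The sketch below uses illustrative literature-style values, not figures from this study.

```python
# Back-of-envelope estimate of volcanic aerosol cooling (a sketch, not the
# researchers' model). Scalings are rough, commonly cited approximations:
#   forcing  ~ -25 W/m^2 per unit aerosol optical depth (AOD)
#   response ~ 0.15 K per W/m^2 of transient forcing

def volcanic_cooling(aod, forcing_per_aod=-25.0, transient_sensitivity=0.15):
    """Estimate peak global-mean surface cooling (K) from stratospheric AOD."""
    forcing = forcing_per_aod * aod          # W/m^2, negative = cooling
    return transient_sensitivity * forcing   # K

# A Pinatubo-scale global-mean AOD of ~0.12 yields about -0.45 K, in line
# with the half-degree drop mentioned above.
print(f"{volcanic_cooling(0.12):+.2f} K")
```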

The research team includes scientists from the U of S, Rutgers University in New Jersey, the National Centre for Atmospheric Research in Colorado, and the University of Wyoming. They looked at the June 2011 eruption of the Nabro volcano in Eritrea in northeast Africa. Wind carried the volcanic gas and aerosol – minute droplets of sulfuric acid – into the path of the annual Asian summer monsoon.

The stratosphere’s calm layers are high – from about 10 km altitude at the poles to 17 km at the equator – and it was thought that storms could not pierce them. For example, the distinctive flattened “anvil” shape at the top of large thunderstorms is created as the storm pushes against the stratosphere.

Dust from the Nabro volcano, being slightly heavier, settled out, but the monsoon lofted volcanic gas and the lighter liquid droplets into the stratosphere where they were detected by the Canadian Space Agency’s OSIRIS instrument aboard the Swedish satellite Odin. The Nabro volcano caused the largest stratospheric aerosol load ever recorded by OSIRIS in its more than 10 years of flight.

OSIRIS, designed in part at the U of S, is used to study the upper atmosphere, particularly the ozone layer and atmospheric aerosols. Originally intended for a two-year mission, the instrument has been functioning flawlessly since its launch in 2001. It circles the earth from pole to pole once every hour and a half, downloading fresh data to the analysis center at the U of S campus.

“There are only a few instruments that can measure stratospheric aerosols, and OSIRIS is one of them,” Bourassa says. “It’s become extremely important for climate studies, because we’ve captured more than a full decade of data. The longer it’s up, the more valuable it becomes.”

The hope is these latest findings will provide another piece of the puzzle to allow more accurate models of climate behavior and change.

Toward a better understanding of earthquakes

Rebecca Harrington, Geophysical Institute, and Peter Duffner, Black Forest Observatory Schiltach, install a seismic station near Cholame, Calif. – Werner Scherer, KIT

The earth is shaken daily by strong earthquakes recorded by a number of seismic stations worldwide. Tectonic tremor, however, is a new type of seismic signal that seismologists started studying only within the last few years. Tremor is less hazardous than earthquakes and occurs at greater depth. The link between tremor and earthquakes may provide clues about the more destructive earthquakes that occur at shallower depths. Geophysicists of Karlsruhe Institute of Technology (KIT) collected seismic data of tectonic tremor in California. These data are now being evaluated in order to better understand this new seismic phenomenon.

About a decade ago, researchers discovered a previously unknown seismic signal, now referred to as tectonic tremor. Unlike earthquakes, tectonic tremor causes relatively weak ground shaking. While tremor may last longer than earthquakes, it does not pose any direct danger. “Both earthquakes and tremor have the same cause. They result from the relative movement on fault surfaces, a result of the motion of the tectonic plates,” explains seismologist Dr. Rebecca Harrington, who heads a research group at KIT. “While earthquakes at our research site in California typically occur at depths of up to 15 km below the surface, tectonic tremor signals are generated at depths ranging from approximately 15 to 35 km.”

Tectonic tremor was first detected a decade ago in subduction zones in Japan and in the Pacific Northwest in North America. Since then, seismologists have discovered that tremor occurs in many other places, including the San Andreas fault in California. The San Andreas fault marks the boundary where the Pacific Plate and the North American Plate drift past each other, generating many earthquakes in the process. KIT researchers have collected new seismic data that record tremor closer to where it occurs than the seismic stations currently installed near Cholame. In mid-2010, KIT researchers, together with scientists of the University of California, Riverside, and the US Geological Survey, Pasadena, installed 13 seismic stations near Cholame, located approximately halfway between San Francisco and Los Angeles. Each seismic station was equipped with a broadband seismometer in a thermally insulated hole in the ground, a small computer, and a solar panel for power. Broadband seismometers are extremely sensitive to small ground motions and are therefore ideal for detecting tremor and small earthquakes. The data recorded over a period of 14 months are presently being analyzed at KIT.

Tectonic tremor signals have a unique character that differs from earthquakes, making them more difficult to detect using automated techniques. In order to address the detection problem, the KIT researchers first developed a new algorithm for the automatic isolation of tectonic tremor. Using their new technique, they found over 2600 tremor events that are now being studied in detail. “In addition to detecting tremor, we will determine the size, or magnitude, of the individual events. In order to do so, each of the tremor events must be precisely located,” says Rebecca Harrington. Additionally, the KIT geophysicists compare the tremor and earthquake recordings in California with earthquake recordings at Mount St. Helens volcano in the Cascadia subduction zone, north of California in the US state of Washington. A volcanic eruption from 2004 to 2008 produced a series of earthquakes on newly formed faults there, where scientists of the US Geological Survey collect data that are also made available to Rebecca Harrington.
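
The article does not spell out the KIT algorithm, but a common generic ingredient of automated tremor detection is flagging long windows of sustained, emergent energy in a low-frequency band, since tremor lacks the impulsive onsets of earthquakes. The sketch below illustrates only that generic idea; the 2-8 Hz band, window length, and threshold are assumptions, not the group's parameters.

```python
# A minimal envelope-based tremor detector (illustrative, NOT the KIT method).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def tremor_windows(trace, fs, win_s=300.0, threshold=3.0):
    """Return indices of `win_s`-second windows whose median envelope
    amplitude exceeds `threshold` times the trace-wide median -- the
    long-duration, emergent energy typical of tectonic tremor."""
    b, a = butter(4, [2.0 / (fs / 2), 8.0 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, trace)          # isolate the 2-8 Hz band
    envelope = np.abs(hilbert(filtered))      # instantaneous amplitude
    n = int(win_s * fs)
    floor = np.median(envelope)               # background amplitude level
    medians = [np.median(envelope[i:i + n])
               for i in range(0, len(envelope) - n + 1, n)]
    return [i for i, m in enumerate(medians) if m > threshold * floor]
```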

Seismology is still a long way from being able to predict earthquakes. However, seismologists can better estimate the danger posed by earthquakes by understanding what happens on a fault during a seismic event. According to Rebecca Harrington, research on tectonic tremor may play an important role in understanding fault behavior. “We understand very little about what happens on a fault when it ruptures. The tectonic tremor generated on the deep part of a fault may provide clues about the behavior on the more shallow parts of a fault where more damaging earthquakes occur.”

Geographer charts the next-generation digital Earth

A photograph of Michael F. Goodchild. – Kimberly Kavish

The world has gotten smaller and more accessible since applications like Google Earth became mainstream, says UC Santa Barbara Professor of Geography Michael Goodchild. However, there is still a long way to go, and there are important steps to take to get there. His perspective, shared with many co-authors around the world, has been published in the Proceedings of the National Academy of Sciences in a paper titled, “Next-generation Digital Earth.”

Based on former vice-president Al Gore’s 1992 vision of a digital replica of Earth, the paper examines the world’s progress to date, and its prospects for the future.

“The point of this paper is to say, ‘Well, how far did we get?’” said first author Goodchild, who specializes in geographic information systems (GIS). The answer? Even with Google Earth — the most popular publicly available program for spinning the digital globe — not far enough.

Gore outlined his vision in his 1992 book, “Earth in the Balance,” and in a 1998 speech that Goodchild helped to produce for the opening of the California Science Center. From there, the development of the first iteration of a Digital Earth was rapid, as technology expanded to allow users to view the Earth in a way that had not been possible before. The results fascinated many, who took to maps made by Google and other digital globe-making services — NASA’s WorldWind and Microsoft’s Bing Maps, for instance — to visualize their worlds. Global visualizations and modeling have been responsible for a variety of beneficial efforts, such as the tracking of major weather events and political uprisings, and finding lost people.

But the wider the technology spread, the more obvious certain issues became. For instance, different sources of data provided for these applications resulted in different maps, and different boundaries for the same regions.

“There’s no such thing as a true map,” said Goodchild, pointing out that Google Maps serves three different versions of the boundaries of the Himalayas in response to requests from the United States, China, and India. Differences in how the applications measure distance are magnified with each new location mapped. These are issues that could make information from digital globes unreliable, even contentious.

Goodchild sees the next generation of Digital Earth moving away from the top-down experience and giving way to the bottom-up perspective.

“I’m more keen on the next generation going local instead of global,” he said. Things that happen to be important to those who live in the area should be part of the area’s maps, according to Goodchild, though they may not be the standard political or topographic fare of the traditional globe. Temporal information — traffic is an example already in use — also proves to be useful and more relevant to users.

“There’s more of a social perspective now, and less emphasis on permanent objects,” he said.

However, to take the next steps effectively, the next generation of Digital Earth has to back away from the “exaggerated precision” of the current generation, allowing for uncertainty, and also for the various contexts and environments that a Digital Earth is able to access. Relationships and linkages between objects need to be developed and refined, and a way of archiving the sheer amounts of data must be developed, says the paper.

Additionally, according to the paper, collaboration between multiple infrastructures and open-source partnerships will be necessary for the next generation Digital Earth, as well as a code of ethics that will allow the technology to strike a balance between universal access and universal protection.

“Privacy is less important to the younger generation,” said Goodchild, pointing to things like Facebook and similar social media engines, “but we need the ability to opt-out or be invisible. It’s getting increasingly difficult.”

Despite the move away from ultra-high precision in mapping, however, there continues to be an overarching need for the next generation Digital Earth to be scientifically accurate, and it’s the scientific community’s job to ensure that accuracy, he said.

“It’s the problem we have when major corporations produce scientific software,” Goodchild said, citing Google Earth’s inclination to satisfy 90 percent of its users. Scientists are part of the remaining 10 percent, he said.

“We ought to insist that scientific standards should be followed,” said Goodchild. “Because if we don’t, they won’t.”

Curvy mountain belts

Mountain belts on Earth are most commonly formed by collision of one or more tectonic plates. The process of collision, uplift, and subsequent erosion of long mountain belts often produces profound global effects, including changes in regional and global climates, as well as the formation of important economic resources, including oil and gas reservoirs and ore deposits. Understanding the formation of mountain belts is thus a very important element of earth science research.

One common but poorly understood aspect of mountain belts is the abundance of curved (arcuate) ranges. The Appalachian range in Pennsylvania, the Rocky Mountains in central Montana, the Blue Mountains in Oregon, the Bolivian Andes of South America, and the Cantabrian Arc in Spain and northern Africa are among many examples of noticeably curved mountain belts.

The cause of these curvy mountains is among the oldest topics of research in geology, and there is still extensive debate on what mechanisms are most important for making a curvy mountain range.

A common question is whether these presently curvy mountain ranges were originally straight and then later bent or whether they were uplifted in more or less their present shape.

Another important aspect of the origin of these curved mountain ranges is the thickness of the rock units involved in their formation. Some workers have proposed that these ranges are composed of relatively thin slices of crustal rocks (limited to several kilometers in thickness), while others have argued that at least some of these curvy ranges involve the entire thickness of the lithospheric plates (30 to 100 km thick). One of the most promising ways to answer these questions utilizes comparisons of the orientation of structural features in rocks (fault planes and joints), records of the ancient magnetic field directions found in rocks, and the timing of deformation and uplift of the mountain belts.

An international group of researchers from Spain, Canada, and the United States, led by Dr. Gabriel Gutiérrez-Alonso, has presented a compelling study of one of the best examples of curved mountain ranges: the Cantabrian Arc in Spain and northern Africa. The group has compiled an extensive collection of fault and joint orientation data and directions of the ancient geomagnetic field recorded by Paleozoic rocks collected in Spain.

The Cantabrian Arc was formed during the collision of a southern set of continents (Gondwanaland [present day Africa-South America-Australia-India-Antarctica]) with a northern set of continents (Laurentia [present day North America and Eurasia]) to produce the supercontinent Pangea. In a nutshell, their combined study has found that the curved pattern of the Cantabrian Arc was produced by the bending of an originally straight mountain range.

The main line of evidence supporting this view is the patterns of rotation that are obtained from the directions of the ancient geomagnetic field recorded by the rocks of these mountain ranges. Combined with an analysis of the faults and joints in the rocks, and the ages of rocks that have variations in the amount of rotation indicated by the magnetic directions, the age of the bending of the Cantabrian Arc is confined to a relatively narrow window of geological time between 315 and 300 million years ago.
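
For readers unfamiliar with the paleomagnetic technique, a vertical-axis rotation is simply the difference between the declination a rock records and the declination expected for a stable reference of the same age; opposite limbs of a bent belt rotate in opposite senses. Below is a minimal sketch with placeholder declinations, not the study's data.

```python
# Vertical-axis rotation from paleomagnetic declinations (illustrative values).
def vertical_axis_rotation(observed_dec, reference_dec):
    """Rotation in degrees, positive = clockwise, wrapped to [-180, 180)."""
    return (observed_dec - reference_dec + 180.0) % 360.0 - 180.0

# An originally straight belt that was later bent shows rotations that track
# the curvature: counterclockwise on one limb, clockwise on the other.
for site, dec in [("north limb", 335.0), ("hinge", 10.0), ("south limb", 75.0)]:
    print(f"{site}: {vertical_axis_rotation(dec, reference_dec=10.0):+.0f} deg")
```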

Gutiérrez-Alonso and colleagues compare the age range of this mountain-bending event to the ages of igneous activity and uplift in the region and propose that widespread changes in the deeper (mantle) portion of the lithospheric plate in the area are coeval with, and likely linked to, the rotation of the Cantabrian Arc, producing its characteristic sharp curviness. Based on this linkage, they propose that this curvy mountain range, and perhaps many others, are produced by rotation of entire portions of the lithosphere of tectonic plates, rather than just thin slices of crustal rocks.

How an ancestral fungus may have influenced coal formation

For want of a nail, the nursery rhyme goes, a kingdom was lost. A similar, seemingly innocuous change – the evolution of a lineage of mushrooms – may have had a massive impact on the carbon cycle, bringing an end to the 60-million-year period during which coal deposits were formed.

Coal generated nearly half of the roughly four trillion kilowatt-hours of electricity consumed in the United States in 2010, according to the U.S. Energy Information Administration. This fuel is actually the fossilized remains of plants that lived from around 360 to 300 million years ago. An international team of scientists, including researchers at the U.S. Department of Energy Joint Genome Institute (DOE JGI), has proposed a new factor that may have contributed to the end of the Carboniferous period – named after the large stores of what became coal deposits. The evidence, presented online in the June 29 edition of the journal Science, suggests that the evolution of fungi capable of breaking down the polymer lignin, which helps keep plant cell walls rigid, may have played a key role in ending the development of coal deposits. With the arrival of the new fungi, dead plant matter could be completely broken down into its basic chemical components. Instead of accumulating as peat, which eventually was transformed into coal, the great bulk of plant biomass decayed and was released into the atmosphere as carbon dioxide.

“We’re hoping this will get into the biology and geology textbooks,” said Clark University biologist David Hibbett, senior author of the comprehensive study comparing the complete genomes of dozens of species of fungi, most of which were sequenced at the DOE JGI. “When you read about coal formation it’s usually explained in terms of physical processes, and that the rate of coal deposition just crashed at the end of the Permo-Carboniferous. Why was that? There are various explanations. The evolution of white rot fungi could’ve been a factor – perhaps a major factor. Once you have white rot you can break down lignin, the major precursor of coal. So the evolution of white rot is a very important event in the evolution of the carbon cycle.”

“The concept of the invention of an enzyme that can break down the ‘unbreakable’ is really great,” said Kenneth Nealson, Wrigley Chair in Environmental Studies and Professor of Earth Sciences and Biological Sciences at the University of Southern California. “The idea that a stable (inedible) form of organic carbon can become edible (and thus more difficult to bury over time) changes our perspective not only on global energy storage in the past, but on what it means for present-day carbon sequestration and storage. In that sense, this idea will have a big impact on our thinking about the past and the present.”

For their study, Hibbett and his colleagues focused on Basidiomycetes, which include mushroom species with the familiar cap-and-stem look that most people associate with fungi. Basidiomycetes also include brown rot fungi, such as the dry rot that can destroy houses by breaking down the cellulose in construction wood while leaving the lignin untouched, and white rot fungi, of interest to the pulp and paper industries, which can break down both types of polymers. Of the 31 brown rot and white rot fungal genomes that were compared for the study, 26 were sequenced at the DOE JGI, including a dozen that were done specifically for the study to flesh out representation of the fungal orders.

Igor Grigoriev, head of the DOE JGI Fungal Genomics Program, noted that the comparative fungal genomics study underscores the institute’s abiding interest in supporting the DOE mission of harnessing fungal enzymes to convert biopolymers such as cellulose into simple sugars and optimize biofuels production. In this pursuit, Grigoriev and his colleagues continue to contribute to a growing list of firsts. “The first fungus ever sequenced at the DOE JGI was also the first genome of a white rot fungus,” he said. “A few years later, we sequenced the first brown rot fungus. Less than a decade after that first fungal genome, we’re presenting the first large-scale comparison of wood-decaying fungi.”

With multiple fungal genomes on hand, the team compared DNA sequences, searching for gene families that encoded enzymes involved in wood decay. They focused particularly on enzymes called class II fungal peroxidases that turned out to be present in the lineages of white rot fungi but not in brown rot fungi, suggesting they played a role in breaking down the lignin in plants.

The researchers then used molecular clock analyses to track the evolution of the enzymes back through the fungal lineages. The idea is that just as the hands of a clock move at a defined rate around the dial, genes accumulate mutations at a roughly constant rate. This rate of change allows researchers to work backwards, estimating when two lineages last shared a common ancestor based on the amount of divergence.
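
In its simplest form, the clock is one formula: two lineages diverging for time t at a per-lineage rate r accumulate a pairwise divergence of about d = 2rt, so t = d / (2r). The sketch below uses illustrative numbers, not the rates or divergences from the paper.

```python
# A minimal strict molecular clock (a sketch; real analyses model rate
# variation and calibrate against fossils, as described below).
def divergence_time(pairwise_divergence, rate_per_myr):
    """Estimated time (Myr) since two lineages shared a common ancestor."""
    return pairwise_divergence / (2.0 * rate_per_myr)

# e.g. 0.58 substitutions/site at a rate of 1e-3 substitutions/site/Myr
# gives ~290 Myr, matching the end-Carboniferous date cited below.
print(divergence_time(0.58, 1e-3))
```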

The comparative analyses suggested that around 290 million years ago, right at the end of the Carboniferous period, a white rot fungal ancestor with the capacity to break down lignin appeared. Prior to that ancestor, fungi did not have that ability and thus the lignin in plant matter was not degraded, allowing these lignin-rich residues to build up in soil over time. Because molecular clock analyses have substantial error, fungal “fossils” are needed for calibration. For this study, the molecular clock analyses were calibrated against three fungal fossils. Hibbett said that more fossils would help improve the age estimate. “Unfortunately,” he added, “fungal fossils are rare and easily overlooked.” He said that his group is interested in trying to reconstruct that ancestral white rot fungal genome. “We’re motivated to understand when this metabolic pathway responsible for lignin degradation came into existence. That’s why we needed to have that many fungal genomes in this study. Up until fairly recently, it was so much work to just get one genome at a time. Now we have comparative fungal genomics projects as we’re transitioning to a cool time with hundreds of fungal genomes.”

Joseph Spatafora, a professor at Oregon State University and co-author on the study, agrees with Hibbett’s assessment that the group’s findings could alter biology and geology texts. “When you look at this particular phenomenon of the decrease of coal deposition, by far the majority of explanations have been abiotic and that doesn’t seem like that should be the entire story,” he said.

Grigoriev said that this paper is the first product of the Genomic Encyclopedia of Fungi, the DOE JGI umbrella project that focuses fungal genome sequencing efforts on DOE-relevant missions in energy and the environment. “This paper is the first chapter in the Encyclopedia,” he said. “The data generated has produced the most comprehensive catalog of lignocellulolytic enzymes yet, which is of interest to industry. We’ve now got the blueprint of all genes across very diverse phylogenies, and we’ll get more. This is a huge step forward. The next milestone is the 1000 Fungal Genomes project to complete the entire diversity in Basidiomycetes.”

As the head of the 1000 Fungal Genomes project, a part of the DOE JGI’s Community Sequencing Program portfolio, Spatafora said that despite the goal of facilitating the sequencing of a thousand fungal genomes, two from each of 500 families, over five years, fungal genomics still has a long way to go. “There’s an estimated 1.5 million species of fungi,” he said. “We have names for about 100,000 species, and we’re looking at 1,000 fungi in this project. This is still the tip of the iceberg in looking at fungal diversity and we’re trying to learn even more to gain a better idea of fungal metabolism and the potential to harness fungi for a number of applications, including bioenergy. It’s a really exciting time in fungal biology, and part of that is due to the technology today that allows us to address the really longstanding questions.”

Earth’s oldest known impact crater found in Greenland

A 100 kilometer-wide crater has been found in Greenland, the result of a massive asteroid or comet impact a billion years before any other known collision on Earth.

The spectacular craters on the Moon formed from impacts with asteroids and comets between 3 and 4 billion years ago. The early Earth, with its far greater gravitational mass, must have experienced even more collisions at this time – but the evidence has been eroded away or covered by younger rocks. Previously, the oldest known crater on Earth formed 2 billion years ago, and the chances of finding an even older impact were thought to be, literally, astronomically low.

Now, a team of scientists from the Geological Survey of Denmark and Greenland (GEUS) in Copenhagen, Cardiff University in Wales, Lund University in Sweden and the Institute of Planetary Science in Moscow has upset these odds. Following a detailed programme of fieldwork, funded by GEUS and the Danish ‘Carlsbergfondet’ (Carlsberg Foundation), the team have discovered the remains of a giant, 3-billion-year-old impact near the Maniitsoq region of West Greenland.

“This single discovery means that we can study the effects of cratering on the Earth nearly a billion years further back in time than was possible before,” according to Dr Iain McDonald of Cardiff University’s School of Earth and Ocean Sciences, who was part of the team.

Finding the evidence was made all the harder because there is no obvious bowl-shaped crater left to find. Over the 3 billion years since the impact, the land has been eroded down to expose deeper crust 25 km below the original surface. All external parts of the impact structure have been removed, but the effects of the intense impact shock wave penetrated deep into the crust – far deeper than at any other known crater – and these remain visible.

However, because the effects of impact at these depths have never been observed before it has taken nearly three years of painstaking work to assemble all the key evidence. “The process was rather like a Sherlock Holmes story,” said Dr McDonald. “We eliminated the impossible in terms of any conventional terrestrial processes, and were left with a giant impact as the only explanation for all of the facts.”

Only around 180 impact craters have ever been discovered on Earth and around 30% of them contain important natural resources of minerals or oil and gas. The largest and oldest known crater prior to this study, the 300 kilometre wide Vredefort crater in South Africa, is 2 billion years in age and heavily eroded.

Dr McDonald added that “It has taken us nearly three years to convince our peers in the scientific community of this but the mining industry was far more receptive. A Canadian exploration company has been using the impact model to explore for deposits of nickel and platinum metals at Maniitsoq since the autumn of 2011.”

The international team was led by Adam A. Garde, senior research scientist at GEUS. The first scientific paper documenting the discovery has just been published in the journal Earth and Planetary Science Letters.

A new method accounts for social factors when assessing the seismic risk of a city

This is a building in the city of Lorca in Murcia after the earthquake. – Lorca 072

Seismic risk depends not only on the magnitude of the tremor itself but also on the resistance of buildings and the social characteristics of the population. A team of Spanish scientists has presented a new method for calculating seismic risk that incorporates aspects like social fragility and the chances of collective recovery.

“When faced with the possibility of an earthquake, up until now only the physical risk of the city has ever been evaluated. This, in other words, means damage to buildings and infrastructures, taking into consideration the number of people inside,” as explained to SINC by Liliana Carreño, researcher at the Polytechnic University of Catalonia (UPC). Her team proposes a new method of carrying out an overall assessment of the seismic risk of an urban area, taking into account social strengths and weaknesses and the city’s governance.

The system created by Carreño and her team considers values such as “crime rates, whether there are marginalized areas, the number of hospital beds, training of hospital staff, etc., which all constitute factors of fragility and social capacity,” explain the researchers. “This methodology greatly improves our ability to assess future losses because it takes into account the social condition of the exposed population, which was previously treated as a mere number,” states Carreño.

Published in the Bulletin of Earthquake Engineering, the new approach has another added value: it uses a technique based on ‘fuzzy logic theory’ which allows for the use of qualitative information obtained from expert opinion when the necessary numerical information is lacking.

Translating Opinions to Numbers

“The methods for making a complete risk calculation in a given urban area require great quantities of information that is not always available,” highlights the researcher. According to Carreño, seismic risk specialists have always faced complex problems concerning imprecise information. “We can now translate linguistic variables like a lot, a few, slight, severe, scarce and enough into mathematical formalism for their subsequent measurement,” outlines the scientist.
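
As an illustration of the general technique (the authors' actual membership functions are not given in the article), linguistic ratings can be encoded as triangular fuzzy sets on a normalized scale and converted to crisp numbers by taking centroids:

```python
# Mapping linguistic ratings to numbers with triangular fuzzy sets
# (an illustrative sketch; the term set and shapes are assumptions).
import numpy as np

TERMS = {  # (left foot, peak, right foot) on a 0-1 fragility scale
    "slight": (0.0, 0.0, 0.3),
    "scarce": (0.1, 0.3, 0.5),
    "enough": (0.3, 0.5, 0.7),
    "a lot":  (0.5, 0.7, 0.9),
    "severe": (0.7, 1.0, 1.0),
}

def membership(x, abc):
    """Triangular membership of x in the fuzzy set (a, b, c)."""
    a, b, c = abc
    if x <= a or x >= c:
        return 1.0 if x == b else 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def defuzzify(term):
    """Crisp value of a linguistic term: centroid of its membership function."""
    xs = np.linspace(0.0, 1.0, 1001)
    mu = np.array([membership(x, TERMS[term]) for x in xs])
    return float((xs * mu).sum() / mu.sum())

print({t: round(defuzzify(t), 2) for t in TERMS})
```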

In order to verify the method’s validity, Carreño and her team applied it to the cities of Barcelona and Bogotá (Colombia). She adds that “the Catalan city is a good model since its seismic risk has been subject to study for more than 20 years.” The results confirmed expected risk levels: medium-high for Bogotá and medium-low for Barcelona.

As Carreño concludes, “Barcelona’s assessment was carried out with the availability of sound information. But, the most important aspect of this model is that it is especially useful when studying an urban space that does not have such an advantage and where information is lacking.”

Scientists compile first study of potential for tsunamis in northwestern California

Using studies that span the last three decades, scientists at UC Santa Barbara have compiled the first evidence-based comprehensive study of the potential for tsunamis in Northwestern California. The paper, “Paleoseismicity of the Southern End of the Cascadia Subduction Zone, Northwestern California,” was co-written by professors Edward Keller and Alexander Simms from UCSB’s Department of Earth Science, and published in a recent issue of the Bulletin of the Seismological Society of America.

The paper is based on the Ph.D. dissertation of David Valentine, a research programmer at the Spatial Information Systems Laboratory at UC San Diego. Valentine, Keller’s former student, completed his doctorate at UCSB in 2002 and is first author of the paper.

The region has long been known to experience large earthquakes, and scientific studies of seismic activity in the southern end of the Cascadia Subduction Zone (CSZ) — which stretches northward from the area of Mendocino, Calif. — have previously appeared in grey literature and in guidebooks. However, comprehensive, reviewed evidence-based work has been lacking, according to Keller.

“Science goes on evidence,” he said, adding that in light of the recent earthquakes in Japan and Chile, the study of the same potential closer to home is “timely.” The authors studied sedimentation patterns in salt marshes, floodplains, and estuaries in the northwestern corner of California for signs of seismic events that could lead to tsunami activity. They combined this with information gathered from numerous studies conducted over nearly 30 years by researchers at Humboldt State University.

During an earthquake, the researchers say, there is a tendency for the coastal wetlands to become submerged, with coastal sediments depositing over plants and animals that live there. These become a fossilized record of sea-level change in the area.

The process has preserved a sequence of marsh surfaces and forest soils. Analysis of structure, texture, and organic content, as well as the use of radiocarbon dating to identify the age of the materials, revealed evidence of smaller, strong-to-major earthquakes (magnitude 6.5 to 7.2) in the area. Larger quakes (greater than magnitude 8.2) that involved the regional subduction zone were also in evidence.

According to the study, the local California section has experienced three major earthquakes, with accompanying local sea-level changes, over the last 2000 years, at roughly 300- to 400-year intervals, with the last one occurring 500 to 600 years ago. The researchers also found that the entire CSZ ruptured, causing local submergence, at least three times at roughly 500- to 600-year intervals, the last activity taking place in 1700 AD.

“It’s not a matter of if, but when,” said Keller, of the potential for the next major earthquake/tsunami event in the region — a great earthquake that would impact not only the Northwest, but also send waves to Japan and Hawaii. The evidence, he said, is leading to far more foresight and planning along the impact areas in the region to avoid catastrophes on a level with the Japan earthquake of 2011 or the Indian Ocean quake of 2004.

Scientists find new primitive mineral in meteorite

Panguite is embedded in a piece of the Allende meteorite. – Chi Ma / Caltech

In 1969, an exploding fireball tore through the sky over Mexico, scattering thousands of pieces of meteorite across the state of Chihuahua. More than 40 years later, the Allende meteorite is still serving the scientific community as a rich source of information about the early stages of our solar system’s evolution. Recently, scientists from the California Institute of Technology (Caltech) discovered a new mineral embedded in the space rock-one they believe to be among the oldest minerals formed in the solar system.

Dubbed panguite, the new titanium oxide is named after Pan Gu, the giant from ancient Chinese mythology who established the world by separating yin from yang to create the earth and the sky. The mineral and the mineral name have been approved by the International Mineralogical Association’s Commission on New Minerals, Nomenclature and Classification. A paper outlining the discovery and the properties of this new mineral will be published in the July issue of the journal American Mineralogist, and is available online now.

“Panguite is an especially exciting discovery since it is not only a new mineral, but also a material previously unknown to science,” says Chi Ma, a senior scientist and director of the Geological and Planetary Sciences division’s Analytical Facility at Caltech and corresponding author on the paper.

The Allende meteorite is the largest carbonaceous chondrite – a diverse class of primitive meteorites – ever found on our planet and is considered by many the best-studied meteorite in history. As a result of an ongoing nanomineralogy investigation of primitive meteorites – which Ma has been leading since 2007 – nine new minerals, including panguite, have been found in the Allende meteorite. Some of those new finds include the minerals allendeite, hexamolybdenum, tistarite, and kangite. Nanomineralogy looks at tiny particles of minerals and the minuscule features within those minerals.

“The intensive studies of objects in this meteorite have had a tremendous influence on current thinking about processes, timing, and chemistry in the primitive solar nebula and small planetary bodies,” says coauthor George Rossman, the Eleanor and John R. McMillan Professor of Mineralogy at Caltech.

Panguite was observed first under a scanning electron microscope in an ultra-refractory inclusion embedded in the meteorite. Refractory inclusions are among the first solid objects formed in our solar system, dating back to before the formation of Earth and the other planets. “Refractory” refers to the fact that these inclusions contain minerals that are stable at high temperatures and in extreme environments, which attests to their likely formation as primitive, high-temperature liquids produced by the solar nebula.

According to Ma, studies of panguite and other newly discovered refractory minerals are continuing in an effort to learn more about the conditions under which they formed and subsequently evolved. “Such investigations are essential to understand the origins of our solar system,” he says.

New book looks at hotspots around the world for mega-quakes

This is a high school running track in Taiwan crossed by the Chelungpu fault in an earthquake in September 1999. – Bob Yeats, Oregon State University.

At the beginning of 2010, Oregon State University geologist Bob Yeats told a national reporter that Port-au-Prince, Haiti, was a “time bomb” for a devastating earthquake because of its crowded, poorly constructed buildings and its proximity to the Enriquillo Fault.

One week later, a magnitude 7 earthquake destroyed Port-au-Prince, killing hundreds of thousands of people and devastating the economy of Haiti.

The clock is ticking on many other earthquake faults throughout the world, Yeats says, and though he did not “predict” the Haiti earthquake, he can point to other places that could face the same fate. He outlines some of these areas in a new book called “Active Faults of the World,” published by Cambridge University Press.

“We are not yet to the point where we can predict earthquakes,” said Yeats, a professor emeritus in Oregon State’s College of Earth, Ocean, and Atmospheric Sciences. “What we can do is tell you where some of the most dangerous faults lie – and where those coincide with crowded cities, few building codes, and a lack of social services, you have a time bomb.

“Unfortunately, we can’t say if an earthquake will strike today, tomorrow or in a hundred years,” he added. “But in all of these locations it will happen someday – and unless something is done to improve conditions, many thousands of people will die.”

In his book, Yeats notes that the greatest migration in human history is of people moving from rural areas to “megacities” in the developing world. People have flocked to these megacities, where multi-level housing and businesses are rapidly built, and often poorly constructed and poorly inspected. When many of these locations last had a major earthquake, their populations were small and a majority of the people were living in one-story dwellings, limiting the loss of life.

Yeats cites as an example Caracas, Venezuela, which has an earthquake plate-boundary fault north of the city. In 1812, a major quake shook Caracas and other Venezuelan cities and killed an estimated 10,000 people – about 10 percent of the population at that time. Today, the population of Caracas is nearly 3 million, but government decision-makers are “not placing earthquake hazards high on their list of priorities,” Yeats said, despite the presence of knowledgeable local experts.

Another city near the top of Yeats’ list of earthquake dangers is Kabul, Afghanistan, which suffered an enormous earthquake in 1505. Because of recent wars, the buildings in Kabul are in poor shape – either poorly constructed, or damaged from bombs. On a visit to Kabul in 2002, Yeats found many families living in the ruins of these buildings.


“If Kabul has a repeat of the 1505 earthquake,” Yeats said, “it could kill more people than have died in all of Afghanistan’s wars in the last 40 years because of the influx of refugees living in crowded, substandard conditions.”

Tehran, Iran, is another heavily populated city situated near a major fault line. Located at the base of the Alborz mountain range, Tehran has some 11 million people within its urban boundaries, and Yeats said they are vulnerable because of poorly constructed housing in many parts of the city – a result of corruption in the building construction and building inspection industries.

Other over-populated cities near fault lines with poor building codes on Yeats’ list include Istanbul, Turkey, now under an earthquake hazard warning after a quake of magnitude 7.4 in 1999; Nairobi, Kenya, close to a 7.3 quake in the 1920s; and Guantánamo, Cuba.

“Guantánamo is a bit like Haiti,” Yeats pointed out. “They have a fault just offshore, and yet they have no clue they are at risk because Cuba has not had any catastrophic earthquakes in its 500-year history. The military prison operated by the United States would also be at risk, but as far as I know, the Americans are not contributing their expertise to help Guantånamo prepare for its future earthquake.”

There are many places around the world likely to experience a major earthquake in the future, Yeats says, but the “risk” to human lives may not be as high because of less crowding and better building codes. He points to the 2011 super-quake in Japan, which reached a magnitude of 9.0, yet did not cause nearly as much destruction as the tsunami it triggered.

“The Japanese,” Yeats said, “lead the world in taking earthquake risk seriously.”

Yeats was one of the first geologists to point to the Pacific Northwest as being at risk for a major earthquake, because of its proximity to the Cascadia Subduction Zone. Since he and other OSU scientists first raised awareness of that risk in the 1980s, there has been gradual acceptance that an earthquake will strike in the future.

“But will this acceptance lead to concrete action, such as approving a bond issue for seismic upgrades to old school buildings?” Yeats said. “Will it lead to strengthening communities on the West Coast against tsunamis?”

The OSU professor emeritus hopes his book leads to more awareness of the hundreds of faults around the world – some well-known, and some not. This is the first time someone has attempted to summarize the totality of earthquake faults, and Yeats used his own research and observations, as well as exhaustive literature reviews.

“Knowing about the faults is the first step,” Yeats said, “but preparing for the risk is what really needs to happen. It is kind of interesting that Japan has done a lot of work preparing for an earthquake in their Home Islands, and then one bigger than they expected hits northern Japan, accompanied by a devastating tsunami, whose effects have been felt as far away as Oregon.”

A similar thing happened northeast of Beijing, China, in 1976, when a magnitude 7.8 earthquake struck along a fault line that was not thought to be a major threat, killing more than 200,000 people. And it happened again in 2011 at Christchurch, New Zealand, with an earthquake on a minor fault no one knew about in advance – but still the earthquake produced the greatest losses in New Zealand’s history.

“The lesson there is that you never know which one is going to nail you,” Yeats said, “but it pays to be prepared.”