Hydraulic fracturing linked to earthquakes in Ohio

Hydraulic fracturing triggered a series of small earthquakes in 2013 on a previously unmapped fault in Harrison County, Ohio, according to a study published in the journal Seismological Research Letters (SRL).

Nearly 400 small earthquakes occurred between Oct. 1 and Dec. 13, 2013, including 10 “positive” magnitude earthquakes, none of which were reported felt by the public. The 10 positive-magnitude earthquakes, which ranged from magnitude 1.7 to 2.2, occurred between Oct. 2 and 19, coinciding with hydraulic fracturing operations at nearby wells.

This series of earthquakes is the first known instance of seismicity in the area.

Hydraulic fracturing, or fracking, is a method for extracting gas and oil from shale rock by injecting water, sand and chemicals into the rock under high pressure to create cracks that release the gas trapped inside. The cracking of the rock produces micro-earthquakes, usually very small ones with magnitudes in the range of negative 3 (−3) to negative 1 (−1).

“Hydraulic fracturing has the potential to trigger earthquakes, and in this case, small ones that could not be felt; however, the earthquakes were three orders of magnitude larger than normally expected,” said Paul Friberg, a seismologist with Instrumental Software Technologies, Inc. (ISTI) and a co-author of the study.
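
A note on the scaling behind that quote: each magnitude unit corresponds to roughly a tenfold increase in ground-motion amplitude and about a 32-fold increase in radiated energy. Here is a minimal sketch of the comparison in Python, using these standard scaling relations rather than anything from the study itself:

```python
def amplitude_ratio(m1, m2):
    # Ground-motion amplitude grows roughly tenfold per magnitude unit
    return 10 ** (m2 - m1)

def energy_ratio(m1, m2):
    # Radiated energy grows roughly as 10 ** (1.5 * magnitude)
    return 10 ** (1.5 * (m2 - m1))

# Typical fracking micro-quake (M -1) vs. the largest Harrison County event (M 2.2)
print(f"{amplitude_ratio(-1.0, 2.2):,.0f}x amplitude")  # ~1,585x: about three orders of magnitude
print(f"{energy_ratio(-1.0, 2.2):,.0f}x energy")        # ~63,096x in radiated energy
```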

The earthquakes revealed an east-west-trending fault that lies in the basement formation approximately two miles deep, directly below three horizontal gas wells. The EarthScope Transportable Array Network Facility identified the first earthquakes on Oct. 2, 2013, locating them south of Clendening Lake near the town of Uhrichsville, Ohio. A subsequent analysis identified 190 earthquakes during a 39-hour period on Oct. 1 and 2, just hours after hydraulic fracturing began on one of the wells.

The micro-seismicity varied, corresponding with the fracturing activity at the wells. The timing of the earthquakes, along with their tight linear clustering and similar waveform signals, suggests a single source for the earthquakes — the hydraulic fracturing operation. The fracturing likely triggered slip on a pre-existing fault, one located below the formation expected to confine the fracturing, according to the authors.

“As hydraulic fracturing operations explore new regions, more seismic monitoring will be needed since many faults remain unmapped,” Friberg said. He co-authored the paper with Ilya Dricker, also with ISTI, and Glenda Besana-Ostman, originally with the Ohio Department of Natural Resources and now with the Bureau of Reclamation at the U.S. Department of the Interior.

New map uncovers thousands of unseen seamounts on ocean floor

This is a gravity model of the North Atlantic; red dots are earthquakes. Quakes are often related to seamounts. – David Sandwell, SIO

Scientists have created a new map of the world’s seafloor, offering a more vivid picture of the structures that make up the deepest, least-explored parts of the ocean.

The feat was accomplished by accessing two untapped streams of satellite data.

Thousands of previously uncharted mountains rising from the seafloor, called seamounts, have emerged through the map, along with new clues about the formation of the continents.

Combined with existing data and improved remote sensing instruments, the map, described today in the journal Science, gives scientists new tools to investigate ocean spreading centers and little-studied remote ocean basins.

The researchers also mapped earthquakes and discovered that seamounts and earthquakes are often linked. Most seamounts were once active volcanoes, and so are usually found near tectonically active plate boundaries, mid-ocean ridges and subduction zones.

The new map is twice as accurate as the previous version produced nearly 20 years ago, say the researchers, who are affiliated with California’s Scripps Institution of Oceanography (SIO) and other institutions.

“The team has developed and proved a powerful new tool for high-resolution exploration of regional seafloor structure and geophysical processes,” says Don Rice, program director in the National Science Foundation’s Division of Ocean Sciences, which funded the research.

“This capability will allow us to revisit unsolved questions and to pinpoint where to focus future exploratory work.”

The map was developed using a scientific model that captures gravity measurements of the ocean seafloor, drawing on data from the European Space Agency’s (ESA) CryoSat-2 satellite.

CryoSat-2 primarily captures polar ice data but also operates continuously over the oceans. Data also came from Jason-1, NASA’s satellite that was redirected to map gravity fields during the last year of its 12-year mission.

“The kinds of things you can see very clearly are the abyssal hills, the most common landform on the planet,” says David Sandwell, lead author of the paper and a geophysicist at SIO.

The paper’s co-authors say that the map provides a window into the tectonics of the deep oceans.

The map also provides a foundation for the upcoming version of Google’s ocean maps and will fill large voids between shipboard depth profiles.

Previously unseen features include newly exposed continental connections across South America and Africa and new evidence for seafloor spreading ridges in the Gulf of Mexico. The ridges were active 150 million years ago and are now buried by mile-thick layers of sediment.

“One of the most important uses will be to improve the estimates of seafloor depth in the 80 percent of the oceans that remain uncharted or [where the sea floor] is buried beneath thick sediment,” the authors state.

###

Co-authors of the paper include R. Dietmar Muller of the University of Sydney, Walter Smith of the NOAA Laboratory for Satellite Altimetry, Emmanuel Garcia of SIO and Richard Francis of ESA.

The study also was supported by the U.S. Office of Naval Research, the National Geospatial-Intelligence Agency and ConocoPhillips.

Meteorite that doomed the dinosaurs helped the forests bloom

Seen here is a Late Cretaceous specimen from the Hell Creek Formation, morphotype HC62, taxon “Rhamnus” cleburni. Specimens are housed at the Denver Museum of Nature and Science in Denver, Colorado. – Image credit: Benjamin Blonder

Sixty-six million years ago, a 10-km-diameter chunk of rock hit the Yucatán Peninsula near the site of the small town of Chicxulub with the force of 100 teratons of TNT. It left a crater more than 150 km across, and the resulting megatsunami, wildfires, global earthquakes and volcanism are widely accepted to have wiped out the dinosaurs and made way for the rise of the mammals. But what happened to the plants on which the dinosaurs fed?

A new study led by researchers from the University of Arizona reveals that the meteorite impact that spelled doom for the dinosaurs also decimated evergreen flowering plants to a much greater extent than their deciduous peers. The researchers hypothesize that the properties of deciduous plants made them better able to respond rapidly to chaotically varying post-apocalyptic climate conditions. The results appear September 16 in the open access journal PLOS Biology.

Applying biomechanical formulae to a treasure trove of thousands of fossilized leaves of angiosperms – flowering plants, as distinct from conifers – the team was able to reconstruct the ecology of a diverse plant community thriving during a 2.2-million-year period spanning the cataclysmic impact event, which is believed to have wiped out more than half of the plant species living at the time. The fossilized leaf samples span the last 1,400,000 years of the Cretaceous and the first 800,000 years of the Paleogene.

The researchers found evidence that after the impact, fast-growing, deciduous angiosperms had replaced their slow-growing, evergreen peers to a large extent. Living examples of evergreen angiosperms, such as holly and ivy, tend to prefer shade, don’t grow very fast and sport dark-colored leaves.

“When you look at forests around the world today, you don’t see many forests dominated by evergreen flowering plants,” said the study’s lead author, Benjamin Blonder. “Instead, they are dominated by deciduous species, plants that lose their leaves at some point during the year.”

Blonder and his colleagues studied a total of about 1,000 fossilized plant leaves collected from a location in southern North Dakota, embedded in rock layers known as the Hell Creek Formation, which at the end of the Cretaceous was a lowland floodplain crisscrossed by river channels. The collection consists of more than 10,000 identified plant fossils and is housed primarily at the Denver Museum of Nature and Science. “When you hold one of those leaves that is so exquisitely preserved in your hand knowing it’s 66 million years old, it’s a humbling feeling,” said Blonder.

“If you think about a mass extinction caused by catastrophic event such as a meteorite impacting Earth, you might imagine all species are equally likely to die,” Blonder said. “Survival of the fittest doesn’t apply – the impact is like a reset button. The alternative hypothesis, however, is that some species had properties that enabled them to survive.

“Our study provides evidence of a dramatic shift from slow-growing plants to fast-growing species,” he said. “This tells us that the extinction was not random, and the way in which a plant acquires resources predicts how it can respond to a major disturbance. And potentially this also tells us why we find that modern forests are generally deciduous and not evergreen.”

Previously, other scientists found evidence of a dramatic drop in temperature caused by dust from the impact. “The hypothesis is that the impact winter introduced a very variable climate,” Blonder said. “That would have favored plants that grew quickly and could take advantage of changing conditions, such as deciduous plants.”

“We measured the mass of a given leaf in relation to its area, which tells us whether the leaf was a chunky, expensive one to make for the plant, or whether it was a more flimsy, cheap one,” Blonder explained. “In other words, how much carbon the plant had invested in the leaf.” In addition the researchers measured the density of the leaves’ vein networks, a measure of the amount of water a plant can transpire and the rate at which it can acquire carbon.
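
As a rough illustration of this leaf-economics measurement, leaf mass per area is simply dry mass divided by leaf area; the sketch below uses hypothetical numbers and is not the authors' actual analysis:

```python
def leaf_mass_per_area(dry_mass_mg, area_mm2):
    """Leaf mass per area (LMA) in g/m^2: high LMA suggests a costly,
    slow-return 'evergreen' leaf; low LMA a cheap, fast-return one."""
    return (dry_mass_mg / 1000.0) / (area_mm2 / 1_000_000.0)

# Hypothetical leaves, for illustration only
print(leaf_mass_per_area(120.0, 800.0))  # 150.0 g/m^2: chunky, expensive leaf
print(leaf_mass_per_area(30.0, 600.0))   # 50.0 g/m^2: flimsy, cheap leaf
```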

“There is a spectrum between fast- and slow-growing species,” said Blonder. “There is the ‘live fast, die young’ strategy and there is the ‘slow but steady’ strategy. You could compare it to financial strategies investing in stocks versus bonds.” The analyses revealed that while slow-growing evergreens dominated the plant assemblages before the extinction event, fast-growing flowering species had taken their places afterward.

M 9.0+ possible for subduction zones along ‘Ring of Fire,’ suggests new study

The magnitude of the 2011 Tohoku quake (M 9.0) caught many seismologists by surprise, prompting some to revisit the question of calculating the maximum magnitude earthquake possible for a particular fault. New research offers an alternate view that uses the concept of probable maximum magnitude events over a given period, providing both the magnitude and the recurrence rate of extreme events in subduction zones for that period. Most circum-Pacific subduction zones can produce earthquakes of magnitude greater than 9.0, the study suggests.

The idea of identifying the maximum magnitude for a fault isn’t new, and its definition varies based on context. This study, published online by the Bulletin of the Seismological Society of America (BSSA), calculates the “probable maximum earthquake magnitude within a time period of interest,” estimating the probable magnitude of subduction zone earthquakes for various time periods, including 250, 500 and 10,000 years.

“Various professionals use the same terminology – maximum magnitude – to mean different things. The most interesting question for us was what was going to be the biggest magnitude earthquake over a given period of time?” said co-author Yufang Rong, a seismologist at the Center for Property Risk Solutions of FM Global, a commercial and industrial property insurer. “Can we know the exact, absolute maximum magnitude? The answer is no, however, we developed a simple methodology to estimate the probable largest magnitude within a specific time frame.”

The study’s results indicated that most of the subduction zones can generate M 8.5 or greater over a 250-year period; M 8.8 or greater over 500 years; and M 9.0 or greater over 10,000 years.

“Just because a subduction zone hasn’t produced a magnitude 8.8 in 499 years, that doesn’t mean one will happen next year,” said Rong. “We are talking about probabilities.”
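
To make the flavor of such an estimate concrete, here is a minimal sketch assuming a simple Gutenberg-Richter recurrence law with hypothetical parameters; the study's actual methodology is more involved:

```python
import math

def probable_max_magnitude(a, b, years):
    """Magnitude reached about once per `years`, under a Gutenberg-Richter
    law log10 N(>=M) = a - b*M, with N in events per year."""
    return (a + math.log10(years)) / b

# Hypothetical parameters (a=5.0, b=1.0), chosen only for illustration
for t in (250, 500, 10_000):
    print(t, "yr:", round(probable_max_magnitude(5.0, 1.0, t), 1))
# 250 yr: 7.4, 500 yr: 7.7, 10000 yr: 9.0
```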

The instrumental and historical earthquake record is brief, complicating any attempt to confirm recurrence rates and estimate with confidence the maximum magnitude of an earthquake in a given period. The authors validated their methodology by comparing their findings to the seismic history of the Cascadia subduction zone, revealed through deposits of marine sediment along the Pacific Northwest coast. While some subduction zones have experienced large events during recent history, the Cascadia subduction zone has remained quiet. Turbidite and onshore paleoseismic studies have documented a rich seismic history, identifying 40 large events over the past 10,000 years.

“Magnitude limits of subduction zone earthquakes” is co-authored by Rong, David Jackson of UCLA, Harold Magistrale of FM Global, and Chris Goldfinger of Oregon State University. The paper will be published online Sept. 16 by BSSA as well as in its October print edition.

Wastewater injection is culprit for most quakes in southern Colorado and northern New Mexico

The deep injection of wastewater underground is responsible for the dramatic rise in the number of earthquakes in Colorado and New Mexico since 2001, according to a study to be published in the Bulletin of the Seismological Society of America (BSSA).

The Raton Basin, which stretches from southern Colorado into northern New Mexico, was seismically quiet until shortly after major fluid injection began in 1999. Since 2001, there have been 16 earthquakes of magnitude 3.8 or greater (including events of M 5.0 and 5.3), compared to only one (M 4.0) in the previous 30 years. The increase in earthquakes is limited to the area of industrial activity and within 5 kilometers (3.1 miles) of wastewater injection wells.

In 1994, energy companies began producing coal-bed methane in Colorado and expanded production to New Mexico in 1999. Along with the production of methane, there is the production of wastewater, which is injected underground in disposal wells and can raise the pore pressure in the surrounding area, inducing earthquakes. Several lines of evidence suggest the earthquakes in the area are directly related to the disposal of wastewater, a by-product of extracting methane, and not to hydraulic fracturing occurring in the area.
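
The pore-pressure mechanism can be sketched with the Mohr-Coulomb failure criterion: a fault slips when shear stress exceeds cohesion plus friction times the effective normal stress (normal stress minus pore pressure), so raising pore pressure lowers the frictional resistance. The numbers below are purely illustrative:

```python
def coulomb_failure_stress(shear, normal, pore_pressure, mu=0.6, cohesion=0.0):
    """Positive values mean the Mohr-Coulomb criterion is exceeded and the
    fault can slip. Stresses in MPa; mu is the friction coefficient."""
    return shear - (cohesion + mu * (normal - pore_pressure))

# Illustrative numbers only: a fault loaded just below failure...
print(coulomb_failure_stress(shear=50.0, normal=100.0, pore_pressure=10.0))  # -4.0: stable
# ...pushed past failure by a 10 MPa pore-pressure rise from injection
print(coulomb_failure_stress(shear=50.0, normal=100.0, pore_pressure=20.0))  # 2.0: slip
```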

Beginning in 2001, the production of methane expanded, with the number of high-volume wastewater disposal wells increasing (21 presently in Colorado and 7 in New Mexico) along with the injection rate. Since mid-2000, the total injection rate across the basin has ranged from 1.5 to 3.6 million barrels per month.

The authors, all scientists with the U.S. Geological Survey, detail several lines of evidence directly linking the injection wells to the seismicity. The timing and location of seismicity correspond to the documented pattern of injected wastewater. Detailed investigations of two seismic sequences (2001 and 2011) place them in proximity to high-volume, high-injection-rate wells, and both sequences occurred after a nearby increase in the rate of injection. A comparison between seismicity and wastewater injection in Colorado and New Mexico reveals similar patterns, suggesting seismicity is initiated shortly after an increase in injection rates.

Seismic gap may be filled by an earthquake near Istanbul

When a segment of a major fault line goes quiet, it can mean one of two things: The “seismic gap” may simply be inactive – the result of two tectonic plates placidly gliding past each other – or the segment may be a source of potential earthquakes, quietly building tension over decades until an inevitable seismic release.

Researchers from MIT and Turkey have found evidence for both types of behavior on different segments of the North Anatolian Fault – one of the most energetic earthquake zones in the world. The fault, similar in scale to California’s San Andreas Fault, stretches for about 745 miles across northern Turkey and into the Aegean Sea.

The researchers analyzed 20 years of GPS data along the fault, and determined that the next large earthquake to strike the region will likely occur along a seismic gap beneath the Sea of Marmara, some five miles west of Istanbul. In contrast, the western segment of the seismic gap appears to be moving without producing large earthquakes.

“Istanbul is a large city, and many of the buildings are very old and not built to the highest modern standards compared to, say, southern California,” says Michael Floyd, a research scientist in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “From an earthquake scientist’s perspective, this is a hotspot for potential seismic hazards.”

Although it’s impossible to pinpoint when such a quake might occur, Floyd says this one could be powerful – on the order of a magnitude 7 temblor, or stronger.

“When people talk about when the next quake will be, what they’re really asking is, ‘When will it be, to within a few hours, so that I can evacuate?’ But earthquakes can’t be predicted that way,” Floyd says. “Ultimately, for people’s safety, we encourage them to be prepared. To be prepared, they need to know what to prepare for – that’s where our work can contribute.”

Floyd and his colleagues, including Semih Ergintav of the Kandilli Observatory and Earthquake Research Institute in Istanbul and MIT research scientist Robert Reilinger, have published their seismic analysis in the journal Geophysical Research Letters.

In recent decades, major earthquakes have occurred along the North Anatolian Fault in a roughly domino-like fashion, breaking sequentially from east to west. The most recent quake occurred in 1999 in the city of Izmit, just east of Istanbul. The initial shock, which lasted less than a minute, killed thousands. As Istanbul sits at the fault’s western end, many scientists have thought the city will be near the epicenter of the next major quake.

To get an idea of exactly where the fault may fracture next, the MIT and Turkish researchers used GPS data to measure the region’s ground movement over the last 20 years. The group took data along the fault from about 100 GPS locations, including stations where data are collected continuously and sites where instruments are episodically set up over small markers on the ground, the positions of which can be recorded over time as the Earth slowly shifts.

“By continuously tracking, we can tell which parts of the Earth’s crust are moving relative to other parts, and we can see that this fault has relative motion across it at about the rate at which your fingernail grows,” Floyd says.

From their ground data, the researchers estimate that, for the most part, the North Anatolian Fault must move at about 25 millimeters – or one inch – per year, sliding quietly or slipping in a series of earthquakes.

As there’s currently no way to track the Earth’s movement offshore, the group also used fault models to estimate the motion off the Turkish coast. The team identified a segment of the fault under the Sea of Marmara, west of Istanbul, that is essentially stuck, with the “missing” slip accumulating at 10 to 15 millimeters per year. This section – called the Princes’ Island segment, for a nearby tourist destination – last experienced an earthquake 250 years ago.

Floyd and colleagues calculate that the Princes’ Island segment should have slipped about 8 to 11 feet – but it hasn’t. Instead, strain has likely been building along the segment for the last 250 years. If this tension were to break the fault in one cataclysmic earthquake, the Earth could shift by as much as 11 feet within seconds.
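
The arithmetic behind that figure is a simple rate-times-time calculation; here is a quick check (the exact range quoted depends on the authors' fault model):

```python
MM_PER_FOOT = 304.8

def slip_deficit_feet(rate_mm_per_yr, years):
    # Accumulated slip deficit = locking rate x time since the last rupture
    return rate_mm_per_yr * years / MM_PER_FOOT

print(round(slip_deficit_feet(10, 250), 1))  # ~8.2 ft at 10 mm/yr
print(round(slip_deficit_feet(15, 250), 1))  # ~12.3 ft at 15 mm/yr
```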

Although such accumulated strain may be released in a series of smaller, less hazardous rumbles, Floyd says that given the historical pattern of major quakes along the North Anatolian Fault, it would be reasonable to expect a large earthquake off the coast of Istanbul within the next few decades.

“Earthquakes are not regular or predictable,” Floyd says. “They’re far more random over the long run, and you can go many lifetimes without experiencing one. But it only takes one to affect many lives. In a location like Istanbul that is known to be subject to large earthquakes, it comes back to the message: Always be prepared.”

Experts defend operational earthquake forecasting, counter critiques

Experts defend operational earthquake forecasting (OEF) in an editorial published in Seismological Research Letters (SRL), arguing the importance of public communication as part of a suite of activities intended to improve public safety and mitigate damage from earthquakes. In a related article, Italian scientists detail the first official OEF system in Italy.

What is known about the probability of an earthquake on a specific fault varies over time, influenced largely by local seismic activity. OEF is the timely dissemination of authoritative scientific information about earthquake probabilities to the public and policymakers.

After the 2009 L’Aquila earthquake, Italian authorities established the International Commission on Earthquake Forecasting (ICEF), led by Thomas H. Jordan, director of the Southern California Earthquake Center, former president of the Seismological Society of America (SSA) and lead author of the SRL editorial. The commission issued a comprehensive report, published in 2011, which outlined OEF as one component of a larger system for guiding actions to mitigate earthquake risk, based on scientific information about the earthquake threat.

In this editorial, the authors respond to recent critiques suggesting that OEF is ineffective, distracting and dangerous. Citing results from ongoing OEF fieldwork in New Zealand, Italy and the United States, the authors emphasize the utility of OEF information in aiding policymakers and the public in reducing the risk from earthquakes.

“Although we cannot reliably predict large earthquakes with high probability, we do know that earthquake probabilities can change with time by factors of 100 or more. In our view, people deserve all the information that seismology can provide to help them make decisions about working and living with the earthquake threat,” said Jordan.

Concerns that short-term forecasts would cause panic, or lead to user fatigue and inaction, underestimate the general public’s ability to identify authoritative sources of information and make appropriate individual decisions, say the authors. While they acknowledge that communicating OEF uncertainties may be difficult, they conclude that “not communicating is hardly an option.”

Likely near-simultaneous earthquakes complicate seismic hazard planning for Italy

Before the shaking from one earthquake ends, shaking from another might begin, amplifying the effect of ground motion. Such sequences of closely timed, nearly overlapping, consecutive earthquakes account for devastating seismic events in Italy’s history and should be taken into account when building new structures, according to research published in the September issue of the journal Seismological Research Letters (SRL).

“It’s very important to consider this scenario of earthquakes, occurring possibly seconds apart, one immediately after another,” said co-author Anna Tramelli, a seismologist with the Istituto Nazionale di Geofisica e Vulcanologia in Naples, Italy. “Two consecutive mainshocks of magnitude 5.8 could have the effect of a magnitude 6 earthquake in terms of energy release. But the effect on a structure could be even larger than what’s anticipated from a magnitude 6 earthquake due to the longer duration of shaking that would negatively impact the resilience of a structure.”
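
Tramelli's energy comparison follows from the standard scaling of radiated energy with magnitude, which grows as 10 raised to 1.5 times the magnitude; a minimal check:

```python
import math

def combined_magnitude(m, n):
    """Radiated energy scales as 10 ** (1.5 * M), so n equal shocks of
    magnitude m release the energy of one event of m + (2/3) * log10(n)."""
    return m + (2.0 / 3.0) * math.log10(n)

print(round(combined_magnitude(5.8, 2), 2))  # 6.0: two M 5.8 shocks ~ one M 6.0 in energy
```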

Historically, multiple triggered mainshocks, with time delays of seconds to days, have caused deadly earthquakes along the Italian Apennine belt, a series of central mountain ranges extending the length of Italy. The 1997-98 Umbria-Marche seismic sequence comprised six mainshocks of moderate magnitude, ranging from M 5.2 to 6.0. The 1980 Irpinia earthquakes included a sequence of three events, occurring at intervals of about 20 seconds. The 2012 Emilia sequence started with an M 5.9 event, with the second largest mainshock (M 5.8) occurring nine days later, and included more than 2,000 aftershocks.

In this study, Tramelli and her colleagues used the recorded waveforms from the 2012 Emilia seismic sequence to simulate a seismic sequence that triggered end-to-end earthquakes along adjacent fault patches, observing the effect of continuous ruptures on the resulting ground motion and, consequently, its impact on critical structures, such as dams, power plants, hospitals and bridges.

“We demonstrated that consecutively triggered earthquakes can enhance the amount of energy produced by the ruptures, exceeding the design specifications expected for buildings in moderate seismic hazard zones,” said Tramelli, whose analysis suggests that the shaking from multiple magnitude 5.0 earthquakes would be significantly greater than from an individual magnitude 5.0 event.

And back-to-back earthquakes are more than theoretical, say the authors, who note that this worst-case scenario has happened at least once in Italy’s recent history. Previous studies identified three sub-events at intervals of 20 seconds in the seismic signals recorded during the 1980 Irpinia earthquake sequence, whose combined ground motion caused more than 3,000 deaths and significant damage to structures.

A “broader and modern approach” to seismic risk mitigation in Italy, suggest the authors, would incorporate the scenario of multiple triggered quakes, along with the present understanding of active fault locations, mechanisms and interaction.

Composition of Earth’s mantle revisited

Research published recently in Science suggests that the makeup of the Earth’s lower mantle, which makes up the largest part of the Earth by volume, is significantly different from what was previously thought.

Understanding the composition of the mantle is essential to seismology, the study of earthquakes and movement below the Earth’s surface, and should shed light on unexplained seismic phenomena observed there.

Though humans haven’t yet managed to drill farther than seven and a half miles into the Earth, we’ve built a comprehensive picture of what’s beneath our feet through calculations and limited observation. We all live atop the crust, the thin outer layer; just beneath lie the mantle, the outer core and finally the inner core. The lower portion of the mantle is the largest layer – stretching from 400 to 1,800 miles below the surface – and gives off the most heat. Until now, the entire lower mantle was thought to be composed of the same mineral throughout: ferromagnesian silicate, arranged in a type of structure called perovskite.

The pressure and heat of the lower mantle are intense, with temperatures above 3,500° Fahrenheit. Materials may have very different properties at these conditions; structures may exist there that would collapse at the surface.

To simulate these conditions, researchers use special facilities at the Advanced Photon Source, where they shine high-powered lasers to heat up the sample inside a pressure cell made of a pair of diamonds. Then they aim powerful beams of X-rays at the sample, which hit and scatter in all directions. By gathering the scatter data, scientists can reconstruct how the atoms in the sample were arranged.
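
The reconstruction step rests on diffraction: the angles at which X-rays scatter constructively map to spacings between atomic planes via Bragg's law. A toy example with made-up beam parameters follows; the actual analysis at the Advanced Photon Source is far more sophisticated:

```python
import math

def bragg_d_spacing(wavelength_angstrom, two_theta_deg, order=1):
    """Bragg's law, n * lambda = 2 * d * sin(theta): constructive-scattering
    angles reveal the spacing d between atomic planes."""
    theta = math.radians(two_theta_deg / 2.0)
    return order * wavelength_angstrom / (2.0 * math.sin(theta))

# Made-up values: a 0.4-angstrom synchrotron beam, diffraction peak at 2-theta = 10 degrees
print(round(bragg_d_spacing(0.4, 10.0), 2))  # d ~ 2.29 angstroms
```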

The team found that at conditions that exist below about 1,200 miles underground, the ferromagnesian silicate perovskite actually breaks into two separate phases. One contains nearly no iron, while the other is full of iron. The iron-rich phase, called the H-phase, is much more stable under these conditions.

“We still don’t fully understand the chemistry of the H-phase,” said lead author and Carnegie Institution of Washington scientist Li Zhang. “But this finding indicates that all geodynamic models need to be reconsidered to take the H-phase into account. And there could be even more unidentified phases down there in the lower mantle as well, waiting to be identified.”

The facilities at Argonne’s Advanced Photon Source were key to the findings, said Carnegie scientist Yue Meng, also an author on the paper. “Recent technological advances at our beamline allowed us to create the conditions to simulate these intense temperatures and pressures and probe the changes in chemistry and structure of the sample in situ,” she said.

“What distinguished this work was the exceptional attention to detail in every aspect of the research – it demonstrates a new level for high-pressure research,” Meng added.

Pacific plate shrinking as it cools

A map produced by scientists at the University of Nevada, Reno, and Rice University shows predicted velocities for sectors of the Pacific tectonic plate relative to points near the Pacific-Antarctic ridge, which lies in the South Pacific Ocean. The researchers show the Pacific plate is contracting as younger sections of the lithosphere cool. – Corné Kreemer and Richard Gordon

The tectonic plate that dominates the Pacific “Ring of Fire” is not as rigid as many scientists assume, according to researchers at Rice University and the University of Nevada.

Rice geophysicist Richard Gordon and his colleague, Corné Kreemer, an associate professor at the University of Nevada, Reno, have determined that cooling of the lithosphere — the outermost layer of Earth — makes some sections of the Pacific plate contract horizontally at faster rates than others, causing the plate to deform.

Gordon said the effect, detailed this month in Geology, is most pronounced in the youngest parts of the lithosphere — about 2 million years old or less — that make up some of the Pacific Ocean’s floor. The researchers predict this young lithosphere contracts 10 times faster than older parts of the plate that were created about 20 million years ago and 80 times faster than very old parts of the plate that were created about 160 million years ago.

The tectonic plates that cover Earth’s surface, including both land and seafloor, are in constant motion; they imperceptibly surf the viscous mantle below. Over time, the plates scrape against and collide into each other, forming mountains, trenches and other geological features.

On the local scale, these movements cover only inches per year and are hard to see. The same goes for deformations of the type described in the new paper, but when summed over an area the size of the Pacific plate, they become statistically significant, Gordon said.

The new calculations showed the Pacific plate is pulling away from the North American plate a little more — approximately 2 millimeters a year — than the rigid-plate theory would account for, he said. Overall, the plate is moving northwest about 50 millimeters a year.

“The central assumption in plate tectonics is that the plates are rigid, but the studies that my colleagues and I have been doing for the past few decades show that this central assumption is merely an approximation — that is, the plates are not rigid,” Gordon said. “Our latest contribution is to specify or predict the nature and rate of deformation over the entire Pacific plate.”

The researchers already suspected cooling had a role from their observation that the 25 large and small plates that make up Earth’s shell do not fit together as well as the “rigid model” assumption would have it. They also knew that lithosphere as young as 2 million years was more malleable than hardened lithosphere as old as 170 million years.

“We first showed five years ago that the rate of horizontal contraction is inversely proportional to the age of the seafloor,” he said. “So it’s in the youngest lithosphere (toward the east side of the Pacific plate) where you get the biggest effects.”
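
That inverse proportionality is exactly what the 10-times and 80-times figures above encode; a short check, treating the relation as exact:

```python
def contraction_rate_ratio(young_age_my, old_age_my):
    # Rate inversely proportional to age: ratio = old age / young age
    return old_age_my / young_age_my

print(contraction_rate_ratio(2.0, 20.0))   # 10.0: 2-My lithosphere contracts 10x faster
print(contraction_rate_ratio(2.0, 160.0))  # 80.0: ...and 80x faster than 160-My lithosphere
```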

The researchers saw hints of deformation in a metric called plate circuit closure, which describes the relative motions where at least three plates meet. If the plates were rigid, their angular velocities at the triple junction would have a sum of zero. But where the Pacific, Nazca and Cocos plates meet west of the Galápagos Islands, the nonclosure velocity is 14 millimeters a year, enough to suggest that all three plates are deforming.
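
Plate circuit closure is easy to state in vector form: around a triple junction, the plate-to-plate relative velocities should sum to zero if all the plates are rigid. Below is a toy example with hypothetical velocity vectors; only the 14 millimeters a year nonclosure value comes from the article:

```python
import numpy as np

# Hypothetical east/north relative velocities (mm/yr) around a triple junction
pac_rel_naz = np.array([60.0, 10.0])
naz_rel_coc = np.array([-35.0, 20.0])
coc_rel_pac = np.array([-15.0, -30.0])

# Rigid plates would close the circuit exactly (the vectors would sum to zero);
# any residual is nonclosure attributable to plate deformation
nonclosure = pac_rel_naz + naz_rel_coc + coc_rel_pac
print(nonclosure, f"-> {np.linalg.norm(nonclosure):.1f} mm/yr of nonclosure")
```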

“When we did our first global model in 1990, we said to ourselves that maybe when we get new data, this issue will go away,” Gordon said. “But when we updated our model a few years ago, all the places that didn’t have plate circuit closure 20 years ago still didn’t have it.”

There had to be a reason, and it began to become clear when Gordon and his colleagues looked beneath the seafloor. “It’s long been understood that the ocean floor increases in depth with age due to cooling and thermal contraction. But if something cools, it doesn’t just cool in one direction. It’s going to be at least approximately isotropic. It should shrink the same in all directions, not just vertically,” he said.

A previous study by Gordon and former Rice graduate student Ravi Kumar calculated the effect of thermal contraction on vertical columns of oceanic lithosphere and determined its impact on the horizontal plane, but viewing the plate as a whole demanded a different approach. “We thought about the vertically integrated properties of the lithosphere, but once we did that, we realized Earth’s surface is still a two-dimensional problem,” he said.

For the new study, Gordon and Kreemer started by determining how much the contractions would, on average, strain the horizontal surface. They divided the Pacific plate into a grid and calculated the strain on each of the nearly 198,000 squares based on their age, as determined by the seafloor age model published by the National Geophysical Data Center.
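
A minimal sketch of that gridded calculation, with a made-up rate constant and random stand-in ages in place of the National Geophysical Data Center age model:

```python
import numpy as np

rng = np.random.default_rng(0)
ages_my = rng.uniform(1.0, 180.0, size=198_000)  # stand-in for the gridded seafloor ages

k = 2.0e-16  # hypothetical constant relating strain rate (1/s) to 1/age (1/My)
strain_rates = k / ages_my  # contraction rate inversely proportional to age

print(f"mean horizontal strain rate: {strain_rates.mean():.2e} 1/s")
```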

“That we could calculate on a laptop,” Gordon said. “If we tried to do it in three dimensions, it would take a high-powered computer cluster.”

The surface calculations were enough to show likely strain fields across the Pacific plate that, when summed, accounted for the deformation. As further proof, the distribution of recent earthquakes in the Pacific plate, which also relieve the strain, showed a greater number occurring in the plate’s younger lithosphere. “In the Earth, those strains are either accommodated by elastic deformation or by little earthquakes that adjust it,” he said.

“The central assumption of plate tectonics assumes the plates are rigid, and this is what we make predictions from,” said Gordon, who was recently honored by the American Geophysical Union for writing two papers about plate movements that are among the top 40 papers ever to appear in one of the organization’s top journals. “Up until now, it’s worked really well.”

“The big picture is that we now have, subject to experimental and observational tests, the first realistic, quantitative estimate of how the biggest oceanic plate departs from that rigid-plate assumption.”