Team achieves petaflop-level earthquake simulations on GPU-powered supercomputers

A team of researchers at the San Diego Supercomputer Center (SDSC) and the Department of Electrical and Computer Engineering at the University of California, San Diego, has developed a highly scalable computer code that promises to dramatically cut both research times and energy costs in simulating seismic hazards throughout California and elsewhere.

The team, led by Yifeng Cui, a computational scientist at SDSC, developed the scalable GPU (graphics processing unit)-accelerated code for use in earthquake engineering and disaster management through regional earthquake simulations at the petascale level, as part of a larger computational effort coordinated by the Southern California Earthquake Center (SCEC). San Diego State University (SDSU) is also part of this collaborative effort to push the envelope toward extreme-scale earthquake computing.

“The increased capability of GPUs, combined with the high-level GPU programming language CUDA, has provided tremendous horsepower required for acceleration of numerically intensive 3D simulation of earthquake ground motions,” said Cui, who recently presented the team’s new development at the NVIDIA 2013 GPU Technology Conference (GTC) in San Jose, Calif.

A technical paper based on this work will be presented June 5-7 at the 2013 International Conference on Computational Science in Barcelona, Spain.

The accelerated code, which runs on GPUs rather than CPUs (central processing units), is based on a widely used wave propagation code called AWP-ODC, which stands for Anelastic Wave Propagation by Olsen, Day and Cui. It is named after Kim Olsen and Steven Day, geological sciences professors at SDSU, and SDSC’s Cui. The research team restructured the code to exploit high throughput, memory locality, and overlap of computation and communication, which made it possible to scale the code linearly to more than 8,000 NVIDIA Kepler GPU accelerators.
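
A minimal CUDA sketch of that computation-communication overlap is shown below. It is an illustration of the general pattern only, not the AWP-ODC source: the grid size, halo width, and simple averaging stencil are placeholder assumptions. The boundary rows of a subdomain are updated and copied off the GPU on one stream while the much larger interior update runs concurrently on another, so the data for the neighbor exchange is already in flight when the interior finishes.

```cuda
// Minimal sketch of overlapping halo communication with interior computation
// using CUDA streams. Sizes, halo width, and the stencil are illustrative
// assumptions, not values taken from AWP-ODC.
#include <cstdio>
#include <cuda_runtime.h>

#define NX 512
#define NY 512
#define HALO 4  // assumed halo width

// Simple 2-D averaging stencil standing in for the wave-propagation update;
// rows [y0, y1) of the grid are updated.
__global__ void stencil(const float* in, float* out, int nx, int ny,
                        int y0, int y1) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y + y0;
    if (x > 0 && x < nx - 1 && y > 0 && y < ny - 1 && y < y1) {
        int i = y * nx + x;
        out[i] = 0.25f * (in[i - 1] + in[i + 1] + in[i - nx] + in[i + nx]);
    }
}

int main() {
    size_t bytes = (size_t)NX * NY * sizeof(float);
    float *d_in, *d_out, *h_halo;
    cudaMalloc(&d_in, bytes);
    cudaMalloc(&d_out, bytes);
    cudaMemset(d_in, 0, bytes);
    cudaMemset(d_out, 0, bytes);
    cudaMallocHost(&h_halo, (size_t)NX * HALO * sizeof(float)); // pinned buffer

    cudaStream_t compute, comm;
    cudaStreamCreate(&compute);
    cudaStreamCreate(&comm);

    dim3 block(16, 16);
    dim3 gridHalo(NX / 16, (HALO + 15) / 16);
    dim3 gridInterior(NX / 16, (NY - HALO - 2 + 15) / 16);

    // Update the top boundary rows first on the "comm" stream...
    stencil<<<gridHalo, block, 0, comm>>>(d_in, d_out, NX, NY, 1, 1 + HALO);
    // ...and start copying them off the GPU (a stand-in for the halo
    // exchange with a neighboring subdomain)...
    cudaMemcpyAsync(h_halo, d_out + NX, (size_t)NX * HALO * sizeof(float),
                    cudaMemcpyDeviceToHost, comm);
    // ...while the much larger interior update runs concurrently.
    stencil<<<gridInterior, block, 0, compute>>>(d_in, d_out, NX, NY,
                                                 1 + HALO, NY - 1);

    cudaStreamSynchronize(comm);     // halo data ready to hand to the neighbor
    cudaStreamSynchronize(compute);  // interior update finished
    printf("one overlapped time step completed\n");

    cudaFree(d_in);  cudaFree(d_out);  cudaFreeHost(h_halo);
    cudaStreamDestroy(compute);  cudaStreamDestroy(comm);
    return 0;
}
```

In a production multi-GPU code the device-to-host copy would feed an MPI exchange with neighboring subdomains; the sketch omits that step, and the bottom halo, for brevity.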

Sustained One Petaflop/s Performance

The team performed GPU-based benchmark simulations of the magnitude 5.4 earthquake that occurred in July 2008 below Chino Hills, near Los Angeles. Compute systems included Keeneland, managed by Georgia Tech, Oak Ridge National Laboratory (ORNL), and the National Institute for Computational Sciences (NICS) as part of the National Science Foundation’s (NSF) eXtreme Science and Engineering Discovery Environment (XSEDE), and Blue Waters, based at the National Center for Supercomputing Applications (NCSA). Also used was the Titan supercomputer at ORNL, funded by the U.S. Department of Energy; Titan is a Cray XK7 system equipped with NVIDIA Tesla K20X GPU accelerators.

The benchmarks, run on Titan, showed a five-fold speedup over the heavily optimized CPU code on the same system, and a sustained performance of one petaflop/s (one quadrillion calculations per second). A previous benchmark of the AWP-ODC code reached only 200 teraflop/s (trillions of calculations per second) of sustained performance.

By delivering a significantly higher level of computational power, the code lets researchers produce more accurate predictions of earthquake ground motion, with increased physical realism and resolution and the potential to save lives and minimize property damage.

“This is an impressive achievement that has made petascale-level computing a reality for us, opening up some new and really interesting possibilities for earthquake research,” said Thomas Jordan, director of SCEC, which has been collaborating with UC San Diego and SDSU researchers on this and other seismic research projects, such as the simulation of a magnitude 8.0 earthquake, the largest such simulation to date.

“Substantially faster and more energy-efficient earthquake codes are urgently needed for improved seismic hazard evaluation,” said Cui, citing the recent destructive earthquakes in China, Haiti, Chile, New Zealand, and Japan.

Next Steps

While the GPU-based AWP-ODC code is already in research use, further enhancements are being planned for use on hybrid heterogeneous architectures such as Titan and Blue Waters.

“One goal going forward is to use this code to calculate an improved probabilistic seismic hazard forecast for the California region under a collaborative effort coordinated by SCEC,” said Cui. “Our ultimate goal is to support development of a CyberShake model that can assimilate information during earthquake cascades so we can improve our operational forecasting and early warning systems.”

CyberShake is a SCEC project focused on developing new approaches to performing seismic hazard analyses using 3D waveform modeling. The GPU-based code has the potential to save hundreds of millions of CPU-hours required to complete the statewide seismic hazard map calculations now being planned.

Additional members of the UC San Diego research team include Jun Zhou and Efecan Poyraz, graduate students in the university’s Department of Electrical and Computer Engineering (Zhou devoted his graduate research to this development work); SDSC researcher Dong Ju Choi; and Clark C. Guest, an associate professor of electrical and computer engineering at UC San Diego’s Jacobs School of Engineering.

Compute resources used for this research are supported by XSEDE under NSF grant number OCI-1053575, while additional funding for research was provided through XSEDE’s Extended Collaborative Support Service (ECSS) program.

“ECSS exists for exactly this reason, to help a research team make significant performance gains and take their simulations to the next level,” said Nancy Wilkins-Diehr, co-director of the ECSS program and SDSC’s associate director. “We’re very pleased with the results we were able to achieve for PI Thomas Jordan and his team. ECSS projects are typically conducted over several months to up to one year. This type of targeted support may be requested by anyone through the XSEDE allocations process.”

Building a full-scale model of a trapped oil reservoir in a laboratory

Getting trapped oil out of porous layers of sandstone and limestone is a tricky and costly operation for energy exploration companies the world over. But now, University of Alberta researchers have developed a way to replicate oil-trapping rock layers in a laboratory and show energy producers the best way to recover every last bit of oil from these reservoirs.

Mechanical engineering professor Sushanta Mitra led a research team that uses core samples from oil drilling sites to make 3-D mathematical models of the porous rock formations that can trap huge quantities of valuable oil.

The process starts with a tiny chip of rock from a core sample where oil has become trapped. That slice of rock is scanned by a focused ion beam-scanning electron microscopy (FIB-SEM) machine, which produces a 3-D model of the porous rock. The physical replica is then made from a thin layer of silicon and quartz at Nanofab, the U of A’s micro/nanofabrication facility.

The researchers call the finished product a “reservoir on a chip”, or ROC.

The hugely expensive process of recovering oil in the field is recreated right in the laboratory. The researchers soak the ROC in oil, then force pressurized water into the chip to see how much oil can be pushed through the microscopic channels and recovered.

ROC replicas can be made from core samples from oil-trapping rock anywhere in the world. “Oil exploration companies will be able to use ROC technology to determine what concentration of water and chemicals they’ll need to pump into layers of sandstone or limestone to maximize oil recovery,” said Mitra.

SCEC’s ‘M8’ earthquake simulation breaks computational records, promises better quake models

This image shows detail from the M8 simulation. To view a video simulation go to http://www.scivee.tv/node/21179. – Southern California Earthquake Center

A multi-disciplinary team of researchers has presented the world’s most advanced earthquake shaking simulation at the Supercomputing 2010 (SC10) conference held this week in New Orleans. The research was selected as a finalist for the Gordon Bell prize, awarded at the annual conference for outstanding achievement in high-performance computing applications.

The “M8” simulation shows how a magnitude 8.0 earthquake on the southern San Andreas Fault would shake the region, covering a larger area in greater detail than was previously possible. Perhaps most importantly, the development of the M8 simulation advances the state of the art in the speed and efficiency with which such calculations can be performed.

The Southern California Earthquake Center (SCEC) at the University of Southern California (USC) was the lead coordinator in the project. San Diego Supercomputer Center (SDSC) researchers provided the high-performance computing and scientific visualization expertise for the simulation. Scientific details of the earthquake were developed by scientists at San Diego State University (SDSU). Ohio State University (OSU) researchers were also part of the collaborative effort to improve the efficiency of the software involved.

While this specific earthquake has a low probability of occurrence, the improvements in technology required to produce this simulation will now allow scientists to simulate other, more likely earthquake scenarios in much less time than previously required. Because such simulations are the most important and widespread applications of high-performance computing for seismic hazard estimation currently in use, the SCEC team has focused on optimizing the technologies and codes needed to create them.

The M8 simulation was funded through a number of National Science Foundation (NSF) grants and was performed using supercomputer resources including NSF’s Kraken supercomputer at the National Institute for Computational Sciences (NICS) and the Department of Energy’s (DOE) Jaguar supercomputer at the National Center for Computational Sciences. The SCEC M8 simulation represents the latest in earthquake science and in computation at the petascale level, which refers to supercomputers capable of more than one quadrillion floating-point operations (calculations) per second.

“Petascale simulations such as this one are needed to understand the rupture and wave dynamics of the largest earthquakes, at shaking frequencies required to engineer safe structures,” said Thomas Jordan, director of SCEC and Principal Investigator for the project. Previous simulations were useful only for modeling how tall structures will behave in earthquakes, but the new simulation can be used to understand how a broader range of buildings will respond.

“The scientific results of this massive simulation are very interesting, and its level of detail has allowed us to observe things that we were not able to see in the past,” said Kim Olsen, professor of geological sciences at SDSU and lead seismologist of the study.

However, given the massive number of calculations required, only the most advanced supercomputers are capable of producing such simulations in a reasonable time period. “This M8 simulation represents a milestone calculation, a breakthrough in seismology both in terms of computational size and scalability,” said Yifeng Cui, a computational scientist at SDSC. “It’s also the largest and most detailed simulation of a major earthquake ever performed in terms of floating point operations, and opens up new territory for earthquake science and engineering with the goal of reducing the potential for loss of life and property.”

Specifically, the M8 simulation is the largest ever in terms of the duration of the shaking modeled (six minutes) and the geographical area covered – a rectangular volume approximately 500 miles (810 km) long by 250 miles (405 km) wide by 50 miles (85 km) deep. The team’s latest research also set a new record in the number of computer processor cores used, with 223,074 cores sustaining performance of 220 trillion calculations per second for 24 hours on the Jaguar Cray XT5 supercomputer at the Oak Ridge National Laboratory (ORNL) in Tennessee.

“We have come a long way in just six years, doubling the seismic frequencies modeled by our simulations every two to three years, from 0.5 Hertz (or cycles per second) in the TeraShake simulations, to 1.0 Hertz in the ShakeOut simulations, and now to 2.0 Hertz in this latest project,” said Phil Maechling, SCEC’s associate director for Information Technology.
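
To put those frequency gains in computational perspective, here is a rule-of-thumb estimate rather than a figure from SCEC: for an explicit finite-difference wave code covering a fixed region with a fixed number of grid points per wavelength, doubling the maximum frequency halves the grid spacing in all three dimensions and roughly halves the stable time step, so the total work grows as roughly the fourth power of the frequency.

```cuda
// Back-of-the-envelope scaling only (a general rule of thumb for explicit
// finite-difference wave codes, not a figure reported by SCEC).
// Plain host code; no GPU is required to run it.
#include <cstdio>

int main() {
    const double f_old = 0.5;            // Hz (TeraShake-era simulations)
    const double f_new = 2.0;            // Hz (the M8 simulation)
    const double r = f_new / f_old;      // 4x higher maximum frequency
    const double cost = r * r * r * r;   // grid points ~ f^3, time steps ~ f
    printf("Roughly %.0fx more computation at %.1f Hz than at %.1f Hz\n",
           cost, f_new, f_old);          // prints roughly 256x
    return 0;
}
```

That rapid growth in cost is why each step from TeraShake to ShakeOut to M8 required a new generation of supercomputing capability rather than an incremental run.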

In terms of earthquake science, these simulations can be used to study how earthquake waves travel through structures in the Earth’s crust and to improve three-dimensional models of such structures.

“Based on our calculations, we are finding that deep sedimentary basins, such as those in the Los Angeles area, are getting larger shaking than is predicted by the standard methods,” Jordan said. “By improving the predictions, making them more realistic, we can help engineers make new buildings safer.” The simulations are also useful for developing better seismic hazard policies and for improving scenarios used in emergency planning.