Drained lake holds record of ancient Alaska

Mountain lake is telling an interesting story about the Alaskan climate

Not too long ago, a lake sprang a leak in the high country of the Wrangell-St. Elias Mountains. The lake drained away, as glacier-dammed lakes often do, but this lake was a bit different, and seems to be telling a story about a warmer Alaska.

The lake, known as Iceberg Lake to people in McCarthy, about 50 air miles to the north, had been part of the landscape for as long as people could remember. Pinched by glacial ice, the three-mile-long, one-mile-wide lake on the northern boundary of the Bagley Icefield was remote but notable enough that it was the cover photo for a recent book about hiking Wrangell-St. Elias National Park.

When McCarthy guide Richard Villa visited the area with a client in the summer of 1999, he was stunned to see that the lake had lost much of its water. Villa later described the scene to Mike Loso, a Kennicott resident for part of the year and now a professor at Alaska Pacific University in Anchorage.

Loso flew to the lake the next summer with Bob Anderson and Dan Doak, scientists who also reside in McCarthy for part of the year. The men saw a muddy lakebed where Iceberg Lake had sat for so long. Streams of meltwater had cut through the mud, carving sharp canyons. Loso, Anderson, and Doak hiked into the gullies and saw many layers of the former lake bottom exposed on the walls. They knew that each pair of sediment layers, a thinner layer of fine-grained deposits that settled in winter and coarser sand forced in with summer runoff, represented a year in the life of the lake.

“We eyeballed these layers and said ‘Wow, there’s at least 1,000 of these things,’” Loso said.

Scientists often pull plugs of sediment from the bottom of lakes to find an ancient record of pollen, dust, ash, and other things that have drifted or flowed in over the years, but those records usually don’t go back farther than the Little Ice Age, a cold period from about 1600 to 1850 when many glaciers advanced. Those glaciers plowed over most of the landscape, but Iceberg Lake seemed to have escaped the gradual assault.

“(Iceberg Lake) is pinned between these two glaciers just far enough away that it didn’t get overrun by the Little Ice Age (glaciers),” Loso said.

So instead of having a record of just the last few hundred years, the floor of Iceberg Lake held a continuous record from 1998 back to A.D. 442, a span of more than 1,500 years.
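The arithmetic behind that date range is simple varve counting: one couplet of layers per year, counted down from the year the record ends. A minimal sketch in Python; the couplet count of 1,557 is an assumption inferred from the span quoted above, not a figure from the study.

```python
# Dating a varve record by counting annual couplets (one fine winter
# layer plus one coarse summer layer per year) down from the top.
def varve_span(top_year, n_couplets):
    """Return the calendar year of the oldest counted varve."""
    return top_year - n_couplets + 1

# 1,557 couplets is an assumed count consistent with the article's
# "1998 back to A.D. 442" span; the paper's actual tally may differ.
oldest = varve_span(1998, 1557)
print(oldest)  # 442
```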

That record is unique in that it seems to preserve a time of warmer temperatures called the Medieval Warm Period that happened before the Little Ice Age.

“It’s the most recent time period warm enough to be comparable to the present,” Loso said.

When Loso and his colleagues used the thickness of layers (called “varves”) to interpret warmth in the area of Iceberg Lake, they found that summer temperatures in that part of the state were warmer in the late 20th century than they were during the Medieval Warm Period.
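The shape of that comparison can be caricatured in a few lines: average the varve thicknesses in each era and see which period was warmer. The thickness values below are invented purely for illustration; the real analysis calibrates varve thickness against instrumental temperature records.

```python
# Toy varve-thickness comparison. Thicknesses (mm) are invented,
# only to illustrate the proxy argument, not real measurements.
medieval_mm = [1.0, 1.1, 0.9, 1.2, 1.0]   # hypothetical Medieval Warm Period varves
modern_mm   = [1.4, 1.6, 1.5, 1.7, 1.5]   # hypothetical late-20th-century varves

def mean(xs):
    return sum(xs) / len(xs)

# Thicker varves are read as warmer summers (more melt, more sediment).
print(mean(modern_mm) > mean(medieval_mm))  # True for these invented values
```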

Not only that, they saw that Iceberg Lake had never drained during the Medieval Warm Period. Since the catastrophic leakage in 1999, the lake has drained of meltwater every year except for 2001. With such erratic behavior after centuries of stability, Iceberg Lake might be saying that Alaska has been warmer recently than it has been in a long, long time.

Studying Rivers for Clues to Carbon Cycle

In the science world, in the media, and recently, in our daily lives, the debate continues over how carbon in the atmosphere is affecting global climate change. Studying just how carbon cycles throughout the Earth is an enormous challenge, but one Northwestern University professor is doing his part by studying one important segment — rivers.

Aaron Packman, associate professor of civil and environmental engineering in Northwestern’s McCormick School of Engineering and Applied Science, is collaborating with ecologists and microbiologists from around the world to study how organic carbon is processed in rivers.

Packman, who specializes in studying how particles and sediment move around in rivers, is co-author of a paper on the topic published online in the journal Nature Geoscience.

The paper evaluates our current understanding of carbon dynamics in rivers and reaches two important conclusions: it argues that carbon processing in rivers is a bigger component of global carbon cycling than people previously thought, and it lays out a framework for how scientists should go about assessing those processes.

Much more is known about carbon cycling in the atmosphere and oceans than in rivers. Evaluating large-scale material cycling in a river provides a challenge — everything is constantly moving, and a lot of it moves in floods. As a result, much of what we know about carbon processing in rivers is based on what flows into the ocean.

“But that’s not really enough,” Packman said. “You miss all this internal cycling.”

In order to understand how carbon cycles around the globe — through the land, freshwater, oceans and atmosphere — scientists need to understand how it moves around, how it’s produced, how it’s retained in different places and how long it stays there.

In rivers, carbon is both transformed and consumed. Microorganisms like algae take carbon out of the atmosphere and incorporate it into their own cells, while bacteria eat dead organic matter and then release CO2 back into the atmosphere.

“It’s been known for a long time that global carbon models don’t really account for all the carbon,” Packman said. “There’s a loss of carbon, and one place that could be occurring is in river systems.” Even though river waters contain a small fraction of the total water on Earth, they are dynamic environments in which microorganisms consume and transform carbon at rapid rates.

“We’re evaluating how the structure and transport conditions and the dynamics of rivers create a greater opportunity for microbial processing,” Packman said.

Packman is the first to admit that studying microorganisms, carbon and rivers sounds more like ecology than engineering. But such problems require work from all different areas, he said.

“We’re dealing with such interdisciplinary problems, tough problems, so we have to put fluid mechanics, transport, ecology and microbiology together to find this overall cycling of carbon,” he said. “People might say it’s a natural science paper, but to me it’s a modern engineering paper. To understand what’s going on with these large-scale processes, we have to analyze them quantitatively, and the tools for getting good estimates have been developed in engineering.”

Packman was introduced to the co-authors of the paper — ecologists who study how dead leaves and soil drive stream ecology and who come from as far away as Spain and Austria — about 10 years ago through activities at the Stroud Water Research Center in Pennsylvania.

Since then, they have collaborated on many similar projects around river structure and transport dynamics. They are currently working on a project funded by the National Science Foundation on the dynamics of organic carbon in rivers, trying to understand how carbon delivered from upstream areas influences the ecology of downstream locations.

“The broadest idea is really part of global change efforts to understand carbon cycling over the whole Earth, which is an enormous challenge,” Packman said.

The lead author of the Nature Geoscience paper is Tom Battin of the Department of Freshwater Ecology at the University of Vienna in Austria. Other authors are Louis A. Kaplan, Stuart Findlay, Charles S. Hopkinson, Eugenia Marti, J. Denis Newbold and Francesc Sabater.

Supercomputer Unleashes Virtual 9.0 Megaquake in Pacific Northwest

Scientists used a supercomputer-driven 'virtual earthquake' to explore likely ground shaking in a magnitude 9.0 megathrust earthquake in the Pacific Northwest. Peak ground velocities are displayed in yellow and red. The legend represents speed in meters per second (m/s) with red equaling 2.3 m/s. Although the largest ground motions occur offshore near the fault and decrease eastward, sedimentary basins lying beneath some cities amplify the shaking in Seattle, Tacoma, Olympia, and Vancouver, increasing the risk of damage. - Credit: Kim Olsen, SDSU

On January 26, 1700, at about 9 p.m. local time, the Juan de Fuca plate beneath the ocean in the Pacific Northwest suddenly moved, slipping some 60 feet eastward beneath the North American plate in a monster quake of approximately magnitude 9, setting in motion large tsunamis that struck the coast of North America and traveled to the shores of Japan.

Since then, the earth beneath the region – which includes the cities of Vancouver, Seattle and Portland — has been relatively quiet. But scientists believe that earthquakes with magnitudes greater than 8, so-called “megathrust events,” occur along this fault on average every 400 to 500 years.

To help prepare for the next megathrust earthquake, a team of researchers led by seismologist Kim Olsen of San Diego State University (SDSU) used a supercomputer-powered “virtual earthquake” program to calculate for the first time realistic three-dimensional simulations that describe the possible impacts of megathrust quakes on the Pacific Northwest region. Also participating in the study were researchers from the San Diego Supercomputer Center at UC San Diego and the U.S. Geological Survey.

What the scientists learned from this simulation is not reassuring, as reported in the Journal of Seismology, particularly for residents of downtown Seattle.

With a rupture scenario beginning in the north and propagating toward the south along the 600-mile long Cascadia Subduction Zone, the ground moved about 1 ½ feet per second in Seattle; nearly 6 inches per second in Tacoma, Olympia and Vancouver; and 3 inches per second in Portland, Oregon. Additional simulations, especially of earthquakes that begin in the southern part of the rupture zone, suggest that the ground motion under some conditions can be up to twice as large.

“We also found that these high ground velocities were accompanied by significant low-frequency shaking, like what you feel in a roller coaster, that lasted as long as five minutes – and that’s a long time,” said Olsen.

The long-duration shaking, combined with high ground velocities, raises the possibility that such an earthquake could inflict major damage on metropolitan areas — especially on high-rise buildings in downtown Seattle. Compounding the risks, like Los Angeles to the south, Seattle, Tacoma, and Olympia sit on top of sediment-filled geological basins that are prone to greatly amplifying the waves generated by major earthquakes.

“One thing these studies will hopefully do is to raise awareness of the possibility of megathrust earthquakes happening at any given time in the Pacific Northwest,” said Olsen. “Because these events will tend to occur several hundred kilometers from major cities, the study also implies that the region could benefit from an early warning system that can allow time for protective actions before the brunt of the shaking starts.” Depending on how far the earthquake is from a city, early warning systems could give from a few seconds to a few tens of seconds to implement measures, such as automatically stopping trains and elevators.

Added Olsen, “The information from these simulations can also play a role in research into the hazards posed by large tsunamis, which can originate from megathrust earthquakes like the 2004 Sumatra-Andaman earthquake in Indonesia.” One of the largest earthquakes ever recorded, the magnitude 9.2 Sumatra-Andaman event was felt as far away as Bangladesh, India, and Malaysia, and triggered devastating tsunamis that killed more than 200,000 people.

In addition to increasing scientific understanding of these massive earthquakes, the results of the simulations can also be used to guide emergency planners, to improve building codes, and help engineers design safer structures — potentially saving lives and property in this region of some 9 million people.

Even with the large supercomputing and data resources at SDSC, creating “virtual earthquakes” is a daunting task. The computations to prepare initial conditions were carried out on SDSC’s DataStar supercomputer, and then the resulting information was transferred for the main simulations to the center’s Blue Gene Data supercomputer via SDSC’s advanced virtual file system or GPFS-WAN, which makes data seamlessly available on different – sometimes distant – supercomputers.

Coordinating the simulations required a complex choreography of moving information into and out of the supercomputer as Olsen’s sophisticated “Anelastic Wave Model” simulation code was running. Completing just one of several simulations, running on 2,000 supercomputer processors, required some 80,000 processor hours – equal to running one program continuously on your PC for more than 9 years!
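The PC comparison checks out with back-of-the-envelope arithmetic. A quick sketch, assuming a single-core PC running around the clock:

```python
# Sanity-check the article's processor-hour comparison.
processor_hours = 80_000        # total compute for one simulation
n_processors = 2_000            # processors running concurrently

wall_clock_hours = processor_hours / n_processors   # elapsed time on the supercomputer
pc_years = processor_hours / (24 * 365)             # one core, running nonstop

print(wall_clock_hours)         # 40.0 hours of supercomputer wall-clock time
print(round(pc_years, 1))       # 9.1 years on a single-core PC
```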

“To solve the new challenges that arise when researchers need to run their codes at the largest scales, and data sets grow to great size, we worked closely with the earthquake scientists through several years of code optimization and modifications,” said SDSC computational scientist Yifeng Cui, who contributed numerous refinements to allow the computer model to “scale up” to capture a magnitude 9 earthquake over such a vast area.

In order to run the simulations, the scientists must recreate in their model the components that encompass all the important aspects of the earthquake. One component is an accurate representation of the earth’s subsurface layering, and how its structure will bend, reflect, and change the size and direction of the traveling earthquake waves. Co-author William Stephenson of the USGS worked with Olsen and Andreas Geisselmeyer, from Ulm University in Germany, to create the first unified “velocity model” of the layering for this entire region, extending from British Columbia to Northern California.

Another component is a model of the earthquake source from the slipping of the Juan de Fuca plate underneath the North American plate. Making use of the extensive measurements of the massive 2004 Sumatra-Andaman earthquake in Indonesia, the scientists developed a model of the earthquake source for similar megathrust earthquakes in the Pacific Northwest.

The sheer physical size of the region in the study was also challenging. The scientists included in their virtual model an immense slab of the earth more than 650 miles long by 340 miles wide by 30 miles deep — more than 7 million cubic miles — and used a computer mesh spacing of 250 meters to divide the volume into some 2 billion cubes. This mesh size allows the simulations to model frequencies up to 0.5 Hertz, which especially affect tall buildings.
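The quoted cube count follows from the slab dimensions and the 250-meter spacing. A rough check, treating the domain as a simple box (which the real model domain is not):

```python
# Rough check of the "some 2 billion cubes" figure.
MI_TO_M = 1609.34               # meters per mile
length_m = 650 * MI_TO_M
width_m  = 340 * MI_TO_M
depth_m  = 30 * MI_TO_M
spacing  = 250.0                # mesh spacing in meters

cells = (length_m / spacing) * (width_m / spacing) * (depth_m / spacing)
print(f"{cells:.2e}")           # on the order of 1.8 billion, i.e. ~2 billion
```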

“One of the strengths of an earthquake simulation model is that it lets us run scenarios of different earthquakes to explore how they may affect ground motion,” said Olsen. Because the accumulated stresses or “slip deficit” can be released in either one large event or several smaller events, the scientists ran scenarios for earthquakes of different sizes.

“We found that the magnitude 9 scenarios generate peak ground velocities five to 10 times larger than those from the smaller magnitude 8.5 quakes.”

The researchers are planning to conduct additional simulations to explore the range of impacts that depend on where the earthquake starts, the direction of travel of the rupture along the fault, and other factors that can vary.

This research was supported by the National Science Foundation, the U.S. Geological Survey, the Southern California Earthquake Center, and computing time on an NSF supercomputer at SDSC.

Earthquake theory stretched in Central Asia study

Scientists discover cause of seismic instability in Pakistan

The entrenched political instability in Pakistan and Afghanistan is of grave concern to many in the West – but now geologists at ANU have suggested a new cause for the seismic instability that regularly rocks the region.

Scientists from the Research School of Earth Sciences at ANU argue that the frequent and dramatic earthquakes in the Hindu Kush mountain range are likely to be the result of a slow, elastic stretching of a sub-surface feature called a boudin. Their findings, published in the journal Nature Geoscience today, run contrary to the theory that earthquakes usually result from the abrasive collisions between tectonic plates.

“We’ve always thought of earthquakes as being brittle, but our research shows that the slow, ductile stretching of certain geological features can build up energy that is then suddenly released, causing major seismic upheaval,” said lead author Professor Gordon Lister.

Using computer modelling, the researchers were able to show that the long, hard boudin that sits vertically beneath the Hindu Kush is being stretched as its lower parts are pulled into the Earth’s mantle. “It’s like a metal rod that is being pulled at both ends,” Professor Lister explained. “Eventually the stretching will suddenly accelerate, releasing energy in the process.”

The boudin is thought to be a remnant of the oceanic plate that was pushed into the Earth’s mantle when India collided with Asia. Professor Lister said that it too will eventually drop into the deeper mantle, but that is likely to take thousands, if not millions, of years.

“This is important work, as it suggests a new way of understanding how earthquakes happen. It feeds into the potential for us to eventually develop new and innovative long-range forecasting techniques,” Professor Lister said.

“It’s no accident that nations like Afghanistan and Pakistan are places of unrest, because the people there are living in constant hardship, and this results in part from the periodic catastrophes they must endure, for example those related to earthquakes. If we don’t put more effort into understanding the how and why, and also into how we might eventually better forecast earthquakes, humankind is forever doomed to deal with the consequences.”

The researchers have developed a software program called eQuakes that allows them to model earthquake patterns against geological features.

Surprise On Journey To Center Of The Earth: Light Tectonic Plates Lead The Way

Andes Mountains, Peru. When two tectonic plates collide, with one sliding below the other and sinking into the mantle, it can lead to the formation of mountain belts, like the Andes.

The first direct evidence of how and when tectonic plates move into the deepest reaches of the Earth has been detailed in Nature. Scientists hope their description of how plates collide with one sliding below the other into the rocky mantle could potentially improve their ability to assess earthquake risks.

The UK and Swiss team found that, contrary to common scientific predictions, dense plates tend to be held in the upper mantle, while younger and lighter plates sink more readily into the lower mantle.

The mantle is the zone between the Earth’s crust and its superhot molten core. It is divided into an upper and a lower region, and consists of a layer of churning, viscous rock roughly 2,900 km thick. It is constantly fed with new material from parts of tectonic plates that slide down from the surface into it.

The researchers’ numerical models show how old, dense and relatively stiff plates tend to flatten upon reaching the upper-lower mantle boundary, ‘draping’ on top of it. Their models are helping to explain plate movements and earthquakes in the Western Pacific, where old plates currently sink below Tonga, the Mariana Islands and Japan.

By contrast, younger more malleable plates tend to bend and fold above the boundary of the lower mantle for tens of millions of years until they form a critical mass that can sink rapidly into the lower mantle.

When this mass moves into the lower mantle, the part of the plate still at the surface is pulled along at high speed. This explains why plate movements below Central and northern South America are much faster than expected for such young plates.

The scientists came to these conclusions by using a numerical model, originally used to show how buildings buckle and fold, which calculates the brittleness, stiffness and elasticity of tectonic plates alongside how the pressures and stresses inside the mantle would affect the plate on its downward descent.

They then compared the modelling with plate movement data. By comparing the two, the team was able to build up a clear picture of how plates should move when stalled in the upper mantle and also show, for the first time, how tectonic plate rock mixes within the mantle.

Commenting on the study,* lead researcher Dr Saskia Goes, from Imperial College London’s Department of Earth Science and Engineering, said: “It is exciting to see direct evidence of plates transiting between the upper and lower mantle. This process has been predicted by models before, but no one has been able to link these predictions with observations, as we now do for plate motions.”

When two tectonic plates collide, with one sliding below the other and sinking into the mantle, it can lead to the formation of mountain belts, like the Andes, and island arcs, like Japan, and in some places cause explosive volcanism and earthquakes. Dr Goes says more research is needed, but believes this study could potentially help scientists determine earthquake risks in parts of these zones where none have ever been recorded before.

“The speed with which the two plates converge, and the force with which they are pushed together, determine the size of the largest earthquakes and time between large tremors. Understanding what forces control the plate motions will ultimately help us determine the chances for large earthquakes in areas where plates converge, in places like the northern U.S., Java and northern Peru, but where no large earthquakes have been recorded in historic times,” she adds.

About tectonic plates

There are 8 major and a further 7 minor tectonic plates which cover the Earth’s surface. These plates move across the surface of the Earth. When some plates meet they undergo a process which pushes them upward to create geological formations like mountain ranges. Some plates pull apart, causing fault lines and others undergo a process known as subduction. Subduction occurs when one plate is pushed underneath another and moves into the Earth’s mantle – a rocky zone underneath the crust.

*Journal reference: Saskia Goes, Fabio A. Capitanio and Gabriele Morra. “Evidence of lower mantle slab penetration phases in plate motions.” Nature, 21 February 2008.

This work was supported by a Schweizerischer Nationalfonds Förderungsprofessur (to S.G.).

Greenland’s Rising Air Temperatures Drive Ice Loss At Surface And Beyond

Melt water puddles on the Greenland ice sheet and drains through cracks to the bedrock below. This water lubricates the underlying bedrock, causing the ice to flow faster toward the sea. (Credit: NASA)

A new NASA study confirms that the surface temperature of Greenland’s massive ice sheet has been rising, stoked by warming air temperatures, and fueling loss of the island’s ice at the surface and throughout the mass beneath.

Greenland’s enormous ice sheet is home to enough ice to raise sea level by about 23 feet if the entire ice sheet were to melt into surrounding waters. Though the loss of the whole ice sheet is unlikely, loss from Greenland’s ice mass has already contributed in part to 20th century sea level rise of about two millimeters per year, and future melt has the potential to impact people and economies across the globe. So NASA scientists used state-of-the-art NASA satellite technologies to explore the behavior of the ice sheet, revealing a relationship between changes at the surface and below.

“The relationship between surface temperature and mass loss lends further credence to earlier work showing rapid response of the ice sheet to surface meltwater,” said Dorothy Hall, a senior researcher in Cryospheric Sciences at NASA’s Goddard Space Flight Center, in Greenbelt, Md., and lead author of the study.

A team led by Hall used temperature data captured each day from 2000 through 2006 from the Moderate Resolution Imaging Spectroradiometer (MODIS) instrument on NASA’s Terra satellite. They measured changes in the surface temperature to within about one degree of accuracy from about 440 miles away in space. They also measured melt area within each of the six major drainage basins of the ice sheet to see whether melt has become more extensive and longer lasting, and to see how the various parts of the ice sheet are reacting to increasing air temperatures.

The team took their research at the ice sheet’s surface a step further, becoming the first to pair the surface temperature data with satellite gravity data to investigate what internal ice changes occur as the surface melts. Geophysicist and co-author, Scott Luthcke, also of NASA Goddard, developed a mathematical solution, using gravity data from NASA’s Gravity Recovery and Climate Experiment (GRACE) twin satellite system. “This solution has permitted greatly-improved detail in both time and space, allowing measurement of mass change at the low-elevation coastal regions of the ice sheet where most of the melting is occurring,” said Luthcke.

The paired surface temperature and gravity data confirm a strong connection between melting on ice sheet surfaces in areas below 6,500 feet in elevation, and ice loss throughout the ice sheet’s giant mass. The result led Hall’s team to conclude that the start of surface melting triggers mass loss of ice over large areas of the ice sheet.

The beginning of mass loss is highly sensitive to even minor amounts of surface melt. Hall and her colleagues showed that when less than two percent of the lower reaches of the ice sheet begins to melt at the surface, mass loss of ice can result. For example, in 2004 and 2005, the GRACE satellites recorded the onset of rapid subsurface ice loss less than 15 days after surface melting was captured by the Terra satellite.

“We’re seeing a close correspondence between the date that surface melting begins, and the date that mass loss of ice begins beneath the surface,” Hall said. “This indicates that the meltwater from the surface must be traveling down to the base of the ice sheet — through over a mile of ice — very rapidly, where its presence allows the ice at the base to slide forward, speeding the flow of outlet glaciers that discharge icebergs and water into the surrounding ocean.”

Hall underscores the importance of combining results from multiple NASA satellites to improve understanding of the ice sheet’s behavior. “We find that when we look at results from different satellite sensors and those results agree, the confidence in the conclusions is very high,” said Hall.

Hall and her colleagues believe that air temperature increases are responsible for increasing ice sheet surface temperatures and thus more-extensive surface melt. “If air temperatures continue rising over Greenland, surface melt will continue to play a large role in the overall loss of ice mass.” She also noted that the team’s detailed study using the high-resolution MODIS data show that various parts of the ice sheet are reacting differently to air temperature increases, perhaps reacting to different climate-driven forces. This is important because much of the southern coastal area of the ice sheet is already near the melting point (0 degrees Celsius) during the summer.

Changes in Greenland’s ice sheet surface temperature have been measured by satellites dating back to 1981. “Earlier work has shown increasing surface temperatures from 1981 to the present,” said Hall. “However, additional years with more accurate and finer resolution data now available using Terra’s imager are providing more information on the surface temperature within individual basins on the ice sheet, and about trends in ice sheet surface temperature. Combining this data with data from GRACE arms us with better tools to establish the relationship between surface melting and loss of ice mass.”

The new NASA study appears in the January issue of the quarterly Journal of Glaciology.

Cold conspirators: Ice crystals implicated in Arctic pollution

Field of frost flowers

Frost flowers. Diamond dust. Hoarfrost. These poetically named ice crystal forms are part of the stark beauty of the Arctic. But they also play a role in its pollution, a new study by scientists at the University of Michigan, the Cold Regions Research & Engineering Laboratory and the University of Alaska has found.

After collecting and analyzing hundreds of samples from the Alaskan Arctic, the researchers determined that ice crystals that form from vapor clouds billowing up from cracks in sea ice help concentrate mercury from the atmosphere, and that certain types of crystals are more efficient than others. Their results appear in the cover article for the March 1 issue of Environmental Science & Technology.

“Previous measurements had shown that in polar springtime, the normally steady levels of mercury in the atmosphere drop to near zero, and scientists studying this atmospheric phenomenon had analyzed a few snow samples and found very high levels of mercury,” said Joel Blum, the John D. MacArthur Professor of Geological Sciences at U-M. “We wanted to understand what’s controlling this mercury deposition, where it’s occurring and whether mercury concentrations are related to the type and formation of snow and ice crystals.”

Mercury is a naturally occurring element, but some 150 tons of it enter the environment each year from human-generated sources in the United States, such as incinerators, chlorine-producing plants and coal-fired power plants. Precipitation is a major pathway through which mercury and other pollutants travel from the atmosphere to land and water, said lead author Thomas Douglas of the Cold Regions Research & Engineering Laboratory in Fort Wainwright, Alaska.

“Alaska receives air masses originating in Asia, and with China adding a new coal-fired power plant almost every week, it’s not surprising that we find significant amounts of mercury there,” Douglas said. “The concentrations we measured in some snow are far greater than would be found right next to a waste incinerator or power plant in an industrialized location.”

Once mercury from the atmosphere is deposited onto land or into water, micro-organisms convert some of it to methylmercury, a highly toxic form that builds up in fish and the animals that eat them. In wildlife, exposure to methylmercury can interfere with reproduction, growth, development and behavior and may even cause death. Effects on humans include damage to the central nervous system, heart and immune system. The developing brains of young and unborn children are especially vulnerable.

Douglas, Blum and co-workers discovered that certain types of ice crystals (frost flowers and rime ice) contained the highest concentrations of mercury. Because both types of crystal grow directly by water vapor accretion, the scientists reasoned that breaks in the sea ice, where water vapor rises in great clouds, contribute to Arctic mercury deposition.

“The vapor that rises through these openings in the ice brings with it bromine from the sea water. That gets into the atmosphere, where sunlight plus the bromine cause a catalytic reaction which converts mercury gas into a reactive form. If any ice crystals are present, the mercury sticks to them and comes out of the atmosphere,” Blum said.

Close-up of frost flowers

The greater the surface area of the crystals, the more mercury they grab, which explains why frost flowers and rime ice, both delicate formations with high surface areas, end up with so much mercury. The mercury-tainted crystals aren’t, however, confined to the edges of breaks in the ice, the researchers determined. Bromine can travel great distances, resulting in mercury deposition in snow throughout the Arctic coastal region.

Collecting the samples was an undertaking that required a spirit of adventure as well as scientific savvy.

“It’s kind of a scary place to work,” Blum said. “It’s freezing cold, and you’re out on the sea ice as it’s breaking and shifting. You can very easily get stuck on the wrong side of the ice and get stranded. Our Inupiat guides would listen and watch, and when they told us things were shifting, we’d get out of there quickly.”

In one experiment the research team used Teflon containers filled with liquid nitrogen, attached to kites or long poles, to collect newly condensed frost over the open water. They also flew a remote-controlled airplane through the vapor cloud and collected ice from its wings.

Even getting out to the ice to do the work was a challenge. After flying to Barrow, Alaska, the northernmost settlement on the North American mainland, the team took off on snowmobiles, led by their Inupiat guides. That may sound like a lark, but traveling over sea ice was not exactly smooth sailing, Blum said. Though the ice freezes flat, it breaks up, smashes back together and refreezes, forming high ridges through which the team had to chip their way with ice-axes to make pathways for their snowmobiles.

But the results are worth the effort and the risks, Douglas said.

“Research like this will help to further the understanding of mercury deposition to a region that is generally considered pristine,” he said. “In the next phase of our work, we are expanding our knowledge by tracking the mercury during and following snow melt and studying its accumulation on the tundra.”

In addition to Blum and Douglas, the paper’s authors are Matthew Sturm of the Cold Regions Research & Engineering Laboratory in Fort Wainwright, Alaska; William R. Simpson and Laura Alvarez-Aviles of the University of Alaska, Fairbanks; Gerald Keeler, director of the U-M Air Quality Laboratory; Donald Perovich of the Cold Regions Research & Engineering Laboratory in Hanover, N.H.; U-M post-doctoral fellow Abir Biswas and U-M graduate student Kelsey Johnson.

The researchers received funding from the National Science Foundation’s Office of Polar Programs Arctic Science Section.

Heavy rainfall on the increase

Scientists at the University of East Anglia (UEA) have found that winter precipitation – such as rain and snow – became more intense in the UK during the last 100 years.

Similar increases in heavy rainfall have now also become evident in spring and, to a lesser extent, autumn.

A previously reported reduction in heavy summer rainfall appears to have ended during the 1990s, with observations for the last decade indicating a return to more typical amounts of intense rainfall in summer.

The results will inform other work currently being carried out on flood risk and the impact of extreme weather events. As surface run-off depends on rainfall intensity and frequency, changes in intense rainfall events will impact strongly on floods.

The UEA study was funded by the Natural Environment Research Council (NERC) as part of the Flood Risk from Extreme Events (FREE) programme, which aims to improve predictions of floods minutes-to-weeks and seasons-to-decades ahead, by using environmental science to investigate the physical processes involved in generating extreme flooding events.

Using data from more than 600 rain gauges around the UK, from as far back as 1900 to as recently as 2006, Douglas Maraun, Tim Osborn and Nathan Gillett, of the university’s Climatic Research Unit, classified every day’s measured precipitation into one of 10 categories of rainfall intensity, from drizzle to a downpour. They then analysed how the amount of precipitation in each category has changed over time. In winter, for example, the amount of precipitation falling in the heaviest category has increased over the last 40 years in all regions of the UK.

The work, published this week in the International Journal of Climatology, updates and extends previous studies by Dr Osborn and colleagues, using five times as many rain gauges and looking at measurements over a longer time period.

Their classification took into account the typical differences in rainfall between summer and winter and across different regions of the country. In parts of East Anglia, for example, heavy rain meant at least 20mm falling on a single summer day, while in winter, 10mm in a day was sufficient to reach the heaviest category. For some locations in the north-west Highlands of Scotland, rain or snow falls of at least 30mm in summer and even 60mm in winter were the minimum required to count towards the heaviest category.
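The classification described above can be sketched roughly as follows. The category boundaries, region labels, and function names here are invented for illustration; they are not the actual thresholds used by the Climatic Research Unit.

```python
# Rough sketch of an intensity-category classification: each day's
# precipitation is assigned to one of 10 categories, with category
# boundaries that depend on region and season. All numbers below
# are illustrative placeholders, not the CRU values.

# Hypothetical lower bounds (mm/day) for categories 1..10.
THRESHOLDS = {
    ("East Anglia", "summer"): [0.1, 1, 2, 3, 5, 7, 10, 13, 16, 20],
    ("East Anglia", "winter"): [0.1, 0.5, 1, 2, 3, 4, 5, 7, 8, 10],
}

def classify(amount_mm, region, season):
    """Return the intensity category (1 = drizzle .. 10 = heaviest)
    for one day's precipitation, or 0 for a dry day."""
    category = 0
    for i, lower in enumerate(THRESHOLDS[(region, season)], start=1):
        if amount_mm >= lower:
            category = i
    return category

def heavy_fraction(daily_mm, region, season):
    """Fraction of total precipitation that fell in the heaviest category."""
    total = sum(daily_mm)
    heavy = sum(x for x in daily_mm
                if classify(x, region, season) == 10)
    return heavy / total if total else 0.0

# Example: what share of this (made-up) winter week's rain was "heavy"?
days = [0.0, 2.5, 11.0, 0.4, 25.0, 6.0]
share = heavy_fraction(days, "East Anglia", "winter")
```

Tracking how a statistic like `share` changes decade by decade is, in spirit, how a shift from about seven per cent to about twelve per cent of winter precipitation coming from the heaviest events would show up.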

This new, more extensive study, using up-to-date records, supports the existence of a long-term increase in winter precipitation intensity that is very widespread across the UK. In the late 1960s, about seven per cent of the UK’s winter precipitation came from heavy rain or snow events, while in the last 10 years that figure has been about 12 per cent.

Until the late 1990s, most areas of the UK had seen a decreasing contribution of extreme rainfall during the summer. The updated measurements indicate that this trend towards lighter summer rainfall reversed during the last decade, but it is too early to tell whether this new trend will continue into the future.

“So far it is not clear what causes these trends and variations. In the next stage of our study, we will be looking at possible physical mechanisms and whether man-made global warming is contributing,” explained Dr Maraun.

2007 third-lowest year on record for earthquake activity

The Nevada and eastern California region experienced one of its quietest years on record for earthquake activity, according to a study by the University of Nevada, Reno’s Nevada Seismological Laboratory.

Nevada seismologists located only 4,503 earthquakes during 2007 in their monitoring region that includes Nevada and eastern California; only three of 2007’s earthquakes were magnitude 4.0 or larger. These events occurred in relatively remote areas, with the largest, north of Bridgeport, Calif., on March 9, registering a 4.9. The largest within the borders of Nevada was a magnitude 4.1 earthquake on Jan. 24, 2007, about 25 miles south-southeast of Goldfield.

“Because earthquakes form a complicated pattern in space and time, there is no simple way to rank the calendar years based on the rate of activity,” said John Anderson, director of the Nevada Seismological Laboratory, which is a research and academic unit within the College of Science’s Mackay School of Earth Sciences and Engineering. “But by any ranking, 2007 was one of the quietest on record.”

Since 1932, the average year for the region has had about 18 earthquakes of magnitude 4.0 or greater, so the three recorded in 2007 represent about 17 percent of the average. 2005 was another quiet year, with only two earthquakes larger than magnitude 4.0.

The numbers for a quiet seismological year are also borne out by earthquakes of magnitude 3.0 or greater: 2007 had 36 such events, which compares with the quietest years on record, 1977 and 2005, when there were only 31.
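Anderson’s point that there is “no simple way to rank the calendar years” can be made concrete: a ranking depends on which magnitude threshold you count above. A rough sketch, using an invented miniature catalog rather than the laboratory’s real records:

```python
# Rank years from quietest to most active at a chosen magnitude
# threshold. The catalog below is invented for illustration only.
from collections import Counter

# Hypothetical catalog: (year, magnitude) pairs.
catalog = [
    (2005, 4.2), (2005, 3.1), (2005, 3.4),
    (2007, 4.9), (2007, 4.1), (2007, 4.0), (2007, 3.2),
    (2008, 5.0), (2008, 3.8), (2008, 4.3),
]

def counts_above(catalog, threshold):
    """Number of events per year with magnitude >= threshold."""
    return dict(Counter(year for year, mag in catalog if mag >= threshold))

def rank_quietest(catalog, threshold):
    """Years ordered from fewest to most events above the threshold."""
    counts = counts_above(catalog, threshold)
    return sorted(counts, key=counts.get)
```

Different thresholds can reorder the list, which is why a year like 2007 is best described as quiet “by any ranking” rather than by a single number.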

Anderson said it is difficult to forecast what is in store for 2008.

“Low rates of activity in 1945 and 1989 were not immediately followed by exceptionally active years, so the low level of seismicity in 2005-2007 does not necessarily imply that 2008 will be more active,” he said.

The veteran seismologist noted that although short-term prospects for earthquake activity cannot be predicted, Nevada has not had an earthquake with a magnitude greater than 7.0 since 1954. Based on earthquake activity since the mid-1800s, at least one such major event should be expected on average every 25 to 35 years, he said.

“A magnitude-7.0 earthquake could occur almost anyplace in Nevada, but is more likely in areas where smaller earthquakes are more common,” Anderson said, adding that mapping and historical trends indicate that western Nevada has higher rates of smaller earthquakes. “Because we cannot forecast or predict where the next damaging earthquake will occur, Nevadans are always urged to be prepared for potentially damaging events.”

The Nevada Seismological Laboratory is the lead agency in monitoring earthquakes in the western Great Basin, as part of a nationwide monitoring program that is supported by the U.S. Geological Survey. It is part of an exclusive group of universities that contribute in major ways to the nation’s monitoring network.

Earthquake preparedness information is available from the Nevada Seismological Laboratory on its website.

Solar evidence suggests the sun is not to blame for global warming

It’s getting harder and harder to blame the sun, rather than mankind, for the gradual increase in global temperatures that is now being seen in the climate record, scientists said today.

In a symposium today on the potential role of solar variability – increases in heat coming from the sun – held in Boston at the annual meeting of the American Association for the Advancement of Science, experts in solar science, climate modeling, and atmospheric science explored the issues surrounding who or what is to blame for the rapid rate of change.

There are several possible explanations, but the one most often blamed is human industry – that is, heating, cooling, automobile exhaust, manufacturing, and power generation. Such activities rely heavily on burning gas, oil, and coal on a massive scale, and the end result includes carbon dioxide, a so-called greenhouse gas that traps the heat radiating from the ground, keeping it from escaping back into space.

“I’m looking for the millennial scale of solar variability,” said astronomer Sallie Baliunas, a researcher at the Harvard-Smithsonian Center for Astrophysics in Cambridge. She added that “the records do show variability,” such as changes in radioactive carbon-14 abundance and a beryllium isotope in sediment that suggest changes in solar output. “Did the sun cause what we see on the ground?” she asked. “It doesn’t seem so. But there is some fuzziness in the data, which suggests it could go either way. The answer isn’t known at this time.”

What is becoming known, especially from computer models of global climate, is quite gloomy. Warming that was first noticed in the 1960s has increased steadily, and is probably directly linked to human activities.

Scientists suspect the changes in the amount of beryllium-10 and carbon-14 found in various layers of sediment reflect solar activity, because the magnetic disturbances associated with sunspots tend to block the normal flow of cosmic rays reaching the Earth from space. The cosmic rays collide with atoms in the Earth’s atmosphere, creating the unusual isotopes; beryllium and carbon thus serve as a “signature” for cosmic-ray and solar activity.

“Our star, the sun, is a variable star,” said David H. Hathaway, a sunspot specialist from NASA’s Marshall Space Flight Center in Huntsville, Alabama. “It varies by about one-tenth of one percent” in energy output. But “there are suggestions the sun” varies “more than that, because we see it has gone through some periods, such as the Maunder minimum.” During the Maunder minimum, which lasted from 1645 to 1715 and coincided with some of the coldest decades of the Little Ice Age, sunspots were absent or nearly absent and northern Europe experienced especially cold winters.

Baliunas has also based her research on studying surface activity that is detectable on distant stars that are reminiscent of the sun. There is considerable variability in the 60 sunlike stars she has examined, she said, depending on how fast each rotates and other factors. Unfortunately, she added, “there is no model to explain [solar surface activity] on the century-to-millennium time scale,” and long-term changes in solar output need further study.

According to Casper M. Ammann, a climate modeler at the National Center for Atmospheric Research in Boulder, Colorado, in the years since 1950, “there is no observed trend” in solar radiation. The 11-year sunspot cycle has not been significantly abnormal. This is just part of the reason for the difficulty of determining the sun’s influence on Earth’s climate. Ammann explained that “for the past 150 years people have tried to see whether the monsoons are linked to the 11-year solar cycle,” but without success.

The Earth’s atmosphere – and its relationship to the sun’s energy output – is so complex that even as warming began, “up until 1960 we couldn’t see it.” But now, he said, since warming has been confirmed, the world’s climate scientists “are probably not overestimating the problem. It’s probably worse than the estimates.”

In fact, he said, global warming is occurring at an incredibly rapid rate, faster than any previous episodes of climate change known from the paleo-climate data.

Ammann did add, however, that there is reason to hope that the most dire consequences can be avoided. Although it’s clearly too late to avoid the heating of the earth’s atmosphere, “we can substantially cut [it]” by severely reducing the amounts of carbon dioxide going into the air. “It is absolutely achievable,” he said – if by mid-century societies can generate enough will to make the necessary changes.