Fountain of youth underlies Antarctic Mountains

Images of the ice-covered Gamburtsev Mountains revealed water-filled valleys, as seen by the cluster of vertical lines in this image. – Tim Creyts

Time ravages mountains, as it does people. Sharp features soften, and bodies grow shorter and rounder. But under the right conditions, some mountains refuse to age. In a new study, scientists explain why the ice-covered Gamburtsev Mountains in the middle of Antarctica look as young as they do.

The Gamburtsevs were discovered in the 1950s, but remained unexplored until scientists flew ice-penetrating instruments over the mountains 60 years later. As this ancient hidden landscape came into focus, scientists were stunned to see the saw-toothed and towering crags of much younger mountains. Though the Gamburtsevs are contemporaries of the largely worn-down Appalachians, they looked more like the Rockies, which are nearly 200 million years younger.

More surprising still, the scientists discovered a vast network of lakes and rivers at the mountains’ base. Though water usually speeds erosion, here it seems to have kept erosion at bay. The reason, researchers now say, has to do with the thick ice that has entombed the Gamburtsevs since Antarctica went into a deep freeze 35 million years ago.

“The ice sheet acts like an anti-aging cream,” said the study’s lead author, Timothy Creyts, a geophysicist at Columbia University’s Lamont-Doherty Earth Observatory. “It triggers a series of thermodynamic processes that have almost perfectly preserved the Gamburtsevs since ice began spreading across the continent.”

The study, which appears in the latest issue of the journal Geophysical Research Letters, explains how the blanket of ice covering the Gamburtsevs has preserved their rugged ridgelines.

Snow falling at the surface of the ice sheet draws colder temperatures down, closer to protruding peaks, in a process called divergent cooling. At the same time, heat radiating from bedrock beneath the ice sheet melts ice in the deep valleys to form rivers and lakes. As rivers course along the base of the ice sheet, the high pressure of the overlying ice pushes water back up the valleys. This uphill flow refreezes as it meets colder temperatures from above. Thus, ridgelines are cryogenically preserved.

The oldest rocks in the Gamburtsevs formed more than a billion years ago, in the collision of several continents. Though these prototype mountains eroded away, a lingering crustal root became reactivated when the supercontinent Gondwana ripped apart, starting about 200 million years ago. Tectonic forces pushed the land up again to form the modern Gamburtsevs, which range across an area the size of the Alps. Erosion again chewed away at the mountains until Earth entered a cooling phase 35 million years ago. Expanding outward from the Gamburtsevs, a growing layer of ice joined several other nucleation points to cover the entire continent in ice.

The researchers say that the mechanism that stalled aging of the Gamburtsevs at higher elevations may explain why some ridgelines in the Torngat Mountains on Canada’s Labrador Peninsula and the Scandinavian Mountains running through Norway, Sweden and Finland appear strikingly untouched. Massive ice sheets covered both landscapes during the last ice age, which peaked about 20,000 years ago, but many high-altitude features bear little trace of this event.

“The authors identify a mechanism whereby larger parts of mountain ranges in glaciated regions–not just Antarctica–could be spared from erosion,” said Stewart Jamieson, a glaciologist at Durham University who was not involved in the study. “This is important because these uplands are nucleation centers for ice sheets. If they were to gradually erode during glacial cycles, they would become less effective as nucleation points during later ice ages.”

Ice sheet behavior, then, may influence climate change in ways that scientists and computer models have yet to appreciate. As study coauthor Fausto Ferraccioli, head of the British Antarctic Survey’s airborne geophysics group, put it: “If these mountains in interior East Antarctica had been more significantly eroded then the ice sheet itself may have had a different history.”

Other Authors


Hugh Carr and Tom Jordan of the British Antarctic Survey; Robin Bell, Michael Wolovick and Nicholas Frearson of Lamont-Doherty; Kathryn Rose of University of Bristol; Detlef Damaske of Germany’s Federal Institute for Geosciences and Natural Resources; David Braaten of Kansas University; and Carol Finn of the U.S. Geological Survey.

Copies of the paper, “Freezing of ridges and water networks preserves the Gamburtsev Subglacial Mountains for millions of years,” are available from the authors.

Scientist Contact


Tim Creyts

845-365-8368

tcreyts@ldeo.columbia.edu

Climate change was not to blame for the collapse of the Bronze Age

Scientists will have to find alternative explanations for a huge population collapse in Europe at the end of the Bronze Age as researchers prove definitively that climate change – commonly assumed to be responsible – could not have been the culprit.

Archaeologists and environmental scientists from the University of Bradford, University of Leeds, University College Cork, Ireland (UCC), and Queen’s University Belfast have shown that the changes in climate that scientists believed to coincide with the fall in population in fact occurred at least two generations later.

Their results, published this week in Proceedings of the National Academy of Sciences, show that human activity starts to decline after 900 BC and falls rapidly after 800 BC, indicating a population collapse. But the climate records show that colder, wetter conditions didn’t occur until around two generations later.

Fluctuations in levels of human activity through time are reflected by the numbers of radiocarbon dates for a given period. The team used new statistical techniques to analyse more than 2000 radiocarbon dates, taken from hundreds of archaeological sites in Ireland, to pinpoint the precise dates that Europe’s Bronze Age population collapse occurred.
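The counting logic behind this proxy can be sketched in a few lines. The dates and bin width below are hypothetical, and the team's actual statistical techniques are far more sophisticated; this only illustrates the idea that date density stands in for activity levels:

```python
from collections import Counter

def activity_proxy(dates_bc, bin_width=50):
    """Count radiocarbon dates per time bin (years BC).

    More dates falling in a bin is read as more human activity in that
    period; this is the logic underlying the study's rigorous statistics.
    """
    bins = Counter((d // bin_width) * bin_width for d in dates_bc)
    return dict(sorted(bins.items(), reverse=True))

# Hypothetical dates clustering before 900 BC and thinning out afterwards
dates = [1010, 1005, 980, 960, 955, 940, 920, 915, 905, 860, 840, 780, 730]
proxy = activity_proxy(dates)  # four dates land in the 900-949 BC bin
```

A real analysis would work with calibrated date distributions rather than point estimates, but the bin counts already show the post-900 BC thinning the article describes.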

The team then analysed past climate records from peat bogs in Ireland and compared the archaeological data to these climate records to see if the dates tallied. That information was then compared with evidence of climate change across NW Europe between 1200 and 500 BC.

“Our evidence shows definitively that the population decline in this period cannot have been caused by climate change,” says Ian Armit, Professor of Archaeology at the University of Bradford, and lead author of the study.

Graeme Swindles, Associate Professor of Earth System Dynamics at the University of Leeds, added, “We found clear evidence for a rapid change in climate to much wetter conditions, which we were able to precisely pinpoint to 750 BC using statistical methods.”

According to Professor Armit, social and economic stress is more likely to be the cause of the sudden and widespread fall in numbers. Communities producing bronze needed to trade over very large distances to obtain copper and tin. Control of these networks enabled the growth of complex, hierarchical societies dominated by a warrior elite. As iron production took over, these networks collapsed, leading to widespread conflict and social collapse. It may be these unstable social conditions, rather than climate change, that led to the population collapse at the end of the Bronze Age.

According to Katharina Becker, Lecturer in the Department of Archaeology at UCC, the Late Bronze Age is usually seen as a time of plenty, in contrast to an impoverished Early Iron Age. “Our results show that the rich Bronze Age artefact record does not provide the full picture and that crisis began earlier than previously thought,” she says.

“Although climate change was not directly responsible for the collapse it is likely that the poor climatic conditions would have affected farming,” adds Professor Armit. “This would have been particularly difficult for vulnerable communities, preventing population recovery for several centuries.”

The findings have significance for modern day climate change debates which, argues Professor Armit, are often too quick to link historical climate events with changes in population.

“The impact of climate change on humans is a huge concern today as we monitor rising temperatures globally,” says Professor Armit.

“Often, in examining the past, we are inclined to link evidence of climate change with evidence of population change. Actually, if you have high quality data and apply modern analytical techniques, you get a much clearer picture and start to see the real complexity of human/environment relationships in the past.”

Climate capers of the past 600,000 years

The researchers remove samples from a core segment taken from Lake Van at the Center for Marine Environmental Sciences (MARUM) in Bremen, where all of the cores from the PALEOVAN project are stored. – Photo: Nadine Pickarski/Uni Bonn

If you want to see into the future, you have to understand the past. An international consortium of researchers under the auspices of the University of Bonn has drilled into deposits on the bed of Lake Van (Eastern Turkey) that provide unique insights into the last 600,000 years. The samples reveal that the climate has done its fair share of mischief-making in the past, and that the region has seen numerous earthquakes and volcanic eruptions. The results of the drilling project also provide a basis for assessing how dangerous these natural hazards are for today’s population. In a special edition of the highly regarded publication Quaternary Science Reviews, the scientists have now published their findings in a number of journal articles.

In the sediments of Lake Van, the lighter-colored, lime-containing summer layers are clearly distinguishable from the darker, clay-rich winter layers — also called varves. In 2010, from a floating platform an international consortium of researchers drilled a 220 m deep sediment profile from the lake floor at a water depth of 360 m and analyzed the varves. The samples they recovered are a unique scientific treasure because the climate conditions, earthquakes and volcanic eruptions of the past 600,000 years can be read in outstanding quality from the cores.

The team of scientists under the auspices of the University of Bonn has analyzed some 5,000 samples in total. “The results show that the climate over the past hundred thousand years has been a roller coaster. Within just a few decades, the climate could tip from an ice age into a warm period,” says Doctor Thomas Litt of the University of Bonn’s Steinmann Institute and spokesman for the PALEOVAN international consortium of researchers. Unbroken continental climate archives from the ice age which encompass several hundred thousand years are extremely rare on a global scale. “There has never before in all of the Middle East and Central Asia been a continental drilling operation going so far back into the past,” says Doctor Litt. In the northern hemisphere, climate data from ice-cores drilled in Greenland encompass the last 120,000 years. The Lake Van project closes a gap in the scientific climate record.

The sediments reveal six cycles of cold and warm periods


Scientists found evidence for a total of six cycles of warm and cold periods in the sediments of Lake Van. The University of Bonn paleoecologist and his colleagues analyzed the pollen preserved in the sediments. Under a microscope, they were able to determine which plants around the eastern Anatolian lake the pollen came from. “Pollen is amazingly durable and is preserved over very long periods when protected in the sediments,” Doctor Litt explained. Insight into the age of the individual layers was gleaned through radiometric age measurements, which use the decay of radioactive elements as a geologic clock. Based on the type of pollen and the age, the scientists were able to determine when oak forests typical of warm periods grew around Lake Van and when ice-age steppe made up of grasses, mugwort and goosefoot surrounded the lake.
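The geologic-clock idea rests on the standard radioactive decay equation. A minimal sketch, using the generic parent-daughter formula with an illustrative potassium-argon-style half-life and made-up isotope abundances, not the specific dating system applied to the Lake Van cores:

```python
import math

def radiometric_age(parent, daughter, half_life):
    """Decay-clock age: t = (half_life / ln 2) * ln(1 + daughter / parent).

    parent and daughter are present-day isotope abundances (same units);
    half_life sets the units of the returned age.
    """
    return math.log(1 + daughter / parent) * half_life / math.log(2)

# Illustrative numbers only: a half-life of 1.25 billion years and a
# 5% daughter-to-parent ratio give an age of roughly 88 million years
age = radiometric_age(parent=100.0, daughter=5.0, half_life=1.25e9)
```

A convenient sanity check on the formula: when daughter equals parent, exactly one half-life has elapsed.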

Once they determine the composition of the vegetation present and the requirements of the plants, the scientists can reconstruct with a high degree of accuracy the temperature and amount of rainfall during different epochs. These analyses enable the team of researchers to read the varves of Lake Van like thousands of pages of an archive. With these data, the team was able to demonstrate that fluctuations in climate were due in large part to periodic changes in the Earth’s orbit parameters and the commensurate changes in solar insolation levels. However, the influence of North Atlantic currents was also evident. “The analysis of the Lake Van sediments has presented us with an image of how an ecosystem reacts to abrupt changes in climate. This fundamental data will help us to develop potential scenarios of future climate effects,” says Doctor Litt.

Risks of earthquakes and volcanic eruptions in the region of Van

Such risk assessments can also be made for other natural forces. “Deposits of volcanic ash with thicknesses of up to 10 m in the Lake Van sediments show us that approximately 270,000 years ago there was a massive eruption,” the University of Bonn paleoecologist said. The team encountered some 300 distinct volcanic events in its drillings. Statistically, that corresponds to one explosive volcanic eruption in the region every 2000 years. Deformations in the sediment layers show that the area is subject to frequent, strong earthquakes. “The area around Lake Van is very densely populated. The data from the core samples show that volcanic activity and earthquakes present a relatively high risk for the region,” Doctor Litt says. According to media reports, in 2011 a 7.2 magnitude earthquake in the Van province claimed the lives of more than 500 people and injured more than 2,500.
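The recurrence figure quoted above follows from simple division of the record length by the number of events it contains; a one-line sketch using the article's numbers:

```python
def recurrence_interval(record_span_years, event_count):
    """Mean time between events implied by a sediment record."""
    return record_span_years / event_count

# The article's figures: ~300 eruption layers in a 600,000-year record
interval = recurrence_interval(600_000, 300)  # 2000.0 years per eruption
```

This is only a long-run average; it says nothing about when the next eruption will occur.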

Publication: “Results from the PALEOVAN drilling project: A 600,000 year long continental archive in the Near East”, Quaternary Science Reviews, Volume 104, online publication: (http://dx.doi.org/10.1016/j.quascirev.2014.09.026)

Subtle shifts in the Earth could forecast earthquakes, tsunamis

University of South Florida graduate student Jacob Richardson stands beside a completed installation. The large white disc is the dual frequency antenna. A portable solar panel that powers the system is visible in the foreground. – Photo by Denis Voytenko

Earthquakes and tsunamis can be giant disasters no one sees coming, but now an international team of scientists led by a University of South Florida professor has found that subtle shifts in the Earth’s offshore plates can be a harbinger of the size of the disaster.

In a new paper published today in the Proceedings of the National Academy of Sciences, USF geologist Tim Dixon and the team report that a geological phenomenon called “slow slip events,” identified just 15 years ago, is a useful tool in identifying the precursors to major earthquakes and the resulting tsunamis. The scientists used high precision GPS to measure the slight shifts on a fault line in Costa Rica, and say better monitoring of these small events can lead to better understanding of maximum earthquake size and tsunami risk.

“Giant earthquakes and tsunamis in the last decade – Sumatra in 2004 and Japan in 2011 – are a reminder that our ability to forecast these destructive events is painfully weak,” Dixon said.

Dixon was involved in the development of high precision GPS for geophysical applications, and has been making GPS measurements in Costa Rica since 1988, in collaboration with scientists at Observatorio Vulcanológico y Sismológico de Costa Rica, the University of California-Santa Cruz, and Georgia Tech. The project is funded by the National Science Foundation.

Slow slip events have some similarities to earthquakes (caused by motion on faults) but release their energy slowly, over weeks or months, and cannot be felt or even recorded by conventional seismographs, Dixon said. Their discovery in 2001 by Canadian scientist Herb Dragert at the Pacific Geoscience Center had to await the development of high precision GPS, which is capable of measuring subtle movements of the Earth.

The scientists studied the Sept. 5, 2012 earthquake on the Costa Rica subduction plate boundary, as well as motions of the Earth in the previous decade. High precision GPS recorded numerous slow slip events in the decade leading up to the 2012 earthquake. The scientists made their measurements from a peninsula overlying the shallow portion of a megathrust fault in northwest Costa Rica.

The 7.6-magnitude quake was one of the strongest earthquakes ever to hit the Central American nation and unleashed more than 1,600 aftershocks. Marino Protti, one of the authors of the paper and a resident of Costa Rica, has spent more than two decades warning local populations of the likelihood of a major earthquake in their area and recommending enhanced building codes.

A tsunami warning was issued after the quake, but only a small tsunami occurred. The group’s findings shed some light on why: slow slip events in the offshore region in the decade leading up to the earthquake may have released much of the stress and strain that would normally occur on the offshore fault.

While the group’s findings suggest that slow slip events have limited value in knowing exactly when an earthquake and tsunami will strike, they suggest that these events provide critical hazard assessment information by delineating rupture area and the magnitude and tsunami potential of future earthquakes.

The scientists recommend monitoring slow slip events in order to provide accurate forecasts of earthquake magnitude and tsunami potential.

###

The authors on the paper are Dixon; his former graduate student Yan Jiang, now at the Pacific Geoscience Centre in British Columba, Canada; USF Assistant Professor of Geosciences Rocco Malservisi; Robert McCaffrey of Portland State University; USF doctoral candidate Nicholas Voss; and Protti and Victor Gonzalez of the Observatorio Vulcanológico y Sismológico de Costa Rica, Universidad Nacional.

The University of South Florida is a high-impact, global research university dedicated to student success. USF is a Top 50 research university among both public and private institutions nationwide in total research expenditures, according to the National Science Foundation. Serving nearly 48,000 students, the USF System has an annual budget of $1.5 billion and an annual economic impact of $4.4 billion. USF is a member of the American Athletic Conference.

Adjusting Earth’s thermostat, with caution

David Keith, Gordon McKay Professor of Applied Physics at Harvard SEAS and professor of public policy at Harvard Kennedy School, coauthored several papers on climate engineering with colleagues at Harvard and beyond. – Eliza Grinnell, SEAS Communications.

A vast majority of scientists believe that the Earth is warming at an unprecedented rate and that human activity is almost certainly the dominant cause. But on the topics of response and mitigation, there is far less consensus.

One of the most controversial propositions for slowing the increase in temperatures here on Earth is to manipulate the atmosphere above. Specifically, some scientists believe it should be possible to offset the warming effect of greenhouse gases by reflecting more of the sun’s energy back into space.

The potential risks–and benefits–of solar radiation management (SRM) are substantial. So far, however, all of the serious testing has been confined to laboratory chambers and theoretical models. While those approaches are valuable, they do not capture the full range of interactions among chemicals, the impact of sunlight on these reactions, or multiscale variations in the atmosphere.

Now, a team of researchers from the Harvard School of Engineering and Applied Sciences (SEAS) has outlined how a small-scale “stratospheric perturbation experiment” could work. By proposing, in detail, a way to take the science of geoengineering to the skies, they hope to stimulate serious discussion of the practice by policymakers and scientists.

Ultimately, they say, informed decisions on climate policy will need to rely on the best information available from controlled and cautious field experiments.

The paper is among several published today in a special issue of the Philosophical Transactions of the Royal Society A that examine the nuances, the possible consequences, and the current state of scientific understanding of climate engineering. David Keith, whose work features prominently in the issue, is Gordon McKay Professor of Applied Physics at Harvard SEAS and a professor of public policy at Harvard Kennedy School. His coauthors on the topic of field experiments include James Anderson, Philip S. Weld Professor of Applied Chemistry at Harvard SEAS and in Harvard’s Department of Chemistry and Chemical Biology; and other colleagues at Harvard SEAS.

“The idea of conducting experiments to alter atmospheric processes is justifiably controversial, and our experiment, SCoPEx, is just a proposal,” Keith emphasizes. “It will continue to evolve until it is funded, and we will only move ahead if the funding is substantially public, with a formal approval process and independent risk assessment.”

With so much at stake, Keith believes transparency is essential. But the science of climate engineering is also widely misunderstood.

“People often claim that you cannot test geoengineering except by doing it at full scale,” says Keith. “This is nonsense. It is possible to do a small-scale test, with quite low risks, that measures key aspects of the risk of geoengineering–in this case the risk of ozone loss.”

Such controlled experiments, targeting key questions in atmospheric chemistry, Keith says, would reduce the number of “unknown unknowns” and help to inform science-based policy.

The experiment Keith and Anderson’s team is proposing would involve only a tiny amount of material–a few hundred grams of sulfuric acid, an amount Keith says is roughly equivalent to what a typical commercial aircraft releases in a few minutes while flying in the stratosphere. It would provide important insight into how much SRM would reduce radiative heating, the concentration of water vapor in the stratosphere, and the processes that determine water vapor transport–which affects the concentration of ozone.

In addition to the experiment proposed in that publication, another paper coauthored by Keith and collaborators at the California Institute of Technology (CalTech) collects and reviews a number of other experimental methods, to demonstrate the diversity of possible approaches.

“There is a wide range of experiments that could be done that would significantly reduce our uncertainty about the risks and effectiveness of solar geoengineering,” Keith says. “Many could be done with very small local risks.”

A third paper explores how solar geoengineering might actually be implemented, if an international consensus were reached, and suggests that a gradual implementation that aims to limit the rate of climate change would be a plausible strategy.

“Many people assume that solar geoengineering would be used to suddenly restore the Earth’s climate to preindustrial temperatures,” says Keith, “but it’s very unlikely that it would make any policy sense to try to do so.”

Keith also points to another paper in the Royal Society’s special issue–one by Andy Parker at the Belfer Center for Science and International Affairs at Harvard Kennedy School. Parker’s paper furthers the discussion of governance and good practices in geoengineering research in the absence of both national legislation and international agreement, a topic raised last year in Science by Keith and Edward Parson of UCLA.

“The scientific aspects of geoengineering research must, by necessity, advance in tandem with a thorough discussion of the social science and policy,” Keith warns. “Of course, these risks must also be weighed against the risk of doing nothing.”

For further information, see: “Stratospheric controlled perturbation experiment (SCoPEx): A small-scale experiment to improve understanding of the risks of solar geoengineering” doi: 10.1098/rsta.2014.0059

By John Dykema, project scientist at Harvard SEAS; David Keith, Gordon McKay Professor of Applied Physics at Harvard SEAS and professor of public policy at Harvard Kennedy School; James Anderson, Philip S. Weld Professor of Applied Chemistry at Harvard SEAS and in Harvard’s Department of Chemistry and Chemical Biology; and Debra Weisenstein, research management specialist at Harvard SEAS.

“Field experiments on solar geoengineering: Report of a workshop exploring a representative research portfolio”
doi: 10.1098/rsta.2014.0175

By David Keith; Riley Duren, chief systems engineer at the NASA Jet Propulsion Laboratory at CalTech; and Douglas MacMartin, senior research associate and lecturer at CalTech.

“Solar geoengineering to limit the rate of temperature change”
doi: 10.1098/rsta.2014.0134

By Douglas MacMartin; Ken Caldeira, senior scientist at the Carnegie Institution for Science and professor of environmental Earth system sciences at Stanford University; and David Keith.

“Governing solar geoengineering research as it leaves the laboratory”
doi: 10.1098/rsta.2014.0173

By Andy Parker, associate of the Belfer Center at Harvard Kennedy School.

Clues to one of Earth’s oldest craters revealed

The Sudbury Basin in Ontario, Canada, is one of the largest known impact craters on Earth, as well as one of the oldest, having formed more than 1.8 billion years ago. Researchers who took samples from the site and subjected them to a detailed geochemical analysis say that a comet may have hit the area to create the crater.

“Our analysis revealed a chondritic platinum group element signature within the crater’s fallback deposits; however, the distribution of these elements within the impact structure and other constraints suggest that the impactor was a comet. Thus, it seems that a comet with a chondritic refractory component may have created the world-famous Sudbury basin,” said Joe Petrus, lead author of the Terra Nova paper.

Groundwater warming up in sync

For their study, the researchers were able to fall back on uninterrupted long-term temperature measurements of groundwater flows around the cities of Cologne and Karlsruhe, where the operators of the local waterworks have been measuring the temperature of the groundwater, which is largely uninfluenced by humans, for forty years. Such a record is unique and a rare commodity for researchers. “For us, the data was a godsend,” stresses Peter Bayer, a senior assistant at ETH Zurich’s Geological Institute. Even with intensive searching, the team would not have been able to find a comparable series of measurements. Evidently, it is either of little interest or too costly for waterworks to measure groundwater temperatures systematically over long periods. “Or the data isn’t digitised and is only archived on paper,” suspects the hydrogeologist.

Damped image of atmospheric warming

Based on the readings, the researchers were able to demonstrate that the groundwater is not just warming up; the warming stages observed in the atmosphere are also echoed. “Global warming is reflected directly in the groundwater, albeit damped and with a certain time lag,” says Bayer, summarising the main results that the project has yielded. The researchers published their study in the journal Hydrology and Earth System Sciences.

The data also reveal that groundwater close to the surface, down to a depth of around sixty metres, has warmed to a statistically significant degree over the last forty years of global warming. This water heating follows the warming pattern of the local and regional climate, which in turn mirrors that of global warming.
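A warming trend of this kind is typically quantified with an ordinary least-squares fit of temperature against time. The sketch below uses synthetic temperatures (not the study's data) purely to show the idea:

```python
def linear_trend(years, temps):
    """Ordinary least-squares slope of temperature vs. time (deg C per year)."""
    n = len(years)
    mean_y = sum(years) / n
    mean_t = sum(temps) / n
    cov = sum((y - mean_y) * (t - mean_t) for y, t in zip(years, temps))
    var = sum((y - mean_y) ** 2 for y in years)
    return cov / var

# Synthetic 40-year series: 0.02 deg C/year warming plus a small
# alternating year-to-year wiggle standing in for natural variability
years = list(range(1975, 2015))
temps = [10.0 + 0.02 * (y - 1975) + (0.05 if y % 2 else -0.05) for y in years]
slope = linear_trend(years, temps)  # close to 0.02 deg C per year
```

A real significance test would also compare the fitted slope against its standard error; the point here is only that the trend survives the short-term noise.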

The groundwater reveals how the atmosphere has made several temperature leaps at irregular intervals. These “regime shifts” can also be observed in the global climate, as the researchers write in their study. Bayer was surprised at how quickly the groundwater responded to climate change.

Heat exchange with the subsoil


Earth’s atmosphere has warmed by an average of 0.13 degrees Celsius per decade over the last fifty years. And this warming doesn’t stop at the subsoil, either, as other climate scientists have demonstrated over the last two decades with drillings all over the world. However, those researchers tended to consider only soils that did not contain any water or where there was no groundwater flow.

The fact that the groundwater has not escaped climate change was already revealed by researchers from Eawag and ETH Zurich in a study published three years ago, but that study concerned “artificial” groundwater: to recharge it, river water is infiltrated into the ground in certain areas, so the temperature profile of the groundwater generated in this way matches that of the river water.

The new study, however, examines groundwater that has barely been influenced by humans. According to Bayer, it is plausible that the natural groundwater flow is also warming up in the course of climate change. “The difference in temperature between the atmosphere and the subsoil balances out naturally.” The energy transfer takes place via thermal conduction and the groundwater flow, much like a heat exchanger, which enables the heat transported to spread in the subsoil and level out.

The consequences of these findings, however, are difficult to gauge. The warmer temperatures might influence subterranean ecosystems on the one hand and groundwater-dependent biospheres on the other, which include cold areas in flowing waters where the groundwater discharges. For cryophilic organisms such as certain fish, groundwater warming could have negative consequences.

Consequences difficult to gauge

Higher groundwater temperatures also influence the water’s chemical composition, especially the chemical equilibria of nitrate or carbonate. After all, chemical reactions usually take place more quickly at higher temperatures. Bacterial activity might also increase at rising water temperatures. If the groundwater becomes warmer, undesirable bacteria such as gastro-intestinal disease pathogens might multiply more effectively. However, the scientists can also imagine positive effects. “The groundwater’s excess heat could be used geothermally for instance,” adds Kathrin Menberg, the first author of the study.

Volcano hazards and the role of westerly wind bursts in El Niño

On June 27, lava from Kīlauea, an active volcano on the island of Hawai’i, began flowing to the northeast, threatening the residents in a community in the District of Puna. – USGS

On 27 June, lava from Kīlauea, an active volcano on the island of Hawai’i, began flowing to the northeast, threatening the residents in Pāhoa, a community in the District of Puna, as well as the only highway accessible to this area. Scientists from the U.S. Geological Survey’s Hawaiian Volcano Observatory (HVO) and the Hawai’i County Civil Defense have been monitoring the volcano’s lava flow and communicating with affected residents through public meetings since 24 August. Eos recently spoke with Michael Poland, a geophysicist at HVO and a member of the Eos Editorial Advisory Board, to discuss how he and his colleagues communicated this threat to the public.

Drilling a Small Basaltic Volcano to Reveal Potential Hazards


Drilling into the Rangitoto Island Volcano in the Auckland Volcanic Field in New Zealand offers insight into a small monogenetic volcano, and may improve understanding of future hazards.

From AGU’s journals: El Niño fades without westerly wind bursts

The warm and wet winter of 1997 brought California floods, Florida tornadoes, and an ice storm in the American northeast, prompting climatologists to dub it the El Niño of the century. Earlier this year, climate scientists thought the coming winter might bring similar extremes, as equatorial Pacific Ocean conditions resembled those seen in early 1997. But the signals weakened by summer, and the El Niño predictions were downgraded. Menkes et al. used simulations to examine the differences between the two years.

The El Niño-Southern Oscillation is defined by abnormally warm sea surface temperatures in the eastern Pacific Ocean and weaker than usual trade winds. In a typical year, southeast trade winds push surface water toward the western Pacific “warm pool,” a region essential to Earth’s climate. The trade winds dramatically weaken or even reverse in El Niño years, and the warm pool extends its reach east.

Scientists have struggled to predict El Niño due to irregularities in the shape, amplitude, and timing of the surges of warm water. Previous studies suggested that short-lived westerly wind pulses (i.e. one to two weeks long) could contribute to this irregularity by triggering and sustaining El Niño events.

To understand the vanishing 2014 El Niño, the authors used computer simulations to examine the wind’s role and found pronounced differences between 1997 and 2014. Both years saw strong westerly wind events between January and March, but those disappeared this year as spring approached. In contrast, the westerly winds persisted through summer in 1997.

In the past, it was thought that westerly wind pulses were three times as likely to form if the warm pool extended east of the dateline. That did not occur this year. The team says their analysis shows that El Niño’s strength might depend on these short-lived and possibly unpredictable pulses.

###

The American Geophysical Union is dedicated to advancing the Earth and space sciences for the benefit of humanity through its scholarly publications, conferences, and outreach programs. AGU is a not-for-profit, professional, scientific organization representing more than 62,000 members in 144 countries. Join our conversation on Facebook, Twitter, YouTube, and other social media channels.

Some plants regenerate by duplicating their DNA

Animal biology professor Ken Paige (left) and postdoctoral fellow Daniel Scholes found that a plant's ability to duplicate its genome within individual cells influences its ability to regenerate. -  L. Brian Stauffer

When munched by grazing animals (or mauled by scientists in the lab), some herbaceous plants overcompensate – producing more plant matter and becoming more fertile than they otherwise would. Scientists say they now know how these plants accomplish this feat of regeneration.

They report their findings in the journal Molecular Ecology.

Their study is the first to show that a plant’s ability to dramatically rebound after being cut down relies on a process called genome duplication, in which individual cells make multiple copies of all of their genetic content.

Genome duplication is not new to science; researchers have known about the phenomenon for decades. But few have pondered its purpose, said University of Illinois animal biology professor Ken Paige, who conducted the study with postdoctoral researcher Daniel Scholes.

“Most herbaceous plants – 90 percent – duplicate their genomes,” Paige said. “We wanted to know what this process was for.”

In a 2011 study, Paige and Scholes demonstrated that plants that engage in rampant genome duplication also rebound more vigorously after being damaged. The researchers suspected that genome duplication was giving the plants the boost they needed to overcome adversity.

That study and the new one focused on Arabidopsis thaliana, a plant in the mustard family that often is used as a laboratory subject. Some Arabidopsis plants engage in genome duplication and others don’t. Those that do can accumulate dozens of copies of all of their chromosomes in individual cells.

In the new study, Scholes crossed Arabidopsis plants that had the ability to duplicate their genomes with those that lacked this ability. If the relationship between DNA duplication and regeneration was mere happenstance, the association between the two should disappear in their offspring, Scholes said.

“But the association persisted in the offspring,” he said. “That’s the first line of evidence that these two traits seem to be influencing each other.”

To further test the hypothesis, Scholes experimentally enhanced an Arabidopsis plant’s ability to duplicate its genome. He chose a line that lacked that ability and that also experienced a major reduction in fertility after being grazed.

As expected, the altered plant gained the ability to vigorously rebound after being damaged, the researchers reported.

“We were able to completely mitigate the otherwise detrimental effects of damage,” Scholes said. “There was no difference in fertility between damaged and undamaged plants.”

Genome duplication enlarges cells and provides more copies of individual genes, likely increasing the production of key proteins and other molecules that drive cell growth, Scholes said. Future studies will test these ideas, he said.

The National Science Foundation and U. of I. Research Board funded this research.

Re-learning how to read a genome

New research has revealed that the initial steps of reading DNA are actually remarkably similar at both the genes that encode proteins (here, on the right) and regulatory elements (on the left). The main differences seem to occur after this initial step. Gene messages are long and stable enough to ensure that genes become proteins, whereas regulatory messages are short and unstable, and are rapidly 'cleaned up' by the cell. -  Adam Siepel, Cold Spring Harbor Laboratory

There are roughly 20,000 genes and thousands of other regulatory “elements” stored within the three billion letters of the human genome. Genes encode information that is used to create proteins, while other genomic elements help regulate the activation of genes, among other tasks. Somehow all of this coded information within our DNA needs to be read by complex molecular machinery and transcribed into messages that can be used by our cells.

Usually, reading a gene is thought to be a lot like reading a sentence. The reading machinery is guided to the start of the gene by various sequences in the DNA – the equivalent of a capital letter – and proceeds from left to right, DNA letter by DNA letter, until it reaches a sequence that forms a punctuation mark at the end. The capital letter and punctuation marks that tell the cell where, when, and how to read a gene are known as regulatory elements.

But scientists have recently discovered that genes aren’t the only messages read by the cell. In fact, many regulatory elements themselves are also read and transcribed into messages, the equivalent of pronouncing the words “capital letter,” “comma,” or “period.” Even more surprising, genes are read bi-directionally from so-called “start sites” – in effect, generating messages in both forward and backward directions.

With all these messages, how does the cell know which one encodes the information needed to make a protein? Is there something different about the reading process at genes and regulatory elements that helps avoid confusion? New research, published today in Nature Genetics, has revealed that the initial steps of the reading process itself are actually remarkably similar at both genes and regulatory elements. The main differences seem to occur after this initial step, in the length and stability of the messages. Gene messages are long and stable enough to ensure that genes become proteins, whereas regulatory messages are short and unstable, and are rapidly “cleaned up” by the cell.

To make the distinction, the team, which was co-led by CSHL Professor Adam Siepel and Cornell University Professor John Lis, looked for differences between the initial reading processes at genes and a set of regulatory elements called enhancers. “We took advantage of highly sensitive experimental techniques developed in the Lis lab to measure newly made messages in the cell,” says Siepel. “It’s like having a new, more powerful microscope for observing the process of transcription as it occurs in living cells.”

Remarkably, the team found that the reading patterns for enhancer and gene messages are highly similar in many respects, sharing a common architecture. “Our data suggests that the same basic reading process is happening at genes and these non-genic regulatory elements,” explains Siepel. “This points to a unified model for how DNA transcription is initiated throughout the genome.”

Working together, the biochemists from Lis’s laboratory and the computer jockeys from Siepel’s group carefully compared the patterns at enhancers and genes, combining their own data with vast public data sets from the NIH’s Encyclopedia of DNA Elements (ENCODE) project. “By many different measures, we found that the patterns of transcription initiation are essentially the same at enhancers and genes,” says Siepel. “Most RNA messages are rapidly targeted for destruction, but the messages at genes that are read in the right direction – those destined to be a protein – are spared from destruction.” The team was able to devise a model to mathematically explain the difference between stable and unstable transcripts, offering insight into what defines a gene. According to Siepel, “Our analysis shows that the ‘code’ for stability is, in large part, written in the DNA, at enhancers and genes alike.”

This work has important implications for the evolutionary origins of new genes, according to Siepel. “Because DNA is read in both directions from any start site, every one of these sites has the potential to generate two protein-coding genes with just a few subtle changes. The genome is full of potential new genes.”

This work was supported by the National Institutes of Health.

“Analysis of transcription start sites from nascent RNA identifies a unified architecture of initiation regions at mammalian promoters and enhancers” appears online in Nature Genetics on November 10, 2014. The authors are: Leighton Core, André Martins, Charles Danko, Colin Waters, Adam Siepel, and John Lis. The paper can be obtained online at: http://dx.doi.org/10.1038/ng.3142

About Cold Spring Harbor Laboratory

Founded in 1890, Cold Spring Harbor Laboratory (CSHL) has shaped contemporary biomedical research and education with programs in cancer, neuroscience, plant biology and quantitative biology. CSHL is ranked number one in the world by Thomson Reuters for the impact of its research in molecular biology and genetics. The Laboratory has been home to eight Nobel Prize winners. Today, CSHL’s multidisciplinary scientific community is more than 600 researchers and technicians strong and its Meetings & Courses program hosts more than 12,000 scientists from around the world each year to its Long Island campus and its China center. For more information, visit http://www.cshl.edu.