Abandoned wells can be ‘super-emitters’ of greenhouse gas

One of the wells the researchers tested; this one in the Allegheny National Forest. – Princeton University

Princeton University researchers have uncovered a previously unknown, and possibly substantial, source of the greenhouse gas methane to the Earth’s atmosphere.

After testing a sample of abandoned oil and natural gas wells in northwestern Pennsylvania, the researchers found that many of the old wells leaked substantial quantities of methane. Because there are so many abandoned wells nationwide (a recent study from Stanford University concluded there were roughly 3 million abandoned wells in the United States), the researchers believe the overall contribution of leaking wells could be significant.

The researchers said their findings identify a need for measurements not only across a wide variety of regions in Pennsylvania but also in other states with a long history of oil and gas development, such as California and Texas.

“The research indicates that this is a source of methane that should not be ignored,” said Michael Celia, the Theodore Shelton Pitney Professor of Environmental Studies and professor of civil and environmental engineering at Princeton. “We need to determine how significant it is on a wider basis.”

Methane is the unprocessed form of natural gas. Scientists say that after carbon dioxide, methane is the most important contributor to the greenhouse effect, in which gases in the atmosphere trap heat that would otherwise radiate from the Earth. Pound for pound, methane has about 20 times the heat-trapping effect of carbon dioxide. Methane is produced naturally, by processes including decomposition, and by human activities such as landfills and oil and gas production.

While oil and gas companies work to minimize the amount of methane emitted by their operations, almost no attention has been paid to wells that were drilled decades ago. These wells, some of which date back to the 19th century, are typically abandoned and do not appear in official records.

Mary Kang, then a doctoral candidate at Princeton, originally began looking into methane emissions from old wells after researching techniques to store carbon dioxide by injecting it deep underground. While examining ways that carbon dioxide could escape underground storage, Kang wondered about the effect of old wells on methane emissions.

“I was looking for data, but it didn’t exist,” said Kang, now a postdoctoral researcher at Stanford.

In a paper published Dec. 8 in the Proceedings of the National Academy of Sciences, the researchers describe how they chose 19 wells in the adjacent McKean and Potter counties in northwestern Pennsylvania. The wells chosen were all abandoned, and records about the origin of the wells and their conditions did not exist. Only one of the wells was on the state’s list of abandoned wells. Some of the wells, which can look like a pipe emerging from the ground, are located in forests and others in people’s yards. Kang said the lack of documentation made it hard to tell when the wells were originally drilled or whether any attempt had been made to plug them.

“What surprised me was that every well we measured had some methane coming out,” said Celia.

To conduct the research, the team placed enclosures called flux chambers over the tops of the wells. They also placed flux chambers nearby to measure the background emissions from the terrain and make sure the methane was emitted from the wells and not the surrounding area.
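The paper reports the measured fluxes directly; purely to illustrate how a static flux-chamber measurement is typically reduced to an emission rate, here is a minimal sketch with hypothetical numbers (the chamber dimensions, concentration readings and room-temperature gas conversion below are assumptions, not values from the study).

```python
# Minimal sketch of reducing a static-flux-chamber time series to an emission rate.
# Illustrative only; not the authors' code. All numbers below are hypothetical.
import numpy as np

volume_m3 = 0.030          # chamber volume (m^3), assumed
area_m2 = 0.071            # ground footprint covered by the chamber (m^2), assumed
t_s = np.array([0, 300, 600, 900, 1200])            # seconds since chamber closure
ch4_ppm = np.array([2.0, 6.5, 11.1, 15.4, 20.2])    # CH4 mixing ratio inside chamber (ppm)

# 1. Rate of concentration rise (ppm per second) from a least-squares fit.
slope_ppm_per_s = np.polyfit(t_s, ch4_ppm, 1)[0]

# 2. Convert ppm/s to a mass flux using the ideal gas molar volume at ~20 C, 1 atm.
molar_volume_L = 24.055                                # litres of gas per mole
g_per_mol_ch4 = 16.04
mol_per_m3_per_ppm = 1e-6 * 1000.0 / molar_volume_L    # mol CH4 per m^3 of air per ppm
flux_g_per_m2_per_s = (slope_ppm_per_s * mol_per_m3_per_ppm * g_per_mol_ch4
                       * volume_m3 / area_m2)

print(f"{flux_g_per_m2_per_s * 86400:.3f} g CH4 per m^2 per day")
```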

Although all the wells registered some level of methane, about 15 percent emitted the gas at a markedly higher level — thousands of times greater than the lower-level wells. Denise Mauzerall, a Princeton professor and a member of the research team, said a critical task is to discover the characteristics of these super-emitting wells.

Mauzerall said the relatively low number of high-emitting wells could offer a workable solution: while trying to plug every abandoned well in the country might be too costly to be realistic, dealing with the smaller number of high emitters could be possible.

“The fact that most of the methane is coming out of a small number of wells should make it easier to address if we can identify the high-emitting wells,” said Mauzerall, who has a joint appointment as a professor of civil and environmental engineering and as a professor of public and international affairs at the Woodrow Wilson School.

The researchers have used their results to extrapolate total methane emissions from abandoned wells in Pennsylvania, although they stress that the results are preliminary because of the relatively small sample. Based on that data, they estimate that emissions from abandoned wells represent as much as 10 percent of the methane from human activities in Pennsylvania – about the same amount as that caused by current oil and gas production. Also, unlike working wells, which have productive lifetimes of 10 to 15 years, abandoned wells can continue to leak methane for decades.

“This may be a significant source,” Mauzerall said. “There is no single silver bullet but if it turns out that we can cap or capture the methane coming off these really big emitters, that would make a substantial difference.”


In addition to lead author Kang, Celia and Mauzerall, the paper's co-authors include: Tullis Onstott, a professor of geosciences at Princeton; Cynthia Kanno, who was a Princeton undergraduate and is now a graduate student at the Colorado School of Mines; Matthew Reid, who was a graduate student at Princeton and is now a postdoctoral researcher at EPFL in Lausanne, Switzerland; Xin Zhang, a postdoctoral researcher in the Woodrow Wilson School at Princeton; and Yuheng Chen, an associate research scholar in geosciences at Princeton.

West Antarctic melt rate has tripled: UC Irvine-NASA

A comprehensive, 21-year analysis of the fastest-melting region of Antarctica has found that the melt rate of glaciers there has tripled during the last decade.

The glaciers in the Amundsen Sea Embayment in West Antarctica are hemorrhaging ice faster than any other part of Antarctica and are the most significant Antarctic contributors to sea level rise. This study is the first to evaluate and reconcile observations from four different measurement techniques to produce an authoritative estimate of the amount and the rate of loss over the last two decades.

“The mass loss of these glaciers is increasing at an amazing rate,” said scientist Isabella Velicogna, jointly of UC Irvine and NASA’s Jet Propulsion Laboratory. Velicogna is a coauthor of a paper on the results, which has been accepted for Dec. 5 publication in the journal Geophysical Research Letters.

Lead author Tyler Sutterley, a UCI doctoral candidate, and his team did the analysis to verify that the melting in this part of Antarctica is shifting into high gear. “Previous studies had suggested that this region started to change very dramatically in the 1990s, and we wanted to see how all the different techniques compared,” Sutterley said. “The remarkable agreement among the techniques gave us confidence that we are getting this right.”

The researchers reconciled measurements of the mass balance of glaciers flowing into the Amundsen Sea Embayment. Mass balance is a measure of how much ice the glaciers gain and lose over time from accumulating or melting snow, discharges of ice as icebergs, and other causes. Measurements from all four techniques were available from 2003 to 2009. Combined, the four data sets span the years 1992 to 2013.

The glaciers in the embayment lost mass throughout the entire period. The researchers calculated two separate quantities: the total amount of loss, and the changes in the rate of loss.

The total amount of loss averaged 83 gigatons per year (91.5 billion U.S. tons). By comparison, Mt. Everest weighs about 161 gigatons, meaning the Antarctic glaciers lost a Mt. Everest’s worth of water weight every two years over the last 21 years.

The rate of loss accelerated by an average of 6.1 gigatons (6.7 billion U.S. tons) per year, each year, since 1992.

From 2003 to 2009, when all four observational techniques overlapped, the melt rate increased by an average of 16.3 gigatons per year, each year – almost three times the rate of increase for the full 21-year period. The total rate of loss over those years, 84 gigatons per year, was close to the 21-year average.
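Those figures are internally consistent, as a quick back-of-the-envelope check shows (the gigaton-to-U.S.-ton conversion is a standard factor, not a number from the paper):

```python
# Quick check of the numbers quoted above (unit conversions and ratios only).
GT_TO_US_TONS = 1.10231e9        # 1 gigaton (1e9 metric tons) in U.S. short tons

avg_loss_gt_per_yr = 83.0
print(avg_loss_gt_per_yr * GT_TO_US_TONS / 1e9)      # ~91.5 billion U.S. tons per year

everest_gt = 161.0
print(everest_gt / avg_loss_gt_per_yr)               # ~1.9 years per "Mt. Everest" of ice

accel_full_period = 6.1          # Gt/yr of additional loss each year, 1992-2013
accel_2003_2009 = 16.3
print(accel_2003_2009 / accel_full_period)           # ~2.7, i.e. almost three times
```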

The four sets of observations include NASA’s Gravity Recovery and Climate Experiment (GRACE) satellites, laser altimetry from NASA’s Operation IceBridge airborne campaign and the earlier ICESat satellite, radar altimetry from the European Space Agency’s Envisat satellite, and mass budget analyses using radars and the University of Utrecht’s Regional Atmospheric Climate Model.

The scientists noted that glacier and ice sheet behavior worldwide is by far the greatest uncertainty in predicting future sea level. “We have an excellent observing network now. It’s critical that we maintain this network to continue monitoring the changes,” Velicogna said, “because the changes are proceeding very fast.”

###

About the University of California, Irvine:

Founded in 1965, UCI is the youngest member of the prestigious Association of American Universities. The campus has produced three Nobel laureates and is known for its academic achievement, premier research, innovation and anteater mascot. Led by Chancellor Howard Gillman, UCI has more than 30,000 students and offers 192 degree programs. Located in one of the world’s safest and most economically vibrant communities, it’s Orange County’s second-largest employer, contributing $4.8 billion annually to the local economy.


Climate change was not to blame for the collapse of the Bronze Age

Scientists will have to find alternative explanations for a huge population collapse in Europe at the end of the Bronze Age as researchers prove definitively that climate change – commonly assumed to be responsible – could not have been the culprit.

Archaeologists and environmental scientists from the University of Bradford, University of Leeds, University College Cork, Ireland (UCC), and Queen’s University Belfast have shown that the changes in climate that scientists had believed coincided with the fall in population in fact occurred at least two generations later.

Their results, published this week in Proceedings of the National Academy of Sciences, show that human activity started to decline after 900 BC and fell rapidly after 800 BC, indicating a population collapse. But the climate records show that colder, wetter conditions did not arrive until around two generations later.

Fluctuations in levels of human activity through time are reflected by the numbers of radiocarbon dates for a given period. The team used new statistical techniques to analyse more than 2000 radiocarbon dates, taken from hundreds of archaeological sites in Ireland, to pinpoint the precise dates that Europe’s Bronze Age population collapse occurred.
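The team's actual statistical treatment is more sophisticated, but the underlying proxy logic, that more dated material in a given period implies more human activity, can be illustrated with a minimal sketch; the dates, uncertainties and normal approximation below are purely hypothetical.

```python
# Minimal sketch of using radiocarbon-date density as a proxy for human activity.
# Not the authors' statistical method; each calibrated date is crudely approximated
# here by a normal distribution, and the dates themselves are invented for illustration.
import numpy as np

# Hypothetical calibrated dates: (mean calendar year BC, 1-sigma uncertainty in years)
dates_bc = [(1150, 40), (1020, 35), (930, 50), (905, 45), (860, 60), (760, 40)]

years = np.arange(1300, 500, -1)                 # calendar years BC, 1300 -> 501
summed = np.zeros_like(years, dtype=float)
for mu, sigma in dates_bc:
    summed += np.exp(-0.5 * ((years - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Higher summed probability = more dated material = more inferred human activity;
# a sustained drop in the curve after ~900-800 BC would mark the collapse.
for century in range(1300, 500, -100):
    window = (years <= century) & (years > century - 100)
    print(f"{century}-{century - 100} BC: {summed[window].sum():.2f}")
```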

The team then analysed past climate records from peat bogs in Ireland and compared the archaeological data to these climate records to see if the dates tallied. That information was then compared with evidence of climate change across NW Europe between 1200 and 500 BC.

“Our evidence shows definitively that the population decline in this period cannot have been caused by climate change,” says Ian Armit, Professor of Archaeology at the University of Bradford, and lead author of the study.

Graeme Swindles, Associate Professor of Earth System Dynamics at the University of Leeds, added, “We found clear evidence for a rapid change in climate to much wetter conditions, which we were able to precisely pinpoint to 750 BC using statistical methods.”

According to Professor Armit, social and economic stress is more likely to be the cause of the sudden and widespread fall in numbers. Communities producing bronze needed to trade over very large distances to obtain copper and tin. Control of these networks enabled the growth of complex, hierarchical societies dominated by a warrior elite. As iron production took over, these networks collapsed, leading to widespread conflict and social collapse. It may be these unstable social conditions, rather than climate change, that led to the population collapse at the end of the Bronze Age.

According to Katharina Becker, Lecturer in the Department of Archaeology at UCC, the Late Bronze Age is usually seen as a time of plenty, in contrast to an impoverished Early Iron Age. “Our results show that the rich Bronze Age artefact record does not provide the full picture and that crisis began earlier than previously thought,” she says.

“Although climate change was not directly responsible for the collapse it is likely that the poor climatic conditions would have affected farming,” adds Professor Armit. “This would have been particularly difficult for vulnerable communities, preventing population recovery for several centuries.”

The findings have significance for modern-day climate change debates, which, argues Professor Armit, are often too quick to link historical climate events with changes in population.

“The impact of climate change on humans is a huge concern today as we monitor rising temperatures globally,” says Professor Armit.

“Often, in examining the past, we are inclined to link evidence of climate change with evidence of population change. Actually, if you have high quality data and apply modern analytical techniques, you get a much clearer picture and start to see the real complexity of human/environment relationships in the past.”

Re-learning how to read a genome

New research has revealed that the initial steps of reading DNA are actually remarkably similar at both the genes that encode proteins (here, on the right) and regulatory elements (on the left). The main differences seem to occur after this initial step. Gene messages are long and stable enough to ensure that genes become proteins, whereas regulatory messages are short and unstable, and are rapidly ‘cleaned up’ by the cell. – Adam Siepel, Cold Spring Harbor Laboratory

There are roughly 20,000 genes and thousands of other regulatory “elements” stored within the three billion letters of the human genome. Genes encode information that is used to create proteins, while other genomic elements help regulate the activation of genes, among other tasks. Somehow all of this coded information within our DNA needs to be read by complex molecular machinery and transcribed into messages that can be used by our cells.

Usually, reading a gene is thought to be a lot like reading a sentence. The reading machinery is guided to the start of the gene by various sequences in the DNA – the equivalent of a capital letter – and proceeds from left to right, DNA letter by DNA letter, until it reaches a sequence that forms a punctuation mark at the end. The capital letter and punctuation marks that tell the cell where, when, and how to read a gene are known as regulatory elements.

But scientists have recently discovered that genes aren’t the only messages read by the cell. In fact, many regulatory elements themselves are also read and transcribed into messages, the equivalent of pronouncing the words “capital letter,” “comma,” or “period.” Even more surprising, genes are read bi-directionally from so-called “start sites” – in effect, generating messages in both forward and backward directions.

With all these messages, how does the cell know which one encodes the information needed to make a protein? Is there something different about the reading process at genes and regulatory elements that helps avoid confusion? New research, published today in Nature Genetics, has revealed that the initial steps of the reading process itself are actually remarkably similar at both genes and regulatory elements. The main differences seem to occur after this initial step, in the length and stability of the messages. Gene messages are long and stable enough to ensure that genes become proteins, whereas regulatory messages are short and unstable, and are rapidly “cleaned up” by the cell.

To make the distinction, the team, which was co-led by CSHL Professor Adam Siepel and Cornell University Professor John Lis, looked for differences between the initial reading processes at genes and a set of regulatory elements called enhancers. “We took advantage of highly sensitive experimental techniques developed in the Lis lab to measure newly made messages in the cell,” says Siepel. “It’s like having a new, more powerful microscope for observing the process of transcription as it occurs in living cells.”

Remarkably, the team found that the reading patterns for enhancer and gene messages are highly similar in many respects, sharing a common architecture. “Our data suggests that the same basic reading process is happening at genes and these non-genic regulatory elements,” explains Siepel. “This points to a unified model for how DNA transcription is initiated throughout the genome.”

Working together, the biochemists from Lis’s laboratory and the computer jockeys from Siepel’s group carefully compared the patterns at enhancers and genes, combining their own data with vast public data sets from the NIH’s Encyclopedia of DNA Elements (ENCODE) project. “By many different measures, we found that the patterns of transcription initiation are essentially the same at enhancers and genes,” says Siepel. “Most RNA messages are rapidly targeted for destruction, but the messages at genes that are read in the right direction – those destined to be a protein – are spared from destruction.” The team was able to devise a model to mathematically explain the difference between stable and unstable transcripts, offering insight into what defines a gene. According to Siepel, “Our analysis shows that the ‘code’ for stability is, in large part, written in the DNA, at enhancers and genes alike.”

This work has important implications for the evolutionary origins of new genes, according to Siepel. “Because DNA is read in both directions from any start site, every one of these sites has the potential to generate two protein-coding genes with just a few subtle changes. The genome is full of potential new genes.”

This work was supported by the National Institutes of Health.

The paper, “Analysis of transcription start sites from nascent RNA identifies a unified architecture of initiation regions at mammalian promoters and enhancers,” appears online in Nature Genetics on November 10, 2014. The authors are: Leighton Core, André Martins, Charles Danko, Colin Waters, Adam Siepel, and John Lis. The paper can be obtained online at: http://dx.doi.org/10.1038/ng.3142

About Cold Spring Harbor Laboratory

Founded in 1890, Cold Spring Harbor Laboratory (CSHL) has shaped contemporary biomedical research and education with programs in cancer, neuroscience, plant biology and quantitative biology. CSHL is ranked number one in the world by Thomson Reuters for the impact of its research in molecular biology and genetics. The Laboratory has been home to eight Nobel Prize winners. Today, CSHL’s multidisciplinary scientific community is more than 600 researchers and technicians strong and its Meetings & Courses program hosts more than 12,000 scientists from around the world each year to its Long Island campus and its China center. For more information, visit http://www.cshl.edu.

Synthetic biology for space exploration

Synthetic biology could be a key to manned space exploration of Mars. – Photo courtesy of NASA

Does synthetic biology hold the key to manned space exploration of Mars and the Moon? Berkeley Lab researchers have used synthetic biology to produce an inexpensive and reliable microbial-based alternative to the world’s most effective anti-malaria drug, and to develop clean, green and sustainable alternatives to gasoline, diesel and jet fuels. In the future, synthetic biology could also be used to make manned space missions more practical.

“Not only does synthetic biology promise to make the travel to extraterrestrial locations more practical and bearable, it could also be transformative once explorers arrive at their destination,” says Adam Arkin, director of Berkeley Lab’s Physical Biosciences Division (PBD) and a leading authority on synthetic and systems biology.

“During flight, the ability to augment fuel and other energy needs, to provide small amounts of needed materials, plus renewable, nutritional and taste-engineered food, and drugs-on-demand can save costs and increase astronaut health and welfare,” Arkin says. “At an extraterrestrial base, synthetic biology could even make more effective use of the catalytic activities of diverse organisms.”

Arkin is the senior author of a paper in the Journal of the Royal Society Interface that reports on a techno-economic analysis demonstrating “the significant utility of deploying non-traditional biological techniques to harness available volatiles and waste resources on manned long-duration space missions.” The paper is titled “Towards Synthetic Biological Approaches to Resource Utilization on Space Missions.” The lead and corresponding author is Amor Menezes, a postdoctoral scholar in Arkin’s research group at the University of California (UC) Berkeley. Other co-authors are John Cumbers and John Hogan with the NASA Ames Research Center.

One of the biggest challenges to manned space missions is the expense. The NASA rule-of-thumb is that every unit mass of payload launched requires the support of an additional 99 units of mass, with “support” encompassing everything from fuel and oxygen to food and medicine for the astronauts. Most of the technologies now deployed or under development for providing this support are abiotic, meaning non-biological. Arkin, Menezes and their collaborators have shown that providing this support with technologies based on existing biological processes is a more than viable alternative.

“Because synthetic biology allows us to engineer biological processes to our advantage, we found in our analysis that technologies, when using common space metrics such as mass, power and volume, have the potential to provide substantial cost savings, especially in mass,” Menezes says.

In their study, the authors looked at four target areas: fuel generation, food production, biopolymer synthesis, and pharmaceutical manufacture. They showed that for a 916-day manned mission to Mars, the use of microbial biomanufacturing capabilities could reduce the mass of fuel manufacturing by 56 percent, the mass of food shipments by 38 percent, and the shipped mass to 3D-print a habitat for six by a whopping 85 percent. In addition, microbes could also completely replenish expired or irradiated stocks of pharmaceuticals, which would provide independence from unmanned re-supply spacecraft that take up to 210 days to arrive.
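Only the 99-to-1 rule of thumb and the quoted percentage reductions come from the text; the baseline shipment masses in the sketch below are arbitrary placeholders, used simply to show how strongly that rule multiplies any mass saved on the ground.

```python
# Back-of-the-envelope illustration of the quoted figures. The 99:1 support ratio and
# the percentage savings are from the text; the baseline masses are hypothetical.
SUPPORT_RATIO = 99.0          # units of support mass per unit of payload mass

def launch_mass(payload_kg):
    """Total mass that must be launched to deliver `payload_kg` of payload."""
    return payload_kg * (1.0 + SUPPORT_RATIO)

# Hypothetical shipped masses for a long-duration Mars mission (placeholders only)
baseline_kg = {"fuel manufacturing": 12000.0, "food": 10000.0, "habitat 3D-printing": 8000.0}
savings = {"fuel manufacturing": 0.56, "food": 0.38, "habitat 3D-printing": 0.85}

for item, kg in baseline_kg.items():
    reduced = kg * (1.0 - savings[item])
    print(f"{item}: {kg:.0f} kg -> {reduced:.0f} kg shipped "
          f"(~{launch_mass(kg) - launch_mass(reduced):.0f} kg less at launch)")
```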

“Space has always provided a wonderful test of whether technology can meet strict engineering standards for both effect and safety,” Arkin says. “NASA has worked decades to ensure that the specifications that new technologies must meet are rigorous and realistic, which allowed us to perform up-front techno-economic analysis.”

The big advantage biological manufacturing holds over abiotic manufacturing is the remarkable ability of natural and engineered microbes to transform very simple starting substrates, such as carbon dioxide, water, biomass or minerals, into materials that astronauts on long-term missions will need. This capability should prove especially useful for future extraterrestrial settlements.

“The mineral and carbon composition of other celestial bodies is different from the bulk of Earth, but the Earth is diverse, with many extreme environments that have some relationship to those that might be found at possible bases on the Moon or Mars,” Arkin says. “Microbes could be used to greatly augment the materials available at a landing site, enable the biomanufacturing of food and pharmaceuticals, and possibly even modify and enrich local soils for agriculture in controlled environments.”

The authors acknowledge that much of their analysis is speculative and that their calculations show a number of significant challenges to making biomanufacturing a feasible augmentation and replacement for abiotic technologies. However, they argue that the investment to overcome these barriers offers dramatic potential payoff for future space programs.

“We’ve got a long way to go since experimental proof-of-concept work in synthetic biology for space applications is just beginning, but long-duration manned missions are also a ways off,” says Menezes. “Abiotic technologies were developed for many, many decades before they were successfully utilized in space, so of course biological technologies have some catching-up to do. However, this catching-up may not be that much, and in some cases, the biological technologies may already be superior to their abiotic counterparts.”

###

This research was supported by the National Aeronautics and Space Administration (NASA) and the University of California, Santa Cruz.

Lawrence Berkeley National Laboratory addresses the world’s most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab’s scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the U.S. Department of Energy’s Office of Science. For more, visit http://www.lbl.gov.

Life in Earth’s primordial sea was starved for sulfate

This is a research vessel on Lake Matano, Indonesia — a modern lake with chemistry similar to Earth’s early oceans. – Sean Crowe, University of British Columbia.

The Earth’s ancient oceans held much lower concentrations of sulfate – a key biological nutrient – than previously recognized, according to research published this week in Science.

The findings paint a new portrait of our planet’s early biosphere and primitive marine life. Organisms require sulfur as a nutrient, and it plays a central role in regulating atmospheric chemistry and global climate.

“Our findings are a fraction of previous estimates, and thousands of times lower than current seawater levels,” says Sean Crowe, a lead author of the study and an assistant professor in the Departments of Microbiology and Immunology, and Earth, Ocean and Atmospheric Sciences at the University of British Columbia.

“At these trace amounts, sulfate would have been poorly mixed and short-lived in the oceans – and this sulfate scarcity would have shaped the nature, activity and evolution of early life on Earth.”

Researchers from UBC, the University of Southern Denmark, Caltech, the University of Minnesota Duluth, and the University of Maryland used new techniques and models to calibrate fingerprints of bacterial sulfur metabolisms in Lake Matano, Indonesia – a modern lake with chemistry similar to Earth’s early oceans.

Measuring these fingerprints in rocks older than 2.5 billion years, they discovered sulfate concentrations 80 times lower than previously thought.

The more sensitive fingerprinting provides a powerful tool to search for sulfur metabolisms deep in Earth’s history or on other planets like Mars.

Findings

Previous research has suggested that Archean sulfate levels were as low as 200 micromolar – concentrations at which sulfur would still have been abundantly available to early marine life.

The new results indicate levels were likely less than 2.5 micromolar, thousands of times lower than today.
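Both comparisons follow directly from the quoted concentrations; in the quick check below, the modern seawater sulfate value (roughly 28 millimolar) is a standard oceanographic figure rather than a result of this study.

```python
# Quick arithmetic behind the comparisons quoted above.
previous_estimate_uM = 200.0    # earlier upper estimate for Archean sulfate (micromolar)
new_estimate_uM = 2.5           # this study's upper estimate (micromolar)
modern_seawater_uM = 28000.0    # ~28 millimolar in today's oceans (standard value, assumed)

print(previous_estimate_uM / new_estimate_uM)   # ~80x lower than previously thought
print(modern_seawater_uM / new_estimate_uM)     # ~11,000x, i.e. thousands of times lower than today
```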

What the researchers did

Researchers used state-of-the-art mass spectrometric approaches developed at California Institute of Technology to demonstrate that microorganisms fractionate sulfur isotopes at concentrations orders of magnitude lower than previously recognized.

They found that microbial sulfur metabolisms impart large fingerprints even when sulfate is scarce.

The team used the techniques on samples from Lake Matano, Indonesia–a sulfate-poor modern analogue for the Earth’s Archean oceans.

“New measurements in these unique modern environments allow us to use numerical models to reconstruct ancient ocean chemistry with unprecedented resolution,” says Sergei Katsev, an associate professor at the Large Lakes Observatory, University of Minnesota Duluth.

Using models informed by sulfate isotope fractionation in Lake Matano, they established a new calibration for sulfate isotope fractionation that is extensible to the Earth’s oceans throughout history. The researchers then reconstructed Archean seawater sulfate concentrations using these models and an exhaustive compilation of sulfur isotope data from Archean sedimentary rocks.

###

Crowe initiated the research while a post-doctoral fellow with Donald Canfield at the University of Southern Denmark.

Australian volcanic mystery explained: ANU media release

This is Dr. Rhodri Davies in the Raijin Supercomputer at The Australian National University. – Stuart Hay, ANU

Scientists have solved a long-standing mystery surrounding Australia’s only active volcanic area, in the country’s southeast.

The research explains a volcanic region that has seen more than 400 volcanic events in the last four million years. The 500-kilometre-long region stretches from Melbourne to the South Australian town of Mount Gambier, which surrounds a dormant volcano that last erupted only 5,000 years ago.

“Volcanoes in this region of Australia are generated by a very different process to most of Earth’s volcanoes, which occur on the edges of tectonic plates, such as the Pacific Ring of Fire,” says lead researcher Dr Rhodri Davies, from the Research School of Earth Sciences.

“We have determined that the volcanism arises from a unique interaction between local variations in the continent’s thickness, which we were able to map for the first time, and its movement, at seven centimetres a year northwards towards New Guinea and Indonesia.”

The volcanic area is comparatively shallow – less than 200 kilometres deep – and lies where a 2.5-billion-year-old part of the continent meets a thinner, younger section formed in the past 500 million years or so.

These variations in thickness drive currents within the underlying mantle, which draw heat from deeper in the mantle up towards the surface.

The researchers used state-of-the-art techniques to model these currents on the NCI Supercomputer, Raijin, using more than one million CPU hours.

“This boundary runs the length of eastern Australia, but our computer model demonstrates, for the first time, how Australia’s northward drift results in an isolated hotspot in this region,” Dr Davies said.

Dr Davies will now apply his research technique to other volcanic mysteries around the globe.

“There are around 50 other similarly isolated volcanic regions around the world, several of which we may now be able to explain,” he said.

It is difficult to predict where or when future eruptions might occur, Dr Davies said.

“There hasn’t been an eruption in 5,000 years, so there is no need to panic. However, the region is still active and we can’t rule out any eruptions in the future.”

Researchers turn to 3-D technology to examine the formation of cliffband landscapes

This is a scene from the Colorado Plateau region of Utah. – Dylan Ward

A blend of photos and technology takes a new twist on studying cliff landscapes and how they were formed. Dylan Ward, a University of Cincinnati assistant professor of geology, will present a case study on this unique technology application at The Geological Society of America’s Annual Meeting & Exposition. The meeting takes place Oct. 19-22, in Vancouver.

Ward is using a method called Structure-From-Motion Photogrammetry – computational photo image processing techniques – to study the formation of cliff landscapes in Colorado and Utah and to understand how the layered rock formations in the cliffs are affected by erosion.

To get an idea of these cliff formations, think of one of the nation’s most spectacular tourist attractions, the Grand Canyon.

“The Colorado Plateau, for example, has areas with a very simple, sandstone-over-shale layered stratigraphy. We’re examining how the debris and sediment off that sandstone ends up down in the stream channels on the shale, and affects the erosion by those streams,” explains Ward. “The river cuts down through the rock, creating the cliffs. The cliffs walk back by erosion, so there’s this spectacular staircase of stratigraphy that owes its existence and form to that general process.”

Ward’s research takes a new approach to documenting the topography in very high resolution, using a new method of photogrammetry – measurement in 3-D, based on stereo photographs.

“First, we use a digital camera to take photos of the landscape from different angles. Then, we use a sophisticated image-processing program that can take that set of photos and find the common points between the photographs. From there, we can build a 3-D computer model of that landscape. Months of fieldwork, in comparison, would only produce a fraction of the data that we produce in the computer model,” says Ward.
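The study relied on dedicated photogrammetry software; purely as an illustration of the match-and-triangulate idea Ward describes, here is a minimal two-view sketch using OpenCV (the file names, camera matrix and feature counts are placeholders, not details from the project).

```python
# Minimal two-view structure-from-motion sketch (illustration only; not the study's
# photogrammetry workflow). Assumes two overlapping photos and an approximate camera
# matrix K; all numeric values and file names are hypothetical.
import cv2
import numpy as np

K = np.array([[2800.0, 0.0, 2000.0],   # assumed focal length / principal point (pixels)
              [0.0, 2800.0, 1500.0],
              [0.0, 0.0, 1.0]])

img1 = cv2.imread("cliff_view_1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("cliff_view_2.jpg", cv2.IMREAD_GRAYSCALE)

# 1. Detect and match common points between the photographs.
orb = cv2.ORB_create(5000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# 2. Recover the relative camera pose from the matched points.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# 3. Triangulate the matches into a sparse 3-D point cloud of the terrain.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
cloud = (pts4d[:3] / pts4d[3]).T        # homogeneous -> 3-D coordinates
print(cloud.shape, "points reconstructed")
```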

Ward says that ultimately, examining this piece of the puzzle will give researchers an idea as to how the broader U.S. landscape was formed.

Geologists dig into science around the globe, on land and at sea

University of Cincinnati geologists will be well represented among geoscientists from around the world at The Geological Society of America’s Annual Meeting and Exposition. The meeting takes place Oct. 19-22, in Vancouver, Canada, and will feature geoscientists representing more than 40 different disciplines. The meeting will feature highlights of UC’s geological research that is taking place globally, from Chile to Costa Rica, Belize, Bulgaria, Scotland, Trinidad and a new project under development in the Canary Islands.

UC faculty and graduate students are lead or supporting authors on more than two dozen Earth Sciences-related research papers and/or PowerPoint and poster exhibitions at the GSA meeting.

The presentations also cover UC’s longtime and extensive exploration and findings in the Cincinnati Arch of the Ohio Valley, world-renowned for its treasure trove of paleontology – plant and animal fossils that were preserved when a shallow sea covered the region 450 million years ago during the Paleozoic Era.

Furthermore, in an effort to diversify the field of researchers in the Earth Sciences, a UC assistant professor of science education and geology, Christopher Atchison, was awarded funding from the National Science Foundation and the Society of Exploration Geophysics to lead a research field trip in Vancouver for students with disabilities. Graduate and undergraduate student participants will conduct the research on Oct. 18 and then join events at the GSA meeting. They’ll be guided by geoscience researchers representing the United Kingdom, New Zealand, Canada and the U.S. Those guides include Atchison and Julie Hendricks, a UC special education major from Batavia, Ohio, who will be using her expertise in American Sign Language (ASL) to assist student researchers representing Deaf and Hard of Hearing communities.

The meeting will also formally introduce Arnold Miller, UC professor of geology, as the new president-elect of the national Paleontological Society. Thomas Lowell, professor of geology, is a recently elected Fellow of the Geological Society of America – a recognition for producing a substantial body of research. Lowell joins colleagues Warren Huff, professor of geology, and Lewis Owen, professor and head of the Department of Geology, as GSA Fellows.

Here are highlights of the UC research to be presented at the GSA meeting Oct. 19-22:

Staying Put or Moving On? Researchers Develop Model to Identify Migrating Patterns of Different Species

Are plant and animal species what you might call lifelong residents – they never budge from the same place? That’s a relatively common belief in ecology and paleoecology – that classes of organisms tend to stay put over millions of years and either evolve or go extinct as the environment changes. UC researchers developed a series of numerical models simulating shifting habitats in fossil regions to compare whether species changed environments when factoring geological and other changes in the fossil record. They found that geologically driven changes in the quality of the fossil record did not distort the real ecological signal, and that most species maintained their particular habitat preferences through time. They did not evolve to adapt to changing environments; rather, they migrated, following their preferred environments. That is to say, they did not stay in place geographically, but by moving, they were able to track their favored habitats. Field research for the project was conducted in New York state as well as the paleontologically rich regions of Cincinnati; Dayton, Ohio; Lexington, Ky.; and Indiana. Funding for the project was provided by The Paleontological Society, The Geological Society of America, The American Museum of Natural History and the UC Geology Department’s Kenneth E. Caster Memorial Fund.

Presenter: Andrew Zaffos, UC geology doctoral student

Co-authors: Arnold Miller, Carlton Brett

Pioneering Study Provides a Better Understanding of What Southern Ohio and Central Kentucky Looked Like Hundreds of Millions of Years Ago

The end of the Ordovician period resulted in one of the largest mass extinction events in the Earth’s history. T.J. Malgieri, a UC master’s student in geology, led this study examining the limestones and shales of the Upper Ordovician Grant Lake Formation, which covers southern Ohio and central Kentucky, to recreate how the shoreline looked some 445 million years ago. In this pioneering study of mud cracks and deposits in the rocks, the researchers discovered that the shoreline existed to the south and that the water became deeper toward the north. By determining these ecological parameters, the ramp study provides a better understanding of environments during a time of significant ecological change. Malgieri says the approach can be applied to other basins throughout the world to create depth indicators in paleoenvironments.

Presenter: T.J. Malgieri, UC geology master’s student

Co-authors: Carlton Brett, Cameron Schalbach, Christopher Aucoin, UC; James Thomka (UC, University of Akron); Benjamin Dattilo, Indiana University-Purdue University Fort Wayne

UC Researchers Take a Unique Approach to Monitoring Groundwater Supplies Near Ohio Fracking Sites

A collaborative research project out of UC is examining the effects of fracking on groundwater in the Utica Shale region of eastern Ohio. The project, first launched in Carroll County in 2012, is examining methane levels and the origins of methane in private wells and springs before, during and after the onset of fracking. The team travels to the region to take water samples four times a year.

Presenter: Claire Botner, a UC geology master’s student

Co-author: Amy Townsend-Small, UC assistant professor of geology

Sawing Through Seagrass to Reveal Clues to the Past

Kelsy Feser, a UC doctoral student in geology, is working at several sites around St. Croix in the Virgin Islands to see if human developments impact marine life. The research focuses on shells of snails and clams that have piled up on the sea floor for thousands of years. Digging through layers of thick seagrass beds on the ocean floor, Feser can examine deeper shells that were abundant thousands of years ago and compare them to shallower layers that include living clams and snails. Early analysis indicates a greater population of potentially pollution-tolerant mussels in an area near a landfill on the island, compared with shells from much earlier time periods. Feser is doing this seagrass analysis at additional sites including tourist resorts, an oil refinery, a power plant and a marina. Funding for the research is provided by the Paleontological Society, the GSA, the American Museum of Natural History and the UC Geology Department.

Presenter: Kelsy Feser, UC geology doctoral student

Co-author: Arnold Miller

Turning to the Present to Understand the Past

In order to properly interpret changes in climate, vegetation, or animal populations over time, it is necessary to establish a comparative baseline. Stella Mosher, a UC geology master’s student, is studying stable carbon, nitrogen, sulfur and strontium isotopes in modern vegetation from the Canary Islands in order to quantify modern climatic and environmental patterns. Her findings will provide a crucial foundation for future UC research on regional paleoclimatic and paleoenvironmental shifts.

Presenter: Stella Mosher, graduate student in geology

Co-authors: Brooke Crowley, assistant professor of geology; Yurena Yanes, research assistant professor of geology

A Study on the Impact of Sea Spray

Sulfur is an element of interest in both geology and archaeology because it can reveal information about the diets of ancient cultures. This study takes a novel approach to studying how sea spray can affect the sulfur isotope values in plants on a small island, focusing on the island of Trinidad. Researchers collected leaves from different plant species to get their sulfur isotope values, exploring whether wind direction played a role in how plants were influenced by marine water from sea spray. Vegetation was collected from the edges of the island to the deeply forested areas. The study found that sulfur isotope values deeper inland and on the calmer west coast were dramatically lower – indicating less marine influence – than those in vegetation along the island’s edges and the east coast. The findings can help indicate the foraging activities of humans and animals. Funding for the study was provided by the Geological Society of America, the UC Graduate Student Association and the UC Department of Geology.

Presenter: Janine Sparks, UC geology doctoral student

Co-authors: Brooke Crowley, UC assistant professor, geology/anthropology; William Gilhooly III, assistant professor, Earth Sciences, Indiana University-Purdue University Indianapolis

Proxy Wars – The Paleobiology Data Debate

For the past several decades, paleobiologists have built large databases containing information on fossil plants and animals of all geological ages to investigate the timing and extent of major changes in biodiversity – changes such as mass extinctions that have taken place throughout the history of life. Biodiversity researcher Arnold Miller says that in building these databases, it can be a challenge to accurately identify species in the geological record, so it has been common for researchers to instead study biodiversity trends using data compiled at broader levels of biological classification, including the genus level, under the assumption that these patterns are effective proxies for what would be observed among species if the data were available. Miller has been involved in construction of The Paleobiology Database, an extensive public online resource that contains global genus- and species-level data, now permitting a direct, novel look at the similarities and differences between patterns at these two levels. Miller’s discussion aims to set the record straight as to when researchers can effectively use a genus as a proxy for a species and also when it’s inappropriate. This research is funded by the NASA Astrobiology Program.

Presenter: Arnold Miller, UC professor of geology

A Novel New Method for Examining the Distribution of Pores in Rocks

Oil and gas companies take an interest in the porosity of sedimentary rocks because those open spaces can be filled with fuel resources. Companies involved with hydraulic fracturing (“fracking”) are also interested in porosity because those pore spaces could be used to store wastewater produced by fracking. In this unique study, UC researchers made pore-size measurements similar to those used in crystal size distribution (CSD) theory to determine the distribution of pores as a function of their sizes, using thin sections of rock. In addition to providing accurate porosity distribution at a given depth, their approach can be extended to evaluate variation of pore spaces as a function of depth in a drill core, the percentage of pores in each size range, and pore types and pore geometry (a minimal illustration of this kind of pore-size binning follows the author listing below). The Texas Bureau of Economic Geology provided the rock samples used in the study. Funding for the study was provided by the Turkish Petroleum Corporation.

Presenter: Ibrahim Ugurlu, UC geology master’s student

Co-author: Attila Kilinc, professor of geology
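The presenters’ CSD-style measurements were made on real thin sections; the sketch below only illustrates the general idea of binning segmented pore areas into a size distribution, using a synthetic stand-in image and hypothetical parameters.

```python
# Minimal sketch of extracting a pore-size distribution from a segmented thin-section
# image (illustrative only; not the presenters' CSD-based workflow). A random field
# stands in for a real segmented image, and the pixel size is assumed.
import numpy as np
from scipy import ndimage

pixel_mm = 0.005                                   # hypothetical image resolution (mm per pixel)
rng = np.random.default_rng(0)
pores = ndimage.binary_opening(rng.random((512, 512)) > 0.6)   # stand-in for real pore mask

# Label connected pore regions and convert their areas to equivalent diameters.
labels, n = ndimage.label(pores)
areas_px = ndimage.sum(pores, labels, index=np.arange(1, n + 1))
diam_mm = 2.0 * np.sqrt(areas_px / np.pi) * pixel_mm

# Porosity and a simple size-binned count, analogous to a crystal-size distribution.
print(f"porosity: {pores.mean():.3f}")
counts, edges = np.histogram(diam_mm, bins=np.logspace(-2, 0, 9))
for lo, hi, c in zip(edges[:-1], edges[1:], counts):
    print(f"{lo:.3f}-{hi:.3f} mm: {c} pores")
```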

Researchers Turn to 3-D Technology to Examine the Formation of Cliffband Landscapes

A blend of photos and technology takes a new twist on studying cliff landscapes and how they were formed. The method called Structure-From-Motion Photogrammetry – computational photo image processing techniques – is used to study the formation of cliff landscapes in Colorado and Utah and to understand how the layered rock formations in the cliffs are affected by erosion.

Presenter: Dylan Ward, UC assistant professor of geology

Testing the Links Between Climate and Sedimentation in the Atacama Desert, Northern Chile

The Atacama Desert is used as an analog for understanding the surface of Mars. In some localities, there has been no activity for millions of years. UC researchers have been working along the flank of the Andes Mountains in northern Chile, and this particular examination focuses on the large deposits of sediment that are transported down the plateau and gather at the base. The researchers are finding that their samples are not reflecting the million-year-old relics previously found on such expeditions, but may indicate more youthful activity possibly resulting from climatic events. The research is supported by a $273,634 grant from the National Science Foundation to explore glacio-geomorphic constraints on the climate history of subtropical northern Chile.

Presenter: Jason Cesta, UC geology master’s student

Co-author: Dylan Ward, UC assistant professor of geology

Uncovering the Explosive Mysteries Surrounding the Manganese of Northeast Bulgaria

UC’s geology collections hold minerals from field expeditions around the world, including manganese from the Obrochishte mines of northeastern Bulgaria. Found in the region’s sedimentary rock, manganese can be added to metals such as steel to improve strength. It’s widely believed that these manganese formations were the result of ocean water composition at the time the sediments were deposited in the ocean. In this presentation, UC researchers present new information on why they believe the manganese formations resulted from volcanic eruptions, perhaps during the Rupelian stage of the geologic time scale, when bentonite clay minerals were formed. The presentation evolved from an advanced class project last spring under the direction of Warren Huff, a UC professor of geology.

Presenter: Jason Cesta, UC geology master’s student

Co-authors: Warren Huff, UC professor of geology; Christopher Aucoin; Michael Harrell; Thomas Malgieri; Barry Maynard; Cameron Schwalbach; Ibrahim Ugurlu; Antony Winrod

Two UC researchers will chair sessions at the GSA meeting: Doctoral student Gary Motz will chair the session, “Topics in Paleoecology: Modern Analogues and Ancient Systems,” on Oct. 19. Matt Vrazo, also a doctoral student in geology, is chairing “Paleontology: Trace Fossils, Taphonomy and Exceptional Preservation” on Oct. 21, and will present “Taphonomic and Ecological Controls on Eurypterid Lagerstätten: A Model for Preservation in the Mid-Paleozoic.”

###

UC’s nationally ranked Department of Geology conducts field research around the world in areas spanning paleontology, quaternary geology, geomorphology, sedimentology, stratigraphy, tectonics, environmental geology and biogeochemistry.

The Geological Society of America, founded in 1888, is a scientific society with more than 26,500 members from academia, government, and industry in more than 100 countries. Through its meetings, publications, and programs, GSA enhances the professional growth of its members and promotes the geosciences in the service of humankind.

Early Earth less hellish than previously thought

Calvin Miller is shown at the Kerlingarfjoll volcano in central Iceland. Some geologists have proposed that the early Earth may have resembled regions like this. – Tamara Carley

Conditions on Earth for the first 500 million years after it formed may have been surprisingly similar to the present day, complete with oceans, continents and active crustal plates.

This alternate view of Earth’s first geologic eon, called the Hadean, has gained substantial new support from the first detailed comparison of zircon crystals that formed more than 4 billion years ago with those formed contemporaneously in Iceland, which has been proposed as a possible geological analog for early Earth.

The study was conducted by a team of geologists directed by Calvin Miller, the William R. Kenan Jr. Professor of Earth and Environmental Sciences at Vanderbilt University, and published online this weekend by the journal Earth and Planetary Science Letters in a paper titled, “Iceland is not a magmatic analog for the Hadean: Evidence from the zircon record.”

From the early 20th century up through the 1980s, geologists generally agreed that conditions during the Hadean period were utterly hostile to life. Inability to find rock formations from the period led them to conclude that early Earth was hellishly hot, either entirely molten or subject to such intense asteroid bombardment that any rocks that formed were rapidly remelted. As a result, they pictured the surface of the Earth as covered by a giant “magma ocean.”

This perception began to change about 30 years ago when geologists discovered zircon crystals (a mineral typically associated with granite) with ages exceeding 4 billion years old preserved in younger sandstones. These ancient zircons opened the door for exploration of the Earth’s earliest crust. In addition to the radiometric dating techniques that revealed the ages of these ancient zircons, geologists used other analytical techniques to extract information about the environment in which the crystals formed, including the temperature and whether water was present.

Since then zircon studies have revealed that the Hadean Earth was not the uniformly hellish place previously imagined, but during some periods possessed an established crust cool enough so that surface water could form – possibly on the scale of oceans.

Accepting that the early Earth had a solid crust and liquid water (at least at times), scientists have continued to debate the nature of that crust and the processes that were active at that time: How similar was the Hadean Earth to what we see today?

Two schools of thought have emerged: One argues that Hadean Earth was surprisingly similar to the present day. The other maintains that, although it was less hostile than formerly believed, early Earth was nonetheless a foreign-seeming and formidable place, similar to the hottest, most extreme, geologic environments of today. A popular analog is Iceland, where substantial amounts of crust are forming from basaltic magma that is much hotter than the magmas that built most of Earth’s current continental crust.

“We reasoned that the only concrete evidence for what the Hadean was like came from the only known survivors: zircon crystals – and yet no one had investigated Icelandic zircon to compare their telltale compositions to those that are more than 4 billion years old, or with zircon from other modern environments,” said Miller.

In 2009, Vanderbilt doctoral student Tamara Carley, who has just accepted the position of assistant professor at Lafayette College, began collecting samples from volcanoes and sands derived from erosion of Icelandic volcanoes. She separated thousands of zircon crystals from the samples, which cover the island’s regional diversity and represent its 18-million-year history.

Working with Miller and doctoral student Abraham Padilla at Vanderbilt, Joe Wooden at Stanford University, Axel Schmitt and Rita Economos from UCLA, Ilya Bindeman at the University of Oregon and Brennan Jordan at the University of South Dakota, Carley analyzed about 1,000 zircon crystals for their age and elemental and isotopic compositions. She then searched the literature for all comparable analyses of Hadean zircon and for representative analyses of zircon from other modern environments.

“We discovered that Icelandic zircons are quite distinctive from crystals formed in other locations on modern Earth. We also found that they formed in magmas that are remarkably different from those in which the Hadean zircons grew,” said Carley.

Most importantly, their analysis found that Icelandic zircons grew from much hotter magmas than Hadean zircons. Although surface water played an important role in the generation of both Icelandic and Hadean crystals, in the Icelandic case the water was extremely hot when it interacted with the source rocks while the Hadean water-rock interactions were at significantly lower temperatures.

“Our conclusion is counterintuitive,” said Miller. “Hadean zircons grew from magmas rather similar to those formed in modern subduction zones, but apparently even ‘cooler’ and ‘wetter’ than those being produced today.”