Abandoned wells can be ‘super-emitters’ of greenhouse gas

One of the wells the researchers tested; this one in the Allegheny National Forest. – Princeton University

Princeton University researchers have uncovered a previously unknown, and possibly substantial, source of the greenhouse gas methane to the Earth’s atmosphere.

After testing a sample of abandoned oil and natural gas wells in northwestern Pennsylvania, the researchers found that many of the old wells leaked substantial quantities of methane. Because there are so many abandoned wells nationwide (a recent study from Stanford University concluded there were roughly 3 million abandoned wells in the United States), the researchers believe the overall contribution of leaking wells could be significant.

The researchers said their findings point to a need for measurements not only across a wide variety of regions in Pennsylvania but also in other states with a long history of oil and gas development, such as California and Texas.

“The research indicates that this is a source of methane that should not be ignored,” said Michael Celia, the Theodore Shelton Pitney Professor of Environmental Studies and professor of civil and environmental engineering at Princeton. “We need to determine how significant it is on a wider basis.”

Methane is the unprocessed form of natural gas. Scientists say that after carbon dioxide, methane is the most important contributor to the greenhouse effect, in which gases in the atmosphere trap heat that would otherwise radiate from the Earth. Pound for pound, methane has about 20 times the heat-trapping effect of carbon dioxide. Methane is produced naturally, by processes including decomposition, and by human activities such as landfills and oil and gas production.

While oil and gas companies work to minimize the amount of methane emitted by their operations, almost no attention has been paid to wells that were drilled decades ago. These wells, some of which date back to the 19th century, were typically abandoned long ago and do not appear in official records.

Mary Kang, then a doctoral candidate at Princeton, originally began looking into methane emissions from old wells after researching techniques to store carbon dioxide by injecting it deep underground. While examining ways that carbon dioxide could escape underground storage, Kang wondered about the effect of old wells on methane emissions.

“I was looking for data, but it didn’t exist,” said Kang, now a postdoctoral researcher at Stanford.

In a paper published Dec. 8 in the Proceedings of the National Academy of Sciences, the researchers describe how they chose 19 wells in the adjacent McKean and Potter counties in northwestern Pennsylvania. The wells chosen were all abandoned, and records about the origin of the wells and their conditions did not exist. Only one of the wells was on the state’s list of abandoned wells. Some of the wells, which can look like a pipe emerging from the ground, are located in forests and others in people’s yards. Kang said the lack of documentation made it hard to tell when the wells were originally drilled or whether any attempt had been made to plug them.

“What surprised me was that every well we measured had some methane coming out,” said Celia.

To conduct the research, the team placed enclosures called flux chambers over the tops of the wells. They also placed flux chambers nearby to measure the background emissions from the terrain and make sure the methane was emitted from the wells and not the surrounding area.
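To give a concrete sense of how a flux-chamber measurement becomes an emission rate, here is a minimal sketch of the standard static-chamber calculation, not the authors' actual processing: the methane concentration inside the sealed chamber is tracked over time, the slope of that rise is converted to a mass flux using the chamber's volume, footprint and the molar volume of air, and the flux from a background chamber is subtracted. All numbers, including the chamber dimensions and readings, are illustrative.

```python
# Minimal sketch of a static flux-chamber calculation (illustrative values only).
import numpy as np

# Illustrative time series: minutes since the chamber was sealed, CH4 in ppm.
t_min = np.array([0, 5, 10, 15, 20])
ch4_ppm_well = np.array([2.0, 6.5, 11.2, 15.8, 20.3])      # chamber over the well
ch4_ppm_background = np.array([2.0, 2.1, 2.1, 2.2, 2.2])   # chamber on nearby ground

CHAMBER_VOLUME_M3 = 0.030   # hypothetical chamber volume
CHAMBER_AREA_M2 = 0.071     # hypothetical chamber footprint
M_CH4 = 16.04               # g/mol
MOLAR_VOLUME_L = 24.45      # L/mol of air at ~25 degC and 1 atm

def chamber_flux_mg_per_m2_hr(t_min, ch4_ppm):
    """Convert the ppm-per-minute rise inside the chamber to mg CH4 per m^2 per hour."""
    slope_ppm_per_min, _ = np.polyfit(t_min, ch4_ppm, 1)
    ppm_per_hr = slope_ppm_per_min * 60.0
    # ppm is micromoles of CH4 per mole of air; convert to mg CH4 per m^3 of chamber air.
    mg_per_m3_per_hr = ppm_per_hr * M_CH4 / MOLAR_VOLUME_L
    # Scale by the chamber volume, then divide by its footprint to get an areal flux.
    return mg_per_m3_per_hr * CHAMBER_VOLUME_M3 / CHAMBER_AREA_M2

well_flux = chamber_flux_mg_per_m2_hr(t_min, ch4_ppm_well)
background_flux = chamber_flux_mg_per_m2_hr(t_min, ch4_ppm_background)
print(f"Net well flux: {well_flux - background_flux:.1f} mg CH4 / m^2 / hr")
```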

Although all the wells registered some level of methane, about 15 percent emitted the gas at a markedly higher level — thousands of times greater than the lower-level wells. Denise Mauzerall, a Princeton professor and a member of the research team, said a critical task is to discover the characteristics of these super-emitting wells.

Mauzerall said the relatively low number of high-emitting wells could offer a workable solution: while trying to plug every abandoned well in the country might be too costly to be realistic, dealing with the smaller number of high emitters could be possible.

“The fact that most of the methane is coming out of a small number of wells should make it easier to address if we can identify the high-emitting wells,” said Mauzerall, who has a joint appointment as a professor of civil and environmental engineering and as a professor of public and international affairs at the Woodrow Wilson School.

The researchers have used their results to extrapolate total methane emissions from abandoned wells in Pennsylvania, although they stress that the results are preliminary because of the relatively small sample. But based on that data, they estimate that emissions from abandoned wells represent as much as 10 percent of methane from human activities in Pennsylvania — about the same amount as caused by current oil and gas production. Also, unlike working wells, which have productive lifetimes of 10 to 15 years, abandoned wells can continue to leak methane for decades.
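As a back-of-the-envelope illustration of how such an extrapolation works, the sketch below scales a skewed sample of per-well fluxes up to a hypothetical statewide well count. The flux values and the well count are invented for the example, not the study's measurements, but the sketch also shows how a handful of super-emitting wells can dominate the total.

```python
# Illustrative extrapolation from a small sample of per-well fluxes to a state total.
# All numbers are hypothetical, chosen only to show the arithmetic.
import numpy as np

fluxes_kg_per_day = np.array(
    [0.001, 0.002, 0.001, 0.003, 0.002, 0.001, 0.004, 0.002,
     0.003, 0.001, 0.002, 0.003, 0.001, 0.002, 0.002, 0.001,
     5.0, 8.0, 12.0]   # 19 wells, including three "super-emitters"
)

N_ABANDONED_WELLS = 300_000   # hypothetical statewide count of abandoned wells

mean_flux = fluxes_kg_per_day.mean()
annual_total_tonnes = mean_flux * N_ABANDONED_WELLS * 365 / 1000.0

top = np.sort(fluxes_kg_per_day)[-3:]            # the highest ~15% of the sample
share_from_top = top.sum() / fluxes_kg_per_day.sum()

print(f"Sample mean flux:         {mean_flux:.3f} kg CH4/day per well")
print(f"Extrapolated state total: {annual_total_tonnes:,.0f} tonnes CH4/yr")
print(f"Share from top 3 wells:   {share_from_top:.1%}")
```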

“This may be a significant source,” Mauzerall said. “There is no single silver bullet but if it turns out that we can cap or capture the methane coming off these really big emitters, that would make a substantial difference.”


Besides Kang, who is the paper’s lead author, Celia and Mauzerall, the paper’s co-authors include: Tullis Onstott, a professor of geosciences at Princeton; Cynthia Kanno, who was a Princeton undergraduate and who is a graduate student at the Colorado School of Mines; Matthew Reid, who was a graduate student at Princeton and is a postdoctoral researcher at EPFL in Lausanne, Switzerland; Xin Zhang, a postdoctoral researcher in the Woodrow Wilson School at Princeton; and Yuheng Chen, an associate research scholar in geosciences at Princeton.

West Antarctic melt rate has tripled: UC Irvine-NASA

A comprehensive, 21-year analysis of the fastest-melting region of Antarctica has found that the melt rate of glaciers there has tripled during the last decade.

The glaciers in the Amundsen Sea Embayment in West Antarctica are hemorrhaging ice faster than any other part of Antarctica and are the most significant Antarctic contributors to sea level rise. This study is the first to evaluate and reconcile observations from four different measurement techniques to produce an authoritative estimate of the amount and the rate of loss over the last two decades.

“The mass loss of these glaciers is increasing at an amazing rate,” said scientist Isabella Velicogna, jointly of UC Irvine and NASA’s Jet Propulsion Laboratory. Velicogna is a coauthor of a paper on the results, which has been accepted for Dec. 5 publication in the journal Geophysical Research Letters.

Lead author Tyler Sutterley, a UCI doctoral candidate, and his team did the analysis to verify that the melting in this part of Antarctica is shifting into high gear. “Previous studies had suggested that this region is starting to change very dramatically since the 1990s, and we wanted to see how all the different techniques compared,” Sutterley said. “The remarkable agreement among the techniques gave us confidence that we are getting this right.”

The researchers reconciled measurements of the mass balance of glaciers flowing into the Amundsen Sea Embayment. Mass balance is a measure of how much ice the glaciers gain and lose over time from accumulating or melting snow, discharges of ice as icebergs, and other causes. Measurements from all four techniques were available from 2003 to 2009. Combined, the four data sets span the years 1992 to 2013.

The glaciers in the embayment lost mass throughout the entire period. The researchers calculated two separate quantities: the total amount of loss, and the changes in the rate of loss.

The total amount of loss averaged 83 gigatons per year (91.5 billion U.S. tons). By comparison, Mt. Everest weighs about 161 gigatons, meaning the Antarctic glaciers lost the equivalent of a Mt. Everest in water weight every two years over the last 21 years.

Since 1992, the rate of loss has accelerated by an average of 6.1 gigatons (6.7 billion U.S. tons) per year each year.

From 2003 to 2009, when all four observational techniques overlapped, the melt rate increased by an average of 16.3 gigatons per year each year, almost three times the rate of increase for the full 21-year period. The total loss over that span averaged 84 gigatons per year, close to the long-term average.
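The conversions and ratios quoted above can be checked with a few lines of arithmetic. The sketch below uses only the figures reported in this article plus the standard conversion factor between metric gigatons and billions of U.S. short tons (roughly 1.102).

```python
# Quick arithmetic check of the figures quoted in the article.
GT_TO_BILLION_US_TONS = 1.10231   # 1 metric gigaton ~ 1.102 billion U.S. short tons

avg_loss_gt_per_yr = 83.0    # average loss, 1992-2013
everest_gt = 161.0           # approximate weight of Mt. Everest
accel_full_period = 6.1      # Gt/yr of additional loss per year, 1992-2013
accel_2003_2009 = 16.3       # Gt/yr of additional loss per year, 2003-2009

print(f"83 Gt/yr = {avg_loss_gt_per_yr * GT_TO_BILLION_US_TONS:.1f} billion U.S. tons/yr")
print(f"One 'Mt. Everest' of ice lost every {everest_gt / avg_loss_gt_per_yr:.1f} years")
print(f"2003-2009 acceleration is {accel_2003_2009 / accel_full_period:.1f}x the 21-year rate")
```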

The four sets of observations include NASA’s Gravity Recovery and Climate Experiment (GRACE) satellites, laser altimetry from NASA’s Operation IceBridge airborne campaign and the earlier ICESat satellite, radar altimetry from the European Space Agency’s Envisat satellite, and mass budget analyses using radar data and the University of Utrecht’s Regional Atmospheric Climate Model.

The scientists noted that glacier and ice sheet behavior worldwide is by far the greatest uncertainty in predicting future sea level. “We have an excellent observing network now. It’s critical that we maintain this network to continue monitoring the changes,” Velicogna said, “because the changes are proceeding very fast.”

###

About the University of California, Irvine:

Founded in 1965, UCI is the youngest member of the prestigious Association of American Universities. The campus has produced three Nobel laureates and is known for its academic achievement, premier research, innovation and anteater mascot. Led by Chancellor Howard Gillman, UCI has more than 30,000 students and offers 192 degree programs. Located in one of the world’s safest and most economically vibrant communities, it’s Orange County’s second-largest employer, contributing $4.8 billion annually to the local economy.

Media access: Radio programs/stations may, for a fee, use an on-campus ISDN line to interview UC Irvine faculty and experts, subject to availability and university approval. For more UC Irvine news, visit news.uci.edu. Additional resources for journalists may be found at communications.uci.edu/for-journalists.

Climate change was not to blame for the collapse of the Bronze Age

Scientists will have to find alternative explanations for a huge population collapse in Europe at the end of the Bronze Age as researchers prove definitively that climate change – commonly assumed to be responsible – could not have been the culprit.

Archaeologists and environmental scientists from the University of Bradford, University of Leeds, University College Cork, Ireland (UCC), and Queen’s University Belfast have shown that the changes in climate that scientists had believed coincided with the fall in population in fact occurred at least two generations later.

Their results, published this week in Proceedings of the National Academy of Sciences, show that human activity started to decline after 900 BC and fell rapidly after 800 BC, indicating a population collapse. But the climate records show that colder, wetter conditions didn’t occur until around two generations later.

Fluctuations in levels of human activity through time are reflected by the numbers of radiocarbon dates for a given period. The team used new statistical techniques to analyse more than 2,000 radiocarbon dates, taken from hundreds of archaeological sites in Ireland, to pinpoint when Europe’s Bronze Age population collapse occurred.
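As a much-simplified illustration of the underlying idea, that the density of dated material through time tracks the level of human activity, the sketch below bins a set of randomly generated "calibrated" dates and reads the drop in the resulting curve as a decline in activity. The team's actual analysis used far more sophisticated statistical methods and real, properly calibrated dates.

```python
# Simplified illustration: binning calibrated radiocarbon dates as an activity proxy.
# The dates below are randomly generated, not the study's data.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibrated dates expressed as years BC (larger = earlier).
dates_bc = np.concatenate([
    rng.uniform(900, 1200, size=1500),   # high activity before 900 BC
    rng.uniform(800, 900, size=400),     # declining activity
    rng.uniform(500, 800, size=200),     # sparse activity after the collapse
])

edges = np.arange(500, 1201, 50)                 # 50-year bins, 500-1200 BC
counts, _ = np.histogram(dates_bc, bins=edges)

# Print the earliest bins first; the bar length is a crude activity proxy.
for lo, hi, n in zip(edges[:-1][::-1], edges[1:][::-1], counts[::-1]):
    print(f"{hi:4d}-{lo:4d} BC: {'#' * (n // 20):15s} ({n} dates)")
```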

The team then analysed past climate records from peat bogs in Ireland and compared the archaeological data to these climate records to see if the dates tallied. That information was then compared with evidence of climate change across NW Europe between 1200 and 500 BC.

“Our evidence shows definitively that the population decline in this period cannot have been caused by climate change,” says Ian Armit, Professor of Archaeology at the University of Bradford, and lead author of the study.

Graeme Swindles, Associate Professor of Earth System Dynamics at the University of Leeds, added, “We found clear evidence for a rapid change in climate to much wetter conditions, which we were able to precisely pinpoint to 750BC using statistical methods.”

According to Professor Armit, social and economic stress is more likely to be the cause of the sudden and widespread fall in numbers. Communities producing bronze needed to trade over very large distances to obtain copper and tin. Control of these networks enabled the growth of complex, hierarchical societies dominated by a warrior elite. As iron production took over, these networks collapsed, leading to widespread conflict and social collapse. It may be these unstable social conditions, rather than climate change, that led to the population collapse at the end of the Bronze Age.

According to Katharina Becker, Lecturer in the Department of Archaeology at UCC, the Late Bronze Age is usually seen as a time of plenty, in contrast to an impoverished Early Iron Age. “Our results show that the rich Bronze Age artefact record does not provide the full picture and that crisis began earlier than previously thought,” she says.

“Although climate change was not directly responsible for the collapse it is likely that the poor climatic conditions would have affected farming,” adds Professor Armit. “This would have been particularly difficult for vulnerable communities, preventing population recovery for several centuries.”

The findings have significance for modern-day climate change debates which, argues Professor Armit, are often too quick to link historical climate events with changes in population.

“The impact of climate change on humans is a huge concern today as we monitor rising temperatures globally,” says Professor Armit.

“Often, in examining the past, we are inclined to link evidence of climate change with evidence of population change. Actually, if you have high quality data and apply modern analytical techniques, you get a much clearer picture and start to see the real complexity of human/environment relationships in the past.”

Re-learning how to read a genome

New research has revealed that the initial steps of reading DNA are actually remarkably similar at both the genes that encode proteins (here, on the right) and regulatory elements (on the left). The main differences seem to occur after this initial step. Gene messages are long and stable enough to ensure that genes become proteins, whereas regulatory messages are short and unstable, and are rapidly ‘cleaned up’ by the cell. – Adam Siepel, Cold Spring Harbor Laboratory

There are roughly 20,000 genes and thousands of other regulatory “elements” stored within the three billion letters of the human genome. Genes encode information that is used to create proteins, while other genomic elements help regulate the activation of genes, among other tasks. Somehow all of this coded information within our DNA needs to be read by complex molecular machinery and transcribed into messages that can be used by our cells.

Usually, reading a gene is thought to be a lot like reading a sentence. The reading machinery is guided to the start of the gene by various sequences in the DNA – the equivalent of a capital letter – and proceeds from left to right, DNA letter by DNA letter, until it reaches a sequence that forms a punctuation mark at the end. The capital letter and punctuation marks that tell the cell where, when, and how to read a gene are known as regulatory elements.

But scientists have recently discovered that genes aren’t the only messages read by the cell. In fact, many regulatory elements themselves are also read and transcribed into messages, the equivalent of pronouncing the words “capital letter,” “comma,” or “period.” Even more surprising, genes are read bi-directionally from so-called “start sites” – in effect, generating messages in both forward and backward directions.

With all these messages, how does the cell know which one encodes the information needed to make a protein? Is there something different about the reading process at genes and regulatory elements that helps avoid confusion? New research, published today in Nature Genetics, has revealed that the initial steps of the reading process itself are actually remarkably similar at both genes and regulatory elements. The main differences seem to occur after this initial step, in the length and stability of the messages. Gene messages are long and stable enough to ensure that genes become proteins, whereas regulatory messages are short and unstable, and are rapidly “cleaned up” by the cell.

To make the distinction, the team, which was co-led by CSHL Professor Adam Siepel and Cornell University Professor John Lis, looked for differences between the initial reading processes at genes and a set of regulatory elements called enhancers. “We took advantage of highly sensitive experimental techniques developed in the Lis lab to measure newly made messages in the cell,” says Siepel. “It’s like having a new, more powerful microscope for observing the process of transcription as it occurs in living cells.”

Remarkably, the team found that the reading patterns for enhancer and gene messages are highly similar in many respects, sharing a common architecture. “Our data suggests that the same basic reading process is happening at genes and these non-genic regulatory elements,” explains Siepel. “This points to a unified model for how DNA transcription is initiated throughout the genome.”

Working together, the biochemists from Lis’s laboratory and the computer jockeys from Siepel’s group carefully compared the patterns at enhancers and genes, combining their own data with vast public data sets from the NIH’s Encyclopedia of DNA Elements (ENCODE) project. “By many different measures, we found that the patterns of transcription initiation are essentially the same at enhancers and genes,” says Siepel. “Most RNA messages are rapidly targeted for destruction, but the messages at genes that are read in the right direction – those destined to be a protein – are spared from destruction.” The team was able to devise a model to mathematically explain the difference between stable and unstable transcripts, offering insight into what defines a gene. According to Siepel, “Our analysis shows that the ‘code’ for stability is, in large part, written in the DNA, at enhancers and genes alike.”
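As a purely illustrative toy model, and emphatically not the authors' analysis, the sketch below scores the DNA just downstream of a start site by counting a 5' splice-site-like motif, which in the broader literature is associated with productive, stable transcription, against the canonical "AATAAA" polyadenylation signal, which is associated with early termination and unstable transcripts. A message is called "stable" when the first signal outweighs the second; the example sequences are made up.

```python
# Toy illustration of a sequence-encoded "stability" score (not the paper's model).
SPLICE_LIKE = "GGTAAG"   # simplified 5' splice-site consensus (stabilizing in this toy)
POLYA_SIGNAL = "AATAAA"  # canonical polyadenylation signal (destabilizing in this toy)

def count_overlapping(seq: str, motif: str) -> int:
    """Count occurrences of a motif in a DNA string, allowing overlaps."""
    return sum(1 for i in range(len(seq) - len(motif) + 1) if seq[i:i + len(motif)] == motif)

def predict_stable(downstream_seq: str) -> bool:
    """Crude call: more splice-like motifs than polyA signals => 'stable' message."""
    return count_overlapping(downstream_seq, SPLICE_LIKE) > count_overlapping(downstream_seq, POLYA_SIGNAL)

# Made-up sequences for the forward and reverse messages from one start site.
forward = "ATGGCCGGTAAGTTCAGGTAAGCCATGAATAAACTG"   # two splice-like motifs, one polyA signal
reverse = "ATGAATAAACCTGAATAAAGGCATTTGGTAAGCCAT"   # two polyA signals, one splice-like motif

print("forward message stable?", predict_stable(forward))   # True  -> kept, like a gene
print("reverse message stable?", predict_stable(reverse))   # False -> rapidly degraded
```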

This work has important implications for the evolutionary origins of new genes, according to Siepel. “Because DNA is read in both directions from any start site, every one of these sites has the potential to generate two protein-coding genes with just a few subtle changes. The genome is full of potential new genes.”

This work was supported by the National Institutes of Health.

The paper, “Analysis of transcription start sites from nascent RNA identifies a unified architecture of initiation regions at mammalian promoters and enhancers,” appears online in Nature Genetics on November 10, 2014. The authors are: Leighton Core, André Martins, Charles Danko, Colin Waters, Adam Siepel, and John Lis. The paper can be obtained online at: http://dx.doi.org/10.1038/ng.3142

About Cold Spring Harbor Laboratory

Founded in 1890, Cold Spring Harbor Laboratory (CSHL) has shaped contemporary biomedical research and education with programs in cancer, neuroscience, plant biology and quantitative biology. CSHL is ranked number one in the world by Thomson Reuters for the impact of its research in molecular biology and genetics. The Laboratory has been home to eight Nobel Prize winners. Today, CSHL’s multidisciplinary scientific community is more than 600 researchers and technicians strong and its Meetings & Courses program hosts more than 12,000 scientists from around the world each year to its Long Island campus and its China center. For more information, visit http://www.cshl.edu.

Synthetic biology for space exploration

Synthetic biology could be a key to manned space exploration of Mars. – Photo courtesy of NASA

Does synthetic biology hold the key to manned space exploration of Mars and the Moon? Berkeley Lab researchers have used synthetic biology to produce an inexpensive and reliable microbial-based alternative to the world’s most effective anti-malaria drug, and to develop clean, green and sustainable alternatives to gasoline, diesel and jet fuels. In the future, synthetic biology could also be used to make manned space missions more practical.

“Not only does synthetic biology promise to make the travel to extraterrestrial locations more practical and bearable, it could also be transformative once explorers arrive at their destination,” says Adam Arkin, director of Berkeley Lab’s Physical Biosciences Division (PBD) and a leading authority on synthetic and systems biology.

“During flight, the ability to augment fuel and other energy needs, to provide small amounts of needed materials, plus renewable, nutritional and taste-engineered food, and drugs-on-demand can save costs and increase astronaut health and welfare,” Arkin says. “At an extraterrestrial base, synthetic biology could even make more effective use of the catalytic activities of diverse organisms.”

Arkin is the senior author of a paper in the Journal of the Royal Society Interface that reports on a techno-economic analysis demonstrating “the significant utility of deploying non-traditional biological techniques to harness available volatiles and waste resources on manned long-duration space missions.” The paper is titled “Towards Synthetic Biological Approaches to Resource Utilization on Space Missions.” The lead and corresponding author is Amor Menezes, a postdoctoral scholar in Arkin’s research group at the University of California (UC) Berkeley. Other co-authors are John Cumbers and John Hogan with the NASA Ames Research Center.

One of the biggest challenges to manned space missions is the expense. The NASA rule of thumb is that every unit mass of payload launched requires the support of an additional 99 units of mass, with “support” encompassing everything from fuel and oxygen to food and medicine for the astronauts. Most of the technologies now deployed or under development for providing this support are abiotic, meaning non-biological. Arkin, Menezes and their collaborators have shown that providing this support with technologies based on existing biological processes is a more than viable alternative.

“Because synthetic biology allows us to engineer biological processes to our advantage, we found in our analysis that technologies, when using common space metrics such as mass, power and volume, have the potential to provide substantial cost savings, especially in mass,” Menezes says.

In their study, the authors looked at four target areas: fuel generation, food production, biopolymer synthesis, and pharmaceutical manufacture. They showed that for a 916-day manned mission to Mars, the use of microbial biomanufacturing capabilities could reduce the mass of fuel manufacturing by 56 percent, the mass of food shipments by 38 percent, and the shipped mass to 3D-print a habitat for six by a whopping 85 percent. In addition, microbes could also completely replenish expired or irradiated stocks of pharmaceuticals, which would provide independence from unmanned re-supply spacecraft that take up to 210 days to arrive.
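The launch-mass leverage comes from combining those percentage reductions with the 99-to-1 rule of thumb quoted earlier: every kilogram of payload not shipped avoids roughly 100 kilograms at launch. The sketch below applies the article's percentages to hypothetical baseline payload masses; the baselines are invented for illustration and are not figures from the paper.

```python
# Illustrative launch-mass leverage: article percentages applied to made-up baselines.
SUPPORT_RATIO = 99                       # NASA rule of thumb: 99 units of support per unit of payload
LAUNCH_MULTIPLIER = 1 + SUPPORT_RATIO    # each payload kg implies ~100 kg at launch

# Hypothetical baseline payload masses for a long-duration Mars mission (kg).
baselines_kg = {
    "fuel manufacturing": 12_000,
    "food shipments": 10_000,
    "habitat 3D-print feedstock": 8_000,
}
reductions = {
    "fuel manufacturing": 0.56,
    "food shipments": 0.38,
    "habitat 3D-print feedstock": 0.85,
}

for item, baseline in baselines_kg.items():
    payload_saved = baseline * reductions[item]
    launch_saved = payload_saved * LAUNCH_MULTIPLIER
    print(f"{item}: {payload_saved:,.0f} kg of payload avoided "
          f"=> roughly {launch_saved:,.0f} kg less at launch")
```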

“Space has always provided a wonderful test of whether technology can meet strict engineering standards for both effect and safety,” Arkin says. “NASA has worked decades to ensure that the specifications that new technologies must meet are rigorous and realistic, which allowed us to perform up-front techno-economic analysis.”

The big advantage biological manufacturing holds over abiotic manufacturing is the remarkable ability of natural and engineered microbes to transform very simple starting substrates, such as carbon dioxide, water, biomass or minerals, into materials that astronauts on long-term missions will need. This capability should prove especially useful for future extraterrestrial settlements.

“The mineral and carbon composition of other celestial bodies is different from the bulk of Earth, but the earth is diverse with many extreme environments that have some relationship to those that might be found at possible bases on the Moon or Mars,” Arkin says. “Microbes could be used to greatly augment the materials available at a landing site, enable the biomanufacturing of food and pharmaceuticals, and possibly even modify and enrich local soils for agriculture in controlled environments.”

The authors acknowledge that much of their analysis is speculative and that their calculations show a number of significant challenges to making biomanufacturing a feasible augmentation and replacement for abiotic technologies. However, they argue that the investment to overcome these barriers offers dramatic potential payoff for future space programs.

“We’ve got a long way to go since experimental proof-of-concept work in synthetic biology for space applications is just beginning, but long-duration manned missions are also a ways off,” says Menezes. “Abiotic technologies were developed for many, many decades before they were successfully utilized in space, so of course biological technologies have some catching-up to do. However, this catching-up may not be that much, and in some cases, the biological technologies may already be superior to their abiotic counterparts.”

###

This research was supported by the National Aeronautics and Space Administration (NASA) and the University of California, Santa Cruz.

Lawrence Berkeley National Laboratory addresses the world’s most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab’s scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the U.S. Department of Energy’s Office of Science. For more, visit http://www.lbl.gov.
