Research links soil mineral surfaces to key atmospheric processes

Pictured, from left, are David Bish, Melissa Donaldson and Jonathan Raff. – Indiana University

Research by Indiana University scientists finds that soil may be a significant and underappreciated source of nitrous acid, a chemical that plays a pivotal role in atmospheric processes such as the formation of smog and determining the lifetime of greenhouse gases.

The study shows for the first time that the surface acidity of common minerals found in soil determines whether the gas nitrous acid will be released into the atmosphere. The finding could contribute to improved models for understanding and controlling air pollution, a significant public health concern.

“We find that the surfaces of minerals in the soil can be much more acidic than the overall pH of the soil would suggest,” said Jonathan Raff, assistant professor in the School of Public and Environmental Affairs and Department of Chemistry. “It’s the acidity of the soil minerals that acts as a knob or a control lever, and that determines whether nitrous acid outgasses from soil or remains as nitrite.”

The article, “Soil surface acidity plays a determining role in the atmospheric-terrestrial exchange of nitrous acid,” will be published this week in the journal Proceedings of the National Academy of Sciences. Melissa A. Donaldson, a Ph.D. student in the School of Public and Environmental Affairs, is the lead author. Co-authors are Raff and David L. Bish, the Haydn Murray Chair of Applied Clay Mineralogy in the Department of Geological Sciences.

Nitrous acid, or HONO, plays a key role in regulating atmospheric processes. Sunlight causes it to break down into nitric oxide and the hydroxyl radical, OH. The latter controls the atmospheric lifetime of gases important to air quality and climate change and initiates the chemistry leading to the formation of ground-level ozone, a primary component of smog.

Scientists have known about nitrous acid’s role in air pollution for 40 years, but they haven’t fully understood how it is produced and destroyed or how it interacts with other substances, because HONO is unstable and difficult to measure.

“Only in the last 10 years have we had the technology to study nitrous acid under environmentally relevant conditions,” Raff said.

Recent studies have shown nitrous acid to be emitted from soil in many locations. But this was unexpected because, according to basic chemistry, the reactions that release nitrous acid should take place only in extremely acidic soils, typically found in rain forests or the taiga of North America and Eurasia.

The standard method to determine the acidity of soil is to mix bulk soil with water and measure the overall pH. But the IU researchers show that the crucial factor is not overall pH but the acidity at the surface of soil minerals, especially iron oxides and aluminum oxides. At the molecular level, the water adsorbed directly to these minerals is unusually acidic and facilitates the conversion of nitrite in the soil to nitrous acid, which then volatilizes.
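
The underlying acid-base balance can be sketched with textbook chemistry. The numbers below are illustrative only, using the commonly cited pKa of nitrous acid (roughly 3.3) rather than figures from the study itself:

```latex
% Nitrite-nitrous acid equilibrium and the Henderson-Hasselbalch ratio
% (pKa ~ 3.3 is a commonly cited value; the pH values are illustrative only)
\[
\mathrm{NO_2^-} + \mathrm{H^+} \rightleftharpoons \mathrm{HONO},
\qquad
\frac{[\mathrm{HONO}]}{[\mathrm{NO_2^-}]} = 10^{\,\mathrm{p}K_a - \mathrm{pH}}
\]
% At a bulk soil pH of 7 the ratio is about 10^{-3.7}, so nearly all of the nitrogen
% stays in solution as nitrite; at a mineral-surface pH of 3 it is about 10^{0.3},
% so most of it sits as volatile HONO that can escape to the atmosphere.
```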

“With the traditional approach of calculating soil pH, we were severely underestimating nitrous acid emissions from soil,” Raff said. “I think the source is going to turn out to be more important than was previously imagined.”

The research was carried out using soil from a farm field near Columbus, Ind. But aluminum and iron oxides are ubiquitous in soil, and the researchers say the results suggest that about 70 percent of Earth’s soils could be sources of nitrous acid.

Ultimately, the research will contribute to a better understanding of how nitrous acid is produced and how it affects atmospheric processes. That in turn will improve the computer models used by the U.S. Environmental Protection Agency and other regulatory agencies to control air pollution, which the World Health Organization estimates contributes to 7 million premature deaths annually.

“With improved models, policymakers can make better judgments about the costs and benefits of regulations,” Raff said. “If we don’t get the chemistry right, we’re not going to get the right answers to our policy questions regarding air pollution.”

Earth’s most abundant mineral finally has a name

An ancient meteorite and high-energy X-rays have helped scientists conclude a half century of effort to find, identify and characterize a mineral that makes up 38 percent of the Earth’s volume.

And in doing so, a team of scientists led by Oliver Tschauner, a mineralogist at the University of Nevada, Las Vegas, clarified the definition of the Earth’s most abundant mineral – a high-density form of magnesium iron silicate, now called Bridgmanite – and defined estimated constraint ranges for its formation. Their research was performed at the Advanced Photon Source, a U.S. Department of Energy (DOE) Office of Science User Facility located at DOE’s Argonne National Laboratory.

The mineral was named after 1946 Nobel laureate and pioneer of high-pressure research Percy Bridgman. The naming does more than fix a vexing gap in scientific lingo; it also will aid our understanding of the deep Earth.

To determine the makeup of the inner layers of the Earth, scientists need to test materials under extreme pressures and temperatures. For decades, scientists have believed that a dense perovskite structure makes up 38 percent of the Earth’s volume, and that the chemical and physical properties of Bridgmanite have a large influence on how elements and heat flow through the Earth’s mantle. But because the mineral did not survive the trip to the surface, no one had been able to test and prove its existence – a requirement for receiving a name from the International Mineralogical Association.

Shock-compression that occurs in collisions of asteroid bodies in the solar system creates the same hostile conditions as in the deep Earth – roughly 2,100 degrees Celsius (3,800 degrees Fahrenheit) and pressures about 240,000 times greater than sea-level air pressure. The shock occurs fast enough to inhibit the breakdown that Bridgmanite undergoes at lower pressures, such as at the Earth’s surface. Part of the debris from these collisions falls on Earth as meteorites, with the Bridgmanite “frozen” within a shock-melt vein. Previous tests on meteorites using transmission electron microscopy caused radiation damage to the samples and yielded incomplete results.

So the team decided to try a new tactic: non-destructive micro-focused X-rays for diffraction analysis and novel fast-readout area-detector techniques. Tschauner and his colleagues from Caltech and the GeoSoilEnviroCARS, a University of Chicago-operated X-ray beamline at the APS at Argonne National Laboratory, took advantage of the X-rays’ high energy, which gives them the ability to penetrate the meteorite, and their intense brilliance, which leaves little of the radiation behind to cause damage.

The team examined a section of the highly shocked L-chondrite meteorite Tenham, which crashed in Australia in 1879. The GSECARS beamline was optimal for the study because it is one of the nation’s leading locations for conducting high-pressure research.

Bridgmanite grains are rare in the Tenham meteorite, and they are smaller than 1 micrometer in diameter. Thus the team had to use a strongly focused beam and conduct highly spatially resolved diffraction mapping until an aggregate of Bridgmanite was identified and characterized by structural and compositional analysis.

This first natural specimen of Bridgmanite came with some surprises: It contains an unexpectedly high amount of ferric iron, beyond that of synthetic samples. Natural Bridgmanite also contains much more sodium than most synthetic samples. Thus the crystal chemistry of natural Bridgmanite provides novel crystal chemical insights. This natural sample of Bridgmanite may serve as a complement to experimental studies of deep mantle rocks in the future.

Prior to this study, knowledge of Bridgmanite’s properties had been based only on synthetic samples, because the mineral remains stable only below 660 kilometers (410 miles) depth, at pressures above 230 kbar (23 GPa). When it is brought up from the inner Earth, the lower pressures transform it back into less dense minerals. Some scientists believe that some inclusions in diamonds are the marks left by Bridgmanite that changed as the diamonds were unearthed.
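
As a rough consistency check on those numbers (assuming a mean overlying-rock density of about 3,700 kg/m³, a value not given in the article), the lithostatic pressure at 660 kilometers depth works out to:

```latex
% Order-of-magnitude check: lithostatic pressure P ~ rho * g * h at 660 km depth
\[
P \;\approx\; \bar{\rho}\, g\, h
\;\approx\; 3700\ \mathrm{kg\,m^{-3}} \times 9.8\ \mathrm{m\,s^{-2}} \times 6.6\times 10^{5}\ \mathrm{m}
\;\approx\; 2.4\times 10^{10}\ \mathrm{Pa} \;\approx\; 24\ \mathrm{GPa},
\]
% in line with the quoted 230 kbar (23 GPa) and with the roughly 240,000 times
% sea-level air pressure cited above for the shock-compression conditions.
```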

The team’s results were published in the November 28 issue of the journal Science as “Discovery of bridgmanite, the most abundant mineral in Earth, in a shocked meteorite,” by O. Tschauner of the University of Nevada, Las Vegas; C. Ma, J.R. Beckett and G.R. Rossman of the California Institute of Technology in Pasadena, Calif.; and C. Prescher and V.B. Prakapenka of the University of Chicago.

This research was funded by the U.S. Department of Energy, NASA, and NSF.

New study measures methane emissions from natural gas production and offers insights into 2 large sources

A team of researchers from the Cockrell School of Engineering at The University of Texas at Austin and environmental testing firm URS reports that a small subset of natural gas wells are responsible for the majority of methane emissions from two major sources — liquid unloadings and pneumatic controller equipment — at natural gas production sites.

With natural gas production in the United States expected to continue to increase during the next few decades, there is a need for a better understanding of methane emissions during natural gas production. The study team believes this research, published Dec. 9 in Environmental Science & Technology, will help to provide a clearer picture of methane emissions from natural gas production sites.

The UT Austin-led field study closely examined two major sources of methane emissions — liquid unloadings and pneumatic controller equipment — at well pad sites across the United States. Researchers found that 19 percent of the pneumatic devices accounted for 95 percent of the emissions from pneumatic devices, and 20 percent of the wells with unloading emissions that vent to the atmosphere accounted for 65 percent to 83 percent of those emissions.

“To put this in perspective, over the past several decades, 10 percent of the cars on the road have been responsible for the majority of automotive exhaust pollution,” said David Allen, chemical engineering professor at the Cockrell School and principal investigator for the study. “Similarly, a small group of sources within these two categories are responsible for the vast majority of pneumatic and unloading emissions at natural gas production sites.”

Additionally, for pneumatic devices, the study confirmed regional differences in methane emissions first reported by the study team in 2013. The researchers found that methane emissions from pneumatic devices were highest in the Gulf Coast and lowest in the Rocky Mountains.

The study is the second phase of the team’s 2013 study, which included some of the first measurements for methane emissions taken directly at hydraulically fractured well sites. Both phases of the study involved a partnership between the Environmental Defense Fund, participating energy companies, an independent Scientific Advisory Panel and the UT Austin study team.

The unprecedented access to natural gas production facilities and equipment allowed researchers to acquire direct measurements of methane emissions.

Study and Findings on Pneumatic Devices

Pneumatic devices, which use gas pressure to control the opening and closing of valves, emit gas as they operate. These emissions are estimated to be among the larger sources of methane emissions from the natural gas supply chain. The Environmental Protection Agency reports that 477,606 pneumatic (gas actuated) devices are in use at natural gas production sites throughout the U.S.

“Our team’s previous work established that pneumatics are a major contributor to emissions,” Allen said. “Our goal here was to measure a more diverse population of wells to characterize the features of high-emitting pneumatic controllers.”

The research team measured emissions from 377 gas actuated (pneumatic) controllers at natural gas production sites and a small number of oil production sites throughout the U.S.

The researchers sampled all identifiable pneumatic controller devices at each well site, a more comprehensive approach than the random sampling previously conducted. The average methane emissions per pneumatic controller reported in this study are 17 percent higher than the average emissions per pneumatic controller in the 2012 EPA greenhouse gas national emission inventory (released in 2014), but the average from the study is dominated by a small subpopulation of the controllers. Specifically, 19 percent of controllers, with measured emission rates in excess of 6 standard cubic feet per hour (scf/h), accounted for 95 percent of emissions.
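
The skew described above can be made concrete with a short calculation. The sketch below is illustrative only: the emission rates are invented, constructed so that the devices above the 6 scf/h cut roughly reproduce the 19 percent / 95 percent split reported by the team; it is not the study’s data or code.

```python
# Illustrative sketch (invented rates, not the study's data): what fraction of
# controllers above a threshold accounts for what fraction of total emissions.

def high_emitter_share(rates_scfh, threshold_scfh=6.0):
    """Return (fraction of devices above threshold, their share of total emissions)."""
    total = sum(rates_scfh)
    high = [r for r in rates_scfh if r > threshold_scfh]
    return len(high) / len(rates_scfh), sum(high) / total

# A skewed sample of 100 hypothetical controllers, built so the devices above
# 6 scf/h roughly reproduce the 19% / 95% split quoted above.
sample = [0.1] * 60 + [0.5] * 21 + [8.0] * 10 + [25.0] * 9
frac_devices, frac_emissions = high_emitter_share(sample)
print(f"{frac_devices:.0%} of devices emit {frac_emissions:.0%} of the total")
# -> 19% of devices emit 95% of the total
```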

The high-emitting pneumatic devices are a combination of devices that are not operating as designed, are used in applications that cause them to release gas frequently or are designed to emit continuously at a high rate.

The researchers also observed regional differences in methane emission levels, with the lowest emissions per device measured in the Rocky Mountains and the highest emissions in the Gulf Coast, similar to the earlier 2013 study. At least some of the regional differences in emission rates can be attributed to the difference in controller type (continuous vent vs. intermittent vent) among regions.

Study and Findings on Liquid Unloadings

After observing variable emissions for liquid unloadings for a limited group of well types in the 2013 study, the research team made more extensive measurements and confirmed that a majority of emissions come from a small fraction of wells that vent frequently. Although it is not surprising to see some correlation between frequency of unloadings and higher annual emissions, the study’s findings indicate that wells with a high frequency of unloadings have annual emissions that are 10 or more times as great as wells that unload less frequently.

The team’s field study, which measured emissions from unloadings at 107 natural gas production wells throughout the U.S., represents the most extensive measurement of emissions associated with liquid unloadings in the scientific literature thus far.

A liquid unloading is one method used to clear wells of accumulated liquids to increase production. Because older wells typically produce less gas as they near the end of their life cycle, liquid unloadings happen more often in those wells than in newer wells. The team found a statistical correlation between the age of wells and the frequency of liquid unloadings. The researchers found that the key identifier for high-emitting wells is how many times the well unloads in a given year.

Because liquid unloadings can employ a variety of liquid lifting mechanisms, the study results also reflect differences in liquid unloadings emissions between wells that use two different mechanisms (wells with plunger lifts and wells without plunger lifts). Emissions for unloading events for wells without plunger lifts averaged 21,000 scf (standard cubic feet) to 35,000 scf. For wells with plunger lifts that vent to the atmosphere, emissions averaged 1,000 scf to 10,000 scf of methane per event. Although the emissions per event were higher for wells without plunger lifts, these wells had, on average, fewer events than wells with plunger lifts. Wells without plunger lifts averaged fewer than 10 unloading events per year, and wells with plunger lifts averaged more than 200 events per year. Overall, wells with plunger lifts were estimated to account for 70 percent of emissions from unloadings nationally.
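
A back-of-the-envelope calculation shows why the plunger-lift wells dominate despite their smaller per-event emissions. The numbers below are assumed mid-range values drawn from the averages quoted above, not per-well figures from the study:

```python
# Back-of-the-envelope sketch with assumed mid-range values drawn from the
# averages quoted above (not per-well statistics from the study):
# annual venting = unloading events per year x methane vented per event.

wells = {
    # well type: (events per year, scf of methane vented per event)
    "without plunger lift":       (8,   28_000),  # "fewer than 10" events; 21,000-35,000 scf/event
    "with plunger lift, venting": (250,  5_000),  # "more than 200" events; 1,000-10,000 scf/event
}

for well_type, (events_per_year, scf_per_event) in wells.items():
    annual_scf = events_per_year * scf_per_event
    print(f"{well_type}: ~{annual_scf:,} scf of methane per year")

# Even with smaller per-event emissions, the frequently unloading plunger-lift
# wells vent far more per year, consistent with their ~70% share nationally.
```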

Additionally, researchers found that the Rocky Mountain region, with its large number of wells with a high frequency of unloadings that vent to the atmosphere, accounts for about half of overall emissions from liquid unloadings.

The study team hopes its measurements of liquid unloadings and pneumatic devices will provide a clearer picture of methane emissions from natural gas well sites and about the relationship between well characteristics and emissions.

The study was a cooperative effort involving experts from the Environmental Defense Fund, Anadarko Petroleum Corporation, BG Group PLC, Chevron, ConocoPhillips, Encana Oil & Gas (USA) Inc., Pioneer Natural Resources Company, SWEPI LP (Shell), Statoil, Southwestern Energy and XTO Energy, a subsidiary of ExxonMobil.

The University of Texas at Austin is committed to transparency and disclosure of all potential conflicts of interest of its researchers. Lead researcher David Allen serves as chair of the Environmental Protection Agency’s Science Advisory Board and in this role is a paid Special Governmental Employee. He is also a journal editor for the American Chemical Society and has served as a consultant for multiple companies, including Eastern Research Group, ExxonMobil and the Research Triangle Institute. He has worked on other research projects funded by a variety of governmental, nonprofit and private sector sources including the National Science Foundation, the Environmental Protection Agency, the Texas Commission on Environmental Quality, the American Petroleum Institute and an air monitoring and surveillance project that was ordered by the U.S. District Court for the Southern District of Texas. Adam Pacsi and Daniel Zavala-Araiza, who were graduate students at The University of Texas at the time this work was done, have accepted positions at Chevron Energy Technology Company and the Environmental Defense Fund, respectively.

Financial support for this work was provided by the Environmental Defense Fund (EDF), Anadarko Petroleum Corporation, BG Group PLC, Chevron, ConocoPhillips, Encana Oil & Gas (USA) Inc., Pioneer Natural Resources Company, SWEPI LP (Shell), Statoil, Southwestern Energy and XTO Energy, a subsidiary of ExxonMobil.

Major funding for the EDF’s 30-month methane research series, including their portion of the University of Texas study, is provided for by the following individuals and foundations: Fiona and Stan Druckenmiller, the Heising-Simons Foundation, Bill and Susan Oberndorf, Betsy and Sam Reeves, the Robertson Foundation, TomKat Charitable Trust and the Walton Family Foundation.

Technology-dependent emissions of gas extraction in the US

The KIT measurement instrument on board a minivan directly measures atmospheric emissions on site with high temporal resolution. – Photo: F. Geiger/KIT

Not all boreholes are the same. Scientists of the Karlsruhe Institute of Technology (KIT) used mobile measurement equipment to analyze gaseous compounds emitted during the extraction of oil and natural gas in the USA. For the first time, organic pollutants emitted during a fracking process were measured at high temporal resolution. The highest values measured exceeded typical mean values in urban air by a factor of one thousand, as reported in the journal Atmospheric Chemistry and Physics (DOI: 10.5194/acp-14-10977-2014).

Emissions of trace gases from oil and gas fields were studied by the KIT researchers in the USA (Utah and Colorado) together with US institutes. Background concentrations and the waste gas plumes of individual extraction plants and fracking facilities were analyzed. The air quality measurements, which lasted several weeks, took place as part of the “Uintah Basin Winter Ozone Study” coordinated by the National Oceanic and Atmospheric Administration (NOAA).

The KIT measurements focused on health-damaging aromatic hydrocarbons in air, such as carcinogenic benzene. Maximum concentrations were found in the waste gas plumes of boreholes. Some extraction plants emitted up to about a hundred times more benzene than others. The highest values, of some milligrams of benzene per cubic meter of air, were measured downwind of an open fracking facility, where returning drilling fluid is stored in open tanks and basins. Oil and gas extraction plants and plants with closed production processes performed much better. In Germany, benzene concentration at the workplace is subject to strict limits: the Federal Emission Control Ordinance gives an annual benzene limit of five micrograms per cubic meter for the protection of human health, which the values now measured at the open fracking facility in the US exceed by a factor of about one thousand. The researchers published their results in the journal Atmospheric Chemistry and Physics (ACP).
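
The factor of one thousand follows from a simple unit conversion. The measured concentration used below is an assumed example within the “some milligrams per cubic meter” range, not a specific figure from the paper:

```python
# Quick unit check of the "factor of about one thousand" quoted above.
# The measured concentration is an assumed example, not a figure from the paper.

limit_ug_per_m3 = 5.0                    # German annual benzene limit (micrograms per cubic meter)
measured_mg_per_m3 = 5.0                 # assumed example measurement downwind of the open facility
measured_ug_per_m3 = measured_mg_per_m3 * 1000.0

print(f"measured / limit = {measured_ug_per_m3 / limit_ug_per_m3:,.0f}x")  # -> 1,000x
```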

“Characteristic emissions of trace gases are encountered everywhere. These are symptomatic of oil and gas extraction. But the values measured for different technologies differ considerably,” explains Felix Geiger of the Institute of Meteorology and Climate Research (IMK) of KIT, one of the first authors of the study. By means of closed collection tanks and so-called vapor capture systems, for instance, the gases released during operation can be collected and reduced significantly.

“The gas fields in the sparsely populated areas of North America are a good showcase for estimating the range of impacts of different extraction and fracking technologies,” explains Professor Johannes Orphal, Head of IMK. “In densely populated Germany, framework conditions are much stricter and much more attention is paid to reducing and monitoring emissions.”

Fracking is increasingly discussed as a technology to extract fossil resources from unconventional deposits. Hydraulic fracturing of suitable shale layers opens up the fossil fuels stored there and makes them accessible for economically efficient use. For this purpose, boreholes are drilled into these rock formations and then subjected to high pressure using large amounts of water and auxiliary materials such as sand, cement and chemicals. The oil or gas can flow to the surface through the opened microstructures in the rock. Typically, the return flow of the aqueous fracking liquid with its dissolved oil and gas constituents lasts several days before the production phase proper of purer oil or natural gas begins. This return flow is collected and reused until it finally has to be disposed of. Air pollution mainly depends on how this return flow is handled at the extraction plant, and currently practiced fracking technologies differ considerably in this respect.

For the first time, the resulting local atmospheric emissions have now been studied at high temporal resolution, so that emissions can be assigned directly to the different sections of an extraction plant. For the measurements, KIT’s newly developed, compact and highly sensitive instrument, a so-called proton transfer reaction mass spectrometer (PTR-MS), was installed on board a minivan and driven to within a few tens of meters of the different extraction points. In this way, the waste gas plumes of individual extraction sources and fracking processes were studied in detail.

Warneke, C., Geiger, F., Edwards, P. M., Dube, W., Pétron, G., Kofler, J., Zahn, A., Brown, S. S., Graus, M., Gilman, J. B., Lerner, B. M., Peischl, J., Ryerson, T. B., de Gouw, J. A., and Roberts, J. M.: Volatile organic compound emissions from the oil and natural gas industry in the Uintah Basin, Utah: oil and gas well pad emissions compared to ambient air composition, Atmos. Chem. Phys., 14, 10977-10988, doi:10.5194/acp-14-10977-2014, 2014.

Antarctica: Heat comes from the deep

The Antarctic ice sheet is a giant water reservoir. The ice cap on the southern continent is on average 2,100 meters thick and contains about 70 percent of the world’s fresh water. If this ice mass were to melt completely, it could raise the global sea level by 60 meters. Scientists therefore carefully observe changes in the Antarctic. In the renowned international journal Science, researchers from Germany, the UK, the US and Japan are now publishing data showing that water temperatures, particularly in the shallow shelf seas of West Antarctica, are rising. “There are many large glaciers in the area. The elevated temperatures have accelerated the melting and sliding of these glaciers in recent decades and there are no indications that this trend is changing,” says the lead author of the study, Dr. Sunke Schmidtko from GEOMAR Helmholtz Centre for Ocean Research Kiel.

For their study, he and his colleagues from the University of East Anglia, the California Institute of Technology and Hokkaido University (Japan) evaluated all oceanographic data from the waters around Antarctica from 1960 to 2014 that were available in public databases. These data show that five decades ago, the water masses in the West Antarctic shelf seas were already warmer than in other parts of Antarctica, for example, the Weddell Sea. However, the temperature difference is not constant. Since 1960, temperatures in the West Antarctic Amundsen Sea and the Bellingshausen Sea have been rising. “Based on the data, we were able to see that this shelf process is induced from the open ocean,” says Dr. Schmidtko.

At greater depths along the continental slope around Antarctica, water masses with temperatures of 0.5 to 1.5°C (33-35°F) predominate. These temperatures are very warm for Antarctic conditions. “These waters have warmed in West Antarctica over the past 50 years. And they are significantly shallower than 50 years ago,” says Schmidtko. Especially in the Amundsen Sea and the Bellingshausen Sea, they now increasingly spill onto the shelf and warm it.

“These are the regions in which accelerated glacial melting has been observed for some time. We show that oceanographic changes over the past 50 years have probably caused this melting. If the water continues to warm, the increased penetration of warmer water masses onto the shelf will likely further accelerate this process, with an impact on the rate of global sea level rise,” explains Professor Karen Heywood from the University of East Anglia.

The scientists also draw attention to the rise of warm water masses in the southwestern Weddell Sea. Here, very cold temperatures (below minus 1.5°C, or 29°F) still prevail on the shelf, and large-scale melting of the ice shelves has not yet been observed. If the shoaling of warm water masses continues, however, major environmental changes with dramatic consequences are expected for the Filchner and Ronne ice shelves, too. For the first time, glaciers outside West Antarctica could experience enhanced melting from below.

To what extent the diverse biology of the Southern Ocean is influenced by the observed changes is not fully understood. The shelf areas include spawning areas for the Antarctic krill, a shrimp species widespread in the Southern Ocean, which plays a key role in the Antarctic food chain. Research results have shown that spawning cycles could change in warmer conditions. A final assessment of the impact has not yet been made.

The exact reasons for the warming and shoaling of the warm water masses have not yet been fully resolved. “We suspect that they are related to large-scale variations in wind systems over the southern hemisphere. But which processes specifically play a role must be evaluated in more detail,” says Dr. Schmidtko.

ASU, IBM move ultrafast, low-cost DNA sequencing technology a step closer to reality

Led by ASU Regents’ professor Stuart Lindsay, a team of scientists from Arizona State University’s Biodesign Institute and IBM’s T.J. Watson Research Center have developed a prototype DNA reader that could make whole genome profiling an everyday practice in medicine. – Biodesign Institute at Arizona State University

A team of scientists from Arizona State University’s Biodesign Institute and IBM’s T.J. Watson Research Center have developed a prototype DNA reader that could make whole genome profiling an everyday practice in medicine.

“Our goal is to put cheap, simple and powerful DNA and protein diagnostic devices into every single doctor’s office,” said Stuart Lindsay, an ASU physics professor and director of Biodesign’s Center for Single Molecule Biophysics. Such technology could help usher in the age of personalized medicine, where information from an individual’s complete DNA and protein profiles could be used to design treatments specific to their individual makeup.

Such game-changing technology is needed to make routine genome sequencing a reality. The current hurdle is to do so for less than $1,000, an amount for which insurance companies are more likely to provide reimbursement.

In their latest research breakthrough, the team fashioned a tiny DNA reading device thousands of times smaller than the width of a single human hair.

The device is sensitive enough to distinguish the individual chemical bases of DNA (known by their abbreviated letters of A, C, T or G) when they are pumped past the reading head.

Proof of concept was demonstrated using solutions of the individual DNA bases, which gave clear signals at tiny DNA concentrations (nanomolar levels), outperforming today’s state-of-the-art, so-called next-generation DNA sequencing technology.

Making the solid-state device is just like making a sandwich, only with ultra-high-tech semiconductor tools used to slice and stack the atomic-sized layers of meats and cheeses like the butcher shop’s block. The secret is to slice and stack the layers just so, to turn the chemical information of the DNA into a change in the electrical signal.

First, they made a “sandwich” composed of two metal electrodes separated by a two-nanometer thick insulating layer (a single nanometer is 10,000 times smaller than a human hair), made by using a semiconductor technology called atomic layer deposition.

Then a hole is cut through the sandwich: DNA bases inside the hole are read as they pass the gap between the metal layers.

“The technology we’ve developed might just be the first big step in building a single-molecule sequencing device based on ordinary computer chip technology,” said Lindsay.

“Previous attempts to make tunnel junctions for reading DNA had one electrode facing another across a small gap between the electrodes, and the gaps had to be adjusted by hand. This made it impossible to use computer chip manufacturing methods to make devices,” said Lindsay.

“Our approach of defining the gap using a thin layer of dielectric (insulating) material between the electrodes and exposing this gap by drilling a hole through the layers is much easier,” he said. “What is more, the recognition tunneling technology we have developed allows us to make a relatively large gap (of two nanometers) compared to the much smaller gaps required previously for tunnel current read-out (which were less than a single nanometer wide). The ability to use larger gaps for tunneling makes the manufacture of the device much easier and gives DNA molecules room to pass the electrodes.”

Specifically, when a current is passed through the nanopore, each chemical base (A, C, T or G) within the DNA molecule causes a distinctive spike in the current as the DNA passes through. A few more modifications are made to polish and finish the device manufacturing.

The team encountered considerable device-to-device variation, so calibration will be needed to make the technology more robust. And the final big step – of reducing the diameter of the hole through the device to that of a single DNA molecule – has yet to be taken.

But overall, the research team has developed a scalable manufacturing process to make a device that can work reliably for hours at a time, identifying each of the DNA chemical bases as they flow through the two-nanometer gap.

The research team is also working on modifying the technique to read other single molecules, which could be used in an important technology for drug development.

The latest developments could also bring in big business for ASU. Lindsay, dubbed a “serial entrepreneur” by the media, has a new spinout venture, called Recognition Analytix, that hopes to follow the success of Molecular Imaging Corp, a similar instrument company he co-founded in 1993, and sold to Agilent Technologies in 2005.

Climate change was not to blame for the collapse of the Bronze Age

Scientists will have to find alternative explanations for a huge population collapse in Europe at the end of the Bronze Age as researchers prove definitively that climate change – commonly assumed to be responsible – could not have been the culprit.

Archaeologists and environmental scientists from the University of Bradford, University of Leeds, University College Cork, Ireland (UCC), and Queen’s University Belfast have shown that the changes in climate that scientists believed to coincide with the fall in population in fact occurred at least two generations later.

Their results, published this week in Proceedings of the National Academy of Sciences, show that human activity starts to decline after 900 BC, and falls rapidly after 800 BC, indicating a population collapse. But the climate records show that colder, wetter conditions didn’t occur until around two generations later.

Fluctuations in levels of human activity through time are reflected by the numbers of radiocarbon dates for a given period. The team used new statistical techniques to analyse more than 2000 radiocarbon dates, taken from hundreds of archaeological sites in Ireland, to pinpoint the precise dates that Europe’s Bronze Age population collapse occurred.
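
The general “dates as data” idea can be illustrated with a minimal sketch: sum a simple probability density for each radiocarbon date and read the resulting curve as a proxy for activity. This is only a toy illustration with invented dates and plain Gaussian densities; the team’s actual analysis used proper calibration and more sophisticated statistical techniques.

```python
# Toy illustration of the "dates as data" idea: sum a simple Gaussian density for
# each radiocarbon date and read the curve as a proxy for human activity.
# Dates are invented and uncalibrated; the study used proper calibration and
# more sophisticated statistics.

import math

def summed_probability(dates, years):
    """dates: list of (year_BC, sigma_years); years: iterable of years BC."""
    curve = []
    for y in years:
        density = sum(
            math.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))
            for mu, sigma in dates
        )
        curve.append(density)
    return curve

# Hypothetical dates: dense before 900 BC, sparse afterwards.
dates = [(1150, 40), (1100, 35), (1020, 30), (980, 40), (950, 30), (840, 50), (700, 60)]
years = list(range(1300, 500, -10))          # 1300 BC down to 510 BC
activity = summed_probability(dates, years)
peak_year = max(zip(years, activity), key=lambda pair: pair[1])[0]
print(f"Toy activity proxy peaks around {peak_year} BC")
```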

The team then analysed past climate records from peat bogs in Ireland and compared the archaeological data to these climate records to see if the dates tallied. That information was then compared with evidence of climate change across NW Europe between 1200 and 500 BC.

“Our evidence shows definitively that the population decline in this period cannot have been caused by climate change,” says Ian Armit, Professor of Archaeology at the University of Bradford, and lead author of the study.

Graeme Swindles, Associate Professor of Earth System Dynamics at the University of Leeds, added, “We found clear evidence for a rapid change in climate to much wetter conditions, which we were able to precisely pinpoint to 750 BC using statistical methods.”

According to Professor Armit, social and economic stress is more likely to be the cause of the sudden and widespread fall in numbers. Communities producing bronze needed to trade over very large distances to obtain copper and tin. Control of these networks enabled the growth of complex, hierarchical societies dominated by a warrior elite. As iron production took over, these networks collapsed, leading to widespread conflict and social collapse. It may be these unstable social conditions, rather than climate change, that led to the population collapse at the end of the Bronze Age.

According to Katharina Becker, Lecturer in the Department of Archaeology at UCC, the Late Bronze Age is usually seen as a time of plenty, in contrast to an impoverished Early Iron Age. “Our results show that the rich Bronze Age artefact record does not provide the full picture and that crisis began earlier than previously thought,” she says.

“Although climate change was not directly responsible for the collapse it is likely that the poor climatic conditions would have affected farming,” adds Professor Armit. “This would have been particularly difficult for vulnerable communities, preventing population recovery for several centuries.”

The findings have significance for modern day climate change debates which, argues Professor Armit, are often too quick to link historical climate events with changes in population.

“The impact of climate change on humans is a huge concern today as we monitor rising temperatures globally,” says Professor Armit.

“Often, in examining the past, we are inclined to link evidence of climate change with evidence of population change. Actually, if you have high quality data and apply modern analytical techniques, you get a much clearer picture and start to see the real complexity of human/environment relationships in the past.”

Adjusting Earth’s thermostat, with caution

David Keith, Gordon McKay Professor of Applied Physics at Harvard SEAS and professor of public policy at Harvard Kennedy School, coauthored several papers on climate engineering with colleagues at Harvard and beyond. – Eliza Grinnell, SEAS Communications.

A vast majority of scientists believe that the Earth is warming at an unprecedented rate and that human activity is almost certainly the dominant cause. But on the topics of response and mitigation, there is far less consensus.

One of the most controversial propositions for slowing the increase in temperatures here on Earth is to manipulate the atmosphere above. Specifically, some scientists believe it should be possible to offset the warming effect of greenhouse gases by reflecting more of the sun’s energy back into space.

The potential risks–and benefits–of solar radiation management (SRM) are substantial. So far, however, all of the serious testing has been confined to laboratory chambers and theoretical models. While those approaches are valuable, they do not capture the full range of interactions among chemicals, the impact of sunlight on these reactions, or multiscale variations in the atmosphere.

Now, a team of researchers from the Harvard School of Engineering and Applied Sciences (SEAS) has outlined how a small-scale “stratospheric perturbation experiment” could work. By proposing, in detail, a way to take the science of geoengineering to the skies, they hope to stimulate serious discussion of the practice by policymakers and scientists.

Ultimately, they say, informed decisions on climate policy will need to rely on the best information available from controlled and cautious field experiments.

The paper is among several published today in a special issue of the Philosophical Transactions of the Royal Society A that examine the nuances, the possible consequences, and the current state of scientific understanding of climate engineering. David Keith, whose work features prominently in the issue, is Gordon McKay Professor of Applied Physics at Harvard SEAS and a professor of public policy at Harvard Kennedy School. His coauthors on the topic of field experiments include James Anderson, Philip S. Weld Professor of Applied Chemistry at Harvard SEAS and in Harvard’s Department of Chemistry and Chemical Biology; and other colleagues at Harvard SEAS.

“The idea of conducting experiments to alter atmospheric processes is justifiably controversial, and our experiment, SCoPEx, is just a proposal,” Keith emphasizes. “It will continue to evolve until it is funded, and we will only move ahead if the funding is substantially public, with a formal approval process and independent risk assessment.”

With so much at stake, Keith believes transparency is essential. But the science of climate engineering is also widely misunderstood.

“People often claim that you cannot test geoengineering except by doing it at full scale,” says Keith. “This is nonsense. It is possible to do a small-scale test, with quite low risks, that measures key aspects of the risk of geoengineering–in this case the risk of ozone loss.”

Such controlled experiments, targeting key questions in atmospheric chemistry, Keith says, would reduce the number of “unknown unknowns” and help to inform science-based policy.

The experiment Keith and Anderson’s team is proposing would involve only a tiny amount of material–a few hundred grams of sulfuric acid, an amount Keith says is roughly equivalent to what a typical commercial aircraft releases in a few minutes while flying in the stratosphere. It would provide important insight into how much SRM would reduce radiative heating, the concentration of water vapor in the stratosphere, and the processes that determine water vapor transport–which affects the concentration of ozone.

In addition to the experiment proposed in that publication, another paper coauthored by Keith and collaborators at the California Institute of Technology (CalTech) collects and reviews a number of other experimental methods, to demonstrate the diversity of possible approaches.

“There is a wide range of experiments that could be done that would significantly reduce our uncertainty about the risks and effectiveness of solar geoengineering,” Keith says. “Many could be done with very small local risks.”

A third paper explores how solar geoengineering might actually be implemented, if an international consensus were reached, and suggests that a gradual implementation that aims to limit the rate of climate change would be a plausible strategy.

“Many people assume that solar geoengineering would be used to suddenly restore the Earth’s climate to preindustrial temperatures,” says Keith, “but it’s very unlikely that it would make any policy sense to try to do so.”

Keith also points to another paper in the Royal Society’s special issue–one by Andy Parker at the Belfer Center for Science and International Affairs at Harvard Kennedy School. Parker’s paper furthers the discussion of governance and good practices in geoengineering research in the absence of both national legislation and international agreement, a topic raised last year in Science by Keith and Edward Parson of UCLA.

“The scientific aspects of geoengineering research must, by necessity, advance in tandem with a thorough discussion of the social science and policy,” Keith warns. “Of course, these risks must also be weighed against the risk of doing nothing.”

For further information, see: “Stratospheric controlled perturbation experiment (SCoPEx): A small-scale experiment to improve understanding of the risks of solar geoengineering” doi: 10.1098/rsta.2014.0059

By John Dykema, project scientist at Harvard SEAS; David Keith, Gordon McKay Professor of Applied Physics at Harvard SEAS and professor of public policy at Harvard Kennedy School; James Anderson, Philip S. Weld Professor of Applied Chemistry at Harvard SEAS and in Harvard’s Department of Chemistry and Chemical Biology; and Debra Weisenstein, research management specialist at Harvard SEAS.

“Field experiments on solar geoengineering: Report of a workshop exploring a representative research portfolio”
doi: 10.1098/rsta.2014.0175

By David Keith; Riley Duren, chief systems engineer at the NASA Jet Propulsion Laboratory at CalTech; and Douglas MacMartin, senior research associate and lecturer at CalTech.

“Solar geoengineering to limit the rate of temperature change”
doi: 10.1098/rsta.2014.0134

By Douglas MacMartin; Ken Caldeira, senior scientist at the Carnegie Institution for Science and professor of environmental Earth system sciences at Stanford University; and David Keith.

“Governing solar geoengineering research as it leaves the laboratory”
doi: 10.1098/rsta.2014.0173

By Andy Parker, associate of the Belfer Center at Harvard Kennedy School.

Good vibrations give electrons excitations that rock an insulator to go metallic

Vanadium atoms (blue) have unusually large thermal vibrations that stabilize the metallic state of a vanadium dioxide crystal. Red depicts oxygen atoms. – ORNL

For more than 50 years, scientists have debated what turns particular oxide insulators, in which electrons barely move, into metals, in which electrons flow freely. Some scientists sided with Nobel Prize-winning physicist Nevill Mott in thinking direct interactions between electrons were the key. Others believed, as did physicist Rudolf Peierls, that atomic vibrations and distortions trumped all. Now, a team led by the Department of Energy’s Oak Ridge National Laboratory has made an important advancement in understanding a classic transition-metal oxide, vanadium dioxide, by quantifying the thermodynamic forces driving the transformation. The results are published in the Nov. 10 advance online issue of Nature.

“We proved that phonons–the vibrations of the atoms–provide the driving force that stabilizes the metal phase when the material is heated,” said John Budai, who co-led the study with Jiawang Hong, a colleague in ORNL’s Materials Science and Technology Division.

Hong added, “This insight into how lattice vibrations can control phase stability in transition-metal oxides is needed to improve the performance of many multifunctional materials, including colossal magnetoresistors, superconductors and ferroelectrics.”

Today vanadium dioxide improves recording and storage media, strengthens structural alloys, and colors synthetic jewels. Tomorrow it may find its way into nanoscale actuators for switches, optical shutters that turn opaque on satellites to thwart intruding signals, and field-effect transistors to manipulate electronics in semiconductors and spintronics in devices that manipulate magnetic spin.

The next application we see may be energy-efficient “smart windows” coated with vanadium dioxide peppered with an impurity to control the transmission of heat and light. On cool days, windows would be transparent insulators that let in heat. On warm days, they would turn shiny and reflect the outside heat.

Complete thermodynamics


Materials are stabilized by a competition between internal energy and entropy (a measure of disorder that increases with temperature). While Mott and Peierls focused on energy, the ORNL-led team focused on the entropy.

Before the ORNL-led experiments, scientists knew the total amount of heat absorbed during vanadium dioxide’s transition from insulator to metal. But they didn’t know how much entropy was due to electrons and how much was due to atomic vibrations.

“This is the first complete description of thermodynamic forces controlling this archetypical metal-insulator transition,” said Budai.

The team’s current accomplishment was made possible by a novel combination of X-ray and neutron scattering tools, developed within the decade, that enabled lattice dynamics measurements and a calculation technique that Olle Hellman of Linköping University in Sweden recently developed to capture anharmonicity (a measure of nonlinearity in bond forces between atoms). It’s especially important that the calculations, performed by Hong, agree well with experiments because they can now be used to make new predictions for other materials.

The ORNL team came up with the idea to measure “incoherent” neutron scattering (each atom scatters independently) at ORNL’s Spallation Neutron Source (SNS) to determine the phonon spectra at many temperatures, and to measure coherent inelastic and diffuse X-ray scattering at Argonne National Laboratory’s Advanced Photon Source (APS) to probe collective vibrations in pristine crystals. Neutron measurements were enabled by the SNS’s large neutron flux, and X-ray measurements benefited from the high-resolution enabled by the high APS brightness. SNS and APS are DOE Office of Science User Facilities.

Among ORNL collaborators, Robert McQueeney made preliminary X-ray measurements and Lynn Boatner grew crystals for the experiment. Eliot Specht mapped phonon dispersions with diffuse X-ray scattering. Michael Manley and Olivier Delaire determined the phonon spectra using inelastic neutron scattering. Postdoctoral researcher Chen Li helped make experimental measurements and provided neutron expertise. Douglas Abernathy provided expertise with experimental beam lines, as did Argonne’s Ayman Said, Bogdan Leu and Jonathan Tischler.

Their measurements revealed that phonons with unusually large atomic vibrations and strong anharmonicity are responsible for about two-thirds of the total heat that each atom transfers during the lattice’s transition to a metallic phase.

“The entropy of the lattice vibrations competes against and overcomes the electronic energy, and that’s why the metallic phase is stabilized at high temperatures in vanadium dioxide,” Budai summed up. “Using comprehensive measurements and new calculations, we’re the first to close this gap and present convincing arguments for the dominant influence of low-energy, strongly anharmonic phonons.”

Atomic underpinnings


The findings reveal that the vanadium-dioxide lattice is anharmonic in the metal state. Think of atoms connected by bonds in a lattice as masses connected by springs. Pull on a mass and let go; it bounces. If the force is proportional to the distance a mass is pulled, the interaction is harmonic. Vanadium dioxide’s anharmonicity greatly complicates the way the lattice wiggles upon heating.

“A material that only had harmonic connections between atoms would have no thermal expansion; if you heat it up, it would stay the same size,” said Budai. Most materials, it turns out, are somewhat anharmonic. Metals, for example, expand when heated.
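
The textbook way to see this is to compare a purely harmonic bond potential with one carrying a small cubic (anharmonic) correction; the classical result for the mean bond stretch below is a standard illustration, not a calculation from the ORNL study:

```latex
% Harmonic potential: a symmetric well, so the average stretch stays zero at any
% temperature and the material does not expand. Adding a cubic anharmonic term
% skews the well, so the average stretch grows with temperature, to leading order
% in g in the classical limit -- which is thermal expansion.
\[
V_{\mathrm{harm}}(x) = \tfrac{1}{2} k x^{2}
\;\Rightarrow\; \langle x \rangle = 0,
\qquad
V_{\mathrm{anh}}(x) = \tfrac{1}{2} k x^{2} - g x^{3}
\;\Rightarrow\; \langle x \rangle \approx \frac{3 g k_{B} T}{k^{2}}
\]
```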

When heated to 340 kelvin (just above room temperature), vanadium dioxide turns from insulator to metal. Below 340 K, its lowest-energy lattice configuration is akin to a leaning cardboard box. Above 340 K, where entropy due to phonon vibrations dominates, its preferred state has all bond angles at 90 degrees. The phase change is fully reversible, so cooling a metal below the transition temperature reverts it to an insulator, and heating it past this point turns it metallic.

In metallic vanadium dioxide, each vanadium atom has one electron that is free to roam. In contrast, in insulating vanadium dioxide, that electron gets trapped in a chemical bond that forms vanadium dimers. “For understanding the atomic mechanisms, we needed theory,” Budai said.

That’s where Hong, a theorist at ORNL’s Center for Accelerating Materials Modeling, made critical contributions with quantum molecular dynamics calculations. He ran large-scale simulations at the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility at Lawrence Berkeley National Laboratory, using 1 million computing-core hours to simulate the lattice dynamics of metal and insulator phases of vanadium dioxide. All three types of experiments agreed well with Hong’s simulations. In addition, his calculation further reveals how phonon and electron contributions compete in the different phases.

Predicting new materials


“The theory not only provides us deep understanding of the experimental observations and reveals fundamental principles behind them,” said Hong, “but also gives us predictive modeling, which will accelerate fundamental and technological innovation by giving efficient strategies to design new materials with remarkable properties.”

Many other materials besides vanadium dioxide show a metal-to-insulator transition; however, the detailed role of lattice vibrations in controlling phase stability remains largely unknown. In future studies of other transition metal oxides, the researchers will continue to investigate the impact of anharmonic phonons on physical properties such as electrical conductivity and thermal transport. This fundamental research will help guide the development of improved energy-efficient materials.

Synthetic biology for space exploration

Synthetic biology could be a key to manned space exploration of Mars. – Photo courtesy of NASA

Does synthetic biology hold the key to manned space exploration of Mars and the Moon? Berkeley Lab researchers have used synthetic biology to produce an inexpensive and reliable microbial-based alternative to the world’s most effective anti-malaria drug, and to develop clean, green and sustainable alternatives to gasoline, diesel and jet fuels. In the future, synthetic biology could also be used to make manned space missions more practical.

“Not only does synthetic biology promise to make the travel to extraterrestrial locations more practical and bearable, it could also be transformative once explorers arrive at their destination,” says Adam Arkin, director of Berkeley Lab’s Physical Biosciences Division (PBD) and a leading authority on synthetic and systems biology.

“During flight, the ability to augment fuel and other energy needs, to provide small amounts of needed materials, plus renewable, nutritional and taste-engineered food, and drugs-on-demand can save costs and increase astronaut health and welfare,” Arkin says. “At an extraterrestrial base, synthetic biology could even make more effective use of the catalytic activities of diverse organisms.”

Arkin is the senior author of a paper in the Journal of the Royal Society Interface that reports on a techno-economic analysis demonstrating “the significant utility of deploying non-traditional biological techniques to harness available volatiles and waste resources on manned long-duration space missions.” The paper is titled “Towards Synthetic Biological Approaches to Resource Utilization on Space Missions.” The lead and corresponding author is Amor Menezes, a postdoctoral scholar in Arkin’s research group at the University of California (UC) Berkeley. Other co-authors are John Cumbers and John Hogan with the NASA Ames Research Center.

One of the biggest challenges to manned space missions is the expense. The NASA rule-of-thumb is that every unit mass of payload launched requires the support of an additional 99 units of mass, with “support” encompassing everything from fuel to oxygen to food and medicine for the astronauts, etc. Most of the current technologies now deployed or under development for providing this support are abiotic, meaning non-biological. Arkin, Menezes and their collaborators have shown that providing this support with technologies based on existing biological processes is a more than viable alternative.

“Because synthetic biology allows us to engineer biological processes to our advantage, we found in our analysis that technologies, when using common space metrics such as mass, power and volume, have the potential to provide substantial cost savings, especially in mass,” Menezes says.

In their study, the authors looked at four target areas: fuel generation, food production, biopolymer synthesis, and pharmaceutical manufacture. They showed that for a 916-day manned mission to Mars, the use of microbial biomanufacturing capabilities could reduce the mass of fuel manufacturing by 56 percent, the mass of food shipments by 38 percent, and the shipped mass to 3D-print a habitat for six by a whopping 85 percent. In addition, microbes could also completely replenish expired or irradiated stocks of pharmaceuticals, which would provide independence from unmanned resupply spacecraft that take up to 210 days to arrive.
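
Treating the 99-to-1 figure as a simple multiplier gives a feel for what such reductions are worth at launch. The sketch below uses an assumed payload quantity, not a value from the paper:

```python
# Rough illustration (assumed payload numbers, not values from the paper) of how
# the quoted reductions translate into launch mass, treating the 99-to-1 rule of
# thumb as a simple multiplier: 1 kg of payload <-> ~100 kg launched.

SUPPORT_UNITS_PER_PAYLOAD_UNIT = 99

def launch_mass_saved(payload_kg, reduction_fraction):
    """Total launch mass avoided when a payload item shrinks by the given fraction."""
    saved_payload_kg = payload_kg * reduction_fraction
    return saved_payload_kg * (1 + SUPPORT_UNITS_PER_PAYLOAD_UNIT)

# Hypothetical 1,000 kg of shipped food, reduced 38 percent by biomanufacturing:
print(f"{launch_mass_saved(1000, 0.38):,.0f} kg of launch mass avoided")  # -> 38,000 kg
```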

“Space has always provided a wonderful test of whether technology can meet strict engineering standards for both effect and safety,” Arkin says. “NASA has worked decades to ensure that the specifications that new technologies must meet are rigorous and realistic, which allowed us to perform up-front techno-economic analysis.”

The big advantage biological manufacturing holds over abiotic manufacturing is the remarkable ability of natural and engineered microbes to transform very simple starting substrates, such as carbon dioxide, water, biomass or minerals, into materials that astronauts on long-term missions will need. This capability should prove especially useful for future extraterrestrial settlements.

“The mineral and carbon composition of other celestial bodies is different from the bulk of Earth, but the Earth is diverse, with many extreme environments that have some relationship to those that might be found at possible bases on the Moon or Mars,” Arkin says. “Microbes could be used to greatly augment the materials available at a landing site, enable the biomanufacturing of food and pharmaceuticals, and possibly even modify and enrich local soils for agriculture in controlled environments.”

The authors acknowledge that much of their analysis is speculative and that their calculations show a number of significant challenges to making biomanufacturing a feasible augmentation and replacement for abiotic technologies. However, they argue that the investment to overcome these barriers offers dramatic potential payoff for future space programs.

“We’ve got a long way to go since experimental proof-of-concept work in synthetic biology for space applications is just beginning, but long-duration manned missions are also a ways off,” says Menezes. “Abiotic technologies were developed for many, many decades before they were successfully utilized in space, so of course biological technologies have some catching-up to do. However, this catching-up may not be that much, and in some cases, the biological technologies may already be superior to their abiotic counterparts.”


This research was supported by the National Aeronautics and Space Administration (NASA) and the University of California, Santa Cruz.

Lawrence Berkeley National Laboratory addresses the world’s most urgent scientific challenges by advancing sustainable energy, protecting human health, creating new materials, and revealing the origin and fate of the universe. Founded in 1931, Berkeley Lab’s scientific expertise has been recognized with 13 Nobel prizes. The University of California manages Berkeley Lab for the U.S. Department of Energy’s Office of Science. For more, visit http://www.lbl.gov.