Oct 26, 2019

Study casts doubt on carbon capture

One proposed method for reducing carbon dioxide (CO2) levels in the atmosphere -- and with it the risk of climate change -- is to capture carbon from the air or prevent it from getting there in the first place. However, research from Mark Z. Jacobson at Stanford University, published in Energy & Environmental Science, suggests that carbon capture technologies can cause more harm than good.

"All sorts of scenarios have been developed under the assumption that carbon capture actually reduces substantial amounts of carbon. However, this research finds that it reduces only a small fraction of carbon emissions, and it usually increases air pollution," said Jacobson, who is a professor of civil and environmental engineering. "Even if you have 100 percent capture from the capture equipment, it is still worse, from a social cost perspective, than replacing a coal or gas plant with a wind farm because carbon capture never reduces air pollution and always has a capture equipment cost. Wind replacing fossil fuels always reduces air pollution and never has a capture equipment cost."

Jacobson, who is also a senior fellow at the Stanford Woods Institute for the Environment, examined public data from a coal-fired electric power plant equipped with carbon capture and from a plant that removes carbon directly from the air. In both cases, electricity to run the carbon capture came from natural gas. He calculated the net CO2 reduction and total cost of the carbon capture process in each case, accounting for the electricity needed to run the carbon capture equipment, the combustion and upstream emissions resulting from that electricity, and, in the case of the coal plant, its upstream emissions. (Upstream emissions are emissions, including from leaks and combustion, from mining and transporting a fuel such as coal or natural gas.)

Common estimates of carbon capture technologies -- which only look at the carbon captured from energy production at a fossil fuel plant itself and not upstream emissions -- say carbon capture can remediate 85-90 percent of carbon emissions. Once Jacobson calculated all the emissions associated with these plants that could contribute to global warming, he converted them to the equivalent amount of carbon dioxide in order to compare his data with the standard estimate. He found that in both cases the equipment captured the equivalent of only 10-11 percent of the emissions the plants produced, averaged over 20 years.
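
To make the accounting concrete, here is a minimal Python sketch of the same bookkeeping using made-up placeholder values (illustrative assumptions, not Jacobson's measured data): the nominal capture rate applies only to the plant's own smokestack CO2, while upstream emissions and the emissions from powering the capture unit still reach the atmosphere.

    # Hypothetical illustration of the accounting, not Jacobson's numbers.
    nominal_capture_rate = 0.85          # fraction of the plant's flue CO2 captured at the stack
    plant_co2 = 1.00                     # CO2 from burning the coal (arbitrary units)
    plant_upstream_co2e = 0.30           # assumed CO2e from mining and transporting the coal
    capture_power_co2 = 0.25             # assumed CO2 from the natural gas powering the capture unit
    capture_power_upstream_co2e = 0.20   # assumed CO2e (e.g. methane leaks) upstream of that gas

    captured = nominal_capture_rate * plant_co2
    total_co2e_produced = (plant_co2 + plant_upstream_co2e
                           + capture_power_co2 + capture_power_upstream_co2e)
    still_emitted = total_co2e_produced - captured

    print(f"nominal capture rate at the stack:  {nominal_capture_rate:.0%}")
    print(f"share of all CO2e actually removed: {captured / total_co2e_produced:.0%}")
    print(f"CO2e still reaching the atmosphere: {still_emitted:.2f} units")

Even with these mild placeholder values, the net share removed falls well below the nominal rate; Jacobson's 10-11 percent figure comes from the plants' measured performance, with all emissions converted to CO2 equivalents and averaged over 20 years.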

This research also looked at the social cost of carbon capture -- including air pollution, potential health problems, economic costs and overall contributions to climate change -- and concluded that those costs are always similar to or higher than those of operating a fossil fuel plant without carbon capture, and higher than those of not capturing carbon from the air at all. Even when the capture equipment is powered by renewable electricity, Jacobson concluded that, from a social cost perspective, it is always better to use that renewable electricity to replace coal or natural gas electricity, or to do nothing, than to run carbon capture.

Given this analysis, Jacobson argued that the best solution is to instead focus on renewable options, such as wind or solar, replacing fossil fuels.

Efficiency and upstream emissions

This research is based on data from two real carbon capture plants, which both run on natural gas. The first is a coal plant with carbon capture equipment. The second plant is not attached to any energy-producing counterpart. Instead, it pulls existing carbon dioxide from the air using a chemical process.

Jacobson examined several scenarios to determine the actual and possible efficiencies of these two kinds of plants, including what would happen if the carbon capture technologies were run with renewable electricity rather than natural gas, and if the same amount of renewable electricity required to run the equipment were instead used to replace coal plant electricity.

While the standard estimate for the efficiency of carbon capture technologies is 85-90 percent, neither of these plants met that expectation. Even without accounting for upstream emissions, the equipment associated with the coal plant was only 55.4 percent efficient over 6 months, on average. With the upstream emissions included, Jacobson found that, on average over 20 years, the equipment captured only 10-11 percent of the total carbon dioxide equivalent emissions that it and the coal plant contributed. The air capture plant was also only 10-11 percent efficient, on average over 20 years, once Jacobson took into consideration its upstream emissions and the uncaptured and upstream emissions that came from operating the plant on natural gas.

Due to the high energy needs of carbon capture equipment, Jacobson concluded that the social cost of coal with carbon capture powered by natural gas was about 24 percent higher, over 20 years, than the coal without carbon capture. If the natural gas at that same plant were replaced with wind power, the social cost would still exceed that of doing nothing. Only when wind replaced coal itself did social costs decrease.

For both types of plants, this suggests that even if the capture equipment captured 100 percent of the carbon it is designed to offset, the cost of manufacturing and running the equipment, plus the cost of the air pollution it still allows or even increases, makes it less effective than putting the same resources into renewable energy plants that replace coal or gas directly.

"Not only does carbon capture hardly work at existing plants, but there's no way it can actually improve to be better than replacing coal or gas with wind or solar directly," said Jacobson. "The latter will always be better, no matter what, in terms of the social cost. You can't just ignore health costs or climate costs."

This study did not consider what happens to carbon dioxide after it is captured but Jacobson suggests that most applications today, which are for industrial use, result in additional leakage of carbon dioxide back into the air.

Focusing on renewables

Some propose that carbon capture could be useful in the future, even after we have stopped burning fossil fuels, to lower atmospheric carbon levels. Even assuming these technologies run on renewables, Jacobson maintains that the smarter investment is in options that are currently disconnected from the fossil fuel industry, such as reforestation -- a natural version of air capture -- and other climate solutions focused on eliminating other sources of emissions and pollution, including biomass burning and halogen, nitrous oxide and methane emissions.

Read more at Science Daily

Skiing, snowboarding injuries more serious -- skull and face fractures -- in younger children

Winter sports like skiing and snowboarding are a great way to keep kids active in the winter, but they are also linked to injuries, and for younger children those injuries are more likely to involve fractures of the skull or face, according to new research being presented at the American Academy of Pediatrics (AAP) 2019 National Conference & Exhibition.

The research abstract, "Pediatric Snow Sport Injuries Differ By Age," will be presented during the AAP 2019 National Conference & Exhibition.

Researchers conducted a cross-sectional analysis of the 2009 and 2012 Kids' Inpatient Database, examining 845 hospital admissions for snow sport injuries in children. They found that over half of the hospitalized children required major surgical intervention, and that elementary school-age children were at significantly greater odds of suffering a skull or facial fracture than those older than high school age. Children of middle school age, high school age and older were more likely to experience intra-abdominal injuries.

"We were interested to find that the type of injuries children had varied according to their age, and we believe these findings can better inform educational and legislative efforts aimed at reducing injuries in children who participate in winter sports," said Robert J. McLoughlin, MD, MSCI. "These injuries can be very severe and should be a concern to any parent with a child involved in these sports. Almost a quarter -- 23% of children -- suffered intercranial injuries, which we found were more common among young children."

Of the young skiers who were admitted into hospitals in this research, 75.8% were male and 87.4% white. The injuries included: lower extremity fractures (28.7%), intracranial injury (22.7%), splenic injury (15.6%), upper extremity fracture (15.5%), and skull fracture (9.1%).

From Science Daily

Oct 25, 2019

Science reveals improvements in Roman building techniques

The Romans were some of the most sophisticated builders of the ancient world. Over the centuries, they adopted an increasingly advanced set of materials and technologies to create their famous structures. To distinguish the time periods over which these improvements took place, historians and archaeologists typically examine the colours, shapes and consistencies of the bricks and mortar used by the Romans, alongside historical sources. In new research published in EPJ Plus, Francesca Rosi and colleagues at the Italian National Research Council improved on these techniques through scientific analysis of the materials used to build the Roman Forum's Atrium Vestae. They found that successive phases of modification to the building saw improvements including higher quality raw materials, higher brick firing temperatures, and better ratios between carbonate and silicate building materials.

The team's analysis could offer important supplements to the techniques currently used by historians and archaeologists. It could also help these academics to end long-standing disputes regarding the time periods of certain building techniques. Since the Atrium Vestae was modified in five distinctive building phases spanning several centuries, the study highlighted technological improvements throughout the Roman age in unprecedented levels of detail.

The techniques employed by Rosi and colleagues included optical and electron microscopy, and measurements of how x-rays were diffracted as they passed through the materials. They also determined the molecular fingerprints, or spectra, of the materials. These are based on the characteristic ways in which their molecules vibrate when illuminated by electromagnetic radiation of specific energies. Using these methods, the team revealed the colours, textures and chemical compositions of Roman building materials on microscopic scales for the first time, clearly revealing technological improvements over the centuries. The findings of Rosi's team are a clear demonstration of the advantages of scientific methods in archaeological analysis. Their techniques could soon be used in future studies to unlock further mysteries concerning the technologies employed by ancient civilisations.

From Science Daily

Memory training builds upon strategy use

Researchers from Åbo Akademi University, Finland, and Umeå University, Sweden, have for the first time obtained clear evidence of the important role strategies have in memory training. Training makes participants adopt various strategies to manage the task, which then affects the outcome of the training.

Strategy acquisition can also explain why the effects of memory training are so limited. Typically, improvements are limited only to tasks that are very similar to the training task -- training has provided ways to handle a given type of task, but not much else.

A newly published study sheds light on the underlying mechanisms of working memory training that have remained unclear. It rejects the original idea that repetitive computerized training can increase working memory capacity. Working memory training should rather be seen as a form of skill learning in which the adoption of task-specific strategies plays an important role. Hundreds of commercial training programs that promise memory improvements are available for the public. However, the effects of the programs do not extend beyond tasks similar to the ones one has been trained on.

The study included 258 adults who were randomized into three groups. Two of the groups completed a four-week working memory training period during which participants completed 3 x 30-minute training sessions per week with a working memory updating task. One group trained with an externally provided strategy instruction, while the other group trained without the strategy instruction. The third group served as controls and only participated in a pretest, intermediate test and posttest. Self-generated strategies were probed with questionnaires at each training session and assessment point. This study was conducted within the BrainTrain project, one of the Research Centers of Excellence 2015-2018 at Åbo Akademi University.

From Science Daily

What 26,000 books reveal when it comes to learning language

What can reading 26,000 books tell researchers about how language environment affects language behavior? Brendan T. Johns, an assistant professor of communicative disorders and sciences in the University at Buffalo's College of Arts and Sciences, has some answers that are helping to inform questions ranging from how we use and process language to better understanding the development of Alzheimer's disease.

But let's be clear: Johns didn't read all of those books. He's an expert in computational cognitive science who has published a computational modeling study that suggests our experience and interaction with specific learning environments, like the characteristics of what we read, lead to differences in language behavior that were once attributed to differences in cognition.

"Previously in linguistics it was assumed a lot of our ability to use language was instinctual and that our environmental experience lacked the depth necessary to fully acquire the necessary skills," says Johns. "The models that we're developing today have us questioning those earlier conclusions. Environment does appear to be shaping behavior."

Johns' findings, with his co-author, Randall K. Jamieson, a professor in the University of Manitoba's Department of Psychology, appear in the journal Behavior Research Methods.

Advances in natural language processing and computational resources allow researchers like Johns and Jamieson to examine once intractable questions.

The models, called distributional models, serve as analogies to the human language learning process. The 26,000 books that support the analysis of this research come from 3,000 different authors (about 2,000 from the U.S. and roughly 500 from the U.K.) who used over 1.3 billion total words.
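
Since the article leans on the idea of a distributional model without spelling it out, here is a minimal, hypothetical Python sketch of the count-based version of that idea (not Johns and Jamieson's actual implementation): a word is represented by the words it co-occurs with, so models trained on different corpora, say U.S. versus U.K. books, end up with measurably different representations.

    from collections import Counter, defaultdict
    from math import sqrt

    def cooccurrence_vectors(tokens, window=2):
        """Represent each word by counts of the words appearing near it."""
        vectors = defaultdict(Counter)
        for i, word in enumerate(tokens):
            for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                if j != i:
                    vectors[word][tokens[j]] += 1
        return vectors

    def cosine(v1, v2):
        """Similarity between two sparse count vectors."""
        dot = sum(v1[w] * v2[w] for w in set(v1) & set(v2))
        norm = sqrt(sum(x * x for x in v1.values())) * sqrt(sum(x * x for x in v2.values()))
        return dot / norm if norm else 0.0

    # Tiny stand-ins for culture-specific corpora (hypothetical sentences):
    us_tokens = "the truck drove down the highway past the gas station".split()
    uk_tokens = "the lorry drove down the motorway past the petrol station".split()

    us_vectors = cooccurrence_vectors(us_tokens)
    uk_vectors = cooccurrence_vectors(uk_tokens)
    # Within each corpus, words used in similar contexts acquire similar vectors:
    print(cosine(us_vectors["truck"], us_vectors["highway"]))
    print(cosine(uk_vectors["lorry"], uk_vectors["motorway"]))

The real models are trained on more than a billion words and use far richer representations, but the principle is the same: the corpus a model (or a reader) is exposed to shapes the representations it ends up with.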

George Bernard Shaw is often credited with saying Britain and America are two countries separated by a common language. But the languages are not identical, and in order to establish and represent potential cultural differences, the researchers considered where each of the 26,000 books was located in both time (when the author was born) and place (where the book was published).

With that information established, the researchers analyzed data from 10 different studies involving more than 1,000 participants, using multiple psycholinguistic tasks.

"The question this paper tries to answer is, 'If we train a model with similar materials that someone in the U.K. might have read versus what someone in the U.S. might have read, will they become more like these people?'" says Johns. "We found that the environment people are embedded in seems to shape their behavior."

The culture-specific books in this study explain much of the variance in the data, according to Johns.

"It's a huge benefit to have a culture-specific corpus, and an even greater benefit to have a time-specific corpus," says Johns. "The differences we find in language environment and behavior as a function of time and place is what we call the 'selective reading hypothesis.'"

Using these machine-learning approaches demonstrates the richly informative nature of these environments, and Johns has been working toward building machine-learning frameworks to optimize education. This latest paper shows how you can take a person's language behavior and estimate the types of materials they've read.

"We want to take someone's past experience with language and develop a model of what that person knows," says Johns. "That lets us identify which information can maximize that person's learning potential."

But Johns also studies clinical populations, and his work with Alzheimer's patients has him thinking about how to apply his models to potentially help people at risk of developing the disease.

He says some people show slight memory loss without other indications of cognitive decline. These patients with mild cognitive impairment have a 10-15% chance of being diagnosed with Alzheimer's in any given year, compared to 2% of the general population over age 65.

Read more at Science Daily

Did an extraterrestrial impact trigger the extinction of ice-age animals?

A controversial theory that suggests an extraterrestrial body crashing to Earth almost 13,000 years ago caused the extinction of many large animals and a probable population decline in early humans is gaining traction from research sites around the world.

The Younger Dryas Impact Hypothesis, controversial from the time it was presented in 2007, proposes that an asteroid or comet hit the Earth about 12,800 years ago causing a period of extreme cooling that contributed to extinctions of more than 35 species of megafauna including giant sloths, sabre-tooth cats, mastodons and mammoths. It also coincides with a serious decline in early human populations such as the Clovis culture and is believed to have caused massive wildfires that could have blocked sunlight, causing an "impact winter" near the end of the Pleistocene Epoch.

In a new study published this week in Scientific Reports, a publication of Nature, UofSC archaeologist Christopher Moore and 16 colleagues present further evidence of a cosmic impact based on research done at White Pond near Elgin, South Carolina. The study builds on similar findings of spikes in platinum -- an element associated with cosmic objects like asteroids or comets -- in North America, Europe, western Asia and, recently, in Chile and South Africa.

"We continue to find evidence and expand geographically. There have been numerous papers that have come out in the past couple of years with similar data from other sites that almost universally support the notion that there was an extraterrestrial impact or comet airburst that caused the Younger Dryas climate event," Moore says.

Moore also was lead author on a previous paper documenting sites in North America where platinum spikes have been found and a co-author on several other papers that document elevated levels of platinum in archaeological sites, including Pilauco, Chile -- the first discovery of evidence in the Southern Hemisphere.

"First, we thought it was a North American event, and then there was evidence in Europe and elsewhere that it was a Northern Hemisphere event. And now with the research in Chile and South Africa, it looks like it was probably a global event," he says.

In addition, a team of researchers found unusually high concentrations of platinum and iridium in outwash sediments from a recently discovered crater in Greenland that could have been the impact point. Although the crater hasn't been precisely dated yet, Moore says the possibility is good that it could be the "smoking gun" that scientists have been looking for to confirm a cosmic event. Additionally, data from South America and elsewhere suggests the event may have actually included multiple impacts and airbursts over the entire globe.

While the brief return to ice-age conditions during the Younger Dryas period has been well-documented, the reasons for it and the decline of human populations and animals have remained unclear. The impact hypothesis was proposed as a possible trigger for these abrupt climate changes that lasted about 1,400 years.

The Younger Dryas event gets its name from a wildflower, Dryas octopetala, which can tolerate cold conditions and suddenly became common in parts of Europe 12,800 years ago. The Younger Dryas Impact Hypothesis became controversial, Moore says, because the all-encompassing theory that a cosmic impact triggered cascading events leading to extinctions was viewed as improbable by some scientists.

"It was bold in the sense that it was trying to answer a lot of really tough questions that people have been grappling with for a long time in a single blow," he says, adding that some researchers continue to be critical.

The conventional view has been that the failure of glacial ice dams allowed a massive release of freshwater into the north Atlantic, affecting oceanic circulation and causing the Earth to plunge into a cold climate. The Younger Dryas hypothesis simply claims that the cosmic impact was the trigger for the meltwater pulse into the oceans.

In research at White Pond in South Carolina, Moore and his colleagues used a core barrel to extract sediment samples from underneath the pond. The samples, dated to the beginning of the Younger Dryas with radiocarbon, contain a large platinum anomaly, consistent with findings from other sites, Moore says. A large soot anomaly also was found in cores from the site, indicating regional large-scale wildfires in the same time interval.

In addition, fungal spores associated with the dung of large herbivores were found to decrease at the beginning of the Younger Dryas period, suggesting a decline in ice-age megafauna beginning at the time of the impact.

"We speculate that the impact contributed to the extinction, but it wasn't the only cause. Over hunting by humans almost certainly contributed, too, as did climate change," Moore says. "Some of these animals survived after the event, in some cases for centuries. But from the spore data at White Pond and elsewhere, it looks like some of them went extinct at the beginning of the Younger Dryas, probably as a result of the environmental disruption caused by impact-related wildfires and climate change."

Additional evidence found at other sites in support of an extraterrestrial impact includes the discovery of meltglass, microscopic spherical particles and nanodiamonds, indicating enough heat and pressure was present to fuse materials on the Earth's surface. Another indicator is the presence of iridium, an element associated with cosmic objects, which scientists also found in rock layers dated to 65 million years ago, left by the impact that caused the dinosaur extinction.

While no one knows for certain why the Clovis people and iconic ice-age beasts disappeared, research by Moore and others is providing important clues as evidence builds in support of the Younger Dryas Impact Hypothesis.

Read more at Science Daily

New data on the evolution of plants and origin of species

There are over 500,000 plant species in the world today. They all evolved from a common ancestor. How this leap in biodiversity happened is still unclear. In the upcoming issue of Nature, an international team of researchers, including scientists from Martin Luther University Halle-Wittenberg, presents the results of a unique project on the evolution of plants. Using genetic data from 1,147 species the team created the most comprehensive evolutionary tree for green plants to date.

The history and evolution of plants can be traced back about one billion years. Algae were the first organisms to harness solar energy with the help of chloroplasts. In other words, they were the first plant organisms to perform photosynthesis. Today, there are over 500,000 plant species, including both aquatic and terrestrial plants. The aim of the new study in Nature was to unravel the genetic foundations for this development. "Some species began to emerge and evolve several hundreds of millions of years ago. However, today we have the tools to look back and see what happened at that time," explains plant physiologist Professor Marcel Quint from the Institute of Agricultural and Nutritional Sciences at MLU.

Quint is leading a sub-project with bioinformatician Professor Ivo Grosse, also from MLU, as part of the "One Thousand Plant Transcriptomes Initiative," a global network of about 200 researchers. The team collected samples of 1,147 land plant and algae species to analyse each organism's genome-wide gene expression patterns (transcriptome). Using these data, the researchers reconstructed the evolutionary development of plants and the emergence of individual species. Their focus was on plant species that have not yet been studied at this level, including numerous algae, mosses and flowering plants.

"This was a very special project because we did not just analyse individual components, but complete transcriptomes, of over one thousand plants, providing a much broader foundation for our findings," explains Ivo Grosse. The sub-project led by MLU scientists looked at the development and expansion of large gene families in plants. "Some of these gene families have duplicated over the course of millions of years. This process might have been a catalyst for the evolution of plants: Having significantly more genetic material might unleash new capacities and completely new characteristics," says Marcel Quint. One of the main objectives of the project was to identify a potential connection between genetic duplications and key innovations in the plant kingdom, such as the development of flowers and seeds. Quint and Grosse carried out their research in collaboration with scientists from the universities in Marburg, Jena, and Cologne, and the Max Planck Institute for Evolutionary Biology in Plön. The majority of the analyses was conducted by Martin Porsch, a PhD student in the lab of Ivo Grosse.

Read more at Science Daily

Oct 24, 2019

How to spot a wormhole (if they exist)

A new study outlines a method for detecting a speculative phenomenon that has long captured the imagination of sci-fi fans: wormholes, which form a passage between two separate regions of spacetime.

Such pathways could connect one area of our universe to a different time and/or place within our universe, or to a different universe altogether.

Whether wormholes exist is up for debate. But in a paper published on Oct. 10 in Physical Review D, physicists describe a technique for detecting these bridges.

The method focuses on spotting a wormhole around Sagittarius A*, an object that's thought to be a supermassive black hole at the heart of the Milky Way galaxy. While there's no evidence of a wormhole there, it's a good place to look for one because wormholes are expected to require extreme gravitational conditions, such as those present at supermassive black holes.

In the new paper, scientists write that if a wormhole does exist at Sagittarius A*, nearby stars would be influenced by the gravity of stars at the other end of the passage. As a result, it would be possible to detect the presence of a wormhole by searching for small deviations in the expected orbit of stars near Sagittarius A*.

"If you have two stars, one on each side of the wormhole, the star on our side should feel the gravitational influence of the star that's on the other side. The gravitational flux will go through the wormhole," says Dejan Stojkovic, PhD, cosmologist and professor of physics in the University at Buffalo College of Arts and Sciences. "So if you map the expected orbit of a star around Sagittarius A*, you should see deviations from that orbit if there is a wormhole there with a star on the other side."

Stojkovic conducted the study with first author De-Chang Dai, PhD, of Yangzhou University in China and Case Western Reserve University.

A close look at S2, a star orbiting Sagittarius A*

Stojkovic notes that if wormholes are ever discovered, they're not going to be the kind that science fiction often envisions.

"Even if a wormhole is traversable, people and spaceships most likely aren't going to be passing through," he says. "Realistically, you would need a source of negative energy to keep the wormhole open, and we don't know how to do that. To create a huge wormhole that's stable, you need some magic."

Nevertheless, wormholes -- traversable or not -- are an interesting theoretical phenomenon to study. While there is no experimental evidence that these passageways exist, they are possible -- according to theory. As Stojkovic explains, wormholes are "a legitimate solution to Einstein's equations."

The research in Physical Review D focuses on how scientists could hunt for a wormhole by looking for perturbations in the path of S2, a star that astronomers have observed orbiting Sagittarius A*.

While current surveillance techniques are not yet precise enough to reveal the presence of a wormhole, Stojkovic says that collecting data on S2 over a longer period of time or developing techniques to track its movement more precisely would make such a determination possible. These advancements aren't too far off, he says, and could happen within one or two decades.

Stojkovic cautions, however, that while the new method could be used to detect a wormhole if one is there, it will not strictly prove that a wormhole is present.

"When we reach the precision needed in our observations, we may be able to say that a wormhole is the most likely explanation if we detect perturbations in the orbit of S2," he says. "But we cannot say that, 'Yes, this is definitely a wormhole.' There could be some other explanation, something else on our side perturbing the motion of this star."

Though the paper focuses on traversable wormholes, the technique it outlines could indicate the presence of either a traversable or non-traversable wormhole, Stojkovic says. He explains that because gravity is the curvature of spacetime, the effects of gravity are felt on both sides of a wormhole, whether objects can pass through or not.

Read more at Science Daily

New measurement of Hubble constant adds to cosmic mystery

New measurements of the rate of expansion of the universe, led by astronomers at the University of California, Davis, add to a growing mystery: Estimates of a fundamental constant made with different methods keep giving different results.

"There's a lot of excitement, a lot of mystification and from my point of view it's a lot of fun," said Chris Fassnacht, professor of physics at UC Davis and a member of the international SHARP/H0LICOW collaboration, which made the measurement using the W.M. Keck telescopes in Hawaii.

A paper about the work is published by the Monthly Notices of the Royal Astronomical Society.

The Hubble constant describes the expansion of the universe, expressed in kilometers per second per megaparsec. It allows astronomers to figure out the size and age of the universe and the distances between objects.

Graduate student Geoff Chen, Fassnacht and colleagues looked at light from extremely distant galaxies that is distorted and split into multiple images by the lensing effect of galaxies (and their associated dark matter) between the source and Earth. By measuring the time delay for light to make its way by different routes through the foreground lens, the team could estimate the Hubble constant.

Using adaptive optics technology on the W.M. Keck telescopes in Hawaii, they arrived at an estimate of 76.8 kilometers per second per megaparsec. As a parsec is a bit over 30 trillion kilometers and a megaparsec is a million parsecs, that is an excruciatingly precise measurement. In 2017, the H0LICOW team published an estimate of 71.9, using the same method and data from the Hubble Space Telescope.

Hints of new physics


The new SHARP/H0LICOW estimate is comparable to one from a team led by Adam Riess of Johns Hopkins University, 74.03, based on measurements of a set of variable stars called Cepheids. But it is quite different from estimates of the Hubble constant made with an entirely different technique based on the cosmic microwave background. That method, based on the afterglow of the Big Bang, gives a Hubble constant of 67.4, assuming the standard cosmological model of the universe is correct.

An estimate by Wendy Freedman and colleagues at the University of Chicago comes close to bridging the gap, with a Hubble constant of 69.8 based on the luminosity of distant red giant stars and supernovae.
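
To put the competing numbers side by side, here is a small Python sketch (my own unit conversion, not part of the paper) applying Hubble's law, v = H0 x d, and the rough "Hubble time" 1/H0 to the estimates quoted above; 1/H0 is only a crude timescale, not the model-dependent age of the universe.

    # Hubble constant estimates quoted above, in kilometers per second per megaparsec:
    estimates = {
        "SHARP/H0LICOW (Keck)": 76.8,
        "Riess et al. (Cepheids)": 74.03,
        "Freedman et al. (red giants)": 69.8,
        "Cosmic microwave background": 67.4,
    }

    KM_PER_MPC = 3.086e19        # kilometres in one megaparsec
    SECONDS_PER_GYR = 3.156e16   # seconds in one billion years

    for label, h0 in estimates.items():
        v_100_mpc = h0 * 100                                  # recession speed at 100 Mpc, km/s
        hubble_time_gyr = KM_PER_MPC / h0 / SECONDS_PER_GYR   # 1/H0 in billions of years
        print(f"{label:30s} H0 = {h0:5.1f}  v(100 Mpc) = {v_100_mpc:6.0f} km/s  1/H0 = {hubble_time_gyr:.1f} Gyr")

The spread of a few kilometers per second per megaparsec translates into roughly a two-billion-year spread in the naive 1/H0 timescale, which is why the discrepancy matters.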

A difference of 5 or 6 kilometers per second over a distance of more than 30 million trillion kilometers (one megaparsec) might not seem like a lot, but it is posing a challenge to astronomers. It might provide a hint of possible new physics beyond the current understanding of our universe.

On the other hand, the discrepancy could be due to some unknown bias in the methods. Some scientists had expected that the differences would disappear as estimates got better, but the difference between the Hubble constant measured from distant objects and that derived from the cosmic microwave background seems to be getting more and more robust.

Read more at Science Daily

Martian landslides not conclusive evidence of ice

Detailed three-dimensional images of an extensive landslide on Mars, which spans an area more than 55 kilometres wide, have been analysed to understand how the unusually large and long ridges and furrows formed about 400 million years ago.

The findings, published today in Nature Communications, show for the first time that the unique structures on Martian landslides from mountains several kilometres high could have formed at high speeds of up to 360 kilometres per hour due to underlying layers of unstable, fragmented rocks.

This challenges the idea that only underlying layers of slippery ice can explain such long, vast ridges, which are found on landslides throughout the Solar System.

First author, PhD student Giulia Magnarini (UCL Earth Sciences), said: "Landslides on Earth, particularly those on top of glaciers, have been studied by scientists as a proxy for those on Mars because they show similarly shaped ridges and furrows, inferring that Martian landslides also depended on an icy substrate.

"However, we've shown that ice is not a prerequisite for such geological structures on Mars, which can form on rough, rocky surfaces. This helps us better understand the shaping of Martian landscapes and has implications for how landslides form on other planetary bodies including Earth and the Moon."

The team, from UCL, the Natural History Museum (London), Ben Gurion University of Negev (Israel) and University of Wisconsin Madison (USA), used images taken by NASA's Mars Reconnaissance Orbiter to analyse some of the best-defined landslides remotely.

Cross-sections of the Martian surface in the Coprates Chasma in the Valles Marineris were analysed to investigate the relationship between the height of the ridges and width of the furrows compared to the thickness of the landslide deposit.

The structures were found to display the same ratios as those commonly seen in fluid dynamics experiments using sand, suggesting an unstable and dry rocky base layer is as feasible as an icy one in creating the vast formations.

Where landslide deposits are thickest, ridges form 60 metres high and furrows are as wide as eight Olympic-sized swimming pools end-to-end. The structures change as deposits thin out towards the edges of the landslide. Here, ridges are shallow at 10 metres high and sit closer together.

Co-author, Dr Tom Mitchell, Associate Professor of Earthquake Geology and Rock Physics (UCL Earth Sciences), said: "The Martian landslide we studied covers an area larger than Greater London and the structures within it are huge. Earth might harbour comparable structures but they are harder to see and our landforms erode much faster than those on Mars due to rain.

"While we aren't ruling out the presence of ice, we know is that ice wasn't needed to form the long run-outs we analysed on Mars. The vibrations of rock particles initiate a convection process that caused upper denser and heavier layers of rock to fall and lighter rocks to rise, similar to what happens in your home where warmed less dense air rises above the radiator. This mechanism drove the flow of deposits up to 40 km away from the mountain source and at phenomenally high speeds."

The research team includes Apollo 17 astronaut, Professor Harrison Schmitt (University of Wisconsin Madison), who walked on the Moon in December 1972 and completed geologic fieldwork while on the lunar surface.

Professor Schmitt, said: "This work on Martian landslides relates to further understanding of lunar landslides such as the Light Mantle Avalanche I studied in the valley of Taurus-Littrow during Apollo 17 exploration and have continued to examine using images and data collected more recently from lunar orbit. Flow initiation and mechanisms on the Moon may be very different from Mars; however, comparisons often help geologists to understand comparable features.

"As on the Earth, the lunar meteor impact environment has modified the surface features of the Light Mantle Avalanche of the 75+ million years since it occurred. The impact redistribution of materials in the lunar environment has modified features that ultimately may be found to resemble those documented in the Martian landslide study.

Read more at Science Daily

Imperfect diamonds paved road to historic Deep Earth discoveries

Thousands of diamonds, formed hundreds of kilometers deep inside the planet, paved the road to some of the 10-year Deep Carbon Observatory program's most historic accomplishments and discoveries, being celebrated Oct. 24-26 at the US National Academy of Sciences.

Unsightly black, red, green, and brown specks of minerals, and microscopic pockets of fluid and gas encapsulated by diamonds as they form in Deep Earth, record the elemental surroundings and reactions taking place within Earth at a specific depth and time, divulging some of the planet's innermost secrets.

Hydrogen and oxygen, for example, trapped inside diamonds from a layer 410 to 660 kilometers below Earth's surface, reveal the subterranean existence of oceans' worth of H2O -- far more in mass than all the water in every ocean in the surface world.

This massive amount of water may have been brought to Deep Earth from the surface by the movement of the great continental and oceanic plates which, as they separate and move, collide with one another and overlap. This subduction of slabs also buries carbon from the surface back into the depths, a process fundamental to Earth's natural carbon balance, and therefore to life.

Knowledge of Deep Earth's water content is critical to understanding the diversity and melting behaviors of materials at the planet's different depths, the creation and flows of hydrocarbons (e.g. petroleum and natural gas) and other materials, as well as the planet's deep subterranean electrical conductivity.

By dating the pristine fragments of material trapped inside other super-deep diamond "inclusions," DCO researchers could put an approximate time stamp on the start of plate tectonics -- "one of the planet's greatest innovations," in the words of DCO Executive Director Robert Hazen of the Carnegie Institution for Science. It started roughly 3 billion years ago, when the Earth was a mere 1.5 billion years old.

Diamond research accelerated dramatically thanks to the creation of DCO's global network of researchers and led to some of the program's most intriguing discoveries and achievements.

Diamonds from the deepest depths, often small with poor clarity, are not generally used as gemstones by Tiffany's but are amazingly complex, robust and priceless in research. Inclusions offered DCO scientists samples of minerals that exist only at extreme high subterranean pressure, and suggested three ways in which diamonds form.

While as many as 90% of analyzed diamonds were composed of carbon of the kind scientists expected to find in the mantle, some "relatively young" diamonds (up to a few hundred million years old) appear to include carbon from once-living sources; in other words, they are made of carbon returned to Deep Earth from the surface world.

Diamonds also revealed unambiguous evidence that some hydrocarbons form hundreds of miles down, well beyond the realm of living cells: abiotic energy.

Unravelling the mystery of deep abiotic methane and other energy sources helps explain how deep life in the form of microbes and bacteria is nourished, and fuels the proposition that life first originated and evolved far below (rather than migrating down from) the surface world.

Diamonds also enabled DCO scientists to simulate the extreme conditions of Earth's interior.

DCO's Extreme Physics and Chemistry community scientists used diamond anvil cells -- a tool that can squeeze a sample tremendously between the tips of two diamonds, coupled with lasers that heat the compressed crystals -- to simulate deep Earth's almost unimaginable extreme temperatures and pressures.

Using a variety of advanced techniques, they analyzed the compressed samples, identified 100 new carbon-bearing crystal structures and documented their intriguing properties and behaviors.

The work provided insights into how carbon atoms in Deep Earth "find one another," aggregate, and assemble to form diamonds and other material.

Development of new materials; potential carbon capture and storage strategies

DCO's discoveries and research are important and applicable in many ways, including the development of new materials and potential carbon capture and storage strategies.

DCO scientists are studying, for example, how the natural timescale for sequestration of carbon might be shortened.

The weathering of and microbial life inside Oman's Samail Ophiolite -- an unusual, large slab pushed up from Earth's upper mantle long ago -- offers a tutorial in nature's carbon sequestration techniques, knowledge that might help offset carbon emissions caused by humans.

In Iceland, another DCO natural sequestration project, CarbFix, involves injecting carbon-bearing fluids into basalt and observing their conversion to solids.

A Decade of Discovery

Hundreds of scientists from around the world meet in Washington DC Oct. 24 to 26 to share and celebrate results of the wide-ranging, decade-long Deep Carbon Observatory -- one of the largest global research collaborations in Earth sciences ever undertaken.

With its Secretariat at the Carnegie Institution for Science in Washington DC, and $50 million in core support from the Alfred P. Sloan Foundation, multiplied many times by additional investment worldwide, a multidisciplinary group of 1,200 researchers from 55 nations worked for 10 years in four interconnected scientific "communities" to explore Earth's fundamental workings, including:

  • How carbon moves between Earth's interior, surface and atmosphere
  • Where Earth's deep carbon came from, how much exists and in what forms
  • How life began, and the limits -- such as temperature and pressure -- to Earth's deep microbial life

They met the challenge of investigating Earth's interior in several ways, producing 1,400 peer-reviewed papers while pursuing 268 projects that involved, for example:

  • Studying diamonds, volcanoes, and core samples obtained by drilling on land and at sea
  • Conducting lab experiments to mimic the extreme temperatures and pressures of Earth's interior, and through theoretical modeling of carbon's evolution and movements over deep time, and
  • Developing new high tech instruments

DCO scientists conducted field measurements in remote and inhospitable regions of the world: on ocean floors, on top of active volcanoes, and in the deserts of the Middle East.

Where instrumentation and models were lacking, DCO scientists developed new tools and models to meet the challenge. Throughout these studies, DCO invested in the next generation of deep carbon researchers, students and early career scientists, who will carry on the tradition of exploration and discovery for decades to come.

Key discoveries during the 10-year Deep Carbon Observatory program

In addition to insights from its diamond research above, the program's top discoveries include:

The deep biosphere is one of Earth's largest ecosystems

Life in the deep subsurface totals 15,000 to 23,000 megatonnes (million metric tons) of carbon, about 250 to 400 times greater than the carbon mass of all humans. The immense Deep Earth biosphere occupies a space nearly twice as large as all the world's oceans.

DCO scientists explored how microbes draw sustenance from "abiotic" methane and other energy sources -- fuel that wasn't derived from biotic life above.

If microbes can eke out a living using chemical energy from rocks in Earth's deep subsurface, that may hold true on other planetary bodies.

This knowledge about the types of environments that can sustain life, particularly those where energy is limited, can guide the search for life on other planets. In the outer solar system, for example, energy from the sun is scarce, as it is in Earth's subsurface environment.

DCO researchers also found the deepest, lowest-density, and longest-lived subseafloor microbial ecosystem ever recorded and changed our understanding of the limits of life at extremes of pressure, temperature, and depth.

Rocks and fluids in Earth's crust provide clues to the origins of life on this planet, and where to look for life on others


DCO scientists found amino acids and complex organic molecules in rocks on the seafloor. These molecules, the building blocks of life, were formed by abiotic synthesis and had never before been observed in the geologic record.

They also found pockets of ancient salty fluids rich in hydrogen, methane, and helium many kilometers deep, providing evidence of early, protected environments capable of harboring life.

Abiotic methane forms in the crust and mantle of Earth

When water meets the ubiquitous mineral olivine under pressure, the rock reacts with oxygen atoms from the H2O and transforms into another mineral, serpentine -- characterized by a scaly, green-brown, snake skin-like appearance.

This process of "serpentinization" leads to the formation of "abiotic" methane in many different environments on Earth. DCO scientists developed and used sophisticated analytical equipment to differentiate between biotic (derived from ancient plants and animals) and abiotic formation of methane.

DCO field and laboratory studies of rocks from the upper mantle document a new high-pressure serpentinization process that produces abiotic methane and other forms of hydrocarbons.

The formation of methane and hydrocarbons through these geologic, abiotic processes provides fuel and sustenance for microbial life.

Atmospheric CO2 has been relatively stable over the eons but huge, occasional catastrophic carbon disturbances have taken place

DCO scientists have reconstructed Earth's deep carbon cycle over eons to the present day. This new, more complete picture of the planetary ingassing and outgassing of carbon shows a remarkably stable system over hundreds of millions of years, with a few notable episodic exceptions.

Continental breakup and associated volcanic activity are the dominant causes of natural planetary outgassing. DCO scientists added to this picture by investigating rare episodes of massive volcanic eruptions and asteroid impacts to learn how Earth and its climate respond to such catastrophic carbon disturbances.

Plate tectonics modeling using DCO's new GPlates platform made it possible to reconstruct the Earth's carbon cycle through geologic time.

Much of the carbon outgassed from Deep Earth seeps from fractures and faults unassociated with eruptions

Volcanoes and volcanic regions outgas carbon dioxide (CO2) into the ocean / atmosphere system at a rate of 280-360 megatonnes per year. This includes both emissions during volcanic eruptions and degassing of CO2 out of diffuse fractures and faults in volcanic regions worldwide and the mid-ocean ridge system.

Human activities, such as burning fossil fuels, are responsible for about 100 times more CO2 emissions than all volcanic and tectonic region sources combined.
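
As a quick sanity check on that "about 100 times" figure, here is a two-line Python calculation; the value used for human emissions is an approximate recent estimate of my own, not a number given in the text.

    volcanic_mt_per_year = (280, 360)   # outgassing range given above, megatonnes of CO2 per year
    human_mt_per_year = 37_000          # assumed approximate recent fossil-fuel CO2 emissions, Mt/yr

    for v in volcanic_mt_per_year:
        print(f"human emissions are roughly {human_mt_per_year / v:.0f}x the {v} Mt/yr volcanic figure")

Both ends of the range land near the factor of about 100 quoted above.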

The changing ratio of CO2 to SO2 emitted by volcanoes may help forecast eruptions

The volume of outgassed CO2 relative to SO2 increases for some volcanoes days to weeks before an eruption, raising the possibility of improved forecasting and mitigating danger to humans.

DCO researchers measured volcanic outputs around the globe. Italy's Mount Etna, for example, one of Earth's most active volcanoes, typically spewed 5 to 8 times more CO2 than usual about two weeks before a large eruption.
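
The forecasting idea lends itself to a toy sketch: flag a volcano when its CO2/SO2 ratio climbs far above its own baseline. The function and the daily numbers below are hypothetical illustrations, not the monitoring algorithm DCO teams actually use.

    def eruption_warnings(co2_series, so2_series, baseline_ratio, factor=5.0):
        """Return indices of measurements whose CO2/SO2 ratio exceeds factor x baseline."""
        flagged = []
        for day, (co2, so2) in enumerate(zip(co2_series, so2_series)):
            if so2 > 0 and co2 / so2 > factor * baseline_ratio:
                flagged.append(day)
        return flagged

    # Hypothetical daily gas measurements (arbitrary units), with the ratio rising before an eruption:
    co2 = [100, 110, 105, 400, 620, 800]
    so2 = [20, 21, 20, 15, 14, 13]
    print(eruption_warnings(co2, so2, baseline_ratio=5.0))   # -> [3, 4, 5]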

Fluids move and transform carbon deep within Earth

Experiments and new theoretical work led to a revolutionary new DCO model of water in deep Earth and the discovery that diamonds can easily form through water-rock interactions involving organic and inorganic carbon.

This model predicted the changing chemistry of water found in fluid inclusions in diamonds and yields new insights into the amounts of carbon and nitrogen available for return to Earth's atmosphere over deep time.

DCO scientists also discovered that the solubility of carbon-bearing minerals, including carbonates, graphite, and diamond, is much higher than previously thought in water-rock systems in the mantle.

31 new carbon-bearing minerals found in four years

After cataloguing known carbon-bearing minerals at Earth's surface, their composition and where they are found, DCO researchers discovered statistical relationships between mineral localities and the frequency of their occurrence. With that model they predicted 145 yet-to-be-discovered species and in 2015 challenged citizen scientists to help find them.

Of the 31 new-to-science minerals turned up during the Carbon Mineral Challenge, two had been predicted, including triazolite, discovered in Chile and thought to have derived in part from cormorant guano.

Meanwhile, scientists led by DCO Executive Director Robert Hazen established an entirely new mineral classification system.

Through experiment and observation, DCO scientists discovered new forms of carbon deep in Earth's mantle, shedding new light on the carbon "storage capacity" of the deep mantle, and on the role of subduction in recycling surface carbon back to Earth's interior.

Studies also cast new light on the record of major changes in our planet's history such as the rise of oxygen and the waxing and waning of supercontinents.

Read more at Science Daily

Oct 23, 2019

New study underpins the idea of a sudden impact killing off the dinosaurs and much other life

Fossil remains of tiny calcareous algae not only provide information about the end of the dinosaurs, but also show how the oceans recovered after the fatal asteroid impact. Experts agree that a collision with an asteroid caused a mass extinction on our planet, but there were hypotheses that ecosystems were already under pressure from increasing volcanism. "Our data speak against a gradual deterioration in environmental conditions 66 million years ago," says Michael Henehan of the GFZ German Research Centre for Geosciences. Together with colleagues from Yale University, he published a study in the scientific journal "Proceedings of the National Academy of Sciences" (PNAS) that describes ocean acidification during this period.

He investigated isotopes of the element boron in the calcareous shells of plankton (foraminifera). According to the findings, there was a sudden impact that led to massive ocean acidification. It took millions of years for the oceans to recover from acidification. "Before the impact event, we could not detect any increasing acidification of the oceans," says Henehan.

The impact of a celestial body left traces: the "Chicxulub crater" in the Gulf of Mexico and tiny amounts of iridium in sediments. Up to 75 percent of all animal species went extinct at the time. The impact marks the boundary of two geological eras -- the Cretaceous and the Palaeogene (formerly known as the Cretaceous-Tertiary boundary).

Henehan and his team at Yale University reconstructed the environmental conditions in the oceans using fossils from deep-sea drill cores and from rocks formed at that time. According to this, after the impact, the oceans became so acidic that organisms that made their shells from calcium carbonate could not survive. Because of this, as life forms in the upper layers of the oceans became extinct, carbon uptake by photosynthesis in the oceans was reduced by half. This state lasted several tens of thousands of years before calcareous algae spread again. However, it took several million years until the fauna and flora had recovered and the carbon cycle had reached a new equilibrium.

The researchers found decisive data for this during an excursion to the Netherlands, where a particularly thick layer of rock from the Cretaceous-Palaeogene boundary is preserved in a cave. "In this cave, an especially thick layer of clay from the immediate aftermath of the impact accumulated, which is really quite rare," says Henehan. In most settings, sediment accumulates so slowly that a rapid event such as an asteroid impact is hard to resolve in the rock record. "Because so much sediment was laid down there at once, it meant we could extract enough fossils to analyse, and we were able to capture the transition," says Henehan.

Read more at Science Daily

Monstrous galaxy from dawn of the universe accidentally discovered

Astronomers accidentally discovered the footprints of a monster galaxy in the early universe that has never been seen before. Like a cosmic Yeti, these galaxies have generally been regarded by the scientific community as folklore, given the lack of evidence of their existence, but astronomers in the United States and Australia managed to snap a picture of the beast for the first time.

Published in the Astrophysical Journal, the discovery provides new insights into the first growing steps of some of the biggest galaxies in the universe.

University of Arizona astronomer Christina Williams, lead author of the study, noticed a faint light blob in new sensitive observations using the Atacama Large Millimeter Array, or ALMA, a collection of 66 radio telescopes high in the Chilean mountains. Strangely enough, the shimmering seemed to be coming out of nowhere, like a ghostly footstep in a vast dark wilderness.

"It was very mysterious because the light seemed not to be linked to any known galaxy at all," said Williams, a National Science Foundation postdoctoral fellow at the Steward Observatory. "When I saw this galaxy was invisible at any other wavelength, I got really excited because it meant that it was probably really far away and hidden by clouds of dust."

The researchers estimate that the signal came from so far away that it took 12.5 billion years to reach Earth, therefore giving us a view of the universe in its infancy. They think the observed emission is caused by the warm glow of dust particles heated by stars forming deep inside a young galaxy. The giant clouds of dust conceal the light of the stars themselves, rendering the galaxy completely invisible.

Study co-author Ivo Labbé, of the Swinburne University of Technology, Melbourne, Australia, said: "We figured out that the galaxy is actually a massive monster galaxy with as many stars as our Milky Way, but brimming with activity, forming new stars at 100 times the rate of our own galaxy."

The discovery may solve a long-standing question in astronomy, the authors said. Recent studies found that some of the biggest galaxies in the young universe grew up and came of age extremely quickly, a result that is not understood theoretically. Massive mature galaxies are seen when the universe was only a cosmic toddler at 10% of its current age. Even more puzzling is that these mature galaxies appear to come out of nowhere: astronomers never seem to catch them while they are forming.

Smaller galaxies have been seen in the early universe with the Hubble Space Telescope, but such creatures are not growing fast enough to solve the puzzle. Other monster galaxies have also been previously reported, but those sightings have been far too rare for a satisfying explanation.

"Our hidden monster galaxy has precisely the right ingredients to be that missing link," Williams explains, "because they are probably a lot more common."

An open question is exactly how many of them there are. The observations for the current study were made in a tiny part of the sky, less than 1/100th the disc of the full moon. Like the Yeti, finding footprints of the mythical creature in a tiny strip of wilderness would either be a sign of incredible luck or a sign that monsters are literally lurking everywhere.

Williams said researchers are eagerly awaiting the March 2021 scheduled launch of NASA's James Webb Space Telescope to investigate these objects in more detail.

"JWST will be able to look through the dust veil so we can learn how big these galaxies really are and how fast they are growing, to better understand why models fail in explaining them."

Read more at Science Daily

Bacterial lifestyle alters the evolution of antibiotic resistance

How bacteria live -- whether as independent cells or in a communal biofilm -- determines how they evolve antibiotic resistance, which could lead to more personalized approaches to antimicrobial therapy and infection control.

University of Pittsburgh School of Medicine researchers repeatedly exposed bacteria to the antibiotic ciprofloxacin to force rapid evolution. As expected, the bacteria developed resistance to the drug, but surprisingly, their lifestyle affected which specific adaptations emerged, according to a study published today in eLife.

"What we're simulating in the lab is happening in the wild, in the clinic, during the development of drug resistance," said senior author Vaughn Cooper, Ph.D., director of the Center for Evolutionary Biology and Medicine at Pitt. "Our results show that biofilm growth shapes the way drug resistance evolves." According to study lead author Alfonso Santos-Lopez, Ph.D., a postdoctoral researcher in Cooper's lab, this finding could uncover vulnerabilities that may prove useful when treating drug-resistant infections.

"Antibiotic resistance is one of our main problems in medicine," Santos-Lopez said. "We have to develop new treatments, and one idea is to take advantage of what the field calls 'collateral sensitivity.' When bacteria evolve resistance to one drug, it can expose a vulnerability to a different class of antibiotics that can effectively kill the bacteria."

Knowing these evolutionary push-and-pull relationships could take the guesswork out of prescribing antibiotics, Santos-Lopez said.

In this experiment, when the biofilm evolved resistance to ciprofloxacin, it became defenseless against cephalosporins. The free-floating bacteria did not develop this same chink in their armor, even though they became 128 times more resistant to ciprofloxacin than the biofilm-grown bacteria.

According to study coauthor Michelle Scribner, a doctoral student in Cooper's lab, these findings highlight the importance of studying bacteria as they naturally occur, in biofilms.

"Biofilms are a more clinically relevant lifestyle," Scribner said. "They're thought to be the primary mode of growth for bacteria living in the body. Most infections are caused by biofilms on surfaces."

From Science Daily

Looking inside the body with indirect light

Light provides all our visual information, but it reaches our eyes in different ways. Direct light comes unperturbed, coming straight from the source, whereas indirect light bounces off different surfaces, such as walls or ceilings, before entering our eyes. Extracting information from these two pathways has significant implications in diagnostic imaging and other applications. A multinational collaboration led by Nara Institute of Science and Technology Assistant Professor Hiroyuki Kubo, Arizona State University Assistant Professor Suren Jayasuriya, and Carnegie Mellon University Professor Srinivasa G. Narasimhan has successfully captured and analyzed indirect light with commercially available cameras to create images of blood vessels in living human beings in real time at extraordinary resolution.

While direct light provides information like depth and colour, allowing us to see a person's skin and other superficial features, indirect light, explains Kubo, can reveal otherwise unseen details just under the surface.

"Light on skin has strong subsurface scattering. By capturing indirect light, we can see details of invisible objects underneath the skin like blood vessels," he says.

The new technique exploits the epipolar geometry along which light travels, creating a synchronization delay between the light source and the camera. A new nonlinear algorithm developed by the researchers then demultiplexes the information to provide a sharper image.

"Our imaging system illuminates the scene with epipolar planes corresponding to projector rows. We vary two key parameters to capture the light transport: the offset between the projecting row and camera row in the rolling shutter, or synchronization delay, and the exposure of the camera row," explains Kubo.

The result was the imaging of blood vessels at a much finer resolution than standard medical instruments. It also extracted details in circumstances that have been challenging in medical imaging, such as darker-toned or hairy skin.

"We captured the basilic and cephalic veins in the inner forearm, which are about 1-5 mm deep," observes Kubo.

The hardware comes at a low cost and is portable. That, and the ability to reveal new details of blood vessels non-invasively, will improve the quality of care for a wider patient population, such as patients who cannot have agents injected into their blood vessels, like children or the elderly.

Overall, Kubo is optimistic that this technique adds to vascular image quality in just about any condition seen in clinics.

Read more at Science Daily

Achieving quantum supremacy

Researchers in UC Santa Barbara/Google scientist John Martinis' group have made good on their claim to quantum supremacy. Using 53 entangled quantum bits ("qubits"), their Sycamore computer has taken on -- and solved -- a problem considered intractable for classical computers.

"A computation that would take 10,000 years on a classical supercomputer took 200 seconds on our quantum computer," said Brooks Foxen, a graduate student researcher in the Martinis Group. "It is likely that the classical simulation time, currently estimated at 10,000 years, will be reduced by improved classical hardware and algorithms, but, since we are currently 1.5 trillion times faster, we feel comfortable laying claim to this achievement."

The feat is outlined in a paper in the journal Nature.

The milestone comes after roughly two decades of quantum computing research conducted by Martinis and his group, from the development of a single superconducting qubit to systems with architectures of 72 and, with Sycamore, 54 qubits (one didn't perform) that take advantage of both the awe-inspiring and bizarre properties of quantum mechanics.

"The algorithm was chosen to emphasize the strengths of the quantum computer by leveraging the natural dynamics of the device," said Ben Chiaro, another graduate student researcher in the Martinis Group. That is, the researchers wanted to test the computer's ability to hold and rapidly manipulate a vast amount of complex, unstructured data.

"We basically wanted to produce an entangled state involving all of our qubits as quickly as we can," Foxen said, "and so we settled on a sequence of operations that produced a complicated superposition state that, when measured, returns bitstring with a probability determined by the specific sequence of operations used to prepare that particular superposition. The exercise, which was to verify that the circuit's output correspond to the equence used to prepare the state, sampled the quantum circuit a million times in just a few minutes, exploring all possibilities -- before the system could lose its quantum coherence.

'A complex superposition state'

"We performed a fixed set of operations that entangles 53 qubits into a complex superposition state," Chiaro explained. "This superposition state encodes the probability distribution. For the quantum computer, preparing this superposition state is accomplished by applying a sequence of tens of control pulses to each qubit in a matter of microseconds. We can prepare and then sample from this distribution by measuring the qubits a million times in 200 seconds."

"For classical computers, it is much more difficult to compute the outcome of these operations because it requires computing the probability of being in any one of the 2^53 possible states, where the 53 comes from the number of qubits -- the exponential scaling is why people are interested in quantum computing to begin with," Foxen said. "This is done by matrix multiplication, which is expensive for classical computers as the matrices become large."

According to the new paper, the researchers used a method called cross-entropy benchmarking to compare the quantum circuit's output (a "bitstring") to its "corresponding ideal probability computed via simulation on a classical computer" to ascertain that the quantum computer was working correctly.
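
The article does not spell out the benchmark's exact form; one commonly described version, the linear cross-entropy fidelity, is sketched below under the assumption that the ideal probability of each sampled bitstring has been computed by classical simulation (the sample values here are invented purely for illustration):

```python
import numpy as np

def linear_xeb_fidelity(ideal_probs, n_qubits):
    """Linear cross-entropy benchmarking fidelity: F = 2^n * <P(x_i)> - 1,
    where P(x_i) is the classically simulated probability of each measured
    bitstring x_i. F is near 1 for an ideal device and near 0 for a device
    that outputs uniformly random bitstrings."""
    return (2 ** n_qubits) * np.mean(ideal_probs) - 1.0

# Hypothetical usage with a handful of made-up probabilities of order 1/2^53.
samples = np.array([1.8e-16, 2.5e-16, 0.9e-16, 1.2e-16])
print(linear_xeb_fidelity(samples, n_qubits=53))
```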

"We made a lot of design choices in the development of our processor that are really advantageous," said Chiaro. Among these advantages, he said, are the ability to experimentally tune the parameters of the individual qubits as well as their interactions.

While the experiment was chosen as a proof-of-concept for the computer, the research has resulted in a very real and valuable tool: a certified random number generator. Useful in a variety of fields, random numbers can ensure that encrypted keys can't be guessed, or that a sample from a larger population is truly representative, leading to optimal solutions for complex problems and more robust machine learning applications. The speed with which the quantum circuit can produce its randomized bit string is so great that there is no time to analyze and "cheat" the system.

"Quantum mechanical states do things that go beyond our day-to-day experience and so have the potential to provide capabilities and application that would otherwise be unattainable," commented Joe Incandela, UC Santa Barbara's vice chancellor for research. "The team has demonstrated the ability to reliably create and repeatedly sample complicated quantum states involving 53 entangled elements to carry out an exercise that would take millennia to do with a classical supercomputer. This is a major accomplishment. We are at the threshold of a new era of knowledge acquisition."

Looking ahead


With an achievement like "quantum supremacy," it's tempting to think that the UC Santa Barbara/Google researchers will plant their flag and rest easy. But for Foxen, Chiaro, Martinis and the rest of the UCSB/Google AI Quantum group, this is just the beginning.

"It's kind of a continuous improvement mindset," Foxen said. "There are always projects in the works." In the near term, further improvements to these "noisy" qubits may enable the simulation of interesting phenomena in quantum mechanics, such as thermalization, or the vast amount of possibility in the realms of materials and chemistry.

In the long term, however, the scientists are always looking to improve coherence times or, at the other end, to detect and fix errors, which would take many additional qubits per qubit being checked. These efforts have been running in parallel with the design and build of the quantum computer itself, and ensure the researchers have a lot of work ahead before hitting their next milestone.

Read more at Science Daily

Oct 22, 2019

Training parents is key to helping children eat a variety of foods

Families dealing with the stress and frustration of their child's overly picky eating habits may have a new addition to their parental toolbox. Pediatric researchers recently described a brief group cognitive-behavioral therapy program that provides parents with specific techniques to improve their child's mealtime behaviors and expand the range of foods their children will eat. Although the study size was small, the parents involved reported "life-changing" improvements.

Researchers from Children's Hospital of Philadelphia (CHOP) and The University of Pennsylvania published this study in the August 2019 issue of Cognitive and Behavioral Practice.

"Our research shows the acceptability, feasibility and positive outcomes of the Picky Eaters Clinic, a seven-session, parent-only, group-based intervention intended to train parents of children with Avoidant/Restrictive Food Intake Disorder (ARFID)," said study leader Katherine Dahlsgaard, PhD, ABPP, Clinical Director of the Anxiety Behaviors Clinic at CHOP. "In the Clinic, parents are taught to act as behavioral therapists who promote long-term improvements in food acceptance and positive mealtime behaviors."

This study included 21 patients and their parents, who were referred to the Picky Eaters Clinic at CHOP. Families, including the child, attended a diagnostic evaluation and were assessed for treatment eligibility. The children ranged in age from 4 to 12 years and were diagnosed with ARFID, due to excessive picky eating and associated functional impairment.

The families reported that picky eating caused considerable stress. Parental stress resulted from: diet containing less than 20 foods; refusal of entire food groups (typically vegetables, meats or fruits); the need to make a separate meal; difficulty traveling, socializing or going to restaurants; high child distress/refusal to eat when presented with a new or non-preferred food; and lack of child's motivation to change or unwillingness to receive treatment.

The seven clinic sessions occurred over a 6-month period. The first four sessions were held one week apart; the fifth and sixth were spaced 3 to 4 weeks apart, allowing families time to practice the assigned behavior strategies at home. Children were challenged at home to chew and swallow a portion of a new or non-preferred food, and a successful challenge resulted in a post-meal reward. The majority chose screen time.

The seventh "reunion" session was held 3 months later, to allow parents to catch up and share gains. The researchers administered post-treatment feeding measures and a parent satisfaction survey at the last sessions.

Read more at Science Daily

The secret of classic Belgian beers? Medieval super yeasts!

An international team of scientists, led by Prof. Kevin Verstrepen (VIB-KU-Leuven) and Prof. Steven Maere (VIB-UGent), has discovered that some of the most renowned classic Belgian beers, including Gueuze and Trappist ales, are fermented with a rare and unusual form of hybrid yeasts. These yeasts combine DNA of the traditional ale yeast, Saccharomyces cerevisiae, with that of more stress-resistant feral yeasts such as Saccharomyces kudriavzevii.

Mixed origins

"These yeasts are hybrids between two completely different species" says Dr. Jan Steensels (VIB -- KU Leuven Center for Microbiology), who coordinated the lab work of this study. "Think of lions and tigers making a super-baby."

Such interspecific hybridizations are rare and seem to be favored by the domestication process. In this case, the new hybrid yeasts combined important characteristics of both parental species, with the fermentation capacity of normal beer yeasts and the stress tolerance and capacity to form special aromas of more feral ancient yeasts like S. kudriavzevii that haphazardly made their way into the brewery.

The team, from the VIB-KU Leuven Center for Microbiology and the University of Munich, supported by industrial partners, has spent five years characterizing the different yeasts used in today's production of beer, wine, bread and biofuels. The genetic analysis of these yeasts was quite a piece of work, because none of the existing pipelines for DNA sequencing can deal with such mixed origins.

For this, the team could, surprisingly, count on the plant expertise of Professor Steven Maere, a bioinformatics expert from the VIB-UGent Center for Plant Systems Biology. Maere explains: "Plants have some of the most complex genomes of all living organisms. It is fascinating that complex interspecific hybrids with doubled genomes feature prominently both among domesticated yeasts and domesticated plants."

A surprise in DNA

"It was a bit of a surprise for us" says Dr. Brigida Gallone (VIB-KU Leuven Center for Microbiology), the lead author on the paper that appeared today in Nature Ecology and Evolution. "In 2016, we reported that most industrial yeasts belong to, or arose from the species Saccharomyces cerevisiae, the traditional baker's and brewer's yeast. We found that these industrial yeasts are quite different from their wild progenitors, with different subfamilies having adapted to beer, wine and bakery environments. We also noticed that some of the yeasts that were isolated from ancient Belgian beer styles, like Gueuze and Trappist beers, are even more unusual and contained DNA of two different yeast species."

"It really seems that these unique natural yeasts allowed the development of some of the most renowned beers that Belgium is so famous for," says Dr. Philippe Malcorps, Senior Scientist at the Global Innovation and Technology Center of AB InBev, the world's largest brewer. The team of Malcorps helped with the isolation of yeasts from some of their spontaneous fermentation beer cellars. Those natural super-yeasts are living witnesses of brewing from pre-industrial ages, adapted to harsh conditions of fermentation of the strong Trappist beers, or survival in the long lagering typical for Gueuze beers.

"One could say that the unique habitat in wooden fermentation barrels created by adventurous Medieval Belgian brewers allowed these new species to thrive until today," says Prof. Kevin Verstrepen (VIB-KU Leuven Center for Microbiology).

A history of yeasts

Apart from the special Belgian yeasts, the team also collected a large number of hybrids of S. cerevisiae with S. eubayanus or S. uvarum, strongly adapted to cold fermentation. While it was already known that lager yeasts were hybrids, the complete DNA analysis of a large number of these yeasts showed how these specific hybrids originated in medieval Germany and later spread across different European breweries as pilsner beers grew more popular.

"It is no coincidence that the origin of today's beer yeasts lies in Belgium and Germany, arguably the two countries that are most associated with the art of brewing," says Prof. Mathias Hutzler (TU Munich).

Read more at Science Daily

Unique brain cells linked to OCD and anxiety

According to the National Institute of Mental Health, 1 in 3 people experience debilitating anxiety -- the kind that prevents someone from going about their normal life. Women are also at greater risk of suffering from anxiety. Yet the roots of anxiety and other anxiety-related diseases, such as Obsessive Compulsive Disorder (OCD), are still unclear. In a new study, University of Utah scientists discovered a new lineage of specialized brain cells, called Hoxb8-lineage microglia, and established a link between the lineage and OCD and anxiety in mice.

Mice with disabled Hoxb8-lineage microglia exhibited excessive overgrooming behavior. The symptom resembles behavior in humans with a type of OCD called trichotillomania, a disorder that causes people to obsessively pluck out their own hair. Their experiments proved that Hoxb8-lineage microglia prevent mice from displaying OCD behaviors. Additionally, they found that female sex hormones caused more severe OCD behaviors and induced added anxiety in the mice.

"More women than men experience debilitating anxiety at some point in their lives. Scientists want help these people to get their lives back. In this study were able to link anxiety to a dysfunction in a type of microglia, and to female sex hormones," said lead author Dimitri Traenkner, research assistant professor in the School of Biological Sciences at the University of Utah. "It opens up a new avenue for thinking about anxiety. Since we have this model, we have a way to test new drugs to help these mice and hopefully at some point, this will help people."

The study published today in Cell Reports.

Discovery of a new microglia lineage

Microglia are crucial during brain development in the womb -- they ensure that brain structures and neural circuitry all wire together correctly. Traenkner and colleagues showed that microglia belong to at least two distinct sub-lineages of cells. One lineage, called Hoxb8-lineage microglia, makes up about 30% of all microglia in the brain, but until now, no one knew whether it had any unique function.

Mario Capecchi, Nobel laureate and senior author of the study, had long suspected that Hoxb8-microglia were special. In previous research, he disabled Hoxb8-lineage microglia expecting some impact on development. But the mice seemed fine.

"We didn't really know what to make of the fact that mice without Hoxb8 appear so normal, until we noticed that they groom significantly more and longer than what would be considered healthy. And that's how the whole thing started," said Capecchi, who is also a distinguished professor of human genetics at the University of Utah Health.

This is the first study to describe microglia's role in OCD and anxiety behaviors in mice.

"Researchers have long suspected that microglia have a role in anxiety and neuropsychological disorders in humans because this cell type can release substances that may harm neurons. So, we were surprised to find that microglia actually protect from anxiety, they don't cause it," added Traenkner.

Female sex hormones drive symptom severity

The mice showed sex-linked severity in their symptoms; female mice's OCD symptoms were consistently more dramatic than in the males. Females also exhibited an additional anxiety symptom that was lacking in male mice -- the researchers designed and validated a new test showing that the pupils of female mice dilated dramatically, triggered by a fight-or-flight stress response.

To test whether sex hormones drove OCD and anxiety symptoms, Traenkner and colleagues manipulated estrogen and progesterone levels in the mice. They found that at male hormone levels, female mice's OCD and anxiety behaviors resembled the male response, while at female hormone levels, male mice's OCD behaviors looked more like the females' severe symptoms and the males showed signs of anxiety.

"Our findings strongly argue for a mechanistic link between biological sex and genetic family history in the risk to develop an anxiety disorders," said Traenkner.

What does this mean for humans?

For many, anxiety drastically impacts their work, friends, family and lifestyle. Scientists and health care professionals are always looking for ways to help people get their lives back. This study of mouse models links anxiety to dysfunctional microglia. Down the line, the findings could spark new microglia-focused studies in patients with anxiety and, eventually, help to better treat this debilitating disorder.

Read more at Science Daily

New drug-delivery technology promises efficient, targeted cancer treatment

A precise and non-toxic treatment that targets lung cancer cells at the nanoscale is able to effectively kill the cells even at a low dose.

Researchers from Washington State University and the Department of Energy's Pacific Northwest National Laboratory (PNNL) used tiny tubes made from organic molecules called peptoids to deliver cancer-killing drugs in a targeted manner.

The research, led by Yuehe Lin, professor in WSU's School of Mechanical and Materials Engineering, and Chun-Long Chen, a senior research scientist at PNNL and a joint faculty fellow at the University of Washington, was published as the cover story in the journal Small.

The biologically inspired nanotubes, which are about a hundred thousand times thinner than a human hair, were rolled up from membrane-like nanosheets. The drug molecules, fluorescent dyes and cancer-targeting molecules were precisely placed within the nanotubes, enabling the researchers to track the efficiency of drug delivery into the cancer cells.

The new technology allows the two drugs -- one for chemotherapy and the other for a less-invasive photodynamic therapy treatment -- to be delivered directly to the cancer cells. Photodynamic therapy uses a chemical that, when exposed to light, releases reactive oxygen species (ROS) that kill cancer cells. The researchers' dual-drug approach enabled the use of a lower dose of the cancer drugs than using a single drug, leading to effective killing of cancer cells with low toxicity.

"By precisely engineering these nanotubes with fluorescent dyes and cancer targeting molecules, scientists can clearly locate tumor cells and track how the drug regimen is performing," said Lin. "We can also track how nanotubes enter and deliver the drugs inside the cancer cell."

The team tested the nanotubes on lung cancer cells and found that they delivered the chemotherapy drug doxorubicin directly into the fast-dividing cancer cells, resulting in highly efficient cancer killing while using less of the chemotherapy drug.

"This is a promising approach for precision targeting with little damage to healthy surrounding cells," said Lin.

While other nanomaterials, such as carbon nanotubes, have been used to deliver and track cancer-killing drugs, researchers have found that they are toxic to the body. Furthermore, they didn't do well at precisely recognizing molecules.

"By using these peptoids, we were able to develop highly programmable nanotubes and a biocompatible delivery mechanism," said Chen. "We also harnessed the high stability of peptoid and its well-controlled packing to develop nanotubes that are highly stable."

Read more at Science Daily

Oct 21, 2019

DNA-reeling bacteria yield new insight on how superbugs acquire drug-resistance

A new study from Indiana University has revealed a previously unknown role a protein plays in helping bacteria reel in DNA in their environment -- like a fisherman pulling up a catch from the ocean.

The discovery was made possible by a new imaging method invented at IU that let scientists see for the first time how bacteria use their long and mobile appendages -- called pili -- to bind to, or "harpoon," DNA in the environment. The new study, reported Oct. 18 in the journal PLOS Genetics, focuses on how they reel their catch back in.

By revealing the mechanisms involved in this process, the study's authors said the results may help hasten work on new ways to stop bacterial infection.

"The issue of antibiotic resistance is very relevant to this work since the ability of pili to bind to, and 'reel in,' DNA is one of the major ways that bacteria evolve to thwart existing drugs," said Ankur Dalia, an assistant professor in the IU Bloomington College of Arts and Sciences' Department of Biology, who is senior author on the study. "An improved understanding of this 'reeling' activity can help inform strategies to stop it."

The act of gobbling up and incorporating genetic material from the environment -- known as natural transformation -- is an evolutionary process by which bacteria incorporate specific traits from other microorganisms, including genes that convey antibiotic resistance.

The need for new methods to stop bacterial infection is growing since overuse of existing antibiotics, which speeds how quickly infectious organisms evolve to outsmart these drugs, is causing the world to quickly run out of effective treatments. By 2050, it's estimated that 10 million people could die each year from antimicrobial resistance.

Although they may look like tiny arms under a microscope, Dalia said, pili are actually more akin to an erector set that is quickly put together and torn down over and over again. Each "piece" in the structure is a protein sub-unit called the major pilin that assembles into a filament called the pilus fiber.

"There are two main motors that had previously been implicated in this polymerization and depolymerization process," added Jennifer Chlebek, a Ph.D. student in Dalia's lab, who led the study. "In this study, we show that there is a third motor involved in the depolymerization process, and we start to unravel how it works."

The two previously characterized "motors" that control the pili's activity are the proteins PilB, which constructs the pili, and PilT, which deconstructs it. These motors run by utilizing ATP, a source of cellular energy. In this study, IU researchers showed that stopping this process, which switches off the power to PilT, does not prevent the retraction of the pili, as previously thought.

Instead, they found that a third motor protein, called PilU, can power pilus retraction even if PilT is inactive, although this retraction occurs about five times more slowly. The researchers also found that switching off power to both retraction proteins slows retraction to a painstaking rate, about 50 times slower than normal. An unaltered pilus retracts at a rate of one-fifth of a micron per second.
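
Putting the reported numbers together (a simple illustration of the implied rates, not additional data from the study):

```python
# Pilus retraction rates implied by the reported figures (micrometres per second).
baseline = 0.2              # unaltered pilus: one-fifth of a micron per second
pilU_only = baseline / 5    # PilT inactive, PilU powering retraction: ~0.04 um/s
both_off = baseline / 50    # both retraction motors switched off: ~0.004 um/s
print(baseline, pilU_only, both_off)
```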

Moreover, the study found that switching off PilU affects the strength of pilus retraction, which was measured by collaborators at Brooklyn College. The study also showed that PilU and PilT do not form a "hybrid" motor, but instead that these two independent motors somehow coordinate with one another to mediate pilus retraction.

"While the PilU protein had previously been implicated in pilus activity, its exact role has been difficult to determine because cells that lack this protein generally only have very subtle effects," Chlebek added. "Our observation that PilU can support pilus retraction in a mutant strain, when we threw a wrench in the PilT motor, was the key to unlocking how this protein aids in the depolymerization of pili."

The ability to precisely measure the pili's retraction rate -- and therefore precisely measure the impact of altering the proteins that affect this process -- was made possible by the ability to see pili under a microscope, which was not possible until the breakthrough imaging method invented at IU.

"The ability to fluorescently dye the pili was huge," Dalia said. "It allowed us to not only see the pili's activity but also measure it in ways which simply would not have been possible in the past."

Read more at Science Daily

Gimme six! Researchers discover aye-aye's extra finger

The world's weirdest little primate has gotten even weirder, thanks to the discovery of a tiny extra digit. A study led by researchers from North Carolina State University has found that aye-ayes possess small "pseudothumbs" -- complete with their own fingerprints -- that may help them grip objects and branches as they move through trees. This is the first accessory digit ever found in a primate.

Aye-ayes are unusual animals from the get-go: these extremely rare lemurs are known for their constantly growing incisors, large ears, and strange hands -- particularly for the slender, elongated middle fingers that they use for locating and spearing grubs inside trees.

"The aye-aye has the craziest hand of any primate," says Adam Hartstone-Rose, associate professor of biological sciences at NC State and lead author of a paper describing the work. "Their fingers have evolved to be extremely specialized -- so specialized, in fact, that they aren't much help when it comes to moving through trees. When you watch them move, it looks like a strange lemur walking on spiders."

Hartstone-Rose and NC State post-doctoral researcher Edwin Dickinson were studying the tendons that lead to the aye-aye's unusual hands when they noticed that one of the tendons branched off toward a small structure on the wrist. Using traditional dissection and digital imaging techniques on six aye-ayes, the researchers found that the structure in question is composed of both bone and cartilage, and has musculature that allows it to move in three directions -- much the same way that human thumbs move.

"Using these digital techniques allows us to visualize these structures in three dimensions, and to understand the organization of the muscles which provide movement to the digit," says Dickinson, who built the digital model of the anatomy and is co-first author of the paper.

"The pseudothumb is definitely more than just a nub," Hartstone-Rose says. "It has both a bone and cartilaginous extension and three distinct muscles that move it. The pseudothumb can wriggle in space and exert an amount of force equivalent to almost half the aye-aye's body weight. So it would be quite useful for gripping."

The team examined aye-aye specimens from both sexes, ranging in age from juvenile to adult, and found the same structure in both the left and right hands of each one.

According to Hartstone-Rose and Dickinson, the aye-aye may have developed the pseudothumb to compensate for its other, overspecialized fingers.

"Other species, like the panda bear, have developed the same extra digit to aid in gripping because the standard bear paw is too generalized to allow the dexterity necessary for grasping," Hartstone-Rose says. "And moles and some extinct swimming reptiles have added extra digits to widen the hand for more efficient digging or swimming. In this case, the aye-aye's hand is so specialized for foraging an extra digit for mobility became necessary.

Read more at Science Daily

'Artificial leaf' successfully produces clean gas

A widely-used gas that is currently produced from fossil fuels can instead be made by an 'artificial leaf' that uses only sunlight, carbon dioxide and water, and which could eventually be used to develop a sustainable liquid fuel alternative to petrol.

The carbon-neutral device sets a new benchmark in the field of solar fuels, after researchers at the University of Cambridge demonstrated that it can directly produce the gas -- called syngas -- in a sustainable and simple way.

Rather than running on fossil fuels, the artificial leaf is powered by sunlight, although it still works efficiently on cloudy and overcast days. And unlike the current industrial processes for producing syngas, the leaf does not release any additional carbon dioxide into the atmosphere. The results are reported in the journal Nature Materials.

Syngas is a mixture of hydrogen and carbon monoxide, and is used to produce a range of commodities, such as fuels, pharmaceuticals, plastics and fertilisers.

"You may not have heard of syngas itself but every day, you consume products that were created using it. Being able to produce it sustainably would be a critical step in closing the global carbon cycle and establishing a sustainable chemical and fuel industry," said senior author Professor Erwin Reisner from Cambridge's Department of Chemistry, who has spent seven years working towards this goal.

The device Reisner and his colleagues produced is inspired by photosynthesis -- the natural process by which plants use the energy from sunlight to turn carbon dioxide into food.

On the artificial leaf, two light absorbers, similar to the molecules in plants that harvest sunlight, are combined with a catalyst made from the naturally abundant element cobalt.

When the device is immersed in water, one light absorber uses the catalyst to produce oxygen. The other carries out the chemical reaction that reduces carbon dioxide and water into carbon monoxide and hydrogen, forming the syngas mixture.
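
In simplified textbook form (these standard half-reactions are not quoted from the paper itself), the chemistry the two absorbers drive can be written as:

```latex
\begin{align*}
\text{Oxidation (O$_2$ evolution):} &\quad 2\,\mathrm{H_2O} \rightarrow \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^- \\
\text{CO$_2$ reduction:} &\quad \mathrm{CO_2} + 2\,\mathrm{H^+} + 2\,e^- \rightarrow \mathrm{CO} + \mathrm{H_2O} \\
\text{Proton reduction:} &\quad 2\,\mathrm{H^+} + 2\,e^- \rightarrow \mathrm{H_2}
\end{align*}
```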

As an added bonus, the researchers discovered that their light absorbers work even under the low levels of sunlight on a rainy or overcast day.

"This means you are not limited to using this technology just in warm countries, or only operating the process during the summer months," said PhD student Virgil Andrei, first author of the paper. "You could use it from dawn until dusk, anywhere in the world."

The research was carried out in the Christian Doppler Laboratory for Sustainable SynGas Chemistry in the University's Department of Chemistry. It was co-funded by the Austrian government and the Austrian petrochemical company OMV, which is looking for ways to make its business more sustainable.

"OMV has been an avid supporter of the Christian Doppler Laboratory for the past seven years. The team's fundamental research to produce syngas as the basis for liquid fuel in a carbon neutral way is ground-breaking," said Michael-Dieter Ulbrich, Senior Advisor at OMV.

Other 'artificial leaf' devices have also been developed, but these usually only produce hydrogen. The Cambridge researchers say the reason they have been able to make theirs produce syngas sustainably is thanks to the combination of materials and catalysts they used.

These include state-of-the-art perovskite light absorbers, which provide a high photovoltage and electrical current to power the chemical reaction by which carbon dioxide is reduced to carbon monoxide, in comparison to light absorbers made from silicon or dye-sensitised materials. The researchers also used cobalt as their molecular catalyst, instead of platinum or silver. Cobalt is not only lower-cost, but it is better at producing carbon monoxide than other catalysts.

The team is now looking at ways to use their technology to produce a sustainable liquid fuel alternative to petrol.

Syngas is already used as a building block in the production of liquid fuels. "What we'd like to do next, instead of first making syngas and then converting it into liquid fuel, is to make the liquid fuel in one step from carbon dioxide and water," said Reisner, who is also a Fellow of St John's College.

Although great advances are being made in generating electricity from renewable energy sources such as wind power and photovoltaics, Reisner says the development of synthetic petrol is vital, as electricity can currently only satisfy about 25% of our total global energy demand. "There is a major demand for liquid fuels to power heavy transport, shipping and aviation sustainably," he said.

Read more at Science Daily