Jul 18, 2020

29,000 years of Aboriginal history

The known timeline of the Aboriginal occupation of South Australia's Riverland region has been vastly extended by new research led by Flinders University in collaboration with the River Murray and Mallee Aboriginal Corporation (RMMAC).

Radiocarbon dating of shell middens -- remnants of meals eaten long ago -- captures a record of Aboriginal occupation that extends back around 29,000 years, confirming the location as one of the oldest sites along the 2,500 km river and the oldest known River Murray Indigenous site in South Australia.

In the first comprehensive survey of the region, one of the oldest Indigenous sites along Australia's longest river system has been discovered. The study, published in Australian Archaeology, used radiocarbon dating to analyse river mussel shells from a midden site overlooking the Pike River floodplain downstream of Renmark.

"These results include the first pre-Last Glacial Maximum ages returned on the River Murray in South Australia and extend the known Aboriginal occupation of the Riverland by approximately 22,000 years," says Flinders University archaeologist and PhD candidate Craig Westell.

More than 30 additional radiocarbon dates were collected in the region, spanning the period from 15,000 years ago to the recent past. Together, the results link Aboriginal people to an ever-changing river landscape and provide deeper insights into how they responded to its challenges.
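The quoted figures imply how far the new dates push back the record. As a rough back-of-the-envelope reading (the result is implied by, not stated in, the article):

```python
# Approximate figures from the article: the oldest new date is ~29,000
# years before present, and this extends the known occupation record by
# ~22,000 years, implying the previous oldest dates were ~7,000 years old.
new_oldest_age = 29_000  # years before present
extension = 22_000       # reported extension of the known record

previous_oldest = new_oldest_age - extension
print(previous_oldest)  # → 7000
```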

The period represented by the radiocarbon results brackets the Last Glacial Maximum (commonly known as the last Ice Age) when climatic conditions were colder and drier and when the arid zone extended over much of the Murray-Darling Basin. The river and lake systems of the basin were under stress during this time.

In the Riverland, dunes were advancing into the Murray floodplains, river flows were unpredictable, and salt was accumulating in the valley.

The ecological impacts witnessed during one of the worst droughts on record, the so-called Millennium Drought (from late 1996 to mid-2010), provide an idea of the challenges Aboriginal people may have faced along the river during the Last Glacial Maximum and other periods of climate stress, the researchers conclude.

"These studies show how our ancestors have lived over many thousands of years in the Riverland region and how they managed to survive during times of hardship and plenty," says RMMAC spokesperson Fiona Giles.

"This new research, published in Australian Archaeology, fills in a significant geographic gap in our understanding of the Aboriginal occupation chronologies for the Murray-Darling Basin," adds co-author Associate Professor Amy Roberts.

Read more at Science Daily

New insight into the origin of water on Earth

Scientists have found that interstellar organic matter could produce an abundant supply of water when heated, suggesting that organic matter could be the source of terrestrial water.

A number of mysteries remain on our planet, including the elusive origin of its water. Earlier studies suggested that terrestrial water was delivered by icy comets or by meteorites containing hydrous silicates that came from outside the "snow line" -- the boundary beyond which ice can condense due to the low temperatures. More recent studies, however, have provided observations opposing the cometary-origin theory, while still failing to suggest plausible alternatives for the source of terrestrial water. "Until now, much less attention has been paid to organic matter compared to ices and silicates, even though there is an abundance of it inside the snow line," says planetary scientist Akira Kouchi at Hokkaido University.

In a recent study published in Scientific Reports, a group of scientists led by Akira Kouchi demonstrates that heating interstellar organic matter to high temperatures could yield abundant water and oil. This suggests that water could be produced inside the snow line, without any contribution from comets or meteorites delivered from outside it.

As a first step, the researchers made an analog of the organic matter found in interstellar molecular clouds using chemical reagents. To make the analog, they referred to analytical data on interstellar organics produced by UV irradiation of a mixture containing H2O, CO, and NH3, which mimicked the natural synthetic process. Then, they gradually heated the organic matter analog from 24 to 400°C under pressurized conditions in a diamond anvil cell. The sample was uniform until 100°C, but separated into two phases at 200°C. At approximately 350°C, the formation of water droplets became evident, and the sizes of the droplets increased as the temperature rose. At 400°C, in addition to the water droplets, black oil was produced.

The group conducted similar experiments with larger amounts of organic matter, which also yielded water and oil. Their analysis of absorption spectra revealed that the main component of the aqueous product was pure water. Additionally, chemical analysis of produced oil showed similar characteristics to the typical crude oil found beneath the earth.

"Our results show that the interstellar organic matter inside the snow line is a potential source of water on the earth. Moreover, the abiotic oil formation we observed suggests more extensive sources of petroleum for the ancient Earth than previously thought," says Akira Kouchi. "Future analyses of organic matter in samples from the asteroid Ryugu, which the Japan's asteroid explorer Hayabusa2 will bring back later this year, should advance our understanding of the origin of terrestrial water."

From Science Daily

Jul 17, 2020

Heat stress: The climate is putting European forests under sustained pressure

No year since weather records began was as hot and dry as 2018. A first comprehensive analysis of the consequences of this drought and heat event shows that central European forests sustained long-term damage. Even tree species considered drought-resistant, such as beech, pine and silver fir, suffered. The international study was directed by the University of Basel, which is conducting a forest experiment unique in Europe.

Until now, 2003 had been the driest and hottest year since regular weather records began. That record has now been broken. A comparison of climate data from Germany, Austria and Switzerland shows that 2018 was significantly warmer. The average temperature during the vegetation period was 1.2°C above the 2003 value and as much as 3.3°C above the average of the years 1961 to 1990.

Part of the analysis, which has now been published, includes measurements taken at the Swiss Canopy Crane II research site in Basel, where extensive physiological investigations were carried out in tree canopies. The goal of these investigations is to better understand how and when trees are affected by a lack of water in order to counter the consequences of climate change through targeted management measures.

When trees die of thirst

Trees lose a lot of water through their surfaces. If the soil also dries out, the tree cannot replace this water, which is shown by the negative suction tension in the wood's vascular tissue. It's true that trees can reduce their water consumption, but if the soil water reservoir is used up, it's ultimately only a matter of time until cell dehydration causes the death of a tree.

Physiological measurements at the Basel research site have shown the researchers that the negative suction tension and water shortage in trees occurred earlier than usual. In particular, this shortage was more severe throughout all of Germany, Austria and Switzerland than ever measured before. Over the course of the summer, severe drought-related stress symptoms therefore appeared in many tree species important to forestry. Leaves wilted, aged and were shed prematurely.

Spruce, pine and beech most heavily affected


The true extent of the summer heatwave became evident in 2019: many trees no longer formed new shoots -- they were partially or wholly dead. Others had survived the stress of the drought and heat of the previous year but were increasingly vulnerable to bark beetle infestation or fungus. Trees with partially dead canopies, which reduce their ability to recover from the damage, were particularly affected.

"Spruce was most heavily affected. But it was a surprise for us that beech, silver fir and pine were also damaged to this extent," says lead researcher Professor Ansgar Kahmen. Beech in particular had until then been classified as the "tree of the future," although its supposed drought resistance has been subject to contentious discussion since the 2003 heatwave.

Future scenarios to combat heat and drought


According to the latest projections, precipitation in Europe will decline by up to a fifth by 2085, and drought and heat events will become more frequent. Redesigning forests is therefore essential. "Mixed woodland is often advocated," explains plant ecologist Kahmen, "and it certainly has many ecological and economic advantages. But whether mixed woodland is also more drought-resistant has not yet been clearly proven. We still need to study which tree species are good in which combinations, including from a forestry perspective. That will take a long time."

Read more at Science Daily

Breakthrough in studying ancient DNA from Doggerland that separates the UK from Europe

Thousands of years ago, the UK was physically joined to the rest of Europe through an area known as Doggerland. However, a marine inundation during the mid-Holocene separated the British landmass from the rest of Europe; the area is now covered by the North Sea.

Scientists from the School of Life Sciences at the University of Warwick have studied sedimentary ancient DNA (sedaDNA) from sediment deposits in the southern North Sea, an area which has not previously been linked to a tsunami that occurred 8150 years ago.

The paper, 'Multi-Proxy Characterisation of the Storegga Tsunami and Its Impact on the Early Holocene Landscapes of the Southern North Sea', published in the journal Geosciences, was led by the University of Bradford and involved the Universities of Warwick, Wales Trinity Saint David, St Andrews, Cork, Aberystwyth and Tartu, as well as the Smithsonian and the Natural History Museum. Life scientists from the University of Warwick worked specifically on the sedimentary ancient DNA from Doggerland.

The University of Warwick scientists achieved a number of innovative breakthroughs in analysing the sedaDNA. One of these was the concept of biogenomic mass: for the first time, they were able to see how biomass changed with events. As evidence, the paper points to the large woody mass of trees carried by the tsunami, found in the DNA of the ancient sediment.

New ways of authenticating the sedaDNA were also developed, as current methods of authentication do not apply to sedaDNA that has been damaged while under the sea for thousands of years, because there is too little information for each individual species. The researchers therefore came up with a new approach, a metagenomic assessment methodology, whereby the characteristic damage found at the ends of ancient DNA molecules is analysed collectively across all species rather than one at a time.

Alongside this, a key part of analysing the sedaDNA is to determine whether it was deposited in situ or has moved over time. This led the researchers to develop statistical methods to establish which scenario was appropriate. Using stratigraphic integrity, they were able to determine that the sedaDNA in the sediment deposits had not moved substantially since deposition, by assessing the vertical movement of the biomolecules in the core column.

Identifying which organisms the ancient fragmented molecules of DNA came from is also challenging, because often there is nothing to compare them to directly. In a fourth innovation, the researchers refined algorithms to define the regions of "dark phylogenetic space" from which organisms must have originated, overcoming this issue.

Professor Robin Allaby from the School of Life Sciences at the University of Warwick comments: "This study represents an exciting milestone for sedimentary ancient DNA studies establishing a number of breakthrough methods to reconstruct an 8,150 year old environmental catastrophe in the lands that existed before the North Sea flooded them away into history."

Professor Vince Gaffney from the School of Archaeological and Forensic Sciences at the University of Bradford said: "Exploring Doggerland, the lost landscape underneath the North Sea, is one of the last great archaeological challenges in Europe. This work demonstrates that an interdisciplinary team of archaeologists and scientists can bring this landscape back to life and even throw new light on one of prehistory's great natural disasters, the Storegga Tsunami."

Read more at Science Daily

How galaxies die: New insights into the quenching of star formation

Astronomers studying galaxy evolution have long struggled to understand what causes star formation to shut down in massive galaxies. Although many theories have been proposed to explain this process, known as "quenching," there is still no consensus on a satisfactory model.

Now, an international team led by Sandra Faber, professor emerita of astronomy and astrophysics at UC Santa Cruz, has proposed a new model that successfully explains a wide range of observations about galaxy structure, supermassive black holes, and the quenching of star formation. The researchers presented their findings in a paper published July 1 in the Astrophysical Journal.

The model supports one of the leading ideas about quenching which attributes it to black hole "feedback," the energy released into a galaxy and its surroundings from a central supermassive black hole as matter falls into the black hole and feeds its growth. This energetic feedback heats, ejects, or otherwise disrupts the galaxy's gas supply, preventing the infall of gas from the galaxy's halo to feed star formation.

"The idea is that in star-forming galaxies, the central black hole is like a parasite that ultimately grows and kills the host," Faber explained. "That's been said before, but we haven't had clear rules to say when a black hole is big enough to shut down star formation in its host galaxy, and now we have quantitative rules that actually work to explain our observations."

The basic idea involves the relationship between the mass of the stars in a galaxy (stellar mass), how spread out those stars are (the galaxy's radius), and the mass of the central black hole. For star-forming galaxies with a given stellar mass, the density of stars in the center of the galaxy correlates inversely with the radius of the galaxy, so that galaxies with bigger radii have lower central stellar densities. Assuming that the mass of the central black hole scales with the central stellar density, star-forming galaxies with larger radii (at a given stellar mass) will have lower black-hole masses.

What that means, Faber explained, is that larger galaxies (those with larger radii for a given stellar mass) have to evolve further and build up a higher stellar mass before their central black holes can grow large enough to quench star formation. Thus, small-radius galaxies quench at lower masses than large-radius galaxies.

"That is the new insight, that if galaxies with large radii have smaller black holes at a given stellar mass, and if black hole feedback is important for quenching, then large-radius galaxies have to evolve further," she said. "If you put together all these assumptions, amazingly, you can reproduce a large number of observed trends in the structural properties of galaxies."

This explains, for example, why more massive quenched galaxies have higher central stellar densities, larger radii, and larger central black holes.

Based on this model, the researchers concluded that quenching begins when the total energy emitted from the black hole is approximately four times the gravitational binding energy of the gas in the galactic halo. The binding energy refers to the gravitational force that holds the gas within the halo of dark matter enveloping the galaxy. Quenching is complete when the total energy emitted from the black hole is twenty times the binding energy of the gas in the galactic halo.
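The thresholds above can be sketched numerically. This is a minimal illustration of the stated rule only, not the paper's calculation: the radiative efficiency, black-hole mass, and halo binding energy below are assumed placeholder values.

```python
# Quenching rule from the article: quenching begins when the total energy
# emitted by the black hole reaches ~4x the gravitational binding energy
# of the halo gas, and is complete at ~20x.
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

def black_hole_energy(m_bh_solar, efficiency=0.1):
    """Total radiated energy E = eps * M * c^2 (efficiency is assumed)."""
    return efficiency * m_bh_solar * M_SUN * C**2

def quenching_stage(e_bh, e_bind):
    """Classify a galaxy by the ratio of emitted to binding energy."""
    ratio = e_bh / e_bind
    if ratio < 4:
        return "star-forming"
    elif ratio < 20:
        return "quenching (green valley)"
    return "quenched"

# Hypothetical example: a 1e8 solar-mass black hole vs. a halo whose gas
# binding energy is 1e54 J (both placeholder figures).
e_bh = black_hole_energy(1e8)
print(quenching_stage(e_bh, 1e54))  # → star-forming (ratio ≈ 1.8, below 4)
```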

Faber emphasized that the model does not yet explain in detail the physical mechanisms involved in the quenching of star formation. "The key physical processes that this simple theory evokes are not yet understood," she said. "The virtue of this, though, is that having simple rules for each step in the process challenges theorists to come up with physical mechanisms that explain each step."

Astronomers are accustomed to thinking in terms of diagrams that plot the relations between different properties of galaxies and show how they change over time. These diagrams reveal the dramatic differences in structure between star-forming and quenched galaxies and the sharp boundaries between them. Because star formation emits a lot of light at the blue end of the color spectrum, astronomers refer to "blue" star-forming galaxies, "red" quiescent galaxies, and the "green valley" as the transition between them. Which stage a galaxy is in is revealed by its star formation rate.

One of the study's conclusions is that the growth rate of black holes must change as galaxies evolve from one stage to the next. The observational evidence suggests that most of the black hole growth occurs in the green valley when galaxies are beginning to quench.

"The black hole seems to be unleashed just as star formation slows down," Faber said. "This was a revelation, because it explains why black hole masses in star-forming galaxies follow one scaling law, while black holes in quenched galaxies follow another scaling law. That makes sense if black hole mass grows rapidly while in the green valley."

Faber and her collaborators have been discussing these issues for many years. Since 2010, Faber has co-led a major Hubble Space Telescope galaxy survey program (CANDELS, the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey), which produced the data used in this study. In analyzing the CANDELS data, she has worked closely with a team led by Joel Primack, UCSC professor emeritus of physics, which developed the Bolshoi cosmological simulation of the evolution of the dark matter halos in which galaxies form. These halos provide the scaffolding on which the theory builds the early star-forming phase of galaxy evolution before quenching.

The central ideas in the paper emerged from analyses of CANDELS data and first struck Faber about four years ago. "It suddenly leaped out at me, and I realized if we put all these things together -- if galaxies had a simple trajectory in radius versus mass, and if black hole energy needs to overcome halo binding energy -- it can explain all these slanted boundaries in the structural diagrams of galaxies," she said.

At the time, Faber was making frequent trips to China, where she has been involved in research collaborations and other activities. She was a visiting professor at Shanghai Normal University, where she met first author Zhu Chen. Chen came to UC Santa Cruz in 2017 as a visiting researcher and began working with Faber to develop these ideas about galaxy quenching.

"She is mathematically very good, better than me, and she did all of the calculations for this paper," Faber said.

Faber also credited her longtime collaborator David Koo, UCSC professor emeritus of astronomy and astrophysics, for first focusing attention on the central densities of galaxies as a key to the growth of central black holes.

Among the puzzles explained by this new model is a striking difference between our Milky Way galaxy and its very similar neighbor Andromeda. "The Milky Way and Andromeda have almost the same stellar mass, but Andromeda's black hole is almost 50 times bigger than the Milky Way's," Faber said. "The idea that black holes grow a lot in the green valley goes a long way toward explaining this mystery. The Milky Way is just entering the green valley and its black hole is still small, whereas Andromeda is just exiting so its black hole has grown much bigger, and it is also more quenched than the Milky Way."

Read more at Science Daily

Enhanced water repellent surfaces discovered in nature

Through the investigation of insect surfaces, Penn State researchers have detailed a previously unidentified nanostructure that can be used to engineer stronger, more resilient water repellent coatings.

The results of this research were published today (July 17) in Science Advances.

With an enhanced ability to repel droplets, this design could be applied to personal protective equipment (PPE) to better resist virus-laden droplets, such as those that transmit COVID-19, among other applications.

"For the past few decades, conventionally designed water repellent surfaces have usually been based on plants, like lotus leaves," said Lin Wang, a doctoral student in the Department of Materials Science and Engineering at Penn State and the lead author of the paper.

Classical engineering theories have used this approach to create superhydrophobic, or water repellent, surfaces. Traditionally, these are manufactured with low solid fraction textures, which maintain an extremely thin layer of air above a low density of microscopic, hair-like nanostructures, an arrangement the researchers liken to an air hockey table.

"The reasoning is if the droplet or object is floating on top of that air, it won't become stuck to the surface," said Tak-Sing Wong, the Wormley Early Career Professor of Engineering, associate professor of mechanical and biomedical engineering and Wang's adviser.

Because this approach works effectively, human-made coatings tend to mimic the low density of these nanostructures.

However, this paper details an entirely different approach. When examining surfaces like the eye of a mosquito, the body of a springtail or the wing of a cicada under high-resolution electron microscopes, Wang found that the nanoscopic hairs on those surfaces are more densely packed, referred to in engineering as high solid fraction textures. Further exploration suggested that this significant departure from plants' structure may confer additional water-repelling benefits.

"Imagine if you had a high density of these nanostructures on a surface," Wang said. "It could be possible to maintain the stability of the air layer from higher impact forces."

This could also mean the more densely packed structures may be able to repel liquid that is moving at a higher speed, such as raindrops.

While the design concept is new to humans, the researchers theorize this nanostructure boosts the insect's resiliency in its natural environment.

"For these insect surfaces, repelling water droplets is a matter of life and death. The impact force of raindrops is enough to carry them to the ground and kill them," Wang said. "So, it is really important for them to stay dry, and we figured out how."

With this knowledge gleaned from nature, the researchers hope to apply this design principle to create next generation coatings. By developing a water repellent surface that can withstand faster moving and higher impact droplets, the applications are abundant.

From small, flying robotic vehicles, such as the drones that Amazon hopes to deliver packages with, to commercial airliners, a coating that can emulate these insect surfaces could provide increased efficiency and safety.

However, in light of the COVID-19 pandemic, researchers have since realized this knowledge could have an additional impact on human health.

"We hope, when developed, this coating could be used for PPE. For example, if someone sneezes around a face shield, those are high velocity droplets. With a traditional coating, those particles could stick to the surface of the PPE," Wong said. "However, if the design principles detailed in this paper were adopted successfully, it would have the ability to repel those droplets much better and potentially keep the surface germ-free."

As seen in this work, the Wong Laboratory for Nature Inspired Engineering draws insights from biological phenomena to make humanity's innovations better and more effective.

"While we didn't imagine that application at the beginning of this project, COVID-19 made us think about how we can use this design principle to benefit more people," Wong said. "It's up to us as engineers to take these discoveries and apply them in a meaningful way."

The next step for this work will be developing a large scale, cost effective method that can manufacture a coating to mimic these properties.

"In the past, we didn't have an effective surface that could repel high speed water droplets," Wong said. "But the insects told us how. There are so many examples like this in nature; we just need to be learning from them."

Read more at Science Daily

Jul 16, 2020

In a first, astronomers watch a black hole's corona disappear, then reappear

It seems the universe has an odd sense of humor. While a crown-encrusted virus has run roughshod over the world, another entirely different corona about 100 million light years from Earth has mysteriously disappeared.

For the first time, astronomers at MIT and elsewhere have watched as a supermassive black hole's own corona, the ultrabright, billion-degree ring of high-energy particles that encircles a black hole's event horizon, was abruptly destroyed.

The cause of this dramatic transformation is unclear, though the researchers guess that the source of the calamity may have been a star caught in the black hole's gravitational pull. Like a pebble tossed into a gearbox, the star may have ricocheted through the black hole's disk of swirling material, causing everything in the vicinity, including the corona's high-energy particles, to suddenly plummet into the black hole.

The result, as the astronomers observed, was a precipitous and surprising drop in the black hole's brightness, by a factor of 10,000, in under a year.

"We expect that luminosity changes this big should vary on timescales of many thousands to millions of years," says Erin Kara, assistant professor of physics at MIT. "But in this object, we saw it change by 10,000 over a year, and it even changed by a factor of 100 in eight hours, which is just totally unheard of and really mind-boggling."

Following the corona's disappearance, astronomers continued to watch as the black hole began to slowly pull together material from its outer edges to reform its swirling accretion disk, which in turn began to spin up high-energy X-rays close to the black hole's event horizon. In this way, in just a few months, the black hole was able to generate a new corona, almost back to its original luminosity.

"This seems to be the first time we've ever seen a corona first of all disappear, but then also rebuild itself, and we're watching this in real-time," Kara says. "This will be really important to understanding how a black hole's corona is heated and powered in the first place."

Kara and her co-authors, including lead author Claudio Ricci of Universidad Diego Portales in Santiago, Chile, have published their findings today in Astrophysical Journal Letters. Co-authors from MIT include Ron Remillard and Dheeraj Pasham.

A nimble washing machine

In March 2018, an unexpected burst lit up the view of ASAS-SN, the All-Sky Automated Survey for Supernovae, which surveys the entire night sky for supernova activity. The survey recorded a flash from 1ES 1927+654, an active galactic nucleus, or AGN, a type of supermassive black hole with higher-than-normal brightness at the center of a galaxy. ASAS-SN observed the object's brightness jump to about 40 times its normal luminosity.

"This was an AGN that we sort of knew about, but it wasn't very special," Kara says. "Then they noticed that this run-of-the-mill AGN became suddenly bright, which got our attention, and we started pointing lots of other telescopes in lots of other wavelengths to look at it."

The team used multiple telescopes to observe the black hole in the X-ray, optical, and ultraviolet wave bands. Most of these telescopes were pointed at the black hole periodically, for example recording observations for an entire day every six months. The team also watched the black hole daily with NASA's NICER, a much smaller X-ray telescope installed aboard the International Space Station, with detectors developed and built by researchers at MIT.

"NICER is great because it's so nimble," Kara says. "It's this little washing machine bouncing around the ISS, and it can collect a ton of X-ray photons. Every day, NICER could take a quick little look at this AGN, then go off and do something else."

With frequent observations, the researchers were able to catch the black hole as it precipitously dropped in brightness, in virtually all the wave bands they measured, and especially in the high-energy X-ray band -- an observation that signaled that the black hole's corona had completely and suddenly vaporized.

"After ASAS-SN saw it go through this huge crazy outburst, we watched as the corona disappeared," Kara recalls. "It became undetectable, which we have never seen before."

A jolting flash

Physicists are unsure exactly what causes a corona to form, but they believe it has something to do with the configuration of magnetic field lines that run through a black hole's accretion disk. At the outer regions of a black hole's swirling disk of material, magnetic field lines are more or less in a straightforward configuration. Closer in, and especially near the event horizon, material circles with more energy, in a way that may cause magnetic field lines to twist and break, then reconnect. This tangle of magnetic energy could spin up particles swirling close to the black hole, to the level of high-energy X-rays, forming the crown-like corona that encircles the black hole.

Kara and her colleagues believe that if a wayward star was indeed the culprit in the corona's disappearance, it would first have been shredded apart by the black hole's gravitational pull, scattering stellar debris across the accretion disk. This may have caused the temporary flash in brightness that ASAS-SN captured. This "tidal disruption," as astronomers call such a jolting event, would have triggered much of the material in the disk to suddenly fall into the black hole. It also might have thrown the disk's magnetic field lines out of whack, so that the disk could no longer generate and support a high-energy corona.

This last point is a potentially important one for understanding how coronas first form. Depending on the mass of a black hole, there is a certain radius within which a star will most certainly be pulled in by a black hole's gravity.

"What that tells us is that, if all the action is happening within that tidal disruption radius, that means the magnetic field configuration that's supporting the corona must be within that radius," Kara says. "Which means that, for any normal corona, the magnetic fields within that radius are what's responsible for creating a corona."

The researchers calculated that if a star indeed was the cause of the black hole's missing corona, and if a corona were to form in a supermassive black hole of similar size, it would do so within a radius of about 4 light minutes -- a distance that roughly translates to about 75 million kilometers from the black hole's center.
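The unit conversion behind that radius can be checked directly. A straight conversion of 4 light-minutes gives roughly 72 million km, consistent with the article's "about 75 million" rounding:

```python
# Convert light-minutes to kilometres using the speed of light.
C_KM_S = 299_792.458  # speed of light, km/s

def light_minutes_to_km(minutes):
    """Distance light travels in the given number of minutes, in km."""
    return C_KM_S * 60 * minutes

print(round(light_minutes_to_km(4) / 1e6))  # → 72 (million km)
```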

"With the caveat that this event happened from a stellar tidal disruption, this would be some of the strictest constraints we have on where the corona must exist," Kara says.

Read more at Science Daily

Solar Orbiter's first images reveal 'campfires' on the Sun

The first images from Solar Orbiter, a new Sun-observing mission by ESA and NASA, have revealed omnipresent miniature solar flares, dubbed 'campfires', near the surface of our closest star.

According to the scientists behind the mission, seeing phenomena that were not observable in detail before hints at the enormous potential of Solar Orbiter, which has only just finished its early phase of technical verification known as commissioning.

"These are only the first images and we can already see interesting new phenomena," says Daniel Müller, ESA's Solar Orbiter Project Scientist. "We didn't really expect such great results right from the start. We can also see how our ten scientific instruments complement each other, providing a holistic picture of the Sun and the surrounding environment."

Solar Orbiter, launched on 10 February 2020, carries six remote-sensing instruments, or telescopes, that image the Sun and its surroundings, and four in situ instruments that monitor the environment around the spacecraft. By comparing the data from both sets of instruments, scientists will get insights into the generation of the solar wind, the stream of charged particles from the Sun that influences the entire Solar System.

The unique aspect of the Solar Orbiter mission is that no other spacecraft has been able to take images of the Sun's surface from a closer distance.

Closest images of the Sun reveal new phenomena

The campfires shown in the first image set were captured by the Extreme Ultraviolet Imager (EUI) from Solar Orbiter's first perihelion, the point in its elliptical orbit closest to the Sun. At that time, the spacecraft was only 77 million km away from the Sun, about half the distance between Earth and the star.
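The "half the distance" figure is easy to verify against the mean Earth-Sun distance (one astronomical unit); this small check is illustrative, not part of the mission data:

```python
# Express the 77-million-km perihelion as a fraction of the mean Earth-Sun distance.
AU_KM = 149_597_870.7  # one astronomical unit in km
perihelion_km = 77e6
print(round(perihelion_km / AU_KM, 2))  # 0.51, i.e. about half the Earth-Sun distance
```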

"The campfires are little relatives of the solar flares that we can observe from Earth, million or billion times smaller," says David Berghmans of the Royal Observatory of Belgium (ROB), Principal Investigator of the EUI instrument, which takes high-resolution images of the lower layers of the Sun's atmosphere, known as the solar corona. "The Sun might look quiet at the first glance, but when we look in detail, we can see those miniature flares everywhere we look."

The scientists do not yet know whether the campfires are just tiny versions of big flares, or whether they are driven by different mechanisms. There are, however, already theories that these miniature flares could be contributing to one of the most mysterious phenomena on the Sun: coronal heating.

Unravelling the Sun's mysteries

"These campfires are totally insignificant each by themselves, but summing up their effect all over the Sun, they might be the dominant contribution to the heating of the solar corona," says Frédéric Auchère, of the Institut d'Astrophysique Spatiale (IAS), France, Co-Principal Investigator of EUI.

The solar corona is the outermost layer of the Sun's atmosphere that extends millions of kilometres into outer space. Its temperature is more than a million degrees Celsius, which is orders of magnitude hotter than the surface of the Sun, a 'cool' 5500 °C. After many decades of studies, the physical mechanisms that heat the corona are still not fully understood, but identifying them is considered the 'holy grail' of solar physics.

"It's obviously way too early to tell but we hope that by connecting these observations with measurements from our other instruments that 'feel' the solar wind as it passes the spacecraft, we will eventually be able to answer some of these mysteries," says Yannis Zouganelis, Solar Orbiter Deputy Project Scientist at ESA.

Seeing the far side of the Sun

The Polarimetric and Helioseismic Imager (PHI) is another cutting-edge instrument aboard Solar Orbiter. It makes high-resolution measurements of the magnetic field lines on the surface of the Sun. It is designed to monitor active regions on the Sun, areas with especially strong magnetic fields, which can give birth to solar flares.

During solar flares, the Sun releases bursts of energetic particles that enhance the solar wind that constantly emanates from the star into the surrounding space. When these particles interact with Earth's magnetosphere, they can cause magnetic storms that can disrupt telecommunication networks and power grids on the ground.

"Right now, we are in the part of the 11-year solar cycle when the Sun is very quiet," says Sami Solanki, the director of the Max Planck Institute for Solar System Research in Göttingen, Germany, and PHI Principal Investigator. "But because Solar Orbiter is at a different angle to the Sun than Earth, we could actually see one active region that wasn't observable from Earth. That is a first. We have never been able to measure the magnetic field at the back of the Sun."

The magnetograms, which show how the strength of the solar magnetic field varies across the Sun's surface, can then be compared with the measurements from the in situ instruments.

"The PHI instrument is measuring the magnetic field on the surface, we see structures in the Sun's corona with EUI, but we also try to infer the magnetic field lines going out into the interplanetary medium, where Solar Orbiter is," says Jose Carlos del Toro Iniesta, PHI Co-Principal Investigator, of Instituto de Astrofísica de Andalucía, Spain.

Catching the solar wind


The four in situ instruments on Solar Orbiter then characterise the magnetic field lines and solar wind as it passes the spacecraft.

Christopher Owen, of University College London Mullard Space Science Laboratory and Principal Investigator of the in situ Solar Wind Analyser, adds, "Using this information, we can estimate where on the Sun that particular part of the solar wind was emitted, and then use the full instrument set of the mission to reveal and understand the physical processes operating in the different regions on the Sun which lead to solar wind formation."

"We are all really excited about these first images -- but this is just the beginning," adds Daniel. "Solar Orbiter has started a grand tour of the inner Solar System, and will get much closer to the Sun within less than two years. Ultimately, it will get as close as 42 million km, which is almost a quarter of the distance from Sun to Earth."

"The first data are already demonstrating the power behind a successful collaboration between space agencies and the usefulness of a diverse set of images in unravelling some of the Sun's mysteries," comments Holly Gilbert, Director of the Heliophysics Science Division at NASA Goddard Space Flight Center and Solar Orbiter Project Scientist at NASA.

Read more at Science Daily

New research of oldest light confirms age of the universe

Big Bang illustration
Just how old is the universe? Astrophysicists have been debating this question for decades. In recent years, new scientific measurements have suggested the universe may be hundreds of millions of years younger than its previously estimated age of approximately 13.8 billion years.

Now new research, published in a series of papers by an international team of astrophysicists including Neelima Sehgal, PhD, from Stony Brook University, suggests the universe is about 13.8 billion years old. Using observations from the Atacama Cosmology Telescope (ACT) in Chile, the team obtained findings that match the Planck satellite's measurements of the same ancient light.

The ACT research team is an international collaboration of scientists from 41 institutions in seven countries. The Stony Brook team from the Department of Physics and Astronomy in the College of Arts and Sciences, led by Professor Sehgal, plays an essential role in analyzing the cosmic microwave background (CMB) -- the afterglow light from the Big Bang.

"In Stony Brook-led work we are restoring the 'baby photo' of the universe to its original condition, eliminating the wear and tear of time and space that distorted the image," explains Professor Sehgal, a co-author on the papers. "Only by seeing this sharper baby photo or image of the universe, can we more fully understand how our universe was born."

Obtaining the best image of the infant universe, explains Professor Sehgal, helps scientists better understand the origins of the universe, how we got to where we are on Earth, the galaxies, where we are going, how the universe may end, and when that ending may occur.

The ACT team estimates the age of the universe by measuring its oldest light. Other scientific groups take measurements of galaxies to make universe age estimates.

The new ACT estimate on the age of the universe matches the one provided by the standard model of the universe and measurements of the same light made by the Planck satellite. This adds a fresh twist to an ongoing debate in the astrophysics community, says Simone Aiola, first author of one of the new papers on the findings posted to arXiv.org.

"Now we've come up with an answer where Planck and ACT agree," says Aiola, a researcher at the Flatiron Institute's Center for Computational Astrophysics in New York City. "It speaks to the fact that these difficult measurements are reliable."

In 2019, a research team measuring the movements of galaxies calculated that the universe is hundreds of millions of years younger than the Planck team predicted. That discrepancy suggested that a new model for the universe might be needed and sparked concerns that one of the sets of measurements might be incorrect.

The age of the universe also reveals how fast the cosmos is expanding, a number quantified by the Hubble constant. The ACT measurements suggest a Hubble constant of 67.6 kilometers per second per megaparsec. That means an object 1 megaparsec (around 3.26 million light-years) from Earth is moving away from us at 67.6 kilometers per second due to the expansion of the universe. This result agrees almost exactly with the previous estimate of 67.4 kilometers per second per megaparsec by the Planck satellite team, but it's slower than the 74 kilometers per second per megaparsec inferred from the measurements of galaxies.
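The relation behind these numbers is Hubble's law, v = H0 × d; the sketch below (helper names are mine, not from the papers) reproduces the quoted recession velocity and shows why H0 sets a rough age scale:

```python
H0_ACT = 67.6  # km/s per megaparsec (ACT estimate)

def recession_velocity_km_s(distance_mpc: float, h0: float = H0_ACT) -> float:
    # Hubble's law: v = H0 * d
    return h0 * distance_mpc

print(recession_velocity_km_s(1.0))  # 67.6 km/s for an object 1 Mpc away

# The naive "Hubble time" 1/H0 gives only a rough age scale:
MPC_IN_KM = 3.0857e19
SECONDS_PER_YEAR = 3.156e7
hubble_time_gyr = MPC_IN_KM / H0_ACT / SECONDS_PER_YEAR / 1e9
print(round(hubble_time_gyr, 1))  # ~14.5 Gyr; the full cosmological fit gives 13.8
```

The gap between 1/H0 and 13.8 billion years is expected: the expansion rate has changed over cosmic history, so the age comes from fitting a full cosmological model, not from inverting H0.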

"I didn't have a particular preference for any specific value -- it was going to be interesting one way or another," says Steve Choi of Cornell University, first author of another paper posted to arXiv.org. "We find an expansion rate that is right on the estimate by the Planck satellite team. This gives us more confidence in measurements of the universe's oldest light."

As ACT continues making observations, astronomers will have an even clearer picture of the CMB and a more exact idea of how long ago the cosmos began. The ACT team will also scour those observations for signs of physics that doesn't fit the standard cosmological model. Such strange physics could resolve the disagreement between the predictions of the age and expansion rate of the universe arising from the measurements of the CMB and the motions of galaxies.

Read more at Science Daily

World population likely to shrink after mid-century, forecasting major shifts in global population and economic power

Illustration of people forming a world map
Improvements in access to modern contraception and the education of girls and women are generating widespread, sustained declines in fertility, and world population will likely peak in 2064 at around 9.7 billion, and then decline to about 8.8 billion by 2100 -- about 2 billion lower than some previous estimates, according to a new study published in The Lancet.

The modelling research uses data from the Global Burden of Disease Study 2017 to project future global, regional, and national population. Using novel methods for forecasting mortality, fertility, and migration, the researchers from the Institute for Health Metrics and Evaluation (IHME) at the University of Washington's School of Medicine estimate that by 2100, 183 of 195 countries will have total fertility rates (TFR), which represent the average number of children a woman delivers over her lifetime, below replacement level of 2.1 births per woman. This means that in these countries populations will decline unless low fertility is compensated by immigration.

The new population forecasts contrast with projections of 'continuing global growth' by the United Nations Population Division, and highlight the huge challenges to economic growth of a shrinking workforce, the high burden on health and social support systems of an aging population, and the impact on global power linked to shifts in world population.

The new study also predicts huge shifts in the global age structure, with an estimated 2.37 billion individuals over 65 years globally in 2100, compared with 1.7 billion under 20 years, underscoring the need for liberal immigration policies in countries with significantly declining working age populations.

"Continued global population growth through the century is no longer the most likely trajectory for the world's population," says IHME Director Dr. Christopher Murray, who led the research. "This study provides governments of all countries an opportunity to start rethinking their policies on migration, workforces and economic development to address the challenges presented by demographic change."

IHME Professor Stein Emil Vollset, first author of the paper, continues, "The societal, economic, and geopolitical power implications of our predictions are substantial. In particular, our findings suggest that the decline in the numbers of working-age adults alone will reduce GDP growth rates that could result in major shifts in global economic power by the century's end. Responding to population decline is likely to become an overriding policy concern in many nations, but must not compromise efforts to enhance women's reproductive health or progress on women's rights."

Dr Richard Horton, Editor-in-Chief, The Lancet, adds: "This important research charts a future we need to be planning for urgently. It offers a vision for radical shifts in geopolitical power, challenges myths about immigration, and underlines the importance of protecting and strengthening the sexual and reproductive rights of women. The 21st century will see a revolution in the story of our human civilisation. Africa and the Arab World will shape our future, while Europe and Asia will recede in their influence. By the end of the century, the world will be multipolar, with India, Nigeria, China, and the US the dominant powers. This will truly be a new world, one we should be preparing for today."

Accelerating decline in fertility worldwide

The global TFR is predicted to steadily decline, from 2.37 in 2017 to 1.66 in 2100 -- well below the minimum rate (2.1) considered necessary to maintain population numbers (replacement level) -- with rates falling to around 1.2 in Italy and Spain, and as low as 1.17 in Poland.

Even slight changes in TFR translate into large differences in population size in countries below the replacement level -- increasing TFR by as little as 0.1 births per woman is equivalent to around 500 million more individuals on the planet in 2100.

Much of the anticipated fertility decline is predicted in high-fertility countries, particularly those in sub-Saharan Africa where rates are expected to fall below the replacement level for the first time -- from an average 4.6 births per woman in 2017 to just 1.7 by 2100. In Niger, where the fertility rate was the highest in the world in 2017 -- with women giving birth to an average of seven children -- the rate is projected to decline to around 1.8 by 2100.

Nevertheless, the population of sub-Saharan Africa is forecast to triple over the course of the century, from an estimated 1.03 billion in 2017 to 3.07 billion in 2100 -- as death rates decline and an increasing number of women enter reproductive age. North Africa and the Middle East is the only other region predicted to have a larger population in 2100 (978 million) than in 2017 (600 million).

Many of the fastest-shrinking populations will be in Asia and central and eastern Europe. Populations are expected to more than halve in 23 countries and territories, including Japan (from around 128 million people in 2017 to 60 million in 2100), Thailand (71 to 35 million), Spain (46 to 23 million), Italy (61 to 31 million), Portugal (11 to 5 million), and South Korea (53 to 27 million). An additional 34 countries are expected to have population declines of 25 to 50%, including China (1.4 billion in 2017 to 732 million in 2100; see table).
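The "more than halve" and "25 to 50%" groupings follow directly from the quoted totals; a small illustrative check (not from the study's code):

```python
# Percent decline between 2017 and 2100 for figures quoted in the study.
def pct_decline(pop_2017_m: float, pop_2100_m: float) -> float:
    return 100 * (pop_2017_m - pop_2100_m) / pop_2017_m

print(round(pct_decline(128, 60)))    # Japan: 53 (% decline, i.e. more than half)
print(round(pct_decline(1400, 732)))  # China: 48 (% decline, within the 25-50% band)
```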

Huge shifts in global age structure -- with over 80s outnumbering under 5s two to one

As fertility falls and life expectancy increases worldwide, the number of children under 5 years old is forecast to decline by 41%, from 681 million in 2017 to 401 million in 2100, whilst the number of individuals older than 80 years is projected to increase sixfold, from 141 million to 866 million. Similarly, in countries with a population decline of more than 25%, the ratio of adults over 80 to each person aged 15 years or younger is projected to rise from 0.16 in 2017 to 1.50 in 2100.
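The headline "two to one" ratio of over-80s to under-5s can be read straight off the projected totals; a quick illustrative check:

```python
# Ratio of over-80s to under-5s implied by the 2100 projections.
over_80_m = 866
under_5_m = 401
print(round(over_80_m / under_5_m, 2))  # 2.16, roughly two to one
```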

Furthermore, the global ratio of non-working adults to workers was around 0.8 in 2017, but is projected to increase to 1.16 in 2100 if labour force participation by age and sex does not change.

"While population decline is potentially good news for reducing carbon emissions and stress on food systems, with more old people and fewer young people, economic challenges will arise as societies struggle to grow with fewer workers and taxpayers, and countries' abilities to generate the wealth needed to fund social support and health care for the elderly are reduced," says Vollset.

Declining working-age populations could see major shifts in size of economies

The study also examined the economic impact of shrinking working-age populations for all countries. While China is projected to overtake the USA as the world's largest economy by total gross domestic product (GDP) in 2035, rapid population decline from 2050 onward will curtail its economic growth. As a result, the USA is expected to reclaim the top spot by 2098, if immigration continues to sustain the US workforce.

Although the number of working-age adults in India is projected to fall from 762 million in 2017 to around 578 million in 2100, India is expected to be one of the few, if not the only, major powers in Asia to protect its working-age population over the century. It is expected to surpass China's workforce in the mid-2020s (China's number of workers is estimated to decline from 950 million in 2017 to 357 million in 2100), rising up the GDP rankings from 7th to 3rd.

Sub-Saharan Africa is likely to become an increasingly powerful continent on the geopolitical stage as its population rises. Nigeria is projected to be the only country among the world's 10 most populated nations to see its working-age population grow over the course of the century (from 86 million in 2017 to 458 million in 2100), supporting rapid economic growth and its rise in GDP rankings from 23rd place in 2017 to 9th place in 2100.

While the UK, Germany, and France are expected to remain in the top 10 for largest GDP worldwide at the turn of the century, Italy (from 9th in 2017 to 25th in 2100) and Spain (from 13th to 28th) are projected to fall down the rankings, reflecting much greater population decline.

Liberal immigration could help sustain population size and economic growth

The study also suggests that population decline could be offset by immigration, with countries that promote liberal immigration better able to maintain their population size and support economic growth, even in the face of declining fertility rates.

The model predicts that some countries with fertility lower than replacement level, such as the USA, Australia, and Canada, will probably maintain their working-age populations through net immigration (see appendix 2 section 4), although the authors note that there is considerable uncertainty about these future trends.

"For high-income countries with below-replacement fertility rates, the best solutions for sustaining current population levels, economic growth, and geopolitical security are open immigration policies and social policies supportive of families having their desired number of children," Murray says. "However, a very real danger exists that, in the face of declining population, some countries might consider policies that restrict access to reproductive health services, with potentially devastating consequences. It is imperative that women's freedom and rights are at the top of every government's development agenda."

The authors note some important limitations, including that while the study uses the best available data, predictions are constrained by the quantity and quality of past data. They also note that past trends are not always predictive of what will happen in the future, and that some factors not included in the model could change the pace of fertility, mortality, or migration. For example, the COVID-19 pandemic has affected local and national health systems throughout the world, and caused over half a million deaths. However, the authors believe the excess deaths caused by the pandemic are unlikely to significantly alter longer term forecasting trends of global population.

Writing in a linked Comment, Professor Ibrahim Abubakar, University College London (UCL), UK, and Chair of Lancet Migration (who was not involved in the study), says: "Migration can be a potential solution to the predicted shortage of working-age populations. While demographers continue to debate the long-term implications of migration as a remedy for declining TFR, for it to be successful, we need a fundamental rethink of global politics. Greater multilateralism and a new global leadership should enable both migrant sending and migrant-receiving countries to benefit, while protecting the rights of individuals. Nations would need to cooperate at levels that have eluded us to date to strategically support and fund the development of excess skilled human capital in countries that are a source of migrants. An equitable change in global migration policy will need the voice of rich and poor countries. The projected changes in the sizes of national economies and the consequent change in military power might force these discussions."

Read more at Science Daily

Jul 15, 2020

Rewriting history: New evidence challenges Euro-centric narrative of early colonization

In American history, we learn that the arrival of Spanish explorers led by Hernando de Soto in the 1500s was a watershed moment resulting in the collapse of Indigenous tribes and traditions across the southeastern United States.

While these expeditions unquestionably resulted in the deaths of countless Indigenous people and the relocation of remaining tribes, new research from Washington University in St. Louis provides evidence that Indigenous people in Oconee Valley -- present-day central Georgia -- continued to live and actively resist European influence for nearly 150 years.

The findings, published July 15 in American Antiquity, speak to the resistance and resilience of Indigenous people in the face of European insurgence, said Jacob Lulewicz, a lecturer in archaeology in Arts & Sciences and lead author.

"The case study presented in our paper reframes the historical contexts of early colonial encounters in the Oconee Valley by way of highlighting the longevity and endurance of Indigenous Mississippian traditions and rewriting narratives of interactions between Spanish colonizers and Native Americans," Lulewicz said.

It also draws into question the motives behind early explanations and interpretations that Euro-Americans proposed about Indigenous earthen mounds -- platforms built out of soil, clay and stone that were used for important ceremonies and rituals.

'Myths were purposively racist'

"By the mid-1700s, less than 100 years after the abandonment of the Dyar mound [now submerged under Lake Oconee], explanations for the non-Indigenous origins of earthen mounds were being espoused. As less than 100 years would have passed between the Indigenous use of mounds and these explanations, it could be argued that the motives for these myths were purposively racist, denying what would have been a recent collective memory of Indigenous use in favor of explanations that stole, and disenfranchised, these histories from contemporary Indigenous peoples," Lulewicz said.

The Dyar mound was excavated by University of Georgia archaeologists in the 1970s to make way for a dam. Lulewicz and co-authors -- Victor D. Thompson, professor of archaeology and director of the Laboratory of Archaeology at the University of Georgia; James Wettstaed, archaeologist at Chattahoochee-Oconee National Forests; and Mark Williams, director emeritus of the Laboratory of Archaeology at the University of Georgia -- received funding from the USDA Forest Service to re-date the platform mound, which contained classic markers of Indigenous rituals and ceremonies.

Using advanced radiocarbon dating techniques and complex statistical models, modern-day archaeologists are able to effectively construct high-resolution, high-precision chronologies. In many cases, they can determine, within a 10- to 20-year range, dates of things that happened as far back as 1,000 years ago.
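The decay relationship underlying all of this is simple, even though the team's high-precision chronologies come from Bayesian calibration models layered on top of it. A minimal sketch of the conventional (uncalibrated) age calculation, using the standard Libby convention; the helper name is mine:

```python
import math

LIBBY_MEAN_LIFE_YR = 8033  # 14C mean life under the conventional Libby half-life

def conventional_age_yr(fraction_modern: float) -> float:
    """Uncalibrated radiocarbon age from the measured 14C fraction remaining."""
    return -LIBBY_MEAN_LIFE_YR * math.log(fraction_modern)

print(round(conventional_age_yr(0.5)))  # 5568 years: one Libby half-life
```

Calibration against tree-ring and other records is what converts these raw ages into the calendar-year ranges, and the statistical modelling of many such dates together is what narrows chronologies to the 10- to 20-year precision described above.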

"Radiocarbon dating is really important, not just for getting a date to see when things happened, but for understanding the tempo of how things changed throughout time and really understanding the complex histories of people over hundreds of years," Lulewicz said. "In archaeology, it's really easy to group things in long periods of time, but it would be false to say that nothing changed over those 500 years."

Their research yielded 20 new dates from up and down the mound, which provided a refined perspective on the effects that early Indigenous-colonizer encounters did, and did not, have on the Indigenous people and their traditions.

Missing from the mound was any sign of European artifacts, which is one of the reasons why archaeologists originally believed sites in the region were abruptly abandoned just after their first encounters with Spanish colonizers. "Not only did the ancestors of Muscogee (Creek) people continue their traditions atop the Dyar mound for nearly 150 years after these encounters, but they also actively rejected European things," Lulewicz said.

According to Lulewicz, the Dyar mound does not represent an isolated hold-over after contact with European colonizers. There are several examples of platform mounds that were used beyond the 16th century, including the Fatherland site associated with the Natchez in Louisiana, Cofitachequi in South Carolina and a range of towns throughout the Lower Mississippi Valley.

"However, the mound at Dyar represents one of the only confirmed examples, via absolute dating, of continued Mississippian traditions related to mound-use and construction to date."

Today, members of the Muscogee (Creek) Nation, descendants of the Mississippians who built platform mounds like the one at Dyar, live in Oklahoma. "We have a great, collaborative relationship with archaeologists of the Muscogee (Creek) Nation Historic and Cultural Preservation Department, so we sent them the paper to review. It was really well received. They saw, reflected in that paper, a lot of the traditions they still practice in Oklahoma and were generous enough to contribute commentary that bolstered the results presented in the paper," he said.

"This is where the archaeology that we write becomes so important in the present. ... Without this type of work, we are contributing to the disenfranchisement of Indigenous peoples from their history."

"Of course, they already knew many of the things we 'discovered,' but it was still meaningful to be able to reaffirm their ancestral link to the land."

Read more at Science Daily

Tree planting does not always boost ecosystem carbon stocks, study finds

Planting huge numbers of trees to mitigate climate change is "not always the best strategy" -- with some experimental sites in Scotland failing to increase carbon stocks, a new study has found.

Experts at the University of Stirling and the James Hutton Institute analysed four locations in Scotland where birch trees were planted onto heather moorland -- and found that, over decades, there was no net increase in ecosystem carbon storage.

The team -- led by Dr Nina Friggens, of the Faculty of Natural Sciences at Stirling -- found that any increase to carbon storage in tree biomass was offset by a loss of carbon stored in the soil.

Dr Friggens said: "Both national and international governments have committed to plant huge numbers of trees to mitigate climate change, based on the simple logic that trees -- when they photosynthesise and grow -- remove carbon from the atmosphere and lock it into their biomass. However, trees also interact with carbon in soil, where much more carbon is found than in plants.

"Our study considered whether planting native trees on heather moorlands, with large soil carbon stores, would result in net carbon sequestration -- and, significantly, we found that over a period of 39 years, it did not."

The tree-planting experiments -- in the Grampians, Cairngorms and Glen Affric -- were set up by the late Dr John Miles, of the then Institute of Terrestrial Ecology (a forerunner to the UK Centre for Ecology and Hydrology), in 1980, and the Hutton Institute in 2005. The research sites enabled the team to assess the impact of tree planting on vegetation and soil carbon stocks, by comparing these experimental plots to adjacent control plots consisting of original heath vegetation.

Working with Dr Ruth Mitchell and Professor Alison Hester, both of the James Hutton Institute, Dr Friggens measured soil respiration -- the amount of carbon dioxide released from the soil to the atmosphere -- at regular intervals during 2017 and 2018. She and Dr Thomas Parker also took soil cores to record soil carbon stocks, and calculated tree carbon stocks using non-destructive metrics, including tree height and girth.

The study recorded a 58 percent reduction in soil organic carbon stocks 12 years after the birch trees had been planted on the heather moorland -- and, significantly, this decline was not compensated for by the gains in carbon contained in the growing trees.

It also found that, 39 years after planting, the carbon sequestered into tree biomass offset the carbon lost from the soil -- but, crucially, there was no overall increase in ecosystem carbon stocks.

Dr Friggens said: "When considering the carbon stocks both above and below ground together, planting trees onto heather moorlands did not lead to an increase in net ecosystem carbon stocks 12 or 39 years after planting. This is because planting trees also accelerated the rate at which soil organisms work to decompose organic matter in the soil -- in turn, releasing carbon dioxide back into the atmosphere.

"This work provides evidence that planting trees in some areas of Scotland will not lead to carbon sequestration for at least 40 years -- and, if we are to successfully manage our landscapes for carbon sequestration, planting trees is not always the best strategy.

"Tree planting can lead to carbon sequestration; however, our study highlights the need to understand where, in the landscape, this approach is best deployed in order to achieve maximum climate mitigation gains."

Dr Ruth Mitchell, a researcher within the James Hutton Institute's Ecological Sciences department and co-author of the study, said: "Our work shows that tree planting locations need to be carefully sited, taking into account soil conditions, otherwise the tree planting will not result in the desired increase in carbon storage and climate change mitigation."

Although conducted in Scotland, the study's results are relevant in vast areas around the northern fringes of the boreal forests and the southern Arctic tundra, of North America and Eurasia.

Dr Friggens added: "The climate emergency affects us all -- and it is important that strategies implemented to mitigate climate change -- such as large-scale tree planting -- are robust and achieve the intended outcomes."

Read more at Science Daily

In one hour, surface coating inactivates virus that causes COVID-19

Door knobs, light switches, shopping carts. Fear runs rampant nowadays when it comes to touching common surfaces because of the rapid spread of the coronavirus.

A Virginia Tech professor has found a solution.

Since mid-March, William Ducker, a chemical engineering professor, has been developing a surface coating that, when painted on common objects, inactivates SARS-CoV-2, the virus that causes COVID-19.

"The idea is when the droplets land on a solid object, the virus within the droplets will be inactivated," Ducker said.

Since mid-April, Ducker has been working with Leo Poon, a professor and researcher at the University of Hong Kong's School of Public Health, to test the film's success at inactivating the virus. Their research was published July 13 in ACS Applied Materials & Interfaces, a scientific journal for chemists, engineers, biologists, and physicists.

The results of the tests have been outstanding, Ducker said. When the coating is painted on glass or stainless steel, the amount of virus is reduced by 99.9 percent in one hour, compared to the uncoated sample.

"One hour is the shortest period that we have tested so far, and tests at shorter periods are ongoing," Ducker said.

His expectation is that his team can inactivate the virus in minutes. Results have shown that the coating is robust: it does not peel off after being slashed with a razor blade, and it retains its ability to inactivate the virus after multiple rounds of exposure to SARS-CoV-2 followed by disinfection, or after being submerged in water for a week, based on the tests.

If the project's success continues, it is a significant discovery in fighting the virus' spread.

"Everybody is worried about touching objects that may have the coronavirus," said Ducker, who recalled that his wife, in March, questioned whether she should sit on a park bench during the pandemic. "It would help people to relax a little bit."

Ducker's research was already focused on making films that kill bacteria. As the COVID-19 virus began to spread to the United States a few months ago, Ducker asked himself, "Why not make a coating that can eradicate a virus, rather than bacteria?"

"We have to use our chemical knowledge and experience of other viruses to guess what would kill it [SARS-CoV-2]," Ducker said.

Virginia Tech granted essential personnel status to Ducker, his two chemical engineering Ph.D. students -- Saeed Behzadinasab and Mohsen Hossein -- and Xu Feng from the university's Department of Chemistry so that they could enter campus labs to make the film and test its properties.

"It was an interesting experience," Ducker said. "Almost the entire campus was shut down, and we were like ghosts wandering the empty halls of Goodwin Hall. But it was very exciting to have such a clear goal. I know that it was a difficult time for many people who were bored, unhappy, or scared. We were just focused on making a coating."

Next, he needed to find someone who could test the coating's effectiveness. Through an internet search, Ducker found Poon, who is known for his work studying SARS-CoV-1, the virus that caused the SARS outbreak in 2003 and 2004. Poon has been active in the fight against SARS-CoV-2.

For Poon's tests, Ducker and the graduate students spread three different kinds of coatings on glass and stainless steel. Then, they shipped the samples to Poon.

Now, Ducker said he hopes to attract funding in order to mass produce the film.

Read more at Science Daily

Experimental COVID-19 vaccine safe, generates immune response

An investigational vaccine, mRNA-1273, designed to protect against SARS-CoV-2, the virus that causes coronavirus disease 2019 (COVID-19), was generally well tolerated and prompted neutralizing antibody activity in healthy adults, according to interim results published online today in the New England Journal of Medicine.

The ongoing Phase 1 trial is supported by the National Institute of Allergy and Infectious Diseases (NIAID), part of the National Institutes of Health. The experimental vaccine is being co-developed by researchers at NIAID and at Moderna, Inc. of Cambridge, Massachusetts. Manufactured by Moderna, mRNA-1273 is designed to induce neutralizing antibodies directed at a portion of the coronavirus "spike" protein, which the virus uses to bind to and enter human cells.

The trial was led by Lisa A. Jackson, M.D., MPH, of Kaiser Permanente Washington Health Research Institute in Seattle, where the first participant received the candidate vaccine on March 16. This interim report details the initial findings from the first 45 participants ages 18 to 55 years enrolled at the study sites in Seattle and at Emory University in Atlanta. Three groups of 15 participants received two intramuscular injections, 28 days apart, of either 25, 100 or 250 micrograms (mcg) of the investigational vaccine. All the participants received one injection; 42 received both scheduled injections.

In April, the trial was expanded to enroll adults older than age 55 years; it now has 120 participants. However, the newly published results cover the 18 to 55-year age group only.

Regarding safety, no serious adverse events were reported. More than half of the participants reported fatigue, headache, chills, myalgia or pain at the injection site. Systemic adverse events were more common following the second vaccination and in those who received the highest vaccine dose. Data on side effects and immune responses at various vaccine dosages informed the doses used or planned for use in the Phase 2 and 3 clinical trials of the investigational vaccine.

The interim analysis includes results of tests measuring levels of vaccine-induced neutralizing activity through day 43 after the second injection. Two doses of vaccine prompted high levels of neutralizing antibody activity that were above the average values seen in convalescent sera obtained from persons with confirmed COVID-19 disease.

A Phase 2 clinical trial of mRNA-1273, sponsored by Moderna, began enrollment in late May. Plans are underway to launch a Phase 3 efficacy trial in July 2020.

Read more at Science Daily

Jul 14, 2020

Evolution after Chicxulub asteroid impact: Rapid response of life to end-Cretaceous mass extinction

The impact event that formed the Chicxulub crater (Yucatán Peninsula, México) caused the extinction of 75% of species on Earth 66 million years ago, including non-avian dinosaurs. One place that did not experience much extinction was the deep ocean, where organisms living in the abyss made it through the mass extinction event with only some changes to community structure.

New evidence of trace fossils of burrowing organisms that lived in the seafloor of the Chicxulub crater beginning a few years after the impact, recovered by International Ocean Discovery Program (IODP) Expedition 364, shows just how rapid the recovery of the seafloor ecosystem was: a well-developed tiered community was established within approximately 700,000 years of the event.

In April and May 2016, a team of international scientists drilled into the Chicxulub impact crater. This joint expedition, organized by the International Ocean Discovery Program (IODP) and the International Continental Scientific Drilling Program (ICDP), recovered an extended syn- and post-impact set of rock cores, allowing study of the effects of the impact on life and its recovery after the mass extinction event. The end-Cretaceous (K-Pg) event has been studied extensively, and its effects on biota are relatively well known. However, the effect of these changes on the macrobenthic community, the community of organisms living on and in the seafloor that do not leave body fossils, is poorly known.

The investigators concluded that the diversity and abundance of trace fossils responded primarily to variations in the flux of organic matter (i.e., food) sinking to the seafloor during the early Paleocene. Local and regional-scale effects of the K-Pg impact included earthquakes of magnitude 10-11, causing continental and marine landslides, tsunamis hundreds of meters in height that swept more than 300 km onshore, shock waves and air blasts, and the ignition of wildfires. Global phenomena included acid rain, injection of aerosols, dust, and soot into the atmosphere, brief intense cooling followed by slight warming, and destruction of the stratospheric ozone layer, followed by a longer-term greenhouse effect.

Mass extinction events have punctuated the past 500 million years of Earth's history, and studying them helps geoscientists understand how organisms respond to stress in their environment and how ecosystems recover from the loss of biodiversity. Although the K-Pg mass extinction was caused by an asteroid impact, previous ones were caused by slower processes, like massive volcanism, which caused ocean acidification and deoxygenation and had environmental effects that lasted millions of years.

By comparing the K-Pg record to earlier events like the end Permian mass extinction (the so-called "Great Dying" when 90% of life on Earth went extinct), geoscientists can determine how different environmental changes affect life. There are similar overall patterns of recovery after both events with distinct phases of stabilization and diversification, but with very different time frames. The initial recovery after the K-Pg, even at ground zero of the impact, lasted just a few years; this same phase lasted tens of thousands of years after the end Permian mass extinction. The overall recovery of seafloor burrowing organisms after the K-Pg took ~700,000 years, but it took several million years after the end Permian.

Read more at Science Daily

Gigantic, red and full of spots

Among the Sun's most striking features are its sunspots, relatively darker areas compared to the rest of the surface, some of which are visible from Earth even without magnification. Numerous other stars, which like the Sun are in the prime of their lives, are also covered by spots. In red giants, on the other hand, which are in an advanced stage of stellar evolution, such spots were previously considered to be rare. The reason for this difference can be found deep in the interior of stars. In a dynamo process, the interplay of electrically conductive plasma currents and rotation generates a star's magnetic field that is then washed up to its surface. In some places, particularly strong magnetic fields prevent hot plasma from flowing upwards. These regions appear dark and constitute starspots.

"Rotation and convection are both crucial ingredients for the formation of surface magnetic fields and starspots," explains Dr. Federico Spada of MPS, co-author of the new study. "Stars with outer convective layers have the potential to generate surface magnetic fields via dynamo action, but only when the star rotates fast enough the magnetic activity becomes detectable," he adds. Until now, researchers had assumed that almost all red giants rotate rather slowly around their own axis. After all, stars expand dramatically when they develop into red giants towards the end of their lives. As a result their rotation slows down, like a figure skater doing a pirouette with his arms stretched out. The new study led by scientists from MPS and New Mexico State University (USA) now paints a different picture. About eight percent of the observed red giants rotate quickly enough for starspots to form.

The research team scoured the measurement data of about 4500 red giants recorded by NASA's Kepler space telescope from 2009 to 2013 for evidence of spots. Such spots reduce the amount of light that a star emits into space. Since they usually change only slightly over several months, they are gradually carried out of view as the star rotates, then reappear after some time. This produces typical, regularly recurring brightness fluctuations.

In a second step, the scientists investigated the question why the spotted giants rotate so quickly. How do they muster the necessary energy? "To answer this question, we had to determine as many of the stars' properties as possible and then put together an overall picture," says Dr. Patrick Gaulme, lead author of the publication. At the Apache Point Observatory in New Mexico (USA), for example, the researchers studied how the wavelengths of starlight from some of the stars change over time. This allows conclusions about their exact movement. The team also looked at rapid fluctuations in brightness, which are superimposed on the slower ones caused by starspots. The faster fluctuations are the expression of pressure waves propagating through a star's interior to its surface. They contain information on many internal properties such as the star's mass and age.

The analysis revealed that approximately 15 percent of the spotted giants belong to close binary star systems, usually constituted of a red giant with a small and less massive companion. "In such systems, the rotational speeds of both stars synchronize over time until they rotate in unison like a pair of figure skaters," says Gaulme. The slower red giant thus gains momentum and spins faster than it would have without a companion star.

The other red giants with starspots, about 85 percent, are on their own -- and yet they rotate quickly. Those with a mass roughly equal to that of the Sun probably merged with another star or planet in the course of their evolution and thus gained speed. The somewhat heavier ones, whose masses are two to three times that of the Sun, look back on a different development. In the heyday of their lives before they became red giants, their internal structure prevented the creation of a global magnetic field that gradually carries particles away from the star. Unlike their magnetic counterparts, which therefore rotate slower and slower over time, their rotation has probably never slowed down significantly. Even as red giants, they still rotate almost as quickly as they did in their youth.

"In total, behind the common observational feature that some red giants have spots, we find three groups of rapidly rotating stars, each of which has a very different explanation. So it's no wonder that the phenomenon is more widespread than we previously thought," says Gaulme.

Read more at Science Daily

Keeping innocent people out of jail using the science of perception

People wrongfully accused of a crime often wait years to be exonerated -- if they ever are. Many of these wrongful accusations stem from unreliable eyewitness testimony. Now, Salk scientists have identified a new way of presenting a lineup to an eyewitness that could improve the likelihood that the correct suspect is identified and reduce the number of innocent people sentenced to jail. Their report is published in Nature Communications on July 14, 2020.

"Misidentification by eyewitnesses is a long-standing problem in our society. Our new lineup method uncovers the structure of eyewitness memory, removes decision bias from the identification process, and quantifies performance of individual witnesses," says Salk Professor Thomas D. Albright, co-corresponding author of the study. "This study is a great example of using laboratory science to bring about criminal justice reform."

In the United States, nearly 70 percent of DNA exonerations are due to misidentifications by eyewitnesses, according to the Innocence Project. To overcome this societal problem, research has focused on factors that influence the likelihood that a witness will identify the correct person. One key factor is the way individuals are presented to the eyewitness during the lineup, according to Albright, who co-chaired a National Academy of Sciences committee to examine the validity of eyewitness identification. Albright, an expert in the fields of visual perception and recognition, taps into decades of research suggesting that people commonly misperceive visual events, and memories of those events are continuously augmented and deteriorate over time.

Currently, the two most common (or traditional) methods used by law enforcement are known as simultaneous and sequential lineups. In the simultaneous method the eyewitness views six photographs of individuals at the same time; in the sequential method the eyewitness views six photos, one at a time. The witness then either identifies a suspect or rejects the lineup if no face matches their memory of the crime scene.

The research team sought to create a new lineup method that would help estimate the strengths of memories for each face and eliminate unconscious biases that shape decisions without awareness.

"Traditional lineups just reveal the top choice -- the tip of the iceberg. But the cause of the witness's decision is ambiguous. It may reflect strong memory for the culprit, or it may mean that the witness was not very discerning," says Albright. "Our new procedure overcomes that ambiguity by revealing the strength of recognition memory for all lineup faces."

The scientists used a technique called the method of paired comparisons, which works much like an optometrist's eye exam: just as the patient looks through pairs of lenses and states which one is clearer, the eyewitness is shown two photographs of individuals at a time and chooses the one that looks more similar to the person they remember from the crime scene. The procedure yields an estimate of the strength of recognition memory for each lineup face. Statistical analysis of these memory strengths then reveals the probability of correctly identifying the culprit.
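The paper's exact statistical model is not reproduced here, but one standard way to turn pairwise choices into per-item strength scores is Bradley-Terry scaling. The sketch below (a generic illustration with invented choice counts, not the authors' data or code) estimates a memory-strength score for each lineup face:

```python
def bradley_terry(wins, iters=200):
    """Estimate a strength score for each item from pairwise choices.

    wins[i][j] = number of times item i was chosen over item j.
    Uses the standard minorization-maximization update and returns
    strengths normalized to sum to 1.
    """
    n = len(wins)
    p = [1.0] * n
    for _ in range(iters):
        for i in range(n):
            total_wins = sum(wins[i])
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                        for j in range(n) if j != i)
            p[i] = total_wins / denom
        total = sum(p)
        p = [x / total for x in p]
    return p

# Hypothetical choice counts for a three-face lineup: face 0 was
# picked far more often than the others across all pairings.
wins = [[0, 8, 9],
        [2, 0, 6],
        [1, 4, 0]]
strengths = bradley_terry(wins)
print(max(range(3), key=lambda i: strengths[i]))  # 0: strongest memory trace
```

A ranking like this exposes the relative memory strength of every face, not just the witness's single top choice, which is the key advantage the article describes.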

"Our methods derive from a branch of science called sensory psychophysics," says Staff Scientist Sergei Gepshtein, first and co-corresponding author of the paper, who founded and directs the Collaboratory for Adaptive Sensory Technologies at Salk. "Psychophysical tools are designed to reveal how properties of the physical world are ordered -- or 'scaled' -- in the mind. Our approach allowed us to peek into the 'black box' and measure how lineup faces are organized in the witness's memory in terms of their similarity to the culprit."

The paired comparison method yields greater information about the identity of the culprit than previous methods. What is more, it offers an unprecedented quantitative index of certainty for individual eyewitnesses, which is what the judge and jury really need to know.

"The conduct of a lineup is just one application of our method," says Gepshtein. "Another application is selection of lineup 'fillers,' which are faces of people known to be innocent. The fillers should not be too similar or too dissimilar to the suspect. Because the new method reveals the perceived similarity of faces, it can be used to optimize the choice of lineup fillers."

Read more at Science Daily

Vision scientists discover why people literally don't see eye to eye

We humans may not always see eye to eye on politics, religion, sports and other matters of debate. But at least we can agree on the location and size of objects in our physical surroundings. Or can we?

Not according to new research from the University of California, Berkeley, recently published in the journal Proceedings of the Royal Society B: Biological Sciences, which shows that our ability to pinpoint the exact location and size of things varies from one person to the next, and even within our own individual field of vision.

"We assume our perception is a perfect reflection of the physical world around us, but this study shows that each of us has a unique visual fingerprint," said study lead author Zixuan Wang, a UC Berkeley doctoral student in psychology.

The discovery by Wang and fellow researchers in UC Berkeley's Whitney Laboratory for Perception and Action has ramifications for the practices of medicine, technology, driving and sports, among other fields where accurate visual localization is critical.

For example, a driver who makes even a small miscalculation about the location of a pedestrian crossing the street can cause a catastrophe. Meanwhile, in sports, an error of visual judgment can lead to controversy, if not a fiercely disputed championship loss.

Take, for example, the 2004 U.S. Open quarterfinals, in which tennis icon Serena Williams lost to Jennifer Capriati after a series of questionable line calls. An umpire incorrectly overruled a line judge who called a backhand hit by Williams as in, resulting in an apology to Williams by the U.S. Tennis Association.

"Line judges need to rule on whether the ball is outside or inside the parameters. Even an error as small as half a degree of visual angle, equal to a sub-millimeter shift on the judge's retina, may influence the result of the whole match," said Wang, a die-hard tennis fan.

Researchers sought to understand whether different people see objects in their surroundings exactly the same way. For example, when glancing at a coffee cup on a table, can two people agree on its exact position and whether its handle is big enough to grip? The results of a series of experiments suggest not, though there's an upside.

"We may reach for a coffee mug thousands of times in our life, and through practice we reach our target," Wang said. "That's the behavioral aspect of how we train ourselves to coordinate how we act in relation to what we see."

In the first task to test visual localization, study participants pinpointed on a computer screen the location of a circular target. In another experiment looking at variations of acuity within each person's field of vision, participants viewed two lines set a minimal distance apart and determined whether one line was located clockwise or counterclockwise to the other line.

And in an experiment measuring perception of size, participants viewed a series of arcs of varying lengths and were asked to estimate their lengths. Surprisingly, people perceived the exact same arcs to be bigger at some locations in the visual field and smaller at other locations.

Overall, the results showed remarkable variations in visual performance among the group and even within each individual's field of vision. The data were mapped to show each study participant's unique visual fingerprint of perceptual distortion.

"Though our study might suggest that the source of our visual deficiencies can originate from our brain, further investigations are needed to uncover the neural basis," said Wang.

"What's also important," she added, "is how we adapt to them and compensate for our errors."

Read more at Science Daily

Jul 13, 2020

Climate change will cause more extreme wet and dry seasons

The world can expect more rainfall as the climate changes, but it can also expect more water to evaporate, complicating efforts to manage reservoirs and irrigate crops in a growing world, according to a Clemson University researcher whose latest work has been published in the journal Nature Communications.

Ashok Mishra, who is the corresponding author on the new article, said that previous studies have focused mostly on how climate change will affect precipitation. But the key contribution of the new study is that it also examined the magnitude and variability of precipitation and evaporation and how much water will be available during the wettest and driest months of the year.

Researchers found that dry seasons will become drier, and wet seasons will become wetter, said Mishra, who is the Dean's Associate Professor in the Glenn Department of Civil Engineering.

Most of the Eastern United States, including all of South Carolina, has high precipitation that is well distributed throughout the year, researchers found. The region and others like it can expect greater precipitation and evaporation in both wet and dry seasons, according to the study. The amount of water available will vary more widely than it does now, researchers found.

The greatest concern for such regions will be more flooding, Mishra said in an interview.

The regions that will be hardest hit by climate change are the ones that already get slammed with rain during wet seasons and struggle with drought during dry seasons, researchers found. They include much of India and its neighbors to the east, including Bangladesh and Myanmar, along with an inland swath of Brazil, two sections running east-west across Africa, and northern Australia, according to the study.

"The regions which already have more drought and flooding relative to other regions will further see an increase in these events," Mishra said.

As part of the study, researchers divided the world into nine land regions, or regimes. They looked at annual precipitation and how it fluctuated through the seasons for each region from 1971 to 2000.

Researchers then used that data to predict future water availability during each region's three wettest months and three driest months. They evaluated three scenarios based on multiple global climate models.

The best case scenario for relatively stable water availability during wet and dry seasons is that the global temperature will stabilize at 2 degrees Celsius over pre-industrial levels, according to the study.

But researchers also looked at what would happen if the temperature were to rise to 3.5 degrees Celsius or 5 degrees Celsius over pre-industrial levels by the end of the century.

The higher the temperature, the more variation in water availability, researchers found.

Mishra said that his message to the world is that water is a very important resource.

"The availability of this resource is an issue everybody is facing," he said. "We need to take precautions to optimally use how much water we have. As the climate changes and population increases, we should be preparing for the future by improving the technology to efficiently use water for crops."

Jesus M. de la Garza, chair of the Glenn Department of Civil Engineering, congratulated Mishra on publication of the research.

Read more at Science Daily

Genetic differences between global American Crocodile populations identified in DNA analysis

A genetic analysis of the American crocodile (Crocodylus acutus) has re-established our understanding of its population structure, aiding its conservation. The collaborative study spanning seven countries and led by the Wildlife Conservation Society and University of Bristol researchers is published in PLOS ONE.

The American crocodile is widespread across the Americas (from South Florida to Venezuela, across the Greater Antilles, and from Mexico to Ecuador), successful thanks to its ability to thrive in brackish and saltwater environments. Efforts to conserve the species have existed since 1975, when its status was set to vulnerable on the IUCN (International Union for Conservation of Nature) Red List. However, although conservation efforts have been put in place, the American crocodile faces further threats, including habitat degradation due to coastal development.

Replenishing these populations requires an understanding of their population structure, which genetic analysis can provide by tracing the evolution of the species' distribution. Understanding how a species came to be distributed so widely, and how its populations differentiate genetically, can inform how each region best manages its own populations.

The study reflected a regional collaborative effort, with DNA sampling across seven countries including Venezuela, Jamaica and Cuba. There has been ongoing discussion about how similar these regional populations of C. acutus are. However, the study found that populations in North, Central and South America and the Greater Antilles differed genetically. Similarities were found between the Costa Rican and Jamaican populations. In Venezuela, the researchers identified three new haplotypes, sets of closely linked genetic variants that help scientists trace a population's origin.

Researchers believe that mating between different species, known as hybridisation, could have contributed to this distribution. Crocodiles hybridise easily, which has contributed to their survival since the prehistoric era. Additionally, genetic analysis in Florida revealed a case of unintentional translocation, in which animals had been moved from another location over time. This had been flagged by previous research showing that crocodiles with haplotypes from Central and South America had been transported to Florida, most likely for the pet trade, and later escaped or were released into the wild by their owners.

By identifying these differences between regional populations of C. acutus, conservation efforts can establish population clusters which consider the populations as independent management units that may have different needs and focuses.

Natalia Rossi, Country Manager of the Cuba Program at the Wildlife Conservation Society and the study's co-author, explains some of the challenges around taking samples from large crocodiles: "Our study involved several research teams across multiple sites and countries and often in difficult field conditions. For four years, between May and July, the team would record, mark and sample crocodile hatchlings, juveniles and adults in Cuba's Birama Swamp, one of the study sites. It was not unusual for us to have to spend hours in the mangrove lakes waiting for a crocodile to appear, and when one was spotted the whole team would have to enter the water to help net it. While both exciting and rewarding work, it is also dangerous, as the crocodiles are powerful, and it involves lots of team co-ordination and trust to secure the crocodile to enable us to take samples."

Read more at Science Daily

Drug that calms 'cytokine storm' associated with 45 percent lower risk of dying among COVID-19 patients on ventilators

Critically ill COVID-19 patients who received a single dose of a drug that calms an overreacting immune system were 45% less likely to die overall, and more likely to be out of the hospital or off a ventilator one month after treatment, compared with those who didn't receive the drug, according to a new study by a team from the University of Michigan.

The lower risk of death in patients who received intravenous tocilizumab happened despite the fact that they were also twice as likely to develop an additional infection, on top of the novel coronavirus.

The study is published in the peer-reviewed journal Clinical Infectious Diseases after being available as a preprint last month.

It suggests a benefit from timely and targeted efforts to calm the "cytokine storm" caused by the immune system's overreaction to the coronavirus. Tocilizumab, originally designed for rheumatoid arthritis, has already been used to calm such storms in patients receiving advanced immunotherapy treatment for cancer.

The researchers base their conclusions on a thorough look back at data from 154 critically ill patients treated at Michigan Medicine, U-M's academic medical center, during the first six weeks of the pandemic's arrival in Michigan from early March to late April. The analysis looked at patients' records through late May.

During that time, when little was known about what would help COVID-19 patients on ventilators, about half of the studied patients received tocilizumab and half did not. Most received it within the 24-hour period surrounding their intubation.

This created a natural opportunity for comparing the two groups' outcomes in an observational study, though clinical trials are still needed to truly see if the drug provides a benefit, the authors say.

Promising result

Lead author Emily Somers, Ph.D., Sc.M., an epidemiologist who has studied both rheumatologic and immunologic diseases, says the research team went into their analysis uncertain whether they would find a benefit, a risk, or no clear effect associated with tocilizumab in the patients with life-threatening COVID-19. But they knew it was a critically important question that they were uniquely positioned to answer at that point in the pandemic.

"One role of epidemiology is to rigorously evaluate real-world data on treatment effects, especially when evidence from clinical trials is not available. We kept trying to prove ourselves wrong as signals of benefit emerged in the data, both because of the immediate implications of these data, and in part because of concern about the supply of the medication for other patients," she says. "But the difference in mortality despite the increase in secondary infection is quite pronounced, even after accounting for many other factors."

Somers is an associate professor in the U-M Medical School's Department of Internal Medicine and member of the U-M Institute for Healthcare Policy and Innovation. She co-leads the COVID-19 Rapid Response Registry, which is supported by the Michigan Institute for Clinical and Health Research.

The paper's co-first author is Gregory Eschenauer, Pharm.D., a clinical pharmacist at Michigan Medicine and clinical associate professor at the U-M College of Pharmacy. He and senior author Jason Pogue, Pharm.D., are members of the Michigan Medicine Antimicrobial Stewardship Program.

The ASP group developed treatment guidelines provided to Michigan Medicine physicians in mid-March that identified tocilizumab as a potentially beneficial therapy for the most severely ill COVID-19 patients. Those guidelines also pointed out its risks and the lack of evidence for its use in COVID-19, and recommended a dose of 8 milligrams per kilogram.

This led some physicians to choose to use it while others did not, inadvertently setting the stage for a natural comparison.

More research needed

Pogue, clinical professor at the U-M College of Pharmacy and an infectious disease pharmacist at Michigan Medicine, notes that more robust data released in June from a large randomized controlled trial in the United Kingdom has led him to recommend the steroid dexamethasone as the first choice to treat critically ill COVID-19 patients.

"For a retrospective, single-center study, our data are robust. But at this time, due to the lack of randomized controlled trial data and the much higher cost, we recommend reserving tocilizumab for the treatment of select patients who decompensate while on or after receiving dexamethasone, or in patients where the risks of adverse events from steroid therapy outweigh the potential benefits," says Pogue.

"Further studies of tocilizumab, which is more targeted than dexamethasone in addressing the hyperinflammatory process, could include combining these agents or comparing them head-to-head," he adds.

Pogue notes that a single dose of tocilizumab is roughly 100 times more expensive than a course of dexamethasone. He also notes that another drug that aims to treat cytokine storm by targeting the interleukin-6 (IL-6) receptor -- one called sarilumab -- appears to have failed to improve outcomes in a clinical trial in COVID-19 patients including those on ventilators.

Michigan Medicine had been participating in the sarilumab study at the time the patients in the current study were treated, but not all patients qualified because of the timing of their admission or issues around testing for COVID-19. The current study does not include any patients who received sarilumab.

If the evidence around IL-6 targeting bears out in further studies, the authors note that it will be important to select the dose and timing carefully, to address the cytokine storm without interfering with IL-6's other roles in activating the body's response to infections and its processes for repairing tissue.

More about the study

The majority of the patients were transferred to U-M from Detroit-area hospitals after diagnosis with COVID-19, and those who received tocilizumab were less likely overall to have been transferred while already on a ventilator.

By the end of the 28-day period after patients went on a ventilator, 18% of those who received tocilizumab had died, compared with 36% of those who had not. When adjusted for health characteristics, this represents a 45% reduction in mortality. Of those still in the hospital at the end of the study period, 82% of the tocilizumab patients had come off the ventilator, compared with 53% of those who didn't receive the drug.
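The headline figures above can be sanity-checked with a crude, unadjusted comparison. This is an illustrative sketch only: the group sizes are assumptions based on the article's "about half" of 154 patients, and the 45% reduction the study reports comes from a covariate-adjusted model, not this raw calculation.

```python
def relative_risk(deaths_treated, n_treated, deaths_control, n_control):
    """Crude (unadjusted) relative risk of death, treated vs. control."""
    risk_treated = deaths_treated / n_treated
    risk_control = deaths_control / n_control
    return risk_treated / risk_control

# Assumed split: roughly half of the 154 patients received tocilizumab.
n_toci, n_ctrl = 77, 77
deaths_toci = round(0.18 * n_toci)   # 18% mortality -> ~14 deaths
deaths_ctrl = round(0.36 * n_ctrl)   # 36% mortality -> ~28 deaths

rr = relative_risk(deaths_toci, n_toci, deaths_ctrl, n_ctrl)
print(f"Unadjusted relative risk: {rr:.2f}")       # 0.50
print(f"Unadjusted risk reduction: {1 - rr:.0%}")  # 50%
```

The crude comparison gives a 50% reduction; the study's adjusted estimate of 45% is slightly smaller because it accounts for differences in patient health characteristics between the groups.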

In all, 54% of the tocilizumab patients developed a secondary infection, mostly ventilator-associated pneumonia; 26% of those who didn't receive tocilizumab developed such infections. Such "superinfections" usually reduce the chance of survival for COVID-19 patients.

Hydroxychloroquine was included in the treatment guidelines for COVID-19 inpatients at Michigan Medicine for the first two and a half weeks of the study period, before being removed as evidence of its lack of benefit and risks emerged. In all, it was used in one-quarter of the patients who received tocilizumab and one-fifth of those who didn't. Similar percentages of the two patient groups received steroids, though none received dexamethasone.

The patients in the two groups were similar in most ways except for a slightly higher average age in the non-tocilizumab group, and lower rates of chronic obstructive pulmonary disease and chronic kidney disease among the tocilizumab patients.
