Nov 4, 2023

Exploding stars

When massive stars or other stellar objects explode in the Earth's cosmic neighborhood, ejected debris can also reach our solar system. Traces of such events are found on Earth or the Moon and can be detected using accelerator mass spectrometry, or AMS for short. An overview of this exciting research is provided in the scientific journal Annual Review of Nuclear and Particle Science (DOI: 10.1146/annurev-nucl-011823-045541) by Prof. Anton Wallner of the Helmholtz-Zentrum Dresden-Rossendorf (HZDR), who soon plans to decisively advance this promising branch of research with the new, ultrasensitive AMS facility "HAMSTER."

In their paper, HZDR physicist Anton Wallner and colleague Prof. Brian D. Fields from the University of Illinois in Urbana, USA, provide an overview of near-Earth cosmic explosions with a particular focus on events that occurred three and, respectively, seven million years ago.

"Fortunately, these events were still far enough away, so they probably did not significantly impact the Earth's climate or have major effects on the biosphere. However, things get really uncomfortable when cosmic explosions occur at a distance of 30 light-years or less," Wallner explains. Converted into the astrophysical unit parsec, this corresponds to less than eight to ten parsecs.

Once massive stars have burned up all their fuel, their cores collapse into an ultra-dense neutron star or a black hole, while at the same time, hot gas is ejected outward at high velocity. A large part of the gas and dust finely dispersed between the stars is carried away by an expanding shock wave. Like a giant balloon with bumps and dents, this envelope also sweeps up any material already present in space. After many thousands of years, the remnants of a supernova have expanded to a diameter of several tens of parsecs, spreading out ever more slowly until the motion finally ceases.

A nearby explosion has the potential to severely disrupt the Earth's biosphere and cause a mass extinction similar to the asteroid impact 66 million years ago. The dinosaurs and many other animal species fell victim to that event. "If we consider the time period since the solar system's formation, which spans billions of years, very close cosmic explosions cannot be ruled out," Wallner emphasizes.

Nevertheless, supernovae occur only in very massive stars with more than eight to ten times the mass of our sun. Such stars are rare. One of the closest candidates of this size is the red supergiant Betelgeuse in the constellation of Orion, located at a safe distance of about 150 parsecs from our solar system.

Production of interstellar isotopes
Many new atoms are generated shortly before and during a supernova explosion -- among them a number of radioactive species. Wallner is particularly interested in the radioactive iron isotope with atomic mass 60. This isotope, called iron-60 for short, decays with a half-life of 2.6 million years: after that span, half of the atoms have turned into a stable nickel isotope. Therefore, all the iron-60 present at the Earth's formation some 4,500 million years ago has long since disappeared.
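
The arithmetic behind that last statement is simple exponential decay; here is a minimal Python sketch using the half-life quoted in the article:

    # Fraction N/N0 of a radioactive isotope surviving after time t:
    # N/N0 = 2 ** (-t / half_life)
    def surviving_fraction(t_myr, half_life_myr):
        return 2 ** (-t_myr / half_life_myr)

    print(surviving_fraction(4500, 2.6))  # underflows to 0.0 (true value ~1e-521): primordial iron-60 is gone
    print(surviving_fraction(3, 2.6))     # ~0.45: iron-60 from a supernova 3 Myr ago largely survives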

"Iron-60 is extremely rare on Earth because, by natural means, it is not produced in any significant amount. However, it is produced in large quantities just before a supernova takes place. If this isotope now turns up in sediments from the ocean floor or in material from the surface of the moon, it probably came from a supernova or another similar process in space that has taken place near Earth only a few million years ago," Wallner summarizes.

The same applies to the plutonium isotope with atomic mass 244. However, this plutonium-244 is more likely generated by the collision of neutron stars than by supernovae, which makes it an indicator of the nucleosynthesis of heavy elements. It also decays far more slowly: only after 80 million years has about half of the plutonium-244 turned into other elements. The slowly decaying plutonium-244 is therefore, alongside iron-60, another indicator of galactic events and of the production of new elements over the last few million years.

"Exactly how often, where, and under what conditions these heavy elements are produced is currently the subject of intense scientific debate. Plutonium-244 also requires explosive events and, according to theory, is produced similarly to the elements gold or platinum, which have always occurred naturally on Earth but consist of stable atoms today," Wallner explains.

Dust particles as cosmic cargo vessels

But how do these isotopes get to Earth in the first place? The iron-60 atoms ejected by the supernova like to congregate in dust particles. So do the plutonium-244 isotopes, which were possibly created in other events and swept up by the supernova's expanding envelope. After cosmic explosions at a distance of more than ten but less than 150 parsecs, according to theory, the solar wind and the magnetic field of the heliosphere prevent individual atoms from reaching the Earth. However, the iron-60 and plutonium-244 atoms trapped in dust particles continue to fly toward the Earth and the Moon, where they can eventually trickle down to the surface.

Even with a supernova occurring within the so-called "kill radius" of less than ten parsecs, not even a microgram of matter from the envelope will land on each square centimeter. In fact, only very few iron-60 atoms per square centimeter reach the Earth each year. This poses an enormous challenge to "investigators" like physicist Anton Wallner: within a one-gram sediment sample, perhaps a few thousand iron-60 atoms are distributed like needles in a haystack among billions upon billions of the ubiquitous stable iron atoms with atomic mass 56. On top of that, even the most sensitive measurement method may only detect every five-thousandth particle, i.e., at most a few iron-60 atoms in a typical measurement sample.
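
Put as a back-of-the-envelope budget, the expected yield per sample is tiny; a minimal Python sketch using the illustrative figures from the paragraph above:

    # Rough detection budget for iron-60 in one gram of deep-sea sediment.
    iron60_atoms_in_sample = 3000       # "perhaps a few thousand"
    detection_efficiency = 1 / 5000     # "every five-thousandth particle"

    print(iron60_atoms_in_sample * detection_efficiency)  # 0.6: under one detected atom per gram, on average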

Such extremely low concentrations can only be determined with accelerator mass spectrometry (AMS). One of these facilities, the Dresden AMS (DREAMS), is located at the HZDR, soon to be joined by the Helmholtz Accelerator Mass Spectrometer Tracing Environmental Radionuclides (HAMSTER). Since AMS facilities around the globe are designed differently, the various facilities can complement each other in the search for rare isotopes from supernova explosions.

20 years for just one thousand iron-60 atoms

Isotopes of the same element but with a different mass, like the naturally occurring iron-56, are removed with mass filters. Atoms of other elements with the same mass as the target isotope iron-60, for example the naturally occurring nickel-60, also interfere. Even after very complex chemical preparation of the samples, these interfering atoms are still billions of times more abundant than iron-60 and must be separated in a special accelerator facility using nuclear physics methods.

In the end, perhaps five individual iron-60 atoms are identified in a measuring process that lasts several hours. Pioneering work on iron-60 detection was conducted at TU Munich. At present, however, the facility in Canberra at the Australian National University is the only one worldwide sensitive enough to perform such measurements.

In total, only about one thousand iron-60 atoms have been measured in the past 20 years. For the interstellar plutonium-244, which occurs in concentrations more than 10,000 times lower, only data for individual atoms were available for a long time. Only recently has it been possible to determine about a hundred plutonium-244 atoms at a specialized infrastructure in Sydney -- similar to the HAMSTER facility currently under development at the HZDR.

However, only certain samples are suitable for investigation: those that act as archives, preserving these atoms from space for millions of years. Samples from the Earth's surface, for example, are rapidly "diluted" by geological processes. Sediments and crusts from the deep sea, which slowly form undisturbed on the ocean floor, are ideal. Alternatively, samples from the lunar surface are suitable because such disruptive processes are hardly a problem there.

On a research trip lasting until the beginning of November 2023, Wallner and his colleagues will hunt for further cosmic isotopes at particularly suitable AMS facilities in the Australian cities of Canberra (iron-60) and Sydney (plutonium-244). For this purpose, he has received a number of lunar samples from the U.S. space agency NASA.

Read more at Science Daily

To restore ecosystems, think about thwarting hungry herbivores

Re-establishing plantings of trees, grasses and other vegetation is essential for restoring degraded ecosystems, but a new survey of almost 2,600 restoration projects from nearly every type of ecosystem on Earth finds that most projects fail to recognize and control one of the new plants' chief threats: hungry critters that eat plants.

"While most of the projects took steps to exclude competing plant species, only 10% took steps to control or temporarily exclude herbivores, despite the fact that in the early stages these plants are like lollipops -- irresistible little treats for grazers," said Brian Silliman, Rachel Carson Distinguished Professor of Marine Conservation Biology at Duke University's Nicholas School of the Environment.

By not protecting plants in their early stages, conservationists are missing out on a great opportunity to significantly speed restoration, improve its outcomes, and lower its costs, he said.

"Our analysis of the surveyed projects shows that introducing predators to keep herbivore populations in check or installing barriers to keep them at bay until plantings become more established and less vulnerable, can increase plant re-growth by 89% on average," said Silliman, who helped conceptualize the study and was one of its coauthors.

Those gains are equal to or greater than the gains realized by excluding competing plant species, the new survey shows.

"This begs the question: Why aren't we doing it more?" he asks.

The new survey was conducted with input from an international team of researchers affiliated with 20 universities and institutions. They published their peer-reviewed findings Nov. 3 in Science.

Qiang He, professor of coastal ecology at Fudan University and a former postdoctoral research associate of Silliman's at Duke, co-led the study with Changlin Xu, a member of He's Coastal Ecology Lab at Fudan.

The survey's findings have far-reaching implications for efforts to restore vegetation at a time of climate change, He said.

"Herbivores' effects were particularly pronounced in regions with higher temperatures and lower precipitation," He noted.

All of which leads to one inescapable conclusion, Silliman said.

"If we want more plants, we have to let more predators in or restore their populations," Silliman said. "Indeed, the decline of large predators, like wolves, lions, and sharks, that normally keep herbivore populations in check, is likely an important indirect cause of high grazing pressures."

"Conventional restoration is slowing our losses, but it's not expanding vegetation in many places, and climate change could make that even more difficult," he said.

"Using predators to keep herbivores in check at restored sites is a relatively untapped approach that could help us boost plant diversity and restore ecosystems that are vital to human and environmental health, in less time and at lower costs," Silliman said. "It's like learning a new gardening trick that doubles your yield."

Once a planting is established, the herbivores are essential too, he added. "Plants just need a small break from being eaten to get restarted making ecosystems. Once they establish, herbivores are key to maintaining plant ecosystem diversity and function."

Read more at Science Daily

New species of mosasaur named for Norse sea serpent

Scientists have discovered a new species of mosasaur, large, carnivorous aquatic lizards that lived during the late Cretaceous. With "transitional" traits that place it between two well-known mosasaurs, the new species is named after a sea serpent in Norse mythology, Jormungandr, and the small North Dakota city of Walhalla, near where the fossil was found. Details describing Jormungandr walhallaensis are published today in the Bulletin of the American Museum of Natural History.

"If you put flippers on a Komodo dragon and made it really big, that's basically what it would have looked like," said the study's lead author Amelia Zietlow, a Ph.D. student in comparative biology at the American Museum of Natural History's Richard Gilder Graduate School.

The first mosasaur was discovered more than 200 years ago, and the word "mosasaur" predates the word "dinosaur." But many questions about these animals remain, including how many times they evolved flippers and became fully aquatic -- researchers think it was at least three times, and maybe four or more -- and whether they are more closely related to monitor lizards or snakes. Researchers are still trying to determine how the different groups of mosasaurs are related to each other, and the new study adds a new piece to that puzzle.

The fossil on which the study is based was discovered in 2015, when researchers excavating in the northeastern part of North Dakota found an impressive specimen: a nearly complete skull, jaws, and cervical spine, as well as a number of vertebrae.

After extensive analysis and surface scanning of the fossil material, Zietlow and her collaborators found that this animal is a new species with a mosaic of features seen in two iconic mosasaurs: Clidastes, a smaller and more primitive form of mosasaur; and Mosasaurus, a larger form that grew to be nearly 50 feet long and lived alongside Tyrannosaurus rex. The specimen is estimated to be about 24 feet long, and in addition to flippers and a shark-like tail, it would have had "angry eyebrows" caused by a bony ridge on the skull, and a slightly stumpy tail that would have been shorter than its body.

"As these animals evolved into these giant sea monsters, they were constantly making changes," Zietlow said. "This work gets us one step closer to understanding how all these different forms are related to one another."

The work suggests that Jormungandr was a precursor to Mosasaurus and that it would have lived about 80 million years ago.

"This fossil is coming from a geologic time in the United States that we don't really understand," said co-author Clint Boyd, from the North Dakota Geological Survey. "The more we can fill in the geographic and temporal timeline, the better we can understand these creatures."

Read more at Science Daily

Nov 3, 2023

Black holes are messy eaters

New observations down to light-year scale of the gas flows around a supermassive black hole have successfully detected dense gas inflows and shown that only a small portion (about 3 percent) of the gas flowing towards the black hole is eaten by the black hole. The remainder is ejected and recycled back into the host galaxy.

Not all of the matter that falls towards a black hole is absorbed; some of it is ejected as outflows. But the ratio between the matter a black hole "eats" and the amount it "drops" has been difficult to measure.

An international research team led by Takuma Izumi, an assistant professor at the National Astronomical Observatory of Japan, used the Atacama Large Millimeter/submillimeter Array (ALMA) to observe the supermassive black hole in the Circinus Galaxy, located 14 million light-years away in the direction of the constellation Circinus. This black hole is known to be actively feeding.

Thanks to ALMA's high resolution, the team was the first in the world to measure the amount of inflow and outflow down to a scale of a few light-years around the black hole. By measuring the flows of gases in different states (molecular, atomic, and plasma), the team was able to determine the overall efficiency of black hole feeding, and found that it was only about 3 percent. The team also confirmed that gravitational instability is driving the inflow. Analysis also showed that the bulk of the expelled outflows are not fast enough to escape the galaxy and be lost. They are recycled back into the circumnuclear regions around the black hole, and start to slowly fall towards the black hole again.

Read more at Science Daily

Plastic-eating bacteria turn waste into useful starting materials for other products

Mountains of used plastic bottles get thrown away every day, but microbes could potentially tackle this problem. Now, researchers in ACS Central Science report that they've developed a plastic-eating E. coli that can efficiently turn polyethylene terephthalate (PET) waste into adipic acid, which is used to make nylon materials, drugs and fragrances.

Previously, a team of researchers including Stephen Wallace engineered a strain of E. coli to transform the main component in old PET bottles, terephthalic acid, into something tastier and more valuable: the vanilla flavor compound vanillin. At the same time, other researchers engineered microbes to metabolize terephthalic acid into a variety of small molecules, including short acids. So, Wallace and a new team from the University of Edinburgh wanted to expand E. coli's biosynthetic pathways to include the metabolism of terephthalic acid into adipic acid, a feedstock for many everyday products that's typically generated from fossil fuels using energy-intensive processes.

The team developed a new E. coli strain that produced enzymes that could transform terephthalic acid into compounds such as muconic acid and adipic acid. Then, to transform the muconic acid into adipic acid, they used a second type of E. coli, which produced hydrogen gas, and a palladium catalyst. In experiments, the team found that attaching the engineered microbial cells to alginate hydrogel beads improved their efficiency, and up to 79% of the terephthalic acid was converted into adipic acid. Using real-world samples of terephthalic acid from a discarded bottle and a coating taken from waste packaging labels, the engineered E. coli system efficiently produced adipic acid. In the future, the researchers say they will look for pathways to biosynthesize additional higher-value products.

From Science Daily

New designs for solid-state electrolytes may soon revolutionize the battery industry

Researchers led by Professor KANG Kisuk of the Center for Nanoparticle Research within the Institute for Basic Science (IBS), have announced a major breakthrough in the field of next-generation solid-state batteries. It is believed that their new findings will enable the creation of batteries based on a novel chloride-based solid electrolyte that exhibits exceptional ionic conductivity.

A pressing concern with current commercial batteries is their reliance on liquid electrolytes, which leads to flammability and explosion risks. Therefore, the development of non-combustible solid electrolytes is of paramount importance for advancing solid-state battery technology. As the world gears up to regulate internal combustion engine vehicles and expand the use of electric vehicles in the ongoing global shift toward sustainable transportation, research into the core components of secondary batteries, particularly solid-state batteries, has gained significant momentum.

To make solid-state batteries practical for everyday use, it is crucial to develop materials with high ionic conductivity, robust chemical and electrochemical stability, and mechanical flexibility. While previous research successfully led to sulfide and oxide-based solid electrolytes with high ionic conductivity, none of these materials fully met all these essential requirements.

In the past, scientists have also explored chloride-based solid electrolytes, known for their superior ionic conductivity, mechanical flexibility, and stability at high voltages. These properties led some to speculate that chloride-based batteries are the most likely candidates for solid-state batteries. However, these hopes quickly died out, as the chloride batteries were considered impractical due to their heavy reliance on expensive rare earth metals, including yttrium, scandium, and lanthanide elements, as secondary components.

To address these concerns, the IBS research team looked at the distribution of metal ions in chloride electrolytes. They suspected that the ionic conductivity a trigonal chloride electrolyte can achieve depends on how the metal ions are arranged within the structure.

They first tested this theory on lithium yttrium chloride, a common lithium metal chloride compound. When the metal ions were positioned near the pathway of the lithium ions, electrostatic forces obstructed the lithium ions' movement. Conversely, if the metal ion occupancy was too low, the path for the lithium ions became too narrow, impeding their mobility.

Building on these insights, the research team introduced strategies to design electrolytes in a way that mitigates these conflicting factors, ultimately leading to the successful development of a solid electrolyte with high ionic conductivity. The group went further and demonstrated this strategy by creating a lithium-metal-chloride solid-state battery based on zirconium, which is far cheaper than the variants that employ rare earth metals. This was the first instance in which the significance of the metal-ion arrangement for a material's ionic conductivity was demonstrated.

This research brings to light the often-overlooked role of metal ion distribution in the ionic conductivity of chloride-based solid electrolytes. It is expected that the IBS Center's research will pave the way for the development of various chloride-based solid electrolytes and further drive the commercialization of solid-state batteries, promising improved affordability and safety in energy storage.

Read more at Science Daily

Rats have an imagination, new research suggests

As humans, we live in our thoughts: from pondering what to make for dinner to daydreaming about our last beach vacation.

Now, researchers at HHMI's Janelia Research Campus have found that animals also possess an imagination.

A team from the Lee and Harris labs developed a novel system combining virtual reality and a brain-machine interface to probe the rat's inner thoughts.

They found that, like humans, animals can think about places and objects that aren't right in front of them, using their thoughts to imagine walking to a location or moving a remote object to a specific spot.

Like humans, when rodents experience places and events, specific neural activity patterns are activated in the hippocampus, an area of the brain responsible for spatial memory. The new study finds rats can voluntarily generate these same activity patterns and do so to recall remote locations distant from their current position.

"The rat can indeed activate the representation of places in the environment without going there," says Chongxi Lai, a postdoc in the Harris and Lee Labs and first author of a paper describing the new findings. "Even if his physical body is fixed, his spatial thoughts can go to a very remote location."

This ability to imagine locations away from one's current position is fundamental to remembering past events and imagining possible future scenarios. Therefore, the new work shows that animals, like humans, possess a form of imagination, according to the study's authors.

"To imagine is one of the remarkable things that humans can do. Now we have found that animals can do it too, and we found a way to study it," says Albert Lee, formerly a Group Leader at Janelia and now an HHMI Investigator at Beth Israel Deaconess Medical Center.

A novel brain-machine interface

The project began nine years ago when Lai arrived at Janelia as a graduate student with an idea to test whether an animal could think. His advisor, Janelia Senior Fellow Tim Harris, suggested Lai walk down the hall to chat with Lee, whose lab had similar questions.

Together, the labs worked to develop a system to understand what animals are thinking -- a real-time "thought detector" that could measure neural activity and translate what it meant.

The system uses a brain-machine interface (BMI), which provides a direct connection between brain activity and an external device. In the team's system, the BMI produces a connection between the electrical activity in the rat's hippocampus and its position in a 360-degree virtual reality arena.

The hippocampus stores mental maps of the world involved in recalling past events and imagining future scenarios. Memory recall involves the generation of specific hippocampal activity patterns related to places and events. But no one knew whether animals could voluntarily control this activity.

The BMI allows the researchers to test whether a rat can activate hippocampal activity to just think about a location in the arena without physically going there -- essentially, detecting if the animal is able to imagine going to the location.

Probing the rat's inner thoughts

Once they developed their system, the researchers had to create the "thought dictionary" that would allow them to decode the rat's brain signals. This dictionary compiles what activity patterns look like when the rat experiences something -- in this case, places in the VR arena.

The rat is harnessed in the VR system, designed by Shinsuke Tanaka, a postdoc in the Lee Lab. As the rat walks on a spherical treadmill, its movements are translated on the 360-degree screen. The rat is rewarded when it navigates to its goal.

At the same time, the BMI system records the rat's hippocampal activity. The researchers can see which neurons are activated when the rat navigates the arena to reach each goal. These signals provide the basis for a real-time hippocampal BMI, with the brain's hippocampal activity translated into actions on the screen.
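
The article describes the decoder only in general terms, but a standard way to build such a "thought dictionary" in hippocampal work is maximum-likelihood decoding from place-cell tuning curves. The Python sketch below is a toy illustration of that idea, not the team's actual pipeline; the tuning matrix and spike counts are simulated:

    import numpy as np

    rng = np.random.default_rng(0)
    n_neurons, n_places = 50, 20
    # "Thought dictionary": each neuron's mean firing rate at each place bin,
    # learned while the rat physically navigates the VR arena.
    tuning = rng.gamma(shape=2.0, scale=1.0, size=(n_neurons, n_places))

    def decode_place(spike_counts):
        """Most likely place bin, assuming independent Poisson firing."""
        # log P(spikes | place) = sum_i [k_i * log(rate_i) - rate_i] + const
        log_like = spike_counts @ np.log(tuning) - tuning.sum(axis=0)
        return int(np.argmax(log_like))

    # Simulate the rat "thinking about" place bin 7 and decode it back.
    spikes = rng.poisson(tuning[:, 7])
    print(decode_place(spikes))   # usually recovers 7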

Next, the researchers disconnect the treadmill and reward the rat for reproducing the hippocampal activity pattern associated with a goal location. In this "Jumper" task -- named after a 2008 movie of the same name -- the BMI translates the animal's brain activity into motion on the virtual reality screen. Essentially, the animal uses its thoughts to navigate to the reward by first thinking about where it needs to go to get the reward. This thought process is something humans experience regularly. For example, when we're asked to pick up groceries at a familiar store, we might imagine the locations we will pass along the way before we ever leave the house.

In the second task, the "Jedi" task -- a nod to Star Wars -- the rat moves an object to a location by thoughts alone. The rat is fixed in a virtual place but "moves" an object to a goal in the VR space by controlling its hippocampal activity, like how a person sitting in their office might imagine taking a cup next to the coffee machine and filling it with coffee. The researchers then changed the location of the goal, requiring the animal to produce activity patterns associated with the new location.

The team found that rats can precisely and flexibly control their hippocampal activity, in the same way humans likely do. The animals are also able to sustain this hippocampal activity, holding their thoughts on a given location for many seconds -- a timeframe similar to the one at which humans relive past events or imagine new scenarios.

"The stunning thing is how rats learn to think about that place, and no other place, for a very long period of time, based on our, perhaps naïve, notion of the attention span of a rat," Harris says.

Read more at Science Daily

Nov 2, 2023

Giant planets cast a deadly pall

Giant gas planets can be agents of chaos, ensuring nothing lives on their Earth-like neighbors around other stars. New studies show, in some planetary systems, the giants tend to kick smaller planets out of orbit and wreak havoc on their climates.

Jupiter, by far the biggest planet in our solar system, plays an important protective role. Its enormous gravitational field deflects comets and asteroids that might otherwise hit Earth, helping create a stable environment for life. However, giant planets elsewhere in the universe do not necessarily protect life on their smaller, rocky planet neighbors.

A new Astronomical Journal paper details how the pull of massive planets in a nearby star system is likely to toss their Earth-like neighbors out of the "habitable zone." This zone is defined as the range of distances from a star that are warm enough for liquid water to exist on a planet's surface, making life possible.

Unlike those in most other known planetary systems, the four giant planets in HD 141399 are farther from their star. This makes it a good model for comparison with our solar system, where Jupiter and Saturn are also relatively far from the sun.

"It's as if they have four Jupiters acting like wrecking balls, throwing everything out of whack," said Stephen Kane, UC Riverside astrophysicist and author of the journal paper.

Taking data about the system's planets into account, Kane ran multiple computer simulations to understand the effect of these four giants. He wanted specifically to look at the habitable zone in this star system and see if an Earth could remain in a stable orbit there.

"The answer is yes, but it's very unlikely. There are only a select few areas where the giants' gravitational pull would not knock a rocky planet out of its orbit and send it flying right out of the zone," Kane said.

While this paper shows giant planets outside the habitable zone destroying the chances for life, a second, related paper shows how one big planet in the middle of the zone would have a similar effect.

Also published in the Astronomical Journal, this second paper examines a star system only 30 light years away from Earth called GJ 357. For reference, the galaxy is estimated to be 100,000 light years in diameter, so this system is "definitely in our neighborhood," Kane said.

Earlier studies found that a planet in this system, named GJ 357 d, resides in the system's habitable zone and has been measured at about six times the mass of the Earth. However, in this paper titled "Agent of Chaos," Kane shows the mass is likely much bigger.

"It's possible GJ 357 d is as much as 10 Earth masses, which means it's probably not terrestrial, so you couldn't have life on it," Kane said. "Or at least, it would not be able to host life as we know it."

In the second part of the paper, Kane and his collaborator, UCR planetary science postdoctoral scholar Tara Fetherolf, demonstrate that if the planet is much larger than previously believed, it is certain to prevent more Earth-like planets from residing in the habitable zone alongside it.

Though there are also a select few locations in the habitable zone of this system where an Earth could potentially reside, their orbits would be highly elliptical around the star. "In other words, the orbits would produce crazy climates on those planets," Kane said. "This paper is really a warning, when we find planets in the habitable zone, not to assume they are automatically capable of hosting life."

Read more at Science Daily

Human emissions increased mercury in the atmosphere sevenfold

Humans have increased the concentration of potentially toxic mercury in the atmosphere sevenfold since the beginning of the modern era around 1500 C.E., according to new research from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS).

The research team, led by Elsie M. Sunderland, the Fred Kavli Professor of Environmental Chemistry and Professor of Earth and Planetary Sciences, developed a new method to accurately estimate how much mercury is emitted annually from volcanos, the largest single natural emitter of mercury. The team used that estimate -- along with a computer model -- to reconstruct pre-anthropogenic atmospheric mercury levels.

The researchers estimated that before humans started pumping mercury into the atmosphere, it contained on average about 580 megagrams (Mg) of mercury. However, in 2015, independent research that looked at all available atmospheric measurements estimated the atmospheric mercury reservoir at about 4,000 Mg -- nearly seven times the natural level estimated in this study.

Human emissions of mercury from coal-fired power plants, waste-incineration, industry and mining make up the difference.

"Methylmercury is a potent neurotoxicant that bioaccumulates in fish and other organisms -- including us," said Sunderland, senior author of the paper. "Understanding the natural mercury cycle driven by volcanic emissions sets a baseline goal for policies aimed at reducing mercury emissions and allows us to understand the full impact of human activities on the environment."

The research is published in Geophysical Research Letters.

The challenge with measuring mercury in the atmosphere is that there's not very much of it, despite its outsized impact on human health. In a cubic meter of air, there may be only a nanogram of mercury, making it virtually impossible to detect via satellite. Instead, the researchers needed to use another chemical emitted in tandem with mercury as a proxy. In this case, the team used sulfur dioxide, a major component of volcanic emissions.

"The nice thing about sulfur dioxide is that it's really easy to see using satellites," said Benjamin Geyman, a PhD student in Environmental Science & Engineering at SEAS and first author of the paper. "Using sulfur dioxide as a proxy for mercury allows us to understand where and when volcanic mercury emissions are occurring."

Using a compilation of mercury to sulfur dioxide ratios measured in volcanic gas plumes, the researchers reverse engineered how much mercury could be attributed to volcanic eruptions. Then, using the GEOS-Chem atmospheric model, they modeled how mercury from volcanic eruptions moved across the globe.
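
The proxy calculation itself amounts to a simple scaling: the satellite-derived SO2 mass is multiplied by a mercury-to-SO2 ratio measured in plumes. A minimal Python sketch; the numbers are illustrative placeholders, not values from the paper:

    # Volcanic mercury emission estimated from an SO2 retrieval and a
    # plume Hg/SO2 mass ratio (both masses in the same units, here Mg).
    def volcanic_hg(so2_mass_Mg, hg_to_so2_ratio):
        return so2_mass_Mg * hg_to_so2_ratio

    # Illustrative only: 2e7 Mg of SO2 per year at a plume ratio of 1e-5
    print(volcanic_hg(2e7, 1e-5))   # -> 200 Mg of mercury per year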

The team found that while mercury mixes into the atmosphere and can travel long distances from its injection site, volcanic emissions are directly responsible for only a few percent of ground level concentrations in most areas on the planet. However, there are areas -- such as in South America, the Mediterranean and the Ring of Fire in the Pacific -- where levels of volcanic emissions of mercury make it harder to track human emissions.

"In Boston, we can do our local monitoring and we don't have to think about whether it was a big volcano year or a small volcano year," said Geyman. "But in a place like Hawaii, you've got a big source of natural mercury that is highly variable over time. This map helps us understand where volcanos are important and where they aren't, which is really useful for understanding the impact of humans on long-term mercury trends in fish, in the air and in the ocean. It's important to be able to correct for natural variability in the volcanic influence in places where we think that influence may not be negligible."

Read more at Science Daily

Study uncovers hundred-year lifespans for three freshwater fish species in the Arizona desert

A recent study found some of the oldest animals in the world living in a place you wouldn't expect: fishes in the Arizona desert. Researchers found the second genus of animal ever for which three or more species have known lifespans greater than 100 years, which could open the doors to aging studies across disciplines, such as gerontology and senescence (aging) among vertebrates.

The study centers around a series of fish species within the Ictiobus genus, known as buffalofishes. Minnesota has native populations of each of the three species studied: bigmouth buffalo, smallmouth buffalo and black buffalo. The importance of this research is underscored by the fact that these fishes are often misidentified and lumped in with invasive species like carp, and that fishing regulations in many places, including Minnesota, do not properly protect these species or the wealth of information about longevity and aging they could yield.

This new research from the University of Minnesota Duluth (UMD), recently published in Scientific Reports, was a collaboration between Alec Lackmann, PhD, an ichthyologist and assistant professor in the Department of Mathematics and Statistics of the Swenson College of Science and Engineering at UMD; other scientists including from North Dakota State University; and a group of conservation anglers who fish the Apache Lake reservoir in Arizona.

"There is likely a treasure trove of aging, longevity and negligible senescence information within the genus Ictiobus," said Lackmann. "This study brings light to this potential and opens the door to a future in which a more complete understanding of the process of vertebrate aging can be realized, including for humans. The research begs the question: what is the buffalofishes' fountain of youth?"

Lackmann has studied buffalofishes before; his research from 2019 extended the previously accepted maximum age of bigmouth buffalo from around 25 years to more than 100 years by applying and validating a far more refined aging technique than had been used before. Instead of examining the fish's scales, "you extract what are called the otoliths, or earstones, from inside the cranium of the fish, and then thin section the stones to determine their age," said Lackmann.

Approximately 97 percent of fish species have otoliths. They're tiny stone-like structures that grow throughout the fish's lifetime, forming a new layer each year. When processed properly, scientists like Lackmann can examine the otolith with a compound microscope and count the layers, like the rings on a tree, and learn the age of the fish.

Results of the study include:

  • Unparalleled longevity for freshwater fishes: three species with lifespans of more than a century, and more than 90 percent of the buffalofishes in Apache Lake older than 85 years.
  • The discovery that some of the original buffalofishes from the Arizona stocking in 1918 are likely still alive.
  • A fishery of catch-and-release buffalofish angling that has not only increased our knowledge of fisheries, but also our understanding of how buffalofishes can be identified and recaptured across years, including uniquely-marked centenarians.
  • A robust collaborative effort between citizen anglers and scientists that has resulted in thorough and consistent scientific outreach and learning.
Buffalofishes are native to central North America, including Minnesota, but those in this recent study were found in Apache Lake, a reservoir in the desert southwest. Originally reared in hatcheries and rearing ponds along the Mississippi River in the Midwest, the government stocked buffalofishes into Roosevelt Lake (upstream of Apache Lake), Arizona in 1918. While Roosevelt Lake was fished commercially, Apache Lake's fish populations remained largely untouched until anglers recently learned how to consistently catch buffalofishes there on rod-and-line.

When these catch-and-release conservation anglers noticed unique orange and black spots on many of the fish they were catching, they wanted to learn more about the markings, and found Lackmann's previous research. An Arizona angler, Stuart Black, reached out and invited Lackmann to a fishing expedition at Apache Lake, where the fish collected would be donated to science.

By studying the fishes collected at the angling event and analyzing their otoliths for age, Lackmann found that some of the buffalofishes from the 1918 Arizona stocking are likely still alive today, and that most of the buffalofishes in Apache Lake hatched during the early 1920s. More importantly, they discovered that all three buffalofish species found in the lake reached ages of more than 100 years. To their knowledge, such longevity across multiple freshwater fish species is found nowhere else in the world.

For Lackmann, there are exciting possibilities for the future of studying this unique group of fish, with far-reaching implications.

Read more at Science Daily

In a surprising finding, light can make water evaporate without heat

Evaporation is happening all around us all the time, from the sweat cooling our bodies to the dew burning off in the morning sun. But science's understanding of this ubiquitous process may have been missing a piece all this time.

In recent years, some researchers have been puzzled upon finding that water in their experiments, which was held in a sponge-like material known as a hydrogel, was evaporating at a higher rate than could be explained by the amount of heat, or thermal energy, that the water was receiving. And the excess has been significant -- a doubling, or even a tripling or more, of the theoretical maximum rate.

After carrying out a series of new experiments and simulations, and reexamining some of the results from various groups that claimed to have exceeded the thermal limit, a team of researchers at MIT has reached a startling conclusion: Under certain conditions, at the interface where water meets air, light can directly bring about evaporation without the need for heat, and it actually does so even more efficiently than heat. In these experiments, the water was held in a hydrogel material, but the researchers suggest that the phenomenon may occur under other conditions as well.

The findings are published this week in a paper in PNAS, by MIT postdoc Yaodong Tu, professor of mechanical engineering Gang Chen, and four others.

The phenomenon might play a role in the formation and evolution of fog and clouds, and thus would be important to incorporate into climate models to improve their accuracy, the researchers say. And it might play an important part in many industrial processes such as solar-powered desalination of water, perhaps enabling alternatives to the step of converting sunlight to heat first.

The new findings come as a surprise because water itself does not absorb light to any significant degree. That's why you can see clearly through many feet of clean water to the surface below. So, when the team initially began exploring the process of solar evaporation for desalination, they first put particles of a black, light-absorbing material in a container of water to help convert the sunlight to heat.

Then, the team came across the work of another group that had achieved an evaporation rate double the thermal limit -- which is the highest possible amount of evaporation that can take place for a given input of heat, based on basic physical principles such as the conservation of energy. It was in these experiments that the water was bound up in a hydrogel. Although they were initially skeptical, Chen and Tu started their own experiments with hydrogels, including a piece of the material from the other group. "We tested it under our solar simulator, and it worked," confirming the unusually high evaporation rate, Chen says. "So, we believed them now." Chen and Tu then began making and testing their own hydrogels.

They began to suspect that the excess evaporation was being caused by the light itself -- that photons of light were actually knocking bundles of water molecules loose from the water's surface. This effect would only take place right at the boundary layer between water and air, at the surface of the hydrogel material -- and perhaps also on the sea surface or the surfaces of droplets in clouds or fog.

In the lab, they monitored the surface of a hydrogel, a JELL-O-like matrix consisting mostly of water bound by a sponge-like lattice of thin membranes. They measured its responses to simulated sunlight with precisely controlled wavelengths.

The researchers subjected the water surface to different colors of light in sequence and measured the evaporation rate. They did this by placing a container of water-laden hydrogel on a scale and directly measuring the amount of mass lost to evaporation, as well as monitoring the temperature above the hydrogel surface. The lights were shielded to prevent them from introducing extra heat. The researchers found that the effect varied with color and peaked at a particular wavelength of green light. Such a color dependence has no relation to heat, and so supports the idea that it is the light itself that is causing at least some of the evaporation.

The researchers tried to duplicate the observed evaporation rate with the same setup but using electricity to heat the material, and no light. Even though the thermal input was the same as in the other test, the amount of water that evaporated never exceeded the thermal limit. However, it did so when the simulated sunlight was on, confirming that light was the cause of the extra evaporation.
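
The thermal limit itself is simple energy bookkeeping: the input power, divided by the latent heat needed to vaporize water, caps the evaporation rate. A minimal Python sketch of that arithmetic (standard textbook constants, not values from the paper):

    # Thermal limit: all input power goes into the latent heat of vaporization.
    LATENT_HEAT = 2.45e6   # J/kg for water near room temperature

    def thermal_limit_kg_per_m2_h(power_W_per_m2):
        return power_W_per_m2 / LATENT_HEAT * 3600

    print(thermal_limit_kg_per_m2_h(1000))   # ~1.47 kg/(m2*h) under one-sun illumination

Evaporation rates two or three times this figure, as reported in the hydrogel experiments, are what point to a non-thermal contribution.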

Though water itself does not absorb much light, and neither does the hydrogel material itself, when the two combine they become strong absorbers, Chen says. That allows the material to harness the energy of the solar photons efficiently and exceed the thermal limit, without the need for any dark dyes for absorption.

Having discovered this effect, which they have dubbed the photomolecular effect, the researchers are now working on how to apply it to real-world needs. They have a grant from the Abdul Latif Jameel Water and Food Systems Lab to study the use of this phenomenon to improve the efficiency of solar-powered desalination systems, and a Bose Grant to explore the phenomenon's effects on climate change modeling.

Tu explains that in standard desalination processes, "it normally has two steps: First we evaporate the water into vapor, and then we need to condense the vapor to liquify it into fresh water." With this discovery, he says, potentially "we can achieve high efficiency on the evaporation side." The process also could turn out to have applications in processes that require drying a material.

Chen says that in principle, he thinks it may be possible to increase the limit of water produced by solar desalination, which is currently about 1.5 kilograms per square meter per hour, by as much as three- or fourfold using this light-based approach. "This could potentially really lead to cheap desalination," he says.

Tu adds that this phenomenon could potentially also be leveraged in evaporative cooling processes, using the phase change to provide a highly efficient solar cooling system.

Meanwhile, the researchers are also working closely with other groups who are attempting to replicate the findings, hoping to overcome skepticism that has faced the unexpected findings and the hypothesis being advanced to explain them.

Read more at Science Daily

Nov 1, 2023

The Crab Nebula seen in new light by NASA's Webb

NASA's James Webb Space Telescope has gazed at the Crab Nebula, a supernova remnant located 6,500 light-years away in the constellation Taurus. Since the recording of this energetic event in 1054 CE by 11th-century astronomers, the Crab Nebula has continued to draw attention and additional study as scientists seek to understand the conditions, behavior, and after-effects of supernovae through thorough study of the Crab, a relatively nearby example.

Using Webb's NIRCam (Near-Infrared Camera) and MIRI (Mid-Infrared Instrument), a team led by Tea Temim at Princeton University is searching for answers about the Crab Nebula's origins.

"Webb's sensitivity and spatial resolution allow us to accurately determine the composition of the ejected material, particularly the content of iron and nickel, which may reveal what type of explosion produced the Crab Nebula," explained Temim.

At first glance, the general shape of the supernova remnant is similar to the optical wavelength image released in 2005 from NASA's Hubble Space Telescope: in Webb's infrared observation, a crisp, cage-like structure of fluffy gaseous filaments is shown in red-orange. However, in the central regions, emission from dust grains (yellow-white and green) is mapped out by Webb for the first time.

Additional aspects of the inner workings of the Crab Nebula become more prominent and are seen in greater detail in the infrared light captured by Webb. In particular, Webb highlights what is known as synchrotron radiation: emission produced from charged particles, like electrons, moving around magnetic field lines at relativistic speeds. The radiation appears here as milky smoke-like material throughout the majority of the Crab Nebula's interior.

This feature is a product of the nebula's pulsar, a rapidly rotating neutron star. The pulsar's strong magnetic field accelerates particles to extremely high speeds and causes them to emit radiation as they wind around magnetic field lines. Though emitted across the electromagnetic spectrum, the synchrotron radiation is seen in unprecedented detail with Webb's NIRCam instrument.

To locate the Crab Nebula's pulsar heart, trace the wisps that follow a circular ripple-like pattern in the middle to the bright white dot in the center. Farther out from the core, follow the thin white ribbons of the radiation. The curvy wisps are closely grouped together, outlining the structure of the pulsar's magnetic field, which sculpts and shapes the nebula.

At center left and right, the white material curves sharply inward from the filamentary dust cage's edges and goes toward the neutron star's location, as if the waist of the nebula is pinched. This abrupt slimming may be caused by the confinement of the supernova wind's expansion by a belt of dense gas.

The wind produced by the pulsar heart continues to push the shell of gas and dust outward at a rapid pace. Within the remnant's interior, yellow-white and green mottled filaments form large-scale loop-like structures, which represent areas where dust grains reside.

Read more at Science Daily

Low-income countries could lose 30% of nutrients like protein and omega-3 from seafood due to climate change

The nutrients available from seafood could drop by 30 per cent for low-income countries by the end of the century due to climate change, suggests new UBC research.

That's in a high carbon emissions and low mitigation scenario, according to the study published today in Nature Climate Change. This could be reduced to a roughly 10 per cent decline if the world were to meet the Paris Agreement targets of limiting global warming to 1.5 to 2 degrees Celsius -- which recent reports have shown we're not on track to achieve.

"Low-income countries and the global south, where seafood is central to diets and has the potential to help address malnutrition, are the hardest hit by the effects of climate change," said first author Dr. William Cheung, professor and director of the UBC Institute for the Oceans and Fisheries (IOF). "For many, seafood is an irreplaceable and affordable source of nutrients."

The researchers examined historical fisheries and seafood farming, or mariculture, databases including data from UBC's Sea Around Us to find out quantities of key nutrients that were available through fisheries and seafood farming in the past, and used predictive climate models to project these into the future. They focused on four nutrients that are plentiful in seafood and important to human health: calcium, iron, protein and omega-3 fatty acids, the latter of which is not readily available in other food sources.

They found that the availability of these nutrients peaked in the 1990s and stagnated to the 2010s, despite increases provided by farming seafood, and from fishing for invertebrates such as shrimp and oysters.

Calcium sees biggest decline

Looking to the future, the availability of all four nutrients from catches is projected to decrease, with calcium the hardest hit at a projected decline of about 15 to 40 per cent by 2100 under a low and high emissions scenario, respectively. Omega-3 would see an approximately five to 25 per cent decrease. These declines are largely driven by decreases in the amounts of pelagic fish available for catch.

"Small pelagic fish are really rich in calcium so in areas of the world where people have intolerances to milk or where other animal-sourced foods, like meat and dairy, are much more expensive, fish is really key to people's diets," said senior author Dr. Christina Hicks, professor at Lancaster University. "In many parts of the world, particularly low-income countries across the tropics, fish supply nutrients that are lacking in people's diets."

While seafood farming will contribute more nutrients in the future compared with current levels, the researchers projected these increases would not be able to compensate for the loss from fisheries. Under a high emissions scenario, any gains in the availability of nutrients from seafood farming before 2050 would be lost by 2100.

"The primary reason for this is climate change, which is also a significant threat to seafood farming, leaving us with a growing nutritional deficit," said co-author Dr. Muhammed Oyinlola, a postdoctoral fellow in the UBC department of zoology and the Institut national de la recherche scientifique. "Seafood farming alone cannot provide a comprehensive solution to this complex issue."

The availability of all four nutrients from tropical waters of generally lower income nations, such as Indonesia, the Solomon Islands and Sierra Leone, is projected to decline steeply by the end of the century under a high emissions scenario, compared with minimal declines in higher income, non-tropical waters, such as those of Canada, the U.S. and the U.K.

Globally, the researchers projected that seafood-sourced nutrient availability would decrease by about four to seven per cent per degree Celsius warming. For lower-income countries across the tropics including Nigeria, Sierra Leone, and the Solomon Islands, the projected decline was two to three times this global average at nearly 10 to 12 per cent per unit of warming.
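
Taken at face value, those per-degree figures allow a quick linear projection of nutrient losses under different warming levels; a minimal Python sketch (an illustrative extrapolation using the midpoints of the quoted ranges, not the study's climate model):

    # Project seafood-nutrient decline as (percent per degree C) * warming.
    def projected_decline(pct_per_degree, warming_C):
        return pct_per_degree * warming_C

    for label, rate in [("global average", 5.5), ("tropical low-income", 11.0)]:
        print(label, [projected_decline(rate, w) for w in (1.5, 4.0)])
    # global average: [8.25, 22.0] percent at 1.5 and 4.0 degrees of warming
    # tropical low-income: [16.5, 44.0]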

"This research highlights the impact of every degree of warming," said Dr. Cheung. "The more we can reduce warming, the fewer risks to marine and human life."

Using all of a fish
Certain types of fish such as anchovies and herring are packed with nutrients but often used for fish meal and fish oil because these nutrients also promote fish growth. Similarly, many countries retain only select parts of a fish for sale. The researchers highlighted potential adaptations to increase nutrient availability from seafood, by retaining more of these nutritious fish for local human consumption, as well as reducing food waste in fisheries production and consumption by using all parts of a fish including the head and fins.

Read more at Science Daily

The remains of an ancient planet lie deep within Earth

In the 1980s, geophysicists made a startling discovery: two continent-sized blobs of unusual material were found deep near the center of the Earth, one beneath the African continent and one beneath the Pacific Ocean. Each blob is twice the size of the Moon and likely composed of different proportions of elements than the mantle surrounding it.

Where did these strange blobs -- formally known as large low-velocity provinces (LLVPs) -- come from? A new study led by Caltech researchers suggests that they are remnants of an ancient planet that violently collided with Earth billions of years ago in the same giant impact that created our Moon.

The study, published in the journal Nature on November 1, also proposes an answer to another planetary science mystery. Researchers have long hypothesized that the Moon was created in the aftermath of a giant impact between Earth and a smaller planet dubbed Theia, but no trace of Theia has ever been found in the asteroid belt or in meteorites. This new study suggests that most of Theia was absorbed into the young Earth, forming the LLVPs, while residual debris from the impact coalesced into the Moon.

The research was led by Qian Yuan, O.K. Earl Postdoctoral Scholar Research Associate in the laboratories of both Paul Asimow (MS '93, PhD '97), the Eleanor and John R. McMillan Professor of Geology and Geochemistry; and Michael Gurnis, the John E. And Hazel S. Smits Professor of Geophysics and Clarence R. Allen Leadership Chair, director of Caltech's Seismological Laboratory, and director of the Schmidt Academy for Software Engineering at Caltech.

Scientists first discovered the LLVPs by measuring seismic waves traveling through the earth. Seismic waves travel at different speeds through different materials, and in the 1980s, the first hints emerged of large-scale three-dimensional variations deep within the structure of Earth. In the deepest mantle, the seismic wave pattern is dominated by the signatures of two large structures near the Earth's core that researchers believe possess an unusually high level of iron. This high iron content means the regions are denser than their surroundings, causing seismic waves passing through them to slow down and leading to the name "large low-velocity provinces."

Yuan, a geophysicist by training, attended a seminar about planet formation given by Mikhail Zolotov, a professor at Arizona State University, in 2019. Zolotov presented the giant-impact hypothesis, and Yuan noted that the Moon is relatively rich in iron. Zolotov added that no trace had been found of the impactor that must have collided with the Earth.

"Right after Mikhail had said that no one knows where the impactor is now, I had a 'eureka moment' and realized that the iron-rich impactor could have transformed into mantle blobs," says Yuan.

Yuan worked with multidisciplinary collaborators to model different scenarios for Theia's chemical composition and its impact with Earth. The simulations confirmed that the physics of the collision could have led to the formation of both the LLVPs and the Moon. Some of Theia's mantle could have become incorporated into the Earth's own, where it ultimately clumped and crystallized together to form the two distinct blobs detectable at Earth's core-mantle boundary today; other debris from the collision mixed together to form the Moon.

Given such a violent impact, why did Theia's material clump into the two distinct blobs instead of mixing together with the rest of the forming planet? The researchers' simulations showed that much of the energy delivered by Theia's impact remained in the upper half of the mantle, leaving Earth's lower mantle cooler than estimated by earlier, lower-resolution impact models. Because the lower mantle was not totally melted by the impact, the blobs of iron-rich material from Theia stayed largely intact as they sank to the base of the mantle, like the colored masses of paraffin wax in a turned-off lava lamp. Had the lower mantle been hotter (that is, if it had received more energy from the impact), it would have mixed more thoroughly with the iron-rich material, like the colors in a stirred pot of paints.

The next steps are to examine how the early presence of Theia's heterogeneous material deep within the Earth might have influenced our planet's interior processes, such as plate tectonics.

Read more at Science Daily

Humans are disrupting natural 'salt cycle' on a global scale, new study shows

The planet's demand for salt comes at a cost to the environment and human health, according to a new scientific review led by University of Maryland Geology Professor Sujay Kaushal. Published in the journal Nature Reviews Earth & Environment, the paper revealed that human activities are making Earth's air, soil and freshwater saltier, which could pose an "existential threat" if current trends continue.

Geologic and hydrologic processes bring salts to Earth's surface over time, but human activities such as mining and land development are rapidly accelerating the natural "salt cycle." Agriculture, construction, water and road treatment, and other industrial activities can also intensify salinization, which harms biodiversity and makes drinking water unsafe in extreme cases.

"If you think of the planet as a living organism, when you accumulate so much salt it could affect the functioning of vital organs or ecosystems," said Kaushal, who holds a joint appointment in UMD's Earth System Science Interdisciplinary Center. "Removing salt from water is energy intensive and expensive, and the brine byproduct you end up with is saltier than ocean water and can't be easily disposed of."

Kaushal and his co-authors described these disturbances as an "anthropogenic salt cycle," establishing for the first time that humans affect the concentration and cycling of salt on a global, interconnected scale.

"Twenty years ago, all we had were case studies. We could say surface waters were salty here in New York or in Baltimore's drinking water supply," said study co-author Gene Likens, an ecologist at the University of Connecticut and the Cary Institute of Ecosystem Studies. "We now show that it's a cycle -- from the deep Earth to the atmosphere -- that's been significantly perturbed by human activities."

The new study considered a variety of salt ions that are found underground and in surface water. Salts are compounds with positively charged cations and negatively charged anions, with some of the most abundant ones being calcium, magnesium, potassium and sulfate ions.

"When people think of salt, they tend to think of sodium chloride, but our work over the years has shown that we've disturbed other types of salts, including ones related to limestone, gypsum and calcium sulfate," Kaushal said.

When dislodged in higher doses, these ions can cause environmental problems. Kaushal and his co-authors showed that human-caused salinization affected approximately 2.5 billion acres of soil around the world -- an area about the size of the United States. Salt ions also increased in streams and rivers over the last 50 years, coinciding with an increase in the global use and production of salts.
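That size comparison holds up under a quick unit conversion (illustrative arithmetic only):

    # Convert 2.5 billion acres of salinized soil to square kilometres and
    # compare with the land area of the United States (~9.8 million km^2).
    acres = 2.5e9
    m2_per_acre = 4046.86              # square metres per acre
    area_km2 = acres * m2_per_acre / 1e6
    print(round(area_km2 / 1e6, 1))    # ~10.1 million km^2 -- roughly U.S.-sized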

Salt has even infiltrated the air. In some regions, lakes are drying up and sending plumes of saline dust into the atmosphere. In areas that experience snow, road salts can become aerosolized, creating sodium and chloride particulate matter.

Salinization is also associated with "cascading" effects. For example, saline dust can accelerate the melting of snow and harm communities -- particularly in the western United States -- that rely on snow for their water supply. Because of their structure, salt ions can bind to contaminants in soils and sediments, forming "chemical cocktails" that circulate in the environment and have detrimental effects.

"Salt has a small ionic radius and can wedge itself between soil particles very easily," Kaushal said. "In fact, that's how road salts prevent ice crystals from forming."

Road salts have an outsized impact in the U.S., which churns out 44 billion pounds of the deicing agent each year. Road salts represented 44% of U.S. salt consumption between 2013 and 2017, and they account for 13.9% of the total dissolved solids that enter streams across the country. This can cause a "substantial" concentration of salt in watersheds, according to Kaushal and his co-authors.

To prevent U.S. waterways from being inundated with salt in the coming years, Kaushal recommended policies that limit road salts or encourage alternatives. Washington, D.C., and several other U.S. cities have started treating frigid roads with beet juice, which has the same effect but contains significantly less salt.

Kaushal said it is becoming increasingly important to weigh the short- and long-term risks of road salts, which play an important role in public safety but can also diminish water quality.

"There's the short-term risk of injury, which is serious and something we certainly need to think about, but there's also the long-term risk of health issues associated with too much salt in our water," Kaushal said. "It's about finding the right balance."

The study's authors also called for the creation of a "planetary boundary for safe and sustainable salt use" in much the same way that carbon dioxide levels are associated with a planetary boundary to limit climate change. Kaushal said that while it's theoretically possible to regulate and control salt levels, it comes with unique challenges.

Read more at Science Daily

Oct 31, 2023

To advance space colonization, new research explores 3D printing in microgravity

Research from West Virginia University students and faculty into how 3D printing works in a weightless environment aims to support long-term exploration and habitation on spaceships, the moon or Mars.

Extended missions in outer space require the manufacture of crucial materials and equipment onsite, rather than transporting those items from Earth. Members of the Microgravity Research Team said they believe 3D printing is the way to make that happen.

The team's recent experiments focused on how a weightless microgravity environment affects 3D printing using titania foam, a material with potential applications ranging from UV blocking to water purification. ACS Applied Materials & Interfaces published their findings.

"A spacecraft can't carry infinite resources, so you have to maintain and recycle what you have and 3D printing enables that," said lead author Jacob Cordonier, a doctoral student in mechanical and aerospace engineering at the WVU Benjamin M. Statler College of Engineering and Mineral Resources. "You can print only what you need, reducing waste. Our study looked at whether a 3D-printed titanium dioxide foam could protect against ultraviolet radiation in outer space and purify water.

"The research also allows us to see gravity's role in how the foam comes out of the 3D printer nozzle and spreads onto a substrate. We've seen differences in the filament shape when printed in microgravity compared to Earth gravity. And by changing additional variables in the printing process, such as writing speed and extrusion pressure, we're able to paint a clearer image of how all these parameters interact to tune the shape of the filament."

Cordonier's co-authors include current and former undergraduate students Kyleigh Anderson, Ronan Butts, Ross O'Hara, Renee Garneau and Nathanael Wimer. Also contributing to the paper were John Kuhlman, professor emeritus, and Konstantinos Sierros, associate professor and associate chair for research in the Department of Mechanical and Aerospace Engineering.

Sierros has overseen the Microgravity Research Team's titania foam studies since 2016. The work now happens in his WVU labs but originally required taking a ride on a Boeing 727. There, students printed lines of foam onto glass slides during 20-second periods of weightlessness when the jet was at the top of its parabolic flight path.

"Transporting even a kilogram of material in space is expensive and storage is limited, so we're looking into what is called 'in-situ resource utilization,'" Sierros said. "We know the moon contains deposits of minerals very similar to the titanium dioxide used to make our foam, so the idea is you don't have to transport equipment from here to space because we can mine those resources on the moon and print the equipment that's necessary for a mission."

Necessary equipment includes shields against ultraviolet light, which poses a threat to astronauts, electronics and other space assets.

"On Earth, our atmosphere blocks a significant part of UV light -- though not all of it, which is why we get sunburned," Cordonier said. "In space or on the moon, there's nothing to mitigate it besides your spacesuit or whatever coating is on your spacecraft or habitat."

To measure titania foam's effectiveness at blocking UV waves, "we would shine light ranging from the ultraviolet wavelengths up to the visible light spectrum," he explained. "We measured how much light was getting through the titania foam film we had printed, how much got reflected back and how much was absorbed by the sample. We showed the film blocks almost all the UV light hitting the sample and very little visible light gets through. Even at only 200 microns thick, our material is effective at blocking UV radiation."
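The logic of that measurement is a simple energy balance: at each wavelength, whatever light is neither transmitted nor reflected must have been absorbed. A minimal sketch (the sample values are placeholders for illustration, not the team's data):

    # Energy balance for the optical test described above:
    # transmittance + reflectance + absorptance = 1 at each wavelength.
    def absorptance(transmittance, reflectance):
        return 1.0 - transmittance - reflectance

    # Placeholder values for one UV wavelength (illustrative only):
    print(absorptance(0.01, 0.30))     # -> 0.69 of the incident light absorbed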

Cordonier said the foam also demonstrated photocatalytic properties, meaning that it can use light to promote chemical reactions that can do things like purify air or water.

Team member Butts, an undergraduate from Wheeling, led experiments in contact angle testing to analyze how changes in temperature affected the foam's surface energy. Butts called the research "a different type of challenge that students don't always get to experience," and said he especially valued the engagement component.

"Our team gets to do a lot of outreach with young students like the Scouts through the Merit Badge University at WVU. We get to show them what we do here as a way to say, 'Hey, this is something you could do, too,'" Butts said.

According to Sierros, "We're trying to integrate research into student careers at an early point. We have a student subgroup that's purely hardware and they make the 3D printers. We have students leading materials development, automation, data analysis. The undergraduates who have been doing this work with the support of two very competitive NASA grants are participating in the whole research process. They have published peer-reviewed scientific articles and presented at conferences."

Garneau, a student researcher from Winchester, Virginia, said her dream is for their 3D printer -- custom designed to be compact and automated -- to take a six-month trip to the International Space Station. That would enable more extensive monitoring of the printing process than was possible during the 20-second freefalls.

"This was an amazing experience," Garneau said. "It was the first time I participated in a research project that didn't have predetermined results like what I have experienced in research-based classes. It was really rewarding to analyze the data and come to conclusions that weren't based on fixed expectations.

Read more at Science Daily

Window to avoid 1.5°C of warming will close before 2030 if emissions are not reduced

Without rapid carbon dioxide emission reductions, the world has a 50% chance of locking in 1.5°C of warming before 2030, according to a study led by Imperial College London researchers.

The study, published today in Nature Climate Change, is the most up-to-date and comprehensive analysis of the global carbon budget. The carbon budget is an estimate of the amount of carbon dioxide emissions that can be emitted while keeping global warming below certain temperature limits.

The Paris Agreement aims to limit global temperature increase to well below 2°C above preindustrial levels and pursue efforts to limit it to 1.5°C. The remaining carbon budget is commonly used to assess global progress against these targets.

The new study estimates that for a 50% chance of limiting warming to 1.5°C, there are less than 250 gigatonnes of carbon dioxide left in the global carbon budget.

The researchers warn that if carbon dioxide emissions remain at 2022 levels of about 40 gigatonnes per year, the carbon budget will be exhausted by around 2029, committing the world to warming of 1.5°C above preindustrial levels.
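The 2029 figure follows from simply dividing the remaining budget by the current emissions rate (a sketch of that arithmetic, not the paper's probabilistic method):

    # Remaining 1.5 C carbon budget divided by current annual emissions.
    budget_gt = 250.0     # gigatonnes of CO2 left for a 50% chance of 1.5 C
    annual_gt = 40.0      # approximate 2022 emissions, gigatonnes CO2 per year
    years_left = budget_gt / annual_gt
    print(years_left)           # ~6.25 years
    print(2023 + years_left)    # budget exhausted around 2029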

The finding means the budget is smaller than previously calculated and has approximately halved since 2020. The shrinkage is due both to the continued increase of global greenhouse gas emissions, caused primarily by the burning of fossil fuels, and to an improved estimate of the cooling effect of aerosols, which are decreasing globally due to measures to improve air quality and reduce emissions.

Dr Robin Lamboll, research fellow at the Centre for Environmental Policy at Imperial College London, and the lead author of the study, said: "Our finding confirms what we already know -- we're not doing nearly enough to keep warming below 1.5°C.

"The remaining budget is now so small that minor changes in our understanding of the world can result in large proportional changes to the budget. However, estimates point to less than a decade of emissions at current levels.

"The lack of progress on emissions reduction means that we can be ever more certain that the window for keeping warming to safe levels is rapidly closing."

Dr Joeri Rogelj, Director of Research at the Grantham Institute and Professor of Climate Science & Policy at the Centre for Environmental Policy at Imperial College London, said: "This carbon budget update is both expected and fully consistent with the latest UN Climate Report.

"That report from 2021 already highlighted that there was a one in three chance that the remaining carbon budget for 1.5°C could be as small as our study now reports.

"This shows the importance of not simply looking at central estimates, but also considering the uncertainty surrounding them."

The study also found that the carbon budget for a 50% chance of limiting warming to 2°C is approximately 1,200 gigatonnes, meaning that if carbon dioxide emissions continue at current levels, the central 2°C budget will be exhausted by 2046.

There has been much uncertainty in calculating the remaining carbon budget, due to the influence of other factors, including warming from gasses other than carbon dioxide and the ongoing effects of emissions that are not accounted for in models.

The new research used an updated dataset and improved climate modelling compared with other recent estimates published in June, characterising these uncertainties and increasing confidence in the remaining carbon budget estimates.

The strengthened methodology also gave new insights into the importance of the potential responses of the climate system to achieving net zero.

'Net zero' refers to achieving an overall balance between global emissions produced and emissions removed from the atmosphere.

According to the modelling results in the study, there are still large uncertainties in the way various parts of the climate system will respond in the years just before net zero is achieved.

It is possible that the climate will continue warming due to effects such as melting ice, the release of methane, and changes in ocean circulation.

However, carbon sinks such as increased vegetation growth could also absorb large amounts of carbon dioxide leading to a cooling of global temperatures before net zero is achieved.

Dr Lamboll says these uncertainties further highlight the urgent need to rapidly cut emissions. "At this stage, our best guess is that the opposing warming and cooling will approximately cancel each other out after we reach net zero.

"However, it's only when we only when we cut emissions and get closer to net zero that we will be able to see what the longer-term heating and cooling adjustments will look like.

Read more at Science Daily

Engineers develop an efficient process to make fuel from carbon dioxide

The search is on worldwide to find ways to extract carbon dioxide from the air or from power plant exhaust and then make it into something useful. One of the more promising ideas is to make it into a stable fuel that can replace fossil fuels in some applications. But most such conversion processes have had problems with low carbon efficiency, or they produce fuels that can be hard to handle, toxic, or flammable.

Now, researchers at MIT and Harvard University have developed an efficient process that can convert carbon dioxide into formate, a liquid or solid material that can be used like hydrogen or methanol to power a fuel cell and generate electricity. Potassium or sodium formate, already produced at industrial scales and commonly used as a de-icer for roads and sidewalks, is nontoxic, nonflammable, easy to store and transport, and can remain stable in ordinary steel tanks to be used months, or even years, after its production.

The new process, developed by MIT doctoral students Zhen Zhang, Zhichu Ren, and Alexander H. Quinn, Harvard University doctoral student Dawei Xi, and MIT Professor Ju Li, is described this week in the journal Cell Reports Physical Science. The whole process -- including capture and electrochemical conversion of the gas to a solid formate powder, which is then used in a fuel cell to produce electricity -- was demonstrated at a small, laboratory scale. However, the researchers expect it to be scalable so that it could provide emissions-free heat and power to individual homes and even be used in industrial or grid-scale applications.

Other approaches to converting carbon dioxide into fuel, Li explains, usually involve a two-stage process: First the gas is chemically captured and turned into a solid form as calcium carbonate, then later that material is heated to drive off the carbon dioxide and convert it to a fuel feedstock such as carbon monoxide. That second step has very low efficiency, typically converting less than 20 percent of the gaseous carbon dioxide into the desired product, Li says.

By contrast, the new process achieves a conversion of well over 90 percent and eliminates the need for the inefficient heating step by first converting the carbon dioxide into an intermediate form, liquid metal bicarbonate. That liquid is then electrochemically converted into liquid potassium or sodium formate in an electrolyzer that uses low-carbon electricity, e.g. nuclear, wind, or solar power. The highly concentrated liquid potassium or sodium formate solution produced can then be dried, for example by solar evaporation, to produce a solid powder that is highly stable and can be stored in ordinary steel tanks for up to years or even decades, Li says.

Several steps of optimization developed by the team made all the difference in changing an inefficient chemical-conversion process into a practical solution, says Li, who holds joint appointments in the departments of Nuclear Science and Engineering and of Materials Science and Engineering.

The process of carbon capture and conversion involves first an alkaline-solution-based capture step that concentrates carbon dioxide -- either from concentrated streams, such as power plant emissions, or from very low-concentration sources, even open air -- into the form of a liquid metal-bicarbonate solution. Then, through the use of a cation-exchange membrane electrolyzer, this bicarbonate is electrochemically converted into solid formate crystals with a carbon efficiency of greater than 96 percent, as confirmed in the team's lab-scale experiments.
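Written as textbook reactions, one plausible reading of those two steps looks like this (these specific equations are an assumption consistent with the description, not taken from the paper; potassium hydroxide stands in for the alkaline capture solution):

    \mathrm{CO_2 + KOH \rightarrow KHCO_3}   (capture into a metal-bicarbonate solution)
    \mathrm{HCO_3^- + H_2O + 2e^- \rightarrow HCOO^- + 2OH^-}   (electrochemical reduction to formate)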

These crystals have an indefinite shelf life, remaining so stable that they could be stored for years, or even decades, with little or no loss. By comparison, even the best available practical hydrogen storage tanks allow the gas to leak out at a rate of about 1 percent per day, precluding any uses that would require year-long storage, Li says. Methanol, another widely explored alternative for converting carbon dioxide into a fuel usable in fuel cells, is a toxic substance that cannot easily be adapted to use in situations where leakage could pose a health hazard. Formate, on the other hand, is widely used and considered benign, according to national safety standards.
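The hydrogen comparison is easy to check: a leak of about 1 percent per day compounds to near-total loss over a year (simple arithmetic, not a figure from the paper):

    # Fraction of hydrogen left in a tank leaking ~1% per day, after one year.
    remaining = 0.99 ** 365
    print(round(remaining, 3))   # ~0.026 -> roughly 97% of the gas is gone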

Several improvements account for the greatly improved efficiency of this process. First, a careful design of the membrane materials and their configuration overcomes a problem that previous attempts at such a system have encountered, where a buildup of certain chemical byproducts changes the pH, causing the system to steadily lose efficiency over time. "Traditionally, it is difficult to achieve long-term, stable, continuous conversion of the feedstocks," Zhang says. "The key to our system is to achieve a pH balance for steady-state conversion."

To achieve that, the researchers carried out thermodynamic modeling to design the new process so that it is chemically balanced and the pH remains at a steady state with no shift in acidity over time. It can therefore continue operating efficiently over long periods. In their tests, the system ran for over 200 hours with no significant decrease in output. The whole process can be done at ambient temperatures and relatively low pressures (about five times atmospheric pressure).

Another issue was that unwanted side reactions produced chemical products that were not useful. The team prevented these side reactions by introducing an extra "buffer" layer of bicarbonate-enriched fiberglass wool.

The team also built a fuel cell specifically optimized for the use of this formate fuel to produce electricity. The stored formate particles are simply dissolved in water and pumped into the fuel cell as needed. Although the solid fuel is much heavier than pure hydrogen, when the weight and volume of the high-pressure gas tanks needed to store hydrogen is considered, the end result is an electricity output near parity for a given storage volume, Li says.

The formate fuel can potentially be adapted for anything from home-sized units to large scale industrial uses or grid-scale storage systems, the researchers say. Initial household applications might involve an electrolyzer unit about the size of a refrigerator to capture and convert the carbon dioxide into formate, which could be stored in an underground or rooftop tank. Then, when needed, the powdered solid would be mixed with water and fed into a fuel cell to provide power and heat. "This is for community or household demonstrations," Zhang says, "but we believe that also in the future it may be good for factories or the grid."

Read more at Science Daily

Cold War spy satellite imagery reveals Ancient Roman forts

Two thousand years ago, the Roman Empire constructed forts across the northern Fertile Crescent, spanning from what is now western Syria to northwestern Iraq.

In the 1920s, 116 forts were documented in the region by Father Antoine Poidebard, who conducted one of the world's first aerial surveys using a WWI-era biplane. Poidebard reported that the forts were constructed from north to south to establish an eastern boundary of the Roman Empire.

A new Dartmouth study analyzing declassified Cold War satellite imagery reveals 396 previously undocumented Roman forts and reports that these forts were constructed from east to west. The analysis refutes Poidebard's claim that the forts were located along a north-south axis by showing that the forts spanned from Mosul on the Tigris River to Aleppo in western Syria.

The results are published in Antiquity.

"I was surprised to find that there were so many forts and that they were distributed in this way because the conventional wisdom was that these forts formed the border between Rome and its enemies in the east, Persia or Arab armies," says lead author Jesse Casana, a professor in the Department of Anthropology and director of the Spatial Archaeometry Lab at Dartmouth. "While there's been a lot of historical debate about this, it had been mostly assumed that this distribution was real, that Poidebard's map showed that the forts were demarcating the border and served to prevent movement across it in some way."

For the study, the team drew on declassified Cold War-era CORONA and HEXAGON satellite imagery collected between 1960 and 1986. Most of the imagery is part of the open-access CORONA Atlas Project, through which Casana and colleagues developed better methods for correcting the data and made it available online.

The researchers examined satellite imagery of approximately 300,000 square kilometers (115,831 square miles) of the northern Fertile Crescent. It is a place where sites show up particularly well and is archaeologically significant, according to Casana. The team mapped 4,500 known sites and then systematically documented every other site-like feature in each of the nearly 5 by 5 kilometer (3.1 mile by 3.1 mile) survey grids, which resulted in the addition of 10,000 previously undocumented sites to the database.

When the database was originally developed, Casana had created morphological categories based on the different features evident in the imagery, which allows researchers to run queries. One of the categories was Poidebard's forts -- distinctive squares measuring approximately 50 by 100 meters (0.03 x 0.06 miles), comparable in size to about half a soccer field.

The forts would have been large enough to accommodate soldiers, horses, and/or camels. Based on the satellite imagery, some of the forts had lookout towers in the corners or sides. They would have been made of stone and mud-brick or entirely of the latter, so eventually, these non-permanent structures would have melted into the ground.

While most of the forts that Poidebard documented were probably destroyed or obscured by agriculture, land use, or other activities between the 1920s and 1960s, the team was able to find 38 of Poidebard's 116 forts, in addition to identifying 396 others.

Of those 396 forts, 290 were located in the study region and 106 were found in western Syria, in Jazireh. In addition to identifying forts similar to the walled fortresses Poidebard found, the team identified forts with interior architecture features and ones built around a mounded citadel.

Read more at Science Daily

Oct 30, 2023

Venus had Earth-like plate tectonics billions of years ago, study suggests

Venus, a scorching wasteland of a planet according to scientists, may have once had tectonic plate movements similar to those believed to have occurred on early Earth, a new study found. The finding sets up tantalizing scenarios regarding the possibility of early life on Venus, its evolutionary past and the history of the solar system.

Writing in Nature Astronomy, a team of scientists led by Brown University researchers describes using atmospheric data from Venus and computer modeling to show that the composition of the planet's current atmosphere and surface pressure would only have been possible as a result of an early form of plate tectonics, a process critical to life that involves multiple continental plates pushing, pulling and sliding beneath one another.

On Earth, this process intensified over billions of years, forming new continents and mountains, and leading to chemical reactions that stabilized the planet's surface temperature, resulting in an environment more conducive to the development of life.

Venus, on the other hand -- Earth's nearest neighbor and sister planet -- went in the opposite direction and today has surface temperatures hot enough to melt lead. The planet has long been thought to have what's known as a "stagnant lid," meaning its surface consists of a single plate with minimal amounts of give, movement and gasses being released into the atmosphere.

The new paper posits that this wasn't always the case. To account for the abundance of nitrogen and carbon dioxide present in Venus' atmosphere, the researchers conclude that Venus must have had plate tectonics sometime after the planet formed, about 4.5 billion to 3.5 billion years ago. The paper suggests that this early tectonic movement, like on Earth, would have been limited in terms of the number of plates moving and in how much they shifted. It also would have been happening on Earth and Venus simultaneously.

"One of the big picture takeaways is that we very likely had two planets at the same time in the same solar system operating in a plate tectonic regime -- the same mode of tectonics that allowed for the life that we see on Earth today," said Matt Weller, the study's lead author who completed the work while he was a postdoctoral researcher at Brown and is now at the Lunar and Planetary Institute in Houston.

This bolsters the possibility of microbial life on ancient Venus and shows that at one point the two planets -- which are in the same solar neighborhood, are about the same size, and have the same mass, density and volume -- were more alike than previously thought before diverging.

The work also highlights the possibility that plate tectonics on planets might just come down to timing -- and therefore, so may life itself.

"We've so far thought about tectonic state in terms of a binary: it's either true or it's false, and it's either true or false for the duration of the planet," said study co-author Alexander Evans, an assistant professor of Earth, environmental and planetary sciences at Brown. "This shows that planets may transition in and out of different tectonic states and that this may actually be fairly common. Earth may be the outlier. This also means we might have planets that transition in and out of habitability rather than just being continuously habitable."

That concept will be important to consider as scientists look to understand nearby moons -- like Jupiter's Europa, which has shown evidence of Earth-like plate tectonics -- and distant exoplanets, according to the paper.

The researchers initially started the work as a way to show that the atmospheres of far-off exoplanets can be powerful markers of their early histories, before deciding to investigate that point closer to home.

They used current data on Venus' atmosphere as the endpoint for their models and started by assuming Venus has had a stagnant lid throughout its entire existence. Quickly, they were able to see that simulations recreating the planet's current atmosphere didn't match up with where the planet is now in terms of the amounts of nitrogen and carbon dioxide present and the resulting surface pressure.

The researchers then simulated what would have had to happen on the planet to get to where it is today. They eventually matched the numbers almost exactly when they accounted for limited tectonic movement early in Venus' history followed by the stagnant lid model that exists today.

Overall, the team believes the work serves as a proof of concept regarding atmospheres and their ability to provide insights into the past.

"We're still in this paradigm where we use the surfaces of planets to understand their history," Evans said. "We really show for the first time that the atmosphere may actually be the best way to understand some of the very ancient history of planets that is often not preserved on the surface."

NASA's upcoming DAVINCI mission, which will measure gasses in the Venusian atmosphere, may help solidify the study's findings. In the meantime, the researchers plan to delve deep into a key question the paper raises: What happened to plate tectonics on Venus? The theory in the paper suggests that the planet ultimately became too hot and its atmosphere too thick, drying up the necessary ingredients for tectonic movement.

"Venus basically ran out of juice to some extent, and that put the brakes on the process," said Daniel Ibarra, a professor in Brown's Department of Earth, Environmental and Planetary Sciences and co-author on the paper.

The researchers say the details of how this happened may hold important implications for Earth.

"That's going to be the next critical step in understanding Venus, its evolution and ultimately the fate of the Earth," Weller said. "What conditions will force us to move in a Venus-like trajectory, and what conditions could allow the Earth to remain habitable?"

Read more at Science Daily

Alpine rock reveals dynamics of plate movements in Earth's interior

Examining how plates move in Earth's mantle and how mountains form is no easy feat. Certain rocks that have sunk deep into Earth's interior and then returned from there can deliver answers.

Geoscientists analyze rocks in mountain belts to reconstruct how they once moved downwards into the depths and then returned to the surface. This history of burial and exhumation sheds light on the mechanisms of plate tectonics and mountain building.

Certain rocks that sink far down into Earth's interior together with plates are transformed into different types under the enormous pressure that prevails there. During this UHP metamorphism (UHP: ultra-high pressure), silica (SiO2) in the rock, for example, becomes coesite, which is also referred to as the UHP polymorph of SiO2. Although it is chemically still silica, its crystal lattice is more tightly packed and therefore denser. When the plates move upwards again from the depths, the UHP rocks also come to the surface and can be found in certain places in the mountains. Their mineral composition provides information about the pressures to which they were exposed during their vertical journey through Earth's interior. Because lithostatic pressure increases with depth, pressure can serve as a measure of depth: the higher the pressure, the deeper the rocks once lay.
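The pressure-depth correlation mentioned here is the standard lithostatic relation (the density value below is a typical crust-to-mantle average, assumed for illustration):

    P = \rho g h  \quad\Rightarrow\quad  h = P / (\rho g)

For the peak pressure of 4.3 gigapascals reported below and \rho \approx 3300 kg/m^3, this gives h \approx 4.3 \times 10^9 / (3300 \times 9.81) \approx 130 km -- consistent with the roughly 120-kilometer burial depth long assumed for these UHP rocks.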

Until now, research had assumed that UHP rocks were buried at a depth of 120 kilometers. From there, they returned to the surface together with the plates, and in the process, ambient pressure decreased at a stable rate, i.e. statically. However, a new study by Goethe University Frankfurt and the universities of Heidelberg and Rennes (France) calls this assumption of a long, continuous ascent into question. Among those involved in the study on the part of Goethe University Frankfurt were first author Cindy Luisier, who came to the university on a Humboldt Research Fellowship, and Thibault Duretz, head of the Geodynamic Modeling Working Group at the Department of Geosciences.

The research team analyzed whiteschist from the Dora Maira Massif in the Western Alps, Italy. "Whiteschists are rocks that formed as a result of the UHP metamorphism of a hydrothermally altered granite during the formation of the Alps," explains Duretz. "What is special about them is the large amount of coesite. The coesite crystals in the whiteschist are several hundred micrometers in size, which makes them ideal for our experiments." The piece of whiteschist from the Dora Maira Massif contained pink garnets in a silvery-white matrix composed of quartz and other minerals. "The rock has special chemical and thus mineralogical properties," says Duretz. Together with the team, he analyzed it by first cutting a very thin slice, about 50 micrometers thick, and gluing it onto glass. In this way, it was possible to identify the minerals under a microscope. The next step was computer modeling of specific, particularly interesting areas.

These areas were silica particles surrounded by grains of pink garnet, in which two SiO2 polymorphs had formed. One of these was coesite, which had formed under very high pressure (4.3 gigapascals). The other silica polymorph was quartz, which lay like a ring around the coesite. It had formed under much lower pressure (1.1 gigapascals). The whiteschist had evidently first been exposed to very high and then to much lower pressure -- a sharp decompression. The most important discovery was that spoke-shaped cracks radiated from the SiO2 inclusions in all directions: the result of the phase transition from coesite to quartz. This transition involved a large change in volume, which caused extensive geological stresses in the rock and made the garnet surrounding the SiO2 inclusions fracture. "Such radial cracks can only form if the host mineral, the garnet, stays very strong," explains Duretz. "At such temperatures, garnet only stays very strong if the pressure drops very quickly." On a geological timescale, "very quickly" means in thousands to hundreds of thousands of years. In this "short" period, the pressure must have dropped from 4.3 to 1.1 gigapascals. The garnet would otherwise have crept viscously to compensate for the change in volume in the SiO2 inclusions, instead of forming cracks.

Read more at Science Daily

Evolutionary chance made this bat a specialist hunter

Ask a biologist why predators don't exterminate all their prey, and part of the answer will often be that there is an ongoing arms race between predators and prey, with both parties continuously evolving new ways to outwit each other.

The hypothesis is particularly prevalent for bats and their prey: insects. About 50 million years ago, the first bats evolved the ability to echolocate and thus hunt in the dark, and in response, some insects evolved ultrasound-sensitive ears so they could hear and evade the bats.

But if there is an ongoing arms race, bats should have responded in turn, says University of Southern Denmark biologist, associate professor and bat expert Lasse Jakobsen, co-author of a new study published in Current Biology. In the study, he and his colleagues question the evolutionary arms race between bats and insects.

The other authors are Daniel Lewanzik and Holger R. Goerlitz from the Max Planck Institute for Biological Intelligence and John M. Ratcliffe and Erik Etzler from the University of Toronto.

The main argument supporting the arms race hypothesis is that some bats do not call as loudly as others when hunting, and thus cannot be heard as easily by the insects. These are the barbastelles (Barbastella barbastellus), and they are approximately 20 dB quieter than other bats that hunt flying insects, which means that the sound pressure they emit is 10 times lower.
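The 20 dB / factor-of-ten equivalence is just the definition of the decibel scale for sound pressure (standard acoustics, not specific to this study):

    \Delta L = 20 \log_{10}(p_1 / p_2), \quad \text{so } \Delta L = 20\ \mathrm{dB} \iff p_1 / p_2 = 10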

"The barbastelle is traditionally highlighted as the bat that has 'struck back' at the insects," says Lasse Jakobsen.

But something puzzled him and his colleagues: if you look at the barbastelle's close relatives, virtually none of them catch insects in the air. Instead, they eat insects that sit on surfaces such as leaves and branches, and those species are all quieter than the species that hunt flying insects.

In bat research circles, the bats that catch insects in the air are called hawking bats, while the bats that pick insects from a surface, so to speak, are called gleaning bats. The barbastelle is a hawking bat.

"If most of the barbastelle's family are gleaners, then their ancestor was very likely also a gleaner," says Lasse Jakobsen.

It is therefore unlikely that the ancestor of the barbastelle was a loud hawker that evolved into the whispering barbastelle as a response to insect hearing.

"A species does not have free choice when it evolves in a new direction. For example, mammals are constrained by the fact that their ancestor did not have feathers, so their descendants will never evolve a wing with feathers. Instead, they have found another solution for flying: modified skin between the fingers," explains Lasse Jakobsen.

But if the barbastelle didn't evolve its ability to hunt quietly in the air as part of an arms race between insects and bats, where does that ability come from?

"It is not an evolved ability. The barbastelle simply cannot produce louder calls than it does, because as a descendant of a gleaner it is probably morphologically limited. But it has found a niche where it can use its low-amplitude calls. It is an evolutionary coincidence; it sort of fell into this niche, where there was something to eat," says Lasse Jakobsen.

This niche is populated by flying, nocturnal insects that can hear and are thus good at avoiding nocturnal bats. But they cannot hear well enough to register the barbastelle, so they end up as its prey.

The morphological limitation stems from how bats emit their sound. Most bats call out of their mouths, which allows them to emit loud sounds. Many gleaners, on the other hand, emit sound through their noses, which makes their calls 20 dB quieter.

"So, the reason why the barbastelles are so quiet today is not an expression of an arms race between bats and insects, but rather simply an expression of the fact that they are descended from bats that cannot call as loudly as others," says Lasse Jakobsen.

Read more at Science Daily