Apr 26, 2024

Climate change could become the main driver of biodiversity decline by mid-century

Global biodiversity has declined between 2% and 11% during the 20th century due to land-use change alone, according to a large multi-model study published in Science. Projections show climate change could become the main driver of biodiversity decline by the mid-21st century.

The analysis was led by the German Centre for Integrative Biodiversity Research (iDiv) and the Martin Luther University Halle-Wittenberg (MLU) and is the largest modelling study of its kind to date. The researchers compared thirteen models for assessing the impact of land-use change and climate change on four distinct biodiversity metrics, as well as on nine ecosystem services.

GLOBAL BIODIVERSITY MAY HAVE DECLINED BY 2% TO 11% DUE TO LAND-USE CHANGE ALONE

Land-use change is considered the largest driver of biodiversity change, according to the Intergovernmental Platform on Biodiversity and Ecosystem Services (IPBES). However, scientists are divided over how much biodiversity has changed in past decades. To better answer this question, the researchers modelled the impacts of land-use change on biodiversity over the 20th century. They found global biodiversity may have declined by 2% to 11% due to land-use change alone. This range spans four biodiversity metrics calculated by seven different models.

"By including all world regions in our model, we were able to fill many blind spots and address criticism of other approaches working with fragmented and potentially biased data," says first author Prof Henrique Pereira, research group head at iDiv and MLU. "Every approach has its ups and downsides. We believe our modelling approach provides the most comprehensive estimate of biodiversity trends worldwide."

MIXED TRENDS FOR ECOSYSTEM SERVICES

Using another set of five models, the researchers also calculated the simultaneous impact of land-use change on so-called ecosystem services, i.e., the benefits nature provides to humans. In the past century, they found a massive increase in provisioning ecosystem services, like food and timber production. By contrast, regulating ecosystem services, like pollination, nitrogen retention, or carbon sequestration, moderately declined.

CLIMATE AND LAND-USE CHANGE COMBINED MIGHT LEAD TO BIODIVERSITY LOSS IN ALL WORLD REGIONS

The researchers also examined how biodiversity and ecosystem services might evolve in the future. For these projections, they added climate change as a growing driver of biodiversity change to their calculations.

Climate change stands to put additional strain on biodiversity and ecosystem services, according to the findings. While land-use change remains relevant, climate change could become the most important driver of biodiversity loss by mid-century. The researchers assessed three widely used scenarios, ranging from a sustainable-development scenario to a high-emissions scenario. In all scenarios, the combined impacts of land-use change and climate change result in biodiversity loss in all world regions.

While the overall downward trend is consistent, there are considerable variations across world regions, models, and scenarios.

PROJECTIONS ARE NOT PREDICTIONS

"The purpose of long-term scenarios is not to predict what will happen," says co-author Dr Inês Martins from the University of York. "Rather, it is to understand alternatives, and therefore avoid these trajectories, which might be least desirable, and select those that have positive outcomes. Trajectories depend on the policies we choose, and these decisions are made day by day." Martins co-led the model analyses and is an alumna of iDiv and MLU.

The authors also note that even the most sustainable scenario assessed does not deploy all the policies that could be put in place to protect biodiversity in the coming decades. For instance, bioenergy deployment, one key component of the sustainability scenario, can contribute to mitigating climate change, but can simultaneously reduce species habitats. In contrast, measures to increase the effectiveness and coverage of protected areas or large-scale rewilding were not explored in any of the scenarios.

MODELS HELP IDENTIFY EFFECTIVE POLICIES

Assessing the impacts of concrete policies on biodiversity helps identify those policies most effective for safeguarding and promoting biodiversity and ecosystem services, according to the researchers. "There are modelling uncertainties, for sure," Pereira adds. "Still, our findings clearly show that current policies are insufficient to meet international biodiversity goals. We need renewed efforts to make progress against one of the world's largest problems, which is human-caused biodiversity change."

Read more at Science Daily

Voluntary corporate emissions targets not enough to create real climate action

Companies' emissions reduction targets should not be the sole measure of corporate climate ambition, according to a new perspective paper.

Relying on emissions reductions alone can favour more established companies and hinder innovation, say the authors, who suggest updating regulations to improve corporate climate action.

The paper, published today in Science, is by an international team led by Utrecht University, which includes Imperial College London researchers.

Lead author of the study Dr Yann Robiou Du Pont, from the Copernicus Institute of Sustainable Development at Utrecht University, said: "Assessing the climate ambition of companies based only on their emissions reductions may not be meaningful for emerging companies working on green innovation."

Companies can set individual climate goals, typically commitments to reduce greenhouse gas emissions from their activities -- not unlike national governments. To indicate how ambitious these voluntary commitments are, businesses can get them validated as 'Paris-aligned' under the Science Based Targets initiative (SBTi), a collaboration that started in 2015.

This validation means SBTi considers their targets to be aligned to the Paris Agreement, which aims to limit global temperature increase to well below 2°C above preindustrial levels and pursue efforts to limit it to 1.5°C.

The new paper says this approach may inadvertently favour larger existing companies, stifling innovation and skewing the playing field against emerging competitors. This is because Paris-aligned targets for larger, established companies often assume that they can simply keep their current market share of emissions, leaving no capacity for emissions from the activities of emerging companies.

For example, a new solar panel manufacturer that still needs to grow its emissions ten years from now, while it scales up a new, highly efficient method of building those panels, may be squeezed out of the market because, in this model, its operation would mean overshooting the Paris-aligned climate goal.
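
To make that dynamic concrete, here is a stylized back-of-the-envelope sketch (not from the paper; all figures are invented) of how a grandfathered, absolute-emissions target can leave too little headroom for a growing entrant:

```python
# Hypothetical sector that must stay within a Paris-aligned budget in 2035.
sector_budget_2035 = 50.0        # MtCO2 allowed sector-wide (invented figure)
incumbent_share_today = 0.95     # incumbent currently causes 95% of sector emissions
newcomer_needed_2035 = 5.0       # MtCO2 the scaling solar manufacturer would emit

# A "keep your current market share of emissions" target for the incumbent:
incumbent_target = sector_budget_2035 * incumbent_share_today
headroom_for_newcomer = sector_budget_2035 - incumbent_target

print(f"Incumbent's grandfathered target: {incumbent_target:.1f} MtCO2")      # 47.5
print(f"Headroom left for the newcomer:   {headroom_for_newcomer:.1f} MtCO2")  # 2.5
print("Newcomer fits within the budget:", newcomer_needed_2035 <= headroom_for_newcomer)  # False
```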

Dr Robiou Du Pont said: "These voluntary corporate targets may have been useful to achieve some progress on emissions reduction in the largest companies. But our paper shows that this approach is not sufficient to guide the corporate sector and cannot be the sole basis for regulations assessing if businesses are Paris-compliant."

To level the playing field, the authors say corporate climate targets could be based on factors other than reductions in emissions, such as emissions intensity per unit of economic or physical output. These types of targets, however, are harder to align with Paris Agreement goals, as they don't cap absolute emissions.
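
As a minimal sketch of that caveat (with invented numbers), an intensity target lets absolute emissions rise whenever output grows faster than intensity falls:

```python
def absolute_emissions(intensity_tco2_per_unit, output_units):
    # absolute emissions = intensity (tCO2 per unit produced) x output (units produced)
    return intensity_tco2_per_unit * output_units

print(absolute_emissions(2.0, 1_000))  # today: 2,000 tCO2
print(absolute_emissions(1.0, 5_000))  # later: intensity halved, output x5 -> 5,000 tCO2
```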

The study also highlights that adopting a target doesn't necessarily cause a drop in actual emissions, as voluntary targets are just that. The authors point to evidence that corporations are already using these voluntary targets, often of questionable credibility, as justification for watering down or delaying mandatory regulations.

Co-author Professor Joeri Rogelj, from the Centre for Environmental Policy and Director of Research at the Grantham Institute at Imperial College London, said: "Companies setting their own individual targets risk complacency that we can't afford. The window to keep the planet to 1.5°C warming is rapidly closing, and even for keeping warming well below the upper Paris limit of 2°C we need concerted action to reduce greenhouse gas emissions now. Voluntary corporate emissions targets alone are not enough for rapid global decarbonization and certainly not a substitute for regulation."

The authors conclude that governments or intergovernmental organisations need to introduce legal frameworks based on a range of indicators that encourage best practices and innovation, as well as stringent requirements on transparency for any assessments.

The toolkit for building those frameworks exists, argue the authors, including carbon pricing, green subsidies and demand-side measures. Regulators should also consider the usefulness of the products that companies produce in the green transition, not only their emissions. Under a revised framework, the more efficient solar panel manufacturer would not have to constrain production, allowing for needed innovation with spillover effects in the future.

Read more at Science Daily

How do birds flock? Researchers do the math to reveal previously unknown aerodynamic phenomenon

In looking up at the sky during these early weeks of spring, you may very well see a flock of birds moving in unison as they migrate north. But how do these creatures fly in such a coordinated and seemingly effortless fashion?

Part of the answer lies in precise, and previously unknown, aerodynamic interactions, reports a team of mathematicians in a newly published study. The breakthrough broadens our understanding of wildlife, including fish that move in schools, and could have applications in transportation and energy.

"This area of research is important since animals are known to take advantage of the flows, such as of air or water, left by other members of a group to save on the energy needed to move or to reduce drag or resistance," explains Leif Ristroph, an associate professor at New York University's Courant Institute of Mathematical Sciences and the senior author of the paper, which appears in the journal Nature Communications. "Our work may also have applications in transportation -- like efficient propulsion through air or water -- and energy, such as more effectively harvesting power from wind, water currents, or waves."

The team's results show that the impact of aerodynamics depends on the size of the flying group -- benefiting small groups and disrupting large ones.

"The aerodynamic interactions in small bird flocks help each member to hold a certain special position relative to their leading neighbor, but larger groups are disrupted by an effect that dislodges members from these positions and may cause collisions," notes Sophie Ramananarivo, an assistant professor at École Polytechnique Paris and one of the paper's authors.

Previously, Ristroph and his colleagues uncovered how birds move in groups -- but these findings were drawn from experiments mimicking the interactions of two birds. The new Nature Communications research expanded the inquiry to account for many flyers.

To replicate the columnar formations of birds, in which they line up one directly behind the other, the researchers created mechanized flappers that act like birds' wings. The wings were 3D-printed from plastic and driven by motors to flap in water, which replicated how air flows around bird wings during flight. This "mock flock" propelled itself through the water and could freely arrange itself within a line or queue, as seen in a video of the experiment.

The flows affected group organization in different ways -- depending on the size of the group.

For small groups of up to about four flyers, the researchers discovered an effect by which each member gets help from the aerodynamic interactions in holding its position relative to its neighbors.

"If a flyer is displaced from its position, the vortices or swirls of flow left by the leading neighbor help to push the follower back into place and hold it there," explains Ristroph, director of NYU's Applied Mathematics Laboratory, where the experiments were conducted. "This means the flyers can assemble into an orderly queue of regular spacing automatically and with no extra effort, since the physics does all the work.

"For larger groups, however, these flow interactions cause later members to be jostled around and thrown out of position, typically causing a breakdown of the flock due to collisions among members. This means that the very long groups seen in some types of birds are not at all easy to form, and the later members likely have to constantly work to hold their positions and avoid crashing into their neighbors."

The authors then deployed mathematical modeling to better understand the underlying forces driving the experimental results.

Here, they concluded that flow-mediated interactions between neighbors are, in effect, spring-like forces that hold each member in place -- just as if the cars of a train were connected by springs.

However, these "springs" act in only one direction -- a lead bird can exert force on its follower, but not vice versa -- and this non-reciprocal interaction means that later members tend to resonate or oscillate wildly.

"The oscillations look like waves that jiggle the members forwards and backwards and which travel down the group and increase in intensity, causing later members to crash together," explains Joel Newbolt, who was an NYU graduate student in physics at the time of research.

The team named these new types of waves "flonons," after the similar concept of phonons: vibrational waves in systems of masses linked by springs, which are used to model the motions of atoms or molecules in crystals or other materials.
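
A minimal numerical sketch (not the authors' model; all parameters are invented) shows how this one-way, spring-like coupling lets a small wobble of the leader grow as it travels down the queue:

```python
import numpy as np

n, k, c = 8, 4.0, 1.0            # queue length, "spring" stiffness, drag (unit mass)
spacing, dt, steps = 1.0, 0.002, 60000
omega = np.sqrt(k)               # drive the leader near the followers' natural frequency

x = -spacing * np.arange(n, dtype=float)   # start in a perfectly spaced queue
v = np.zeros(n)
amp = np.zeros(n)

for step in range(steps):
    t = step * dt
    x[0] = 0.05 * np.sin(omega * t)            # leader wobbles slightly
    v[0] = 0.05 * omega * np.cos(omega * t)
    gap = x[:-1] - x[1:]                        # each follower sees only its leading neighbor
    a = k * (gap - spacing) - c * v[1:]         # one-way restoring force plus drag
    v[1:] += a * dt                             # semi-implicit Euler step
    x[1:] += v[1:] * dt
    if step > steps // 2:                       # record late-time displacement amplitudes
        amp = np.maximum(amp, np.abs(x + spacing * np.arange(n)))

# Amplitudes roughly double from one member to the next; toward the back of the
# queue they exceed the preferred spacing, i.e. members would collide.
print(np.round(amp, 3))
```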

"Our findings therefore raise some interesting connections to material physics in which birds in an orderly flock are analogous to atoms in a regular crystal," Newbolt adds.

Read more at Science Daily

Why can't robots outrun animals?

Robotics engineers have worked for decades and invested many millions of research dollars in attempts to create a robot that can walk or run as well as an animal. And yet, it remains the case that many animals are capable of feats that would be impossible for robots that exist today.

"A wildebeest can migrate for thousands of kilometres over rough terrain, a mountain goat can climb up a literal cliff, finding footholds that don't even seem to be there, and cockroaches can lose a leg and not slow down," says Dr. Max Donelan, Professor in Simon Fraser University's Department of Biomedical Physiology and Kinesiology. "We have no robots capable of anything like this endurance, agility and robustness."

To understand why, and quantify how, robots lag behind animals, an interdisciplinary team of scientists and engineers from leading research universities completed a detailed study of various aspects of running robots, comparing them with their equivalents in animals, for a paper published in Science Robotics. The paper finds that, by the metrics engineers use, biological components performed surprisingly poorly compared to fabricated parts. Where animals excel, though, is in their integration and control of those components.

Alongside Donelan, the team comprised Drs. Sam Burden, Associate Professor in the Department of Electrical & Computer Engineering at the University of Washington; Tom Libby, Senior Research Engineer, SRI International; Kaushik Jayaram, Assistant Professor in the Paul M Rady Department of Mechanical Engineering at the University of Colorado Boulder; and Simon Sponberg, Dunn Family Associate Professor of Physics and Biological Sciences at the Georgia Institute of Technology.

The researchers each studied one of five different "subsystems" that combine to create a running robot -- Power, Frame, Actuation, Sensing, and Control -- and compared them with their biological equivalents. Previously, it was commonly accepted that animals' outperformance of robots must be due to the superiority of biological components.

"The way things turned out is that, with only minor exceptions, the engineering subsystems outperform the biological equivalents -- and sometimes radically outperformed them," says Libby. "But also what's very, very clear is that, if you compare animals to robots at the whole system level, in terms of movement, animals are amazing. And robots have yet to catch up."

More optimistically for the field of robotics, the researchers noted that, if you compare the relatively short time that robotics has had to develop its technology with the countless generations of animals that have evolved over many millions of years, the progress has actually been remarkably quick.

"It will move faster, because evolution is undirected," says Burden. "Whereas we can very much correct how we design robots and learn something in one robot and download it into every other robot, biology doesn't have that option. So there are ways that we can move much more quickly when we engineer robots than we can through evolution -- but evolution has a massive head start."

More than simply an engineering challenge, effective running robots offer countless potential uses. Whether solving 'last mile' delivery challenges in a world designed for humans that is often difficult for wheeled robots to navigate, carrying out searches in dangerous environments, or handling hazardous materials, there are many potential applications for the technology.

Read more at Science Daily

Apr 25, 2024

Eruption of mega-magnetic star lights up nearby galaxy

While ESA's satellite INTEGRAL was observing the sky, it spotted a burst of gamma-rays -- high-energy photons -- coming from the nearby galaxy M82. Only a few hours later, ESA's XMM-Newton X-ray space telescope searched for an afterglow from the explosion but found none. An international team, including researchers from the University of Geneva (UNIGE), realised that the burst must have been an extra-galactic flare from a magnetar, a young neutron star with an exceptionally strong magnetic field. The discovery is published in the journal Nature.

On 15 November 2023, ESA's satellite INTEGRAL spotted a sudden explosion from a rare object. For only a tenth of a second, a short burst of energetic gamma-rays appeared in the sky. "The satellite data were received in the INTEGRAL Science Data Centre (ISDC), based on the Ecogia site of the UNIGE Astronomy Department, from where a gamma-ray burst alert was sent out to astronomers worldwide, only 13 seconds after its detection," explains Carlo Ferrigno, senior research associate in the Astronomy Department at UNIGE Faculty of Science, PI of the ISDC and co-author of the publication.

The IBAS (Integral Burst Alert System) software gave an automatic localisation coinciding with the galaxy M82, 12 million light-years away. This alert system was developed and is operated by scientists and engineers from the UNIGE in collaboration with international colleagues.

A curious signal from a nearby galaxy?

"We immediately realised that this was a special alert. Gamma-ray bursts come from far-away and anywhere in the sky, but this burst came from a bright nearby galaxy," explains Sandro Mereghetti of the National Institute for Astrophysics (INAF-IASF) in Milan, Italy, lead author of the publication and contributor of IBAS. The team immediately requested ESA's XMM-Newton space telescope to perform a follow-up observation of the burst's location as soon as possible. If this had been a short gamma-ray burst, caused by two colliding neutron stars, the collision would have created gravitational waves and have an afterglow in X-rays and visible light.

However, XMM-Newton's observations showed only the hot gas and stars in the galaxy. Using ground-based optical telescopes, including the Italian Telescopio Nazionale Galileo and the French Observatoire de Haute-Provence, the team also looked for a signal in visible light, starting only a few hours after the explosion, but again found nothing. With no signal in X-rays or visible light, and no gravitational waves measured by detectors on Earth (LIGO/VIRGO/KAGRA), the most likely explanation is that the signal came from a magnetar.

Magnetars: mega-magnetic stars, recently dead

"When stars more massive than eight times the Sun die, they explode in a supernova that leaves a black hole or neutron star behind. Neutron stars are very compact stellar remnants with more than the mass of the Sun packed into a sphere with the size of the Canton of Geneva. They rotate quickly and have strong magnetic fields." explains Volodymyr Savchenko, senior research associate in the Astronomy Department at UNIGE Faculty of Science, and co-author of the publication. Some young neutron stars have extra strong magnetic fields, more than 10,000 times that of typical neutron stars. These are called magnetars. They emit energy away in flares, and occasionally these flares are gigantic.

However, in the past 50 years of gamma-ray observations, only three giant flares have been identified as coming from magnetars in our galaxy. These outbursts are very strong: one detected in December 2004 came from 30,000 light-years away, yet was still powerful enough to affect the upper layers of Earth's atmosphere, as solar flares, which originate much closer to us, do.

The flare detected by INTEGRAL is the first firm confirmation of a magnetar flare outside of the Milky Way. M82 is a bright galaxy where star formation takes place. In these regions, massive stars are born, live short turbulent lives and leave behind a neutron star. "The discovery of a magnetar in this region confirms that magnetars are likely young neutron stars," adds Volodymyr Savchenko. The search for more magnetars will continue in other extra-galactic star-forming regions, to understand these extraordinary astronomical objects. If astronomers can find many more, they can start to understand how often these flares happen and how neutron stars lose energy in the process.

INTEGRAL, a key instrument in a race against time

Outbursts of such short duration can only be captured serendipitously when an observatory is already pointing in the right direction. This is what makes INTEGRAL, with its large field of view (more than 3000 times greater than the sky area covered by the Moon), so important for these detections.

Read more at Science Daily

How light can vaporize water without the need for heat

It's the most fundamental of processes -- the evaporation of water from the surfaces of oceans and lakes, the burning off of fog in the morning sun, and the drying of briny ponds that leaves solid salt behind. Evaporation is all around us, and humans have been observing it and making use of it for as long as we have existed.

And yet, it turns out, we've been missing a major part of the picture all along.

In a series of painstakingly precise experiments, a team of researchers at MIT has demonstrated that heat isn't alone in causing water to evaporate. Light, striking the water's surface where air and water meet, can break water molecules away and float them into the air, causing evaporation in the absence of any source of heat.

The astonishing new discovery could have a wide range of significant implications. It could help explain mysterious measurements over the years of how sunlight affects clouds, and therefore affect calculations of the effects of climate change on cloud cover and precipitation. It could also lead to new ways of designing industrial processes such as solar-powered desalination or drying of materials.

The findings, and the many different lines of evidence that demonstrate the reality of the phenomenon and the details of how it works, are described in the journal PNAS, in a paper by Carl Richard Soderberg Professor of Power Engineering Gang Chen, postdocs Guangxin Lv and Yaodong Tu, and graduate student James Zhang.

The authors say their study suggests that the effect should happen widely in nature -- everywhere from clouds to fogs to the surfaces of oceans, soils, and plants -- and that it could also lead to new practical applications, including in energy and clean water production. "I think this has a lot of applications," Chen says. "We're exploring all these different directions. And of course, it also affects the basic science, like the effects of clouds on climate, because clouds are the most uncertain aspect of climate models."

A newfound phenomenon

The new work builds on research reported last year, which described this new "photomolecular effect" but only under very specialized conditions: on the surface of specially prepared hydrogels soaked with water. In the new study, the researchers demonstrate that the hydrogel is not necessary for the process; it occurs at any water surface exposed to light, whether it's a flat surface like a body of water or a curved surface like a droplet of cloud vapor.

Because the effect was so unexpected, the team worked to prove its existence with as many different lines of evidence as possible. In this study, they report 14 different kinds of tests and measurements they carried out to establish that water was indeed evaporating -- that is, molecules of water were being knocked loose from the water's surface and wafted into the air -- due to the light alone, not by heat, which was long assumed to be the only mechanism involved.

One key indicator, which showed up consistently in four different kinds of experiments under different conditions, was that as the water began to evaporate from a test container under visible light, the air temperature measured above the water's surface cooled down and then leveled off, showing that thermal energy was not the driving force behind the effect.

Other key indicators that showed up included the way the evaporation effect varied depending on the angle of the light, the exact color of the light, and its polarization. None of these variations should occur, because at these wavelengths water hardly absorbs light at all -- and yet the researchers observed them.

The effect is strongest when light hits the water surface at an angle of 45 degrees. It is also strongest with a certain type of polarization, called transverse magnetic polarization. And it peaks in green light -- which, oddly, is the color for which water is most transparent and thus interacts the least.

Chen and his co-researchers have proposed a physical mechanism that can explain the angle and polarization dependence of the effect, showing that the photons of light can impart a net force on water molecules at the water surface that is sufficient to knock them loose from the body of water. But they cannot yet account for the color dependence, which they say will require further study.

They have named this the photomolecular effect, by analogy with the photoelectric effect that was discovered by Heinrich Hertz in 1887 and finally explained by Albert Einstein in 1905. That effect was one of the first demonstrations that light also has particle characteristics, which had major implications in physics and led to a wide variety of applications, including LEDs. Just as the photoelectric effect liberates electrons from atoms in a material in response to being hit by a photon of light, the photomolecular effect shows that photons can liberate entire molecules from a liquid surface, the researchers say.

"The finding of evaporation caused by light instead of heat provides new disruptive knowledge of light-water interaction," says Xiulin Ruan, professor of mechanical engineering at Purdue University, who was not involved in the study. "It could help us gain new understanding of how sunlight interacts with cloud, fog, oceans, and other natural water bodies to affect weather and climate. It has significant potential practical applications such as high-performance water desalination driven by solar energy. This research is among the rare group of truly revolutionary discoveries which are not widely accepted by the community right away but take time, sometimes a long time, to be confirmed."

Solving a cloud conundrum

The finding may solve an 80-year-old mystery in climate science. Measurements of how clouds absorb sunlight have often shown that they are absorbing more sunlight than conventional physics dictates possible. The additional evaporation caused by this effect could account for the longstanding discrepancy, which has been a subject of dispute since such measurements are difficult to make.

"Those experiments are based on satellite data and flight data," Chen explains. "They fly an airplane on top of and below the clouds, and there are also data based on the ocean temperature and radiation balance. And they all conclude that there is more absorption by clouds than theory could calculate. However, due to the complexity of clouds and the difficulties of making such measurements, researchers have been debating whether such discrepancies are real or not. And what we discovered suggests that hey, there's another mechanism for cloud absorption, which was not accounted for, and this mechanism might explain the discrepancies."

Chen says he recently spoke about the phenomenon at an American Physical Society conference, and one physicist there who studies clouds and climate said they had never thought about this possibility, which could affect calculations of the complex effects of clouds on climate. The team conducted experiments using LEDs shining on an artificial cloud chamber, and they observed heating of the fog, which was not supposed to happen since water does not absorb in the visible spectrum. "Such heating can be explained based on the photomolecular effect more easily," he says.

Lv says that of the many lines of evidence, "the flat region in the air-side temperature distribution above hot water will be the easiest for people to reproduce." That temperature profile "is a signature" that demonstrates the effect clearly, he says.

Zhang adds: "It is quite hard to explain how this kind of flat temperature profile comes about without invoking some other mechanism" beyond the accepted theories of thermal evaporation. "It ties together what a whole lot of people are reporting in their solar desalination devices," which again show evaporation rates that cannot be explained by the thermal input.

The effect can be substantial. Under the optimum conditions of color, angle, and polarization, Lv says, "the evaporation rate is four times the thermal limit."

Already, since publication of the first paper, the team has been approached by companies that hope to harness the effect, Chen says, including for evaporating syrup and drying paper in a paper mill. The likeliest first applications will come in the areas of solar desalination systems or other industrial drying processes, he says. "Drying consumes 20 percent of all industrial energy usage," he points out.

Read more at Science Daily

Bioluminescence first evolved in animals at least 540 million years ago

Bioluminescence first evolved in animals at least 540 million years ago in a group of marine invertebrates called octocorals, according to the results of a new study from scientists with the Smithsonian's National Museum of Natural History.

The results, published today, April 23, in the Proceedings of the Royal Society B, push back the previous record for the luminous trait's oldest dated emergence in animals by nearly 300 million years, and could one day help scientists decode why the ability to produce light evolved in the first place.

Bioluminescence -- the ability of living things to produce light via chemical reactions -- has independently evolved at least 94 times in nature and is involved in a huge range of behaviors including camouflage, courtship, communication and hunting. Until now, the earliest dated origin of bioluminescence in animals was thought to be around 267 million years ago in small marine crustaceans called ostracods.

But for a trait that is literally illuminating, bioluminescence's origins have remained shadowy.

"Nobody quite knows why it first evolved in animals," said Andrea Quattrini, the museum's curator of corals and senior author on the study.

But to eventually tackle the larger question of why bioluminescence evolved, Quattrini and lead author Danielle DeLeo, a museum research associate and former postdoctoral fellow, first needed to know when the ability appeared in animals.

In search of the trait's earliest origins, the researchers decided to peer back into the evolutionary history of the octocorals, an evolutionarily ancient and frequently bioluminescent group of animals that includes soft corals, sea fans and sea pens. Like hard corals, octocorals are tiny colonial polyps that secrete a framework that becomes their refuge, but unlike their stony relatives, that structure is usually soft. Octocorals that glow typically only do so when bumped or otherwise disturbed, leaving the precise function of their ability to produce light a bit mysterious.

"We wanted to figure out the timing of the origin of bioluminescence, and octocorals are one of the oldest groups of animals on the planet known to bioluminesce," DeLeo said. "So, the question was when did they develop this ability?"

Not coincidentally, Quattrini and Catherine McFadden with Harvey Mudd College had completed an extremely detailed, well-supported evolutionary tree of the octocorals in 2022. Quattrini and her collaborators created this map of evolutionary relationships, or phylogeny, using genetic data from 185 species of octocorals.

With this evolutionary tree grounded in genetic evidence, DeLeo and Quattrini then situated two octocoral fossils of known ages within the tree according to their physical features. The scientists used the fossils' ages and their respective positions in the octocoral evolutionary tree to figure out roughly when octocoral lineages split apart to become two or more branches. Next, the team mapped out the branches of the phylogeny that featured living bioluminescent species.

With the evolutionary tree dated and the branches that contained luminous species labeled, the team then used a series of statistical techniques to perform an analysis called ancestral state reconstruction.

"If we know these species of octocorals living today are bioluminescent, we can use statistics to infer whether their ancestors were highly probable to be bioluminescent or not," Quattrini said. "The more living species with the shared trait, the higher the probability that as you move back in time that those ancestors likely had that trait as well."

The researchers used numerous different statistical methods for their ancestral state reconstruction, but all arrived at the same result: Some 540 million years ago, the common ancestor of all octocorals was very likely bioluminescent. That is 273 million years earlier than the glowing ostracod crustaceans that previously held the title of earliest evolution of bioluminescence in animals.

DeLeo and Quattrini said that the octocorals' thousands of living representatives and relatively high incidence of bioluminescence suggest the trait has played a role in the group's evolutionary success. While this raises the further question of what exactly octocorals use bioluminescence for, the researchers said the fact that it has been retained for so long highlights how important this form of communication has become for their fitness and survival.

Now that the researchers know the common ancestor of all octocorals likely already had the ability to produce its own light, they are interested in a more thorough accounting of which of the group's more than 3,000 living species can still light up and which have lost the trait. This could help zero in on a set of ecological circumstances that correlate with the ability to bioluminesce and potentially illuminate its function.

To this end, DeLeo said she and some of her co-authors are working on creating a genetic test to determine if an octocoral species has functional copies of the genes underlying luciferase, an enzyme involved in bioluminescence. For species of unknown luminosity, such a test would enable researchers to get an answer one way or the other more rapidly and more easily.

Aside from shedding light on the origins of bioluminescence, this study also offers evolutionary context and insight that can inform monitoring and management of these corals today. Octocorals are threatened by climate change and resource-extraction activities, particularly fishing, oil and gas extraction and spills, and more recently by marine mineral mining.

This research supports the museum's Ocean Science Center, which aims to advance and share knowledge of the ocean with the world. DeLeo and Quattrini said there is still much more to learn before scientists can understand why the ability to produce light first evolved, and though their results place its origins deep in evolutionary time, the possibility remains that future studies will discover that bioluminescence is even more ancient.

Read more at Science Daily

Holographic displays offer a glimpse into an immersive future

Setting the stage for a new era of immersive displays, researchers are one step closer to mixing the real and virtual worlds in an ordinary pair of eyeglasses using high-definition 3D holographic images, according to a study led by Princeton University researchers.

Holographic images have real depth because they are three dimensional, whereas monitors merely simulate depth on a 2D screen. Because we see in three dimensions, holographic images could be integrated seamlessly into our normal view of the everyday world.

The result is a virtual and augmented reality display that has the potential to be truly immersive, the kind where you can move your head normally and never lose the holographic images from view. "To get a similar experience using a monitor, you would need to sit right in front of a cinema screen," said Felix Heide, assistant professor of computer science and senior author on a paper published April 22 in Nature Communications.

And you wouldn't need to wear a screen in front of your eyes to get this immersive experience. Optical elements required to create these images are tiny and could potentially fit on a regular pair of glasses. Virtual reality displays that use a monitor, as current displays do, require a full headset. And they tend to be bulky because they need to accommodate a screen and the hardware necessary to operate it.

"Holography could make virtual and augmented reality displays easily usable, wearable and ultrathin," said Heide. They could transform how we interact with our environments, everything from getting directions while driving, to monitoring a patient during surgery, to accessing plumbing instructions while doing a home repair.

One of the most important challenges is quality. Holographic images are created by a small chip-like device called a spatial light modulator. Until now, these modulators could only create images that are either small and clear or large and fuzzy. This tradeoff between image size and clarity results in a narrow field of view, too narrow to give the user an immersive experience. "If you look towards the corners of the display, the whole image may disappear," said Nathan Matsuda, research scientist at Meta and co-author on the paper.

Heide, Matsuda and Ethan Tseng, doctoral student in computer science, have created a device to improve image quality and potentially solve this problem. Along with their collaborators, they built a second optical element to work in tandem with the spatial light modulator. Their device filters the light from the spatial light modulator to expand the field of view while preserving the stability and fidelity of the image. It creates a larger image with only a minimal drop in quality.

Image quality has been a core challenge preventing the practical applications of holographic displays, said Matsuda. "The research brings us one step closer to resolving this challenge," he said.

The new optical element is like a very small custom-built piece of frosted glass, said Heide. The pattern etched into the frosted glass is the key. Designed using AI and optical techniques, the etched surface scatters light created by the spatial light modulator in a very precise way, pushing some elements of an image into frequency bands that are not easily perceived by the human eye. This improves the quality of the holographic image and expands the field of view.

Read more at Science Daily

Apr 24, 2024

Researchers find oldest undisputed evidence of Earth's magnetic field

A new study, led by the University of Oxford and MIT, has recovered a 3.7-billion-year-old record of Earth's magnetic field, and found that it appears remarkably similar to the field surrounding Earth today. The findings have been published today in the Journal of Geophysical Research.

Without its magnetic field, life on Earth would not be possible, since it shields us from harmful cosmic radiation and charged particles emitted by the Sun (the 'solar wind'). But up to now, there has been no reliable date for when the modern magnetic field was first established.

In the new study, the researchers examined an ancient sequence of iron-containing rocks from Isua, Greenland. Iron particles effectively act as tiny magnets that can record both magnetic field strength and direction when the process of crystallization locks them in place. The researchers found that rocks dating from 3.7 billion years ago captured a magnetic field strength of at least 15 microtesla, comparable to the modern magnetic field (30 microtesla).

These results provide the oldest estimate of the strength of Earth's magnetic field derived from whole rock samples, which give a more accurate and reliable assessment than the individual crystals used in previous studies.

Lead researcher Professor Claire Nichols (Department of Earth Sciences, University of Oxford) said: 'Extracting reliable records from rocks this old is extremely challenging, and it was really exciting to see primary magnetic signals begin to emerge when we analysed these samples in the lab. This is a really important step forward as we try and determine the role of the ancient magnetic field when life on Earth was first emerging.'

Whilst the magnetic field strength appears to have remained relatively constant, the solar wind is known to have been significantly stronger in the past. This suggests that the protection of Earth's surface from the solar wind has increased over time, which may have allowed life to move onto the continents and leave the protection of the oceans.

Earth's magnetic field is generated by mixing of the molten iron in the fluid outer core, driven by buoyancy forces as the inner core solidifies, which create a dynamo. During Earth's early formation, the solid inner core had not yet formed, leaving open questions about how the early magnetic field was sustained. These new results suggest the mechanism driving Earth's early dynamo was similarly efficient to the solidification process that generates Earth's magnetic field today.

Understanding how Earth's magnetic field strength has varied over time is also key for determining when Earth's inner, solid core began to form. This will help us to understand how rapidly heat is escaping from Earth's deep interior, which is key for understanding processes such as plate tectonics.

A significant challenge in reconstructing Earth's magnetic field so far back in time is that any event which heats the rock can alter preserved signals. Rocks in the Earth's crust often have long and complex geological histories which erase previous magnetic field information. However, the Isua Supracrustal Belt has a unique geology, sitting on top of thick continental crust which protects it from extensive tectonic activity and deformation. This allowed the researchers to build a clear body of evidence supporting the existence of the magnetic field 3.7 billion years ago.

The results may also provide new insights into the role of our magnetic field in shaping the development of Earth's atmosphere as we know it, particularly regarding atmospheric escape of gases. A currently unexplained phenomenon is the loss of the unreactive gas xenon from our atmosphere more than 2.5 billion years ago. Xenon is relatively heavy and therefore unlikely to have simply drifted out of our atmosphere. Recently, scientists have begun to investigate the possibility that charged xenon particles were removed from the atmosphere by the magnetic field.

Read more at Science Daily

Asian monsoon lofts ozone-depleting substances to stratosphere

Powerful monsoon winds, strengthened by a warming climate, are lofting unexpectedly large quantities of ozone-depleting substances high into the atmosphere over East Asia, new research shows.

The study, led by the U.S. National Science Foundation National Center for Atmospheric Research (NSF NCAR) and NASA, found that the East Asian Monsoon delivers more than twice the concentration of very short-lived ozone-depleting substances into the upper troposphere and lower stratosphere than previously reported.

The research team drew on airborne observations taken during a major 2022 Asian field campaign: the Asian Summer Monsoon Chemistry and Climate Impact Project (ACCLIP). The findings raise questions about the pace of the recovery of the ozone layer, which shields Earth from the Sun's harmful ultraviolet radiation.

"It was a real surprise to fly through a plume with all those very short-lived ozone-depleting substances," said NSF NCAR scientist Laura Pan, the lead author of the study. "These chemicals may have a significant impact on what will happen with the ozone layer, and it's critical to quantify them."

The study was published in the Proceedings of the National Academy of Sciences. It was funded by NSF, NASA, and NOAA, and co-authored by a large team of international scientists.

The role of monsoons

For thousands of years, people have viewed the Asian summer monsoon as important because of its impacts on local communities. Recently, however, scientists analyzing satellite observations have begun discovering that monsoon storms and winds play an additional role: carrying pollutants high in the atmosphere, where they can influence the world's climate system.

ACCLIP investigated the chemical content of air that was borne by the two primary monsoons in the region -- the South and the East Asian Monsoon -- from Earth's surface to as high up as the stratosphere. Once at that altitude, the chemicals can have far-reaching climate impacts because air in the stratosphere spreads out globally and remains for months to years, unlike the lower atmosphere where air masses turn over weekly.

The ACCLIP observations revealed that the East Asian Monsoon delivered higher levels of pollutants to the upper atmosphere than the South Asian Monsoon during 2022. The scientists measured carbon monoxide levels of up to 320 parts per billion -- a remarkably high level to be found at an altitude of 15 kilometers (about 9 miles). Carbon monoxide is often a sign of industrial pollution, and the measurements indicated that the East Asian Monsoon was closely aligned with emissions of pollutants at the surface.

Pan, Elliot Atlas of the University of Miami, and their co-authors looked into a class of chemicals known as very short-lived organic chlorine compounds, which can destroy ozone but persist only for a relatively short time in the atmosphere (months to years). In contrast, ozone-depleting chlorofluorocarbons (CFCs) remain in the atmosphere for decades to centuries or more and are therefore viewed as a far more significant threat to the ozone layer.

For that reason, the landmark 1987 Montreal Protocol on Substances that Deplete the Ozone Layer focused on phasing out CFCs and other long-lived substances. The international treaty and subsequent revisions have enabled stratospheric ozone to begin recovering. A 2022 United Nations assessment concluded that the ozone layer, including an ozone hole over the Antarctic, will be largely restored over the next several decades.

The Montreal Protocol, however, did not limit the continued manufacture and use of very short-lived ozone-depleting substances. Emissions of these chemicals have soared in South and East Asia, including highly industrialized regions of East China.

In an unfortunate coincidence, those regions lie directly under the East Asian Monsoon, which, of the world's eight regional monsoons, is the one that is predicted to strengthen the most with global warming.

The combination of the monsoon's powerful updrafts occurring in the same region as the increasing emissions of short-lived chlorine compounds has resulted in the unexpectedly high quantity of the chemicals being swept into the stratosphere.

The analysis of the aircraft measurements by Pan and her co-authors revealed high levels of five short-lived chlorine compounds: dichloromethane (CH2Cl2), chloroform (CHCl3), 1,2-dichloroethane (C2H4Cl2), tetrachloroethene (C2Cl4), and 1,2-dichloropropane (C3H6Cl2).

Pan said more research is needed to analyze the potential implications for ozone recovery. The paper also notes that scientists will need to incorporate the new findings into climate models, as stratospheric ozone has complex effects on Earth's temperature.

Read more at Science Daily

This salt battery harvests osmotic energy where the river meets the sea

Estuaries -- where freshwater rivers meet the salty sea -- are great locations for birdwatching and kayaking. In these areas, waters containing different salt concentrations mix and may be sources of sustainable, "blue" osmotic energy. Researchers in ACS Energy Letters report creating a semipermeable membrane that harvests osmotic energy from salt gradients and converts it to electricity. The new design had an output power density more than two times higher than commercial membranes in lab demonstrations.

Osmotic energy can be generated anywhere salt gradients are found, but the available technologies to capture this renewable energy have room for improvement. One method uses an array of reverse electrodialysis (RED) membranes that act as a sort of "salt battery," generating electricity from pressure differences caused by the salt gradient. To even out that gradient, positively charged ions from seawater, such as sodium, flow through the system to the freshwater, increasing the pressure on the membrane. To further increase its harvesting power, the membrane also needs to keep a low internal electrical resistance by allowing electrons to easily flow in the opposite direction of the ions. Previous research suggests that improving both the flow of ions across the RED membrane and the efficiency of electron transport would likely increase the amount of electricity captured from osmotic energy. So, Dongdong Ye, Xingzhen Qin and colleagues designed a semipermeable membrane from environmentally friendly materials that would theoretically minimize internal resistance and maximize output power.
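
In lumped-circuit terms (a simplified sketch, not taken from the paper; all numbers are hypothetical), the stack behaves like a battery whose electromotive force is set by the salt gradient and whose internal resistance is set by the membrane, so peak output power scales as E^2 / (4R): halving the internal resistance roughly doubles the power that can be harvested.

```python
def max_power_density(emf_volts, internal_resistance_ohm, area_m2):
    """Peak power per membrane area for an ideal source with internal resistance (P = E^2 / 4R)."""
    return emf_volts**2 / (4 * internal_resistance_ohm) / area_m2

area = 1e-4                       # 1 cm^2 of membrane, for illustration
for r_internal in (20.0, 10.0):   # ohms; a lower-resistance membrane yields more power
    print(f"R = {r_internal:>4} ohm -> {max_power_density(0.1, r_internal, area):.2f} W/m^2")
```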

The researchers' RED membrane prototype contained separate (i.e., decoupled) channels for ion transport and electron transport. They created this by sandwiching a negatively charged cellulose hydrogel (for ion transport) between layers of an organic, electrically conductive polymer called polyaniline (for electron transport). Initial tests confirmed their theory that decoupled transport channels resulted in higher ion conductivity and lower resistivity compared to homogenous membranes made from the same materials. In a water tank that simulated an estuary environment, their prototype achieved an output power density 2.34 times higher than a commercial RED membrane and maintained performance during 16 days of non-stop operation, demonstrating its long-term, stable performance underwater. In a final test, the team created a salt battery array from 20 of their RED membranes and generated enough electricity to individually power a calculator, LED light and stopwatch.

Read more at Science Daily

Researchers create artificial cells that act like living cells

In a new study published in Nature Chemistry, UNC-Chapel Hill researcher Ronit Freeman and her colleagues describe the steps they took to manipulate DNA and proteins -- essential building blocks of life -- to create cells that look and act like cells from the body. This accomplishment, a first in the field, has implications for efforts in regenerative medicine, drug delivery systems, and diagnostic tools.

"With this discovery, we can think of engineering fabrics or tissues that can be sensitive to changes in their environment and behave in dynamic ways," says Freeman, whose lab is in the Applied Physical Sciences Department of the UNC College of Arts and Sciences.

Cells and tissues are made of proteins that come together to perform tasks and make structures. Proteins are essential for forming the framework of a cell, called the cytoskeleton. Without it, cells wouldn't be able to function. The cytoskeleton allows cells to be flexible, both in shape and in response to their environment.

Without using natural proteins, the Freeman Lab built cells with functional cytoskeletons that can change shape and react to their surroundings. To do this, they used a new programmable peptide-DNA technology that directs peptides, the building blocks of proteins, and repurposed genetic material to work together to form a cytoskeleton.

"DNA does not normally appear in a cytoskeleton," Freeman says. "We reprogrammed sequences of DNA so that it acts as an architectural material, binding the peptides together. Once this programmed material was placed in a droplet of water, the structures took shape."

The ability to program DNA in this way means scientists can create cells to serve specific functions and even fine-tune a cell's response to external stressors. While living cells are more complex than the synthetic ones created by the Freeman Lab, they are also more unpredictable and more susceptible to hostile environments, like severe temperatures.

"The synthetic cells were stable even at 122 degrees Fahrenheit, opening up the possibility of manufacturing cells with extraordinary capabilities in environments normally unsuitable to human life," Freeman says.

Instead of creating materials that are made to last, Freeman says their materials are made to task -- perform a specific function and then modify themselves to serve a new function. Their application can be customized by adding different peptide or DNA designs to program cells in materials like fabrics or tissues. These new materials can integrate with other synthetic cell technologies, all with potential applications that could revolutionize fields like biotechnology and medicine.

Read more at Science Daily

Apr 23, 2024

To find life in the universe, look to deadly Venus

Despite surface temperatures hot enough to melt lead, lava-spewing volcanoes, and puffy clouds of sulfuric acid, uninhabitable Venus offers vital lessons about the potential for life on other planets, a new paper argues.

"We often assume that Earth is the model of habitability, but if you consider this planet in isolation, we don't know where the boundaries and limitations are," said UC Riverside astrophysicist and paper first author Stephen Kane. "Venus gives us that."

Published today in the journal Nature Astronomy, the paper compiles much of the known information about Earth and Venus. It also describes Venus as an anchor point from which scientists can better understand the conditions that preclude life on planets around other stars.

Though Venus also features a pressure cooker-like atmosphere that would instantly flatten a human, Earth and Venus share some similarities. They have roughly the same mass and radius. Given Earth's proximity to Venus, it's natural to wonder why the two planets turned out so differently.

Many scientists assume that insolation flux, the amount of energy Venus receives from the sun, caused a runaway greenhouse situation that ruined the planet.

"If you consider the solar energy received by Earth as 100%, Venus collects 191%. A lot of people think that's why Venus turned out differently," Kane said. "But hold on a second. Venus doesn't have a moon, which is what gives Earth things like ocean tides and influenced the amount of water here."

In addition to some of the known differences, more NASA missions to Venus would help clear up some of the unknowns. Scientists don't know the size of its core, how it got to its present, relatively slow rotation rate, how its magnetic field changed over time, or anything about the chemistry of the lower atmosphere.

"Venus doesn't have a detectable magnetic field. That could be related to the size of its core," Kane said. "Core size also give us information about how a planet cools itself. Earth has a mantle circulating heat from its core. We don't know what's happening inside Venus."

A terrestrial planet's interior also influences its atmosphere. That is the case on Earth, where our atmosphere is largely the result of volcanic outgassing.

NASA does have twin missions to Venus planned for the end of this decade, and Kane is assisting with both of them. The DAVINCI mission will probe the acid-filled atmosphere to measure noble gases and other chemical elements.

"DAVINCI will measure the atmosphere all the way from the top to the bottom. That will really help us build new climate models and predict these kinds of atmospheres elsewhere, including on Earth, as we keep increasing the amount of CO2," Kane said.

The VERITAS mission, led by NASA's Jet Propulsion Laboratory, won't land on the surface but it will allow scientists to create detailed 3D landscape reconstructions, revealing whether the planet has active plate tectonics or volcanoes.

"Currently, our maps of the planet are very incomplete. It's very different to understand how active the surface is, versus how it may have changed through time. We need both kinds of information," Kane said.

Ultimately, the paper advocates for missions like these to Venus for two main reasons. One is the ability, with better data, to use Venus to ensure inferences about life on farther-flung planets are correct.

"The sobering part of the search for life elsewhere in the universe is that we're never going to have in situ data for an exoplanet. We aren't going there, landing, or taking direct measurements of them," Kane said.

"If we think another planet has life on the surface, we might not ever know we're wrong, and we'd be dreaming about a planet with life that doesn't have it. We are only going to get that right by properly understanding the Earth-size planets we can visit, and Venus gives us that chance."

The other reason to research Venus is that it offers a preview of what Earth's future could look like.

Read more at Science Daily

World's oases threatened by desertification, even as humans expand them

Oases are important habitats and water sources for dryland regions, sustaining 10% of the world's population despite taking up about 1.5% of land area. But in many places, climate change and anthropogenic activities threaten oases' fragile existence. New research shows how the world's oases have grown and shrunk over the past 25 years as water availability patterns have changed and desertification has encroached on these wet refuges.

"Although the scientific community has always emphasized the importance of oases, there has not been a clear map of the global distribution of oases," said Dongwei Gui, a geoscientist at the Chinese Academy of Science who led the study. "Oasis research has both theoretical and practical significance for achieving United Nations Sustainable Development Goals and promoting sustainable development in arid regions."

The study found that oases around the world grew by more than 220,000 square kilometers (about 85,000 square miles) from 1995 to 2020, mostly due to intentional oasis expansion projects in Asia. But desertification drove the loss of 134,300 square kilometers (51,854 square miles) of oasis over the same period, also mostly in Asia, leading to a net growth of 86,500 square kilometers (about 33,400 square miles) over the study period.

The findings highlight the risk climate change and anthropogenic stressors pose to these wet sanctuaries and can inform water resource management and sustainable development in arid regions. The study was published in the AGU journal Earth's Future, which publishes interdisciplinary research on the past, present and future of our planet and its inhabitants.

The birth and death of an oasis

Oases are important sources of water for humans, plants and animals in the world's drylands, supporting a majority of productivity and life in deserts. They form when groundwater flows and settles into low-lying areas, or when surface meltwater flows downslope from adjacent mountain ranges and pools. The existence of an oasis depends primarily on having a reliable source of water that is not rainfall. Today, oases are found in 37 countries; 77% of oases are located in Asia, and 13% are found in Australia.

Gui and his co-investigators wanted to understand the global distribution and dynamic changes of oases and see how they respond to a changing environment, such as variations in climate, water resources and human activities. Using data from the European Space Agency's Climate Change Initiative Land Cover Product, the team categorized the land surface into seven categories: forest, grassland, shrub, cropland, water, urban and desert.

The researchers used satellite data to look for green, vegetated areas within dryland areas, indicating an oasis, and tracked changes over 25 years. Changes in the greenness of vegetation indicated changes in land use and oasis health, the latter of which can be influenced by both human activity and climate change. They also looked at changes in land surface type to find conversions of land use.
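
As a rough illustration of this kind of land-cover bookkeeping (not the authors' actual pipeline), the sketch below counts vegetated pixels inside a dryland mask for two hypothetical classification rasters and reports the change in oasis area; the class codes, mask, and pixel size are all assumed.

```python
# Minimal sketch of the land-cover bookkeeping described above.
# NOT the authors' pipeline: class codes, the dryland mask, and the
# pixel area are illustrative assumptions.
import numpy as np

# Hypothetical land-cover rasters for 1995 and 2020 (integer class codes):
# 0=desert, 1=forest, 2=grassland, 3=shrub, 4=cropland, 5=water, 6=urban
lc_1995 = np.random.randint(0, 7, size=(1000, 1000))
lc_2020 = np.random.randint(0, 7, size=(1000, 1000))
dryland_mask = np.ones_like(lc_1995, dtype=bool)   # pixels inside drylands

VEGETATED = {1, 2, 3, 4}          # classes treated as "oasis" inside drylands
PIXEL_AREA_KM2 = 0.09             # e.g. a 300 m x 300 m pixel (assumed)

def oasis_area(lc, mask):
    """Total area (km^2) of vegetated dryland pixels."""
    is_oasis = np.isin(lc, list(VEGETATED)) & mask
    return is_oasis.sum() * PIXEL_AREA_KM2

change = oasis_area(lc_2020, dryland_mask) - oasis_area(lc_1995, dryland_mask)
print(f"Net oasis area change: {change:,.0f} km^2")
```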

The researchers found that global oasis area increased by 220,800 square kilometers (85,251 square miles) over the 25-year timeframe. Most of that increase was from humans intentionally converting desert land into oases using runoff water and groundwater pumping, creating grasslands and croplands. The increase was concentrated in China, where management efforts have contributed more than 60% of the growth, Gui said. For example, more than 95% of the population in China's Xinjiang Uygur Autonomous Region lives within an oasis, motivating conservation and a 16,700 square kilometer (6,448 square mile) expansion of the oasis, Gui said.

Countering human efforts to expand oases, desertification contributed to oasis loss. Worldwide, the researchers found there was a loss of more than 134,000 square kilometers (51,738 square miles) of oasis land over the past 25 years. The researchers estimate that changes to oases have directly affected about 34 million people around the world.

Overall, between gains and losses, oases had a net growth of 86,500 square kilometers (33,397 square miles) from 1995 to 2020 -- but most gains were from the artificial expansion of oases, which may not be sustainable in the future.
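
The headline figures follow from simple bookkeeping of gains and losses; the short check below uses the values reported above and a standard square-kilometre-to-square-mile conversion.

```python
# Quick arithmetic check of the area figures quoted above (values as reported).
KM2_TO_MI2 = 0.386102

gains_km2  = 220_800   # intentional oasis expansion, 1995-2020
losses_km2 = 134_300   # oasis lost to desertification over the same period

net_km2 = gains_km2 - losses_km2
print(f"Net change: {net_km2:,} km^2 (~{net_km2 * KM2_TO_MI2:,.0f} mi^2)")
# -> Net change: 86,500 km^2 (~33,398 mi^2), matching the reported net growth
```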

Long-term oasis sustainability

The study highlighted ways to sustain healthy oases, including suggestions for improving water resource management, promoting sustainable land use and management and encouraging water conservation and efficient use. These efforts are especially important as the climate continues to change, Gui said.

Overexploitation of dwindling groundwater can limit oasis sustainability, as can long-term glacier loss. While higher temperatures increase glacier melt, temporarily boosting oases' water supplies, "as glaciers gradually disappear, the yield of meltwater will eventually decrease, leading to the shrinkage of oases once again," Gui said.

International cooperation plays a crucial role in oasis sustainability, Gui said.

"Due to the unique mechanism of oasis formation, a river basin often nurtures multiple oases across several countries, making transboundary cooperation key to addressing water scarcity and promoting sustainable development," he said.

Read more at Science Daily

This alloy is kinky

Researchers have uncovered a remarkable metal alloy that won't crack at extreme temperatures due to kinking, or bending, of crystals in the alloy at the atomic level. A metal alloy composed of niobium, tantalum, titanium, and hafnium has shocked materials scientists with its impressive strength and toughness at both extremely hot and cold temperatures, a combination of properties that had so far seemed nearly impossible to achieve. In this context, strength is defined as how much force a material can withstand before it is permanently deformed from its original shape, and toughness is its resistance to fracturing (cracking). The alloy's resilience to bending and fracture across an enormous range of conditions could open the door to a novel class of materials for next-generation engines that can operate at higher efficiencies.

The team, led by Robert Ritchie at Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley, in collaboration with the groups led by professors Diran Apelian at UC Irvine and Enrique Lavernia at Texas A&M University, discovered the alloy's surprising properties and then figured out how they arise from interactions in the atomic structure. Their work is described in a study that was published April 11, 2024 in Science.

"The efficiency of converting heat to electricity or thrust is determined by the temperature at which fuel is burned -- the hotter, the better. However, the operating temperature is limited by the structural materials which must withstand it," said first author David Cook, a Ph.D. student in Ritchie's lab. "We have exhausted the ability to further optimize the materials we currently use at high temperatures, and there's a big need for novel metallic materials. That's what this alloy shows promise in."

The alloy in this study is from a new class of metals known as refractory high or medium entropy alloys (RHEAs/RMEAs). Most of the metals we see in commercial or industrial applications are alloys made of one main metal mixed with small quantities of other elements, but RHEAs and RMEAs are made by mixing near-equal quantities of metallic elements with very high melting temperatures, which gives them unique properties that scientists are still unraveling. Ritchie's group has been investigating these alloys for several years because of their potential for high-temperature applications.

"Our team has done previous work on RHEAs and RMEAs and we have found that these materials are very strong, but generally possess extremely low fracture toughness, which is why we were shocked when this alloy displayed exceptionally high toughness," said co-corresponding author Punit Kumar, a postdoctoral researcher in the group.

According to Cook, most RMEAs have a fracture toughness less than 10 MPa√m, which makes them some of the most brittle metals on record. The best cryogenic steels, specially engineered to resist fracture, are about 20 times tougher than these materials. Yet the niobium, tantalum, titanium, and hafnium (Nb45Ta25Ti15Hf15) RMEA alloy was able to beat even the cryogenic steel, clocking in at over 25 times tougher than typical RMEAs at room temperature.

But engines don't operate at room temperature. The scientists evaluated strength and toughness at five temperatures total: -196°C (the temperature of liquid nitrogen), 25°C (room temperature), 800°C, 950°C, and 1200°C. The last temperature is about 1/5 the surface temperature of the sun.

The team found that the alloy had the highest strength in the cold and became slightly weaker as the temperature rose, but still boasted impressive figures throughout the wide range. The fracture toughness, which is calculated from how much force it takes to propagate an existing crack in a material, was high at all temperatures.
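
For readers unfamiliar with the units quoted above (MPa√m), the sketch below uses the textbook stress-intensity relation K = Y·σ·√(πa) to show what a given fracture toughness implies about the stress a cracked part can carry. It is only an illustration with assumed crack geometry, not the team's testing procedure.

```python
# Illustrative use of the textbook stress-intensity relation K = Y * sigma * sqrt(pi * a),
# to show what a fracture-toughness value in MPa*sqrt(m) means in practice.
# Not the authors' measurement procedure; Y and the crack length are assumed.
import math

Y = 1.0          # dimensionless geometry factor (assumed)
a = 0.001        # crack length in metres (1 mm, assumed)

def stress_intensity(sigma_mpa, crack_m, geometry=1.0):
    """Stress intensity factor in MPa*sqrt(m) for an applied stress in MPa."""
    return geometry * sigma_mpa * math.sqrt(math.pi * crack_m)

# A brittle RMEA with K_Ic ~ 10 MPa*sqrt(m) tolerates far less stress at a 1 mm
# crack than a material with K_Ic ~ 250 MPa*sqrt(m) (roughly 25x tougher):
for k_ic in (10, 250):
    sigma_crit = k_ic / (Y * math.sqrt(math.pi * a))
    print(f"K_Ic = {k_ic:4d} MPa*sqrt(m) -> critical stress ~ {sigma_crit:,.0f} MPa")
```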

Unraveling the atomic arrangements

Almost all metallic alloys are crystalline, meaning that the atoms inside the material are arranged in repeating units. However, no crystal is perfect; they all contain defects. The most prominent mobile defect is the dislocation, an unfinished plane of atoms in the crystal. When force is applied to a metal, it causes many dislocations to move to accommodate the shape change. For example, when you bend a paper clip, which is made of aluminum, the movement of dislocations inside the paper clip accommodates the shape change. However, the movement of dislocations becomes more difficult at lower temperatures, and as a result many materials become brittle in the cold because dislocations cannot move. This is why the steel hull of the Titanic fractured when it hit an iceberg. Elements with high melting temperatures and their alloys take this to the extreme, with many remaining brittle up to even 800°C. However, this RMEA bucks the trend, resisting fracture even at temperatures as low as that of liquid nitrogen (-196°C).

To understand what was happening inside the remarkable metal, co-investigator Andrew Minor and his team analyzed the stressed samples, alongside unbent and uncracked control samples, using four-dimensional scanning transmission electron microscopy (4D-STEM) and scanning transmission electron microscopy (STEM) at the National Center for Electron Microscopy, part of Berkeley Lab's Molecular Foundry.

The electron microscopy data revealed that the alloy's unusual toughness comes from an unexpected side effect of a rare defect called a kink band. Kink bands form in a crystal when an applied force causes strips of the crystal to collapse on themselves and abruptly bend. The direction in which the crystal bends in these strips increases the force that dislocations feel, causing them to move more easily. On the bulk level, this phenomenon causes the material to soften (meaning that less force has to be applied to the material as it is deformed). The team knew from past research that kink bands formed easily in RMEAs, but assumed that the softening effect would make the material less tough by making it easier for a crack to spread through the lattice. But in reality, this is not the case.

"We show, for the first time, that in the presence of a sharp crack between atoms, kink bands actually resist the propagation of a crack by distributing damage away from it, preventing fracture and leading to extraordinarily high fracture toughness," said Cook.

The Nb45Ta25Ti15Hf15 alloy will need to undergo a lot more fundamental research and engineering testing before anything like a jet plane turbine or SpaceX rocket nozzle is made from it, said Ritchie, because mechanical engineers rightfully require a deep understanding of how their materials perform before they use them in the real world. However, this study indicates that the metal has potential to build the engines of the future.

Read more at Science Daily

Breakthrough rice bran nanoparticles show promise as affordable and targeted anticancer agent

Plant-derived nanoparticles have demonstrated significant anticancer effects. Researchers recently developed rice bran-derived nanoparticles (rbNPs) that efficiently suppressed cell proliferation and induced programmed cell death of only cancer cells. Furthermore, rbNPs successfully suppressed the growth of tumors in mice having aggressive adenocarcinoma in their peritoneal cavity, without any adverse effects. Given their low production costs and high efficacy, rbNPs hold great promise for developing affordable and safe anticancer agents.

Several types of conventional cancer therapies, such as radiotherapy or chemotherapy, destroy healthy cells along with cancer cells. In advanced stages of cancer, tissue loss from treatments can be substantial and even fatal. Cutting-edge cancer therapies that employ nanoparticles can specifically target cancer cells, sparing healthy tissue. Recent studies have demonstrated that plant-derived nanoparticles (pdNPs) with therapeutic effects can be an effective alternative to traditional cancer treatments. However, no pdNPs have been approved as anticancer therapeutic agents to date.

Rice bran is a byproduct generated during the rice refining process that has limited utility and low commercial value. However, it contains several compounds with anticancer properties, such as γ-oryzanol and γ-tocotrienol. To explore these therapeutic properties of rice bran, a team of researchers led by Professor Makiya Nishikawa from Tokyo University of Science (TUS) in Japan developed nanoparticles from rice bran and tested their effectiveness in mouse models. Their study, published in Volume 22 of the Journal of Nanobiotechnology on 16 March 2024, was co-authored by Dr. Daisuke Sasaki, Ms. Hinako Suzuki, Associate Professor Kosuke Kusamori, and Assistant Professor Shoko Itakura from TUS.

"In recent years, an increasing number of new drug modalities are being developed. At the same time, development costs associated with novel therapies have increased dramatically, contributing to the burden of medical expenses. To address this issue, we used rice bran, an industrial waste with anticancer properties, to develop nanoparticles," explains Prof. Nishikawa.

The study evaluated the anticancer effects of rice bran-derived nanoparticles (rbNPs), which were obtained by processing and purifying a suspension of Koshihikari rice bran in water. When a cancer cell line named colon26 was treated with rbNPs, cell division was arrested and programmed cell death was induced, indicating strong anticancer effects of the nanoparticles. The observed anticancer activity of rbNPs can be attributed to γ-tocotrienol and γ-oryzanol, which are easily taken up by cancer cells, resulting in cell cycle arrest and programmed cell death. Additionally, rbNPs reduced the expression of proteins such as β-catenin (a protein in the Wnt signaling pathway involved in cell proliferation) and cyclin D1, which are known to promote cancer recurrence and metastasis. Moreover, the rbNPs reduced the expression of β-catenin only in colon26 cells, without affecting non-cancerous cells.

"A key concern in the context of pdNPs is their low pharmacological activity compared to pharmaceutical drugs. However, rbNPs exhibited higher anticancer activity than DOXIL®, a liposomal pharmaceutical formulation of doxorubicin. Additionally, doxorubicin is cytotoxic to both cancer cells and non-cancerous cells, whereas rbNPs are specifically cytotoxic to cancer cells, suggesting that rbNPs are safer than doxorubicin," highlights Prof.Nishikawa.

To confirm the anticancer properties of rbNPs in the living body, the researchers injected rbNPs into mice with aggressive adenocarcinoma in their peritoneal cavity (the cavity enclosed by the diaphragm, abdominal muscles, and pelvis, which houses organs such as the intestines, liver, and kidneys). They observed significant suppression of tumor growth with no adverse effects on the mice. Additionally, the rbNPs significantly inhibited metastatic growth of murine melanoma B16-BL6 cells in a lung metastasis mouse model.

Rice bran has several attributes that make it an excellent source of therapeutic pdNPs. Firstly, it is economical compared to many other sources of pdNPs. Nearly 40% of the rice bran produced in Japan is discarded, providing a readily available source of raw material. Secondly, the preparation efficiency of rbNPs is higher than that of previously reported pdNPs. Besides being practical and safe as anticancer therapeutics, rbNPs have very stable physicochemical properties. However, a few issues, such as establishing separation technologies at the pharmaceutical level, assessing production process control parameters, and evaluating efficacy and safety in human cancer cell lines and xenograft animal models, must be addressed prior to clinical trials in humans.

In conclusion, rice bran, an agricultural waste product, is a source of therapeutic pdNPs that are affordable, effective, and safe, and has the potential to revolutionize cancer treatment in the future.

Read more at Science Daily

Apr 19, 2024

Astronomers uncover methane emission on a cold brown dwarf

Using new observations from the James Webb Space Telescope (JWST), astronomers have discovered methane emission on a brown dwarf, an unexpected finding for such a cold and isolated world. Published in the journal Nature, the findings suggest that this brown dwarf might generate aurorae similar to those seen on our own planet as well as on Jupiter and Saturn.

More massive than planets but lighter than stars, brown dwarfs are ubiquitous in our solar neighborhood, with thousands identified. Last year, Jackie Faherty, a senior research scientist and senior education manager at the American Museum of Natural History, led a team of researchers who were awarded time on JWST to investigate 12 brown dwarfs. Among those was CWISEP J193518.59-154620.3 (or W1935 for short) -- a cold brown dwarf 47 light-years away that was co-discovered by Backyard Worlds: Planet 9 citizen science volunteer Dan Caselden and the NASA CatWISE team. W1935 has a surface temperature of about 400° Fahrenheit, or about the temperature at which you'd bake chocolate chip cookies. The mass of W1935 isn't well known but likely ranges between 6 and 35 times the mass of Jupiter.

After looking at a number of brown dwarfs observed with JWST, Faherty's team noticed that W1935 looked similar but with one striking exception: it was emitting methane, something that's never been seen before on a brown dwarf.

"Methane gas is expected in giant planets and brown dwarfs but we usually see it absorbing light, not glowing," said Faherty, the lead author of the study. "We were confused about what we were seeing at first but ultimately that transformed into pure excitement at the discovery."

Computer modeling yielded another surprise: the brown dwarf likely has a temperature inversion, a phenomenon in which the atmosphere gets warmer with increasing altitude. Temperature inversions can easily happen to planets orbiting stars, but W1935 is isolated, with no obvious external heat source.

"We were pleasantly shocked when the model clearly predicted a temperature inversion," said co-author Ben Burningham from the University of Hertfordshire. "But we also had to figure out where that extra upper atmosphere heat was coming from."

To investigate, the researchers turned to our solar system. In particular, they looked at studies of Jupiter and Saturn, which both show methane emission and have temperature inversions. The likely cause of this feature on the solar system giants is aurorae; the research team therefore surmised that they had uncovered the same phenomenon on W1935.

Planetary scientists know that one of the major drivers of aurorae on Jupiter and Saturn are high-energy particles from the Sun that interact with the planets' magnetic fields and atmospheres, heating the upper layers. This is also the reason for the aurorae that we see on Earth, commonly referred to as the Northern or Southern Lights since they are most extraordinary near the poles. But with no host star for W1935, a solar wind cannot contribute to the explanation.

There is an enticing additional driver of aurorae in our solar system. Both Jupiter and Saturn have active moons that occasionally eject material into space, interact with the planets, and enhance the auroral footprint on those worlds. Jupiter's moon Io is the most volcanically active world in the solar system, spewing lava fountains dozens of miles high, and Saturn's moon Enceladus ejects water vapor from its geysers that simultaneously freezes and boils when it hits space. More observations are needed, but the researchers speculate that one explanation for the aurorae on W1935 might be an active, yet-to-be-discovered moon.

"Every time an astronomer points JWST at an object, there's a chance of a new mind-blowing discovery," said Faherty. "Methane emission was not on my radar when we started this project but now that we know it can be there and the explanation for it so enticing I am constantly on the look-out for it. That's part of how science moves forward."

Read more at Science Daily

Ice age climate analysis reduces worst-case warming expected from rising CO2

As carbon dioxide accumulates in the atmosphere, the Earth will get hotter. But exactly how much warming will result from a certain increase in CO2 is under study. The relationship between CO2 and warming, known as climate sensitivity, determines what future we should expect as CO2 levels continue to climb.

New research led by the University of Washington analyzes the most recent ice age, when a large swath of North America was covered in ice, to better understand the relationship between CO2 and global temperature. It finds that while most future warming estimates remain unchanged, the absolute worst-case scenario is unlikely.

The open-access study was published April 17 in Science Advances.

"The main contribution from our study is narrowing the estimate of climate sensitivity, improving our ability to make future warming projections," said lead author Vince Cooper, a UW doctoral student in atmospheric sciences. "By looking at how much colder Earth was in the ancient past with lower levels of greenhouse gases, we can estimate how much warmer the current climate will get with higher levels of greenhouse gases."

The new paper doesn't change the best-case warming scenario from doubling CO2 -- about 2 degrees Celsius average temperature increase worldwide -- or the most likely estimate, which is about 3 degrees Celsius. But it reduces the worst-case scenario for a doubling of CO2 by a full degree, from 5 degrees Celsius to 4 degrees Celsius. (For reference, CO2 is currently at 425 ppm, or about 1.5 times preindustrial levels, and, unless emissions drop, is headed toward double preindustrial levels before the end of this century.)
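
To see what these sensitivity values mean in practice, one can plug them into the standard logarithmic approximation for CO2-driven warming, ΔT ≈ S × log2(C/C0); the sketch below does this for today's 425 ppm relative to a commonly used preindustrial value of about 280 ppm. This is only a back-of-envelope illustration, not the study's method.

```python
# Back-of-envelope use of the standard logarithmic CO2-warming approximation,
# dT ~= S * log2(C / C0), where S is the equilibrium warming per CO2 doubling.
# Not the study's method; it only shows what the quoted sensitivities imply
# for today's ~425 ppm versus a preindustrial ~280 ppm (assumed reference).
import math

C0 = 280.0   # preindustrial CO2, ppm (commonly used reference value)
C  = 425.0   # present-day CO2, ppm (as quoted above)

for label, sensitivity in [("best case", 2.0), ("most likely", 3.0),
                           ("old worst case", 5.0), ("revised worst case", 4.0)]:
    warming = sensitivity * math.log2(C / C0)
    print(f"{label:>18}: ~{warming:.1f} deg C of eventual warming at 425 ppm")
```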

As our planet heads toward a doubling of CO2, the authors caution that the recent decades are not a good predictor of the future under global warming. Shorter-term climate cycles and atmospheric pollution's effects are just some reasons that recent trends can't reliably predict the rest of this century.

"The spatial pattern of global warming in the most recent 40 years doesn't look like the long-term pattern we expect in the future -- the recent past is a bad analog for future global warming," said senior author Kyle Armour, a UW associate professor of atmospheric sciences and of oceanography.

Instead, the new study focused on a period 21,000 years ago, known as the Last Glacial Maximum, when Earth was on average 6 degrees Celsius cooler than today. Ice core records show that atmospheric CO2 then was less than half of today's levels, at about 190 parts per million.

"The paleoclimate record includes long periods that were on average much warmer or colder than the current climate, and we know that there were big climate forcings from ice sheets and greenhouse gases during those periods," Cooper said. "If we know roughly what the past temperature changes were and what caused them, then we know what to expect in the future."

Researchers including co-author Gregory Hakim, a UW professor of atmospheric sciences, have created new statistical modeling techniques that allow paleoclimate records to be assimilated into computer models of Earth's climate, similar to today's weather forecasting models. The result is more realistic temperature maps from previous millennia.

For the new study the authors combined prehistoric climate records -- including ocean sediments, ice cores, and preserved pollen -- with computer models of Earth's climate to simulate the weather of the Last Glacial Maximum. When much of North America was covered with ice, the ice sheet didn't just cool the planet by reflecting summer sunlight off the continents, as previous studies had considered.

By altering wind patterns and ocean currents, the ice sheet also caused the northern Pacific and Atlantic oceans to become especially cold and cloudy. Analysis in the new study shows that these cloud changes over the oceans compounded the glacier's global cooling effects by reflecting even more sunlight.

In short, the study shows that CO2 played a smaller role in setting ice age temperatures than previously estimated. The flipside is that the most dire predictions for warming from rising CO2 are less likely over coming decades.

"This paper allows us to produce more confident predictions because it really brings down the upper end of future warming, and says that the most extreme scenario is less likely," Armour said. "It doesn't really change the lower end, or the average estimate, which remain consistent with all the other lines of evidence."

Read more at Science Daily

Marine plankton behavior could predict future marine extinctions

Marine communities migrated toward Antarctica during Earth's warmest period of the past 66 million years, long before a mass-extinction event.

All but the most specialised sea plankton moved to higher latitudes during the Early Eocene Climatic Optimum, an interval of sustained high global temperatures equivalent to worst-case global warming scenarios.

When the team, composed of researchers from the University of Bristol, Harvard University, the University of Texas Institute for Geophysics and the University of Victoria, compared biodiversity and global community structure, they found that communities often respond to climate change millions of years before losses of biodiversity occur.

The study, published today in Nature, suggests that plankton migrated to cooler regions to escape the tropical heat and that only the most highly specialised species were able to remain.

These findings imply that changes on the community scale will be evident long before extinctions in the modern world and that more effort must be placed on monitoring the structure of marine communities to potentially predict future marine extinctions.

Dr Adam Woodhouse from the University of Bristol's School of Earth Sciences, explained: "Considering three billion people live in the tropics, this is not great news.

"We knew that biodiversity amongst marine plankton groups has changed throughout the last 66 million years, but no one had ever explored it on a global, spatial, scale through the lens of a single database.

"We used the Triton dataset, that I created during my PhD, which offered new insights into how biodiversity responds spatially to global changes in climate, especially during intervals of global warmth which are relevant to future warming projections."

Dr Woodhouse teamed up with Dr Anshuman Swain, an ecologist and specialist in the application of networks to biological data. They applied networks to micropalaeontology for the first time ever to document the global spatial changes in community structure as climate has evolved over the Cenozoic, building on previous research on how cooling restructured global marine plankton communities.

Dr Woodhouse continued: "The fossil record of marine plankton is the most complete and extensive archive of ancient biological changes available to science. By applying advanced computational analyses to this archive we were able to detail global community structure of the oceans since the death of the dinosaurs, revealing that community change often precedes the extinction of organisms.

"This exciting result suggests that monitoring of ocean community structure may represent an 'early warning system' which precedes the extinction of oceanic life."

Read more at Science Daily

Honey bees experience multiple health stressors out-in-the-field

It's not a single pesticide or virus that stresses honey bees and affects their health, but exposure to a complex web of multiple interacting stressors encountered while at work pollinating crops, new research out of York University has found.

Scientists have been unable to explain increasing colony mortality, even after decades of research examining the role of specific pesticides, parasitic mites, viruses or genetics. This led the research team to wonder if previous studies were missing something by focussing on one stressor at a time.

"Our study is the first to apply systems level or network analyses to honey bee stressors at a massive scale. I think this represents a paradigm shift in the field because we have been so focussed on finding the one big thing, the smoking gun," says corresponding author of the new paper York Faculty of Science Professor Amro Zayed, York Research Chair in Genomics. "But we are finding that bees are exposed to a very complicated network of stressors that change quickly over time and space. It's a level of complexity that we haven't thought about before. To me, that's the big surprise of this study."

The paper, Honey bee stressor networks are complex and dependent on crop and region, published today in Current Biology, takes a much broader look at the interplay of stressors and their effects. The study team also included researchers from the University of British Columbia, Agriculture and Agri-Food Canada, the University of Victoria, the University of Lethbridge, the University of Manitoba, l'Université Laval, the University of Guelph, and the Ontario Beekeepers' Association.

Not all stressors are the same, however. Some stressors are more influential than others -- what researchers call the social media influencers of the bee world -- having an outsized impact on the architecture of a highly complex network and their co-stressors. They also found that most of these influencer stressors are viruses and pesticides that regularly show up in combination with specific other stressors, compounding the negative effects through their interactions.

"Understanding which stressors co-occur and are likely to interact is profoundly important to unravelling how they are impacting the health and mortality of honey bee colonies," says lead author, York Postdoctoral Fellow Sarah French of the Faculty of Science.

"There have been a lot of studies about major pesticides, but in this research, we also saw a lot of minor pesticides that we don't usually think about or study. We also found a lot of viruses that beekeepers don't typically test for or manage. Seeing the influencer stressors interact with all these other stressors, whether it be mites, other pesticides or viruses, was not only interesting, but surprising."

French says the way influencer stressors co-occur with other stressors is similar to the way humans experience co-morbidities, such as when someone is diagnosed with heart disease. They are more likely to also have diabetes or high blood pressure or both, and each one impacts the other. "That's similar to the way we examine bee colonies. We look at everything that's going on in the colony and then compare or amalgamate all the colonies together to look at the broader patterns of what is happening and how everything is related. Two or multiple stressors can really synergize off each other leading to a much greater effect on bee health."

From Québec to British Columbia, honey bee colonies were given the job of pollinating some of Canada's most valuable crops -- apples, canola oil and seed, highbush and lowbush blueberry, soybean, cranberry and corn. The study covered multiple time scales, providing numerous snapshots, rather than the usual single snapshot in time. The research team found that honey bees were exposed to an average of 23 stressors at once that combined to create 307 interactions.
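
As an illustration of what a "stressor network" analysis of this kind involves (the stressor names and colony records below are invented, and this is not the study's pipeline), the minimal sketch that follows builds a co-occurrence network and flags the most connected, "influencer"-like stressors.

```python
# Minimal sketch of a stressor co-occurrence network of the kind described
# above. The stressors and colony observations are invented for illustration.
import itertools
import networkx as nx

# Each colony snapshot lists the stressors detected at that time and place.
colony_snapshots = [
    {"varroa", "deformed wing virus", "clothianidin"},
    {"deformed wing virus", "clothianidin", "boscalid"},
    {"varroa", "black queen cell virus", "clothianidin"},
]

G = nx.Graph()
for stressors in colony_snapshots:
    for a, b in itertools.combinations(sorted(stressors), 2):
        # Edge weight counts how often two stressors co-occur in a colony.
        w = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

# Highly central nodes are candidate "influencer" stressors in this toy network.
centrality = nx.degree_centrality(G)
for stressor, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{stressor:25s} centrality = {score:.2f}")
```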

Honey bees are a billion-dollar industry. In 2021, honey bees contributed some $7 billion in economic value by pollinating orchards, vegetables, berries and oil seeds like canola, and produced 75 to 90 million pounds of honey. Figuring out which stressors would provide the most benefit if managed would go a long way toward developing the right tools to tackle them, tools beekeepers often lack.

The research is part of the BEECSI: 'OMIC tools for assessing bee health project, funded to the tune of $10 million by Genome Canada in 2018, which is using genomic tools to develop a new health assessment and diagnosis platform powered by stressor-specific markers.

More research is needed to unravel how the stressors are interacting and impacting honey bee mortality and colony health going forward, says French. "It's really teasing apart which of these compounds might have that relationship and how can we build off this to study those specific relationships."

It can't come soon enough: honey bees are currently facing poor health, colony loss, parasites, pathogens and heightened stressors worldwide. Some beekeepers in Canada and the United States face overwinter losses of up to 60 per cent of their colonies.

Read more at Science Daily

Apr 18, 2024

Most massive stellar black hole in our galaxy found

Astronomers have identified the most massive stellar black hole yet discovered in the Milky Way galaxy. This black hole was spotted in data from the European Space Agency's Gaia mission because it imposes an odd 'wobbling' motion on the companion star orbiting it. Data from the European Southern Observatory's Very Large Telescope (ESO's VLT) and other ground-based observatories were used to verify the mass of the black hole, putting it at an impressive 33 times that of the Sun.

Stellar black holes are formed from the collapse of massive stars and the ones previously identified in the Milky Way are on average about 10 times as massive as the Sun. Even the next most massive stellar black hole known in our galaxy, Cygnus X-1, only reaches 21 solar masses, making this new 33-solar-mass observation exceptional.

Remarkably, this black hole is also extremely close to us -- at a mere 2000 light-years away in the constellation Aquila, it is the second-closest known black hole to Earth. Dubbed Gaia BH3 or BH3 for short, it was found while the team were reviewing Gaia observations in preparation for an upcoming data release. "No one was expecting to find a high-mass black hole lurking nearby, undetected so far," says Gaia collaboration member Pasquale Panuzzo, an astronomer at the Observatoire de Paris, part of France's National Centre for Scientific Research (CNRS). "This is the kind of discovery you make once in your research life."

To confirm their discovery, the Gaia collaboration used data from ground-based observatories, including from the Ultraviolet and Visual Echelle Spectrograph (UVES) instrument on ESO's VLT, located in Chile's Atacama Desert. These observations revealed key properties of the companion star, which, together with Gaia data, allowed astronomers to precisely measure the mass of BH3.

Astronomers have found similarly massive black holes outside our galaxy (using a different detection method), and have theorised that they may form from the collapse of stars with very few elements heavier than hydrogen and helium in their chemical composition. These so-called metal-poor stars are thought to lose less mass over their lifetimes and hence have more material left over to produce high-mass black holes after their death. But evidence directly linking metal-poor stars to high-mass black holes has been lacking until now.

Stars in pairs tend to have similar compositions, meaning that BH3's companion holds important clues about the star that collapsed to form this exceptional black hole. UVES data showed that the companion was a very metal-poor star, indicating that the star that collapsed to form BH3 was also metal-poor -- just as predicted.

The research study, led by Panuzzo, is published today in Astronomy & Astrophysics. "We took the exceptional step of publishing this paper based on preliminary data ahead of the forthcoming Gaia release because of the unique nature of the discovery," says co-author Elisabetta Caffau, also a Gaia collaboration member from the CNRS Observatoire de Paris. Making the data available early will let other astronomers start studying this black hole right now, without waiting for the full data release, planned for late 2025 at the earliest.

Further observations of this system could reveal more about its history and about the black hole itself. The GRAVITY instrument on ESO's VLT Interferometer, for example, could help astronomers find out whether this black hole is pulling in matter from its surroundings and better understand this exciting object.

Read more at Science Daily