Apr 27, 2024

Researchers advance detection of gravitational waves to study collisions of neutron stars and black holes

Researchers at the University of Minnesota Twin Cities College of Science and Engineering co-led a new international study that will improve the detection of gravitational waves -- ripples in space and time.

The research aims to send alerts to astronomers and astrophysicists within 30 seconds after the detection, helping to improve the understanding of neutron stars and black holes and how heavy elements, including gold and uranium, are produced.

The findings were recently published in the Proceedings of the National Academy of Sciences (PNAS).

Gravitational waves distort spacetime, stretching it in one direction while compressing it in the perpendicular direction. That is why current state-of-the-art gravitational wave detectors are L-shaped: they measure the relative lengths of their two arms using laser interferometry, a measurement method that analyzes the interference patterns produced when two light beams are combined. Detecting gravitational waves requires measuring changes in arm length with extraordinary precision: equivalent to measuring the distance to the nearest star, around four light-years away, to within the width of a human hair.
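That comparison implies a fractional length change of roughly one part in 10^21, which is indeed the strain scale LIGO-class detectors are built to measure. A back-of-the-envelope check of the analogy, using approximate values of my own rather than figures from the study:

```python
# Rough check (approximate values, not from the study): what fractional
# precision does "a hair's width over the distance to the nearest star" imply?
light_year_m = 9.46e15
nearest_star_m = 4.2 * light_year_m    # Proxima Centauri, ~4.2 light-years
hair_width_m = 1e-4                    # typical human hair, ~0.1 mm

strain = hair_width_m / nearest_star_m
print(f"implied strain: {strain:.1e}")  # ~2.5e-21, the ~1e-21 scale LIGO probes
```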

This research is part of the LIGO-Virgo-KAGRA (LVK) Collaboration, a network of gravitational wave interferometers across the world.

In the latest simulation campaign, the team used data from previous observation periods and added simulated gravitational wave signals to demonstrate the performance of the software and equipment upgrades. The software can detect the shape of a signal, track how it evolves, and estimate the masses of the objects involved in the event, such as neutron stars or black holes. Neutron stars are the smallest, densest stars known to exist and are formed when massive stars explode in supernovas.
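The article does not name the detection algorithm, but LVK low-latency searches are known to rely on matched filtering: the data stream is cross-correlated against a bank of template waveforms, and a strong correlation peak flags a candidate signal and its arrival time. A toy sketch of that idea (illustrative only, not the actual pipeline):

```python
import numpy as np

# Toy matched filter: slide a known template through noisy data and look for
# a correlation peak. The injected signal is weaker than the noise.
rng = np.random.default_rng(0)
fs = 4096                                  # sample rate, Hz
t = np.arange(0, 0.25, 1 / fs)
template = np.sin(2 * np.pi * (50 + 400 * t) * t)   # toy "chirp" waveform

data = rng.normal(0.0, 2.0, 4 * fs)        # 4 s of Gaussian noise
inj = 6000
data[inj:inj + template.size] += template  # bury the signal in the noise

corr = np.correlate(data, template, mode="valid")
print("correlation peak at sample", int(np.argmax(corr)), "-- injected at", inj)
```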

Once this software detects a gravitational wave signal, it sends alerts to subscribers -- usually astronomers and astrophysicists -- communicating where in the sky the signal originated. With the upgrades in this observing period, scientists can now send alerts in under 30 seconds after the detection of a gravitational wave.

"With this software, we can detect the gravitational wave from neutron star collisions that is normally too faint to see unless we know exactly where to look," said Andrew Toivonen, a Ph.D. student in the University of Minnesota Twin Cities School of Physics and Astronomy. "Detecting the gravitational waves first will help locate the collision and help astronomers and astrophysicists to complete further research."

Astronomers and astrophysicists could use this information to understand how neutron stars behave, to study the nuclear reactions that occur when neutron stars and black holes collide, and to learn how heavy elements, including gold and uranium, are produced.

Read more at Science Daily

Social change may explain decline in genetic diversity of the Y chromosome at the end of the Neolithic period

The emergence in the Neolithic of patrilineal social systems, in which children are affiliated with their father's lineage, may explain a spectacular decline in the genetic diversity of the Y chromosome observed worldwide between 3,000 and 5,000 years ago. In a study to be published on 24 April in Nature Communications, a team of scientists from the CNRS, MNHN and Université Paris Cité suggest that these patrilineal organisations had a greater impact on the Y chromosome than mortality during conflict.

This conclusion was reached after analysing twenty years of anthropological field data -- from contemporary non-warlike patrilineal groups, particularly from the scientists' own fieldwork carried out in Asia -- and modelling various socio-demographic scenarios. The team compared warrior and non-warrior scenarios and showed that two processes play a major role in genetic diversity: the splitting of clans into several sub-clans and differences in social status that lead to the expansion of certain lineages to the detriment of others.
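As a toy illustration of the second process, here is a simulation of my own construction (not the authors' model, which also includes clan fission): when a small elite of patrilines fathers a disproportionate share of sons each generation, Y-chromosome diversity collapses without any warfare mortality at all.

```python
import random

# Toy sketch: 1,000 men, each initially carrying a distinct Y lineage.
# Each generation the first 10% of men are "high status" and father
# roughly half of all sons (weight 9 vs 1).
random.seed(1)
N = 1000
lineages = list(range(N))

for generation in range(50):
    weights = [9] * (N // 10) + [1] * (N - N // 10)
    lineages = random.choices(lineages, weights=weights, k=N)

print("distinct Y lineages after 50 generations:", len(set(lineages)))
```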

This study calls into question the previously proposed theory that violent clashes, supposedly due to competition between different clans, in which many men died, were at the origin of the loss of genetic diversity of the Y chromosome. The results of this study also provide new hypotheses on human social organisation in the Neolithic and Bronze Age.

Read more at Science Daily

Herring arrives earlier in the Wadden Sea due to climate change

Due to the changing climate, young herring arrive in the Wadden Sea earlier and earlier in spring. That is shown in a new publication by NIOZ ecologists Mark Rademaker, Myron Peck and Anieke van Leeuwen in this month's issue of Global Change Biology. "The fact that we were able to demonstrate this is only because, for more than 60 years, the fish were sampled very consistently and continuously, every spring and every fall, with exactly the same fyke every time," Rademaker says. "Recognizing this kind of change requires extreme precision and endurance!"

NIOZ fyke


Since 1960, NIOZ, the Royal Netherlands Institute for Sea Research, has been measuring the number and species of fish that swim in the Marsdiep, between Den Helder and Texel, day in and day out using a standard fyke in spring and fall. These measurements show that the peak in the number of young herring swimming into the Wadden Sea now comes at least two weeks earlier than it did in 1982. "Such a calculation is difficult with a species of fish that swims in large schools," Rademaker says. "One day there may be only ten herring, while the next there are suddenly ten thousand fish swimming by. So, if you were to accidentally take a measurement just one day or the other, you would get a completely different picture."

Extremely consistent measurement

According to Rademaker, the solution to that problem lies in extremely consistent measurement, almost to the square meter. "Only by carrying out measurements in the same place over and over again, and almost continuously, year after year, can you reliably reveal changes in the long term."

Unique set of data

The research with the 'NIOZ fyke' is unique in the world. Most other monitoring programs measure only once or a few times per month or even per quarter, and then often not even at exactly the same spot. Rademaker: "When I projected that frequency from other research programs onto the data from the NIOZ fyke, picking out a few random measurement days, the changes in the timing of the herring did not show up."
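A sketch of the comparison Rademaker describes, on synthetic data of my own construction rather than the NIOZ series: estimate the peak day each year from daily counts versus from three random sampling days per season, then fit a linear trend to each.

```python
import numpy as np

# Synthetic herring runs: a Gaussian-shaped spring peak drifting 0.25 days
# earlier per year, observed daily vs. on 3 random days per season.
rng = np.random.default_rng(42)
days = np.arange(90, 181)                    # day of year, April-June
daily_peaks, sparse_peaks = [], []

for year in range(60):
    center = 150 - 0.25 * year               # the true peak drifts earlier
    counts = rng.poisson(1000 * np.exp(-0.5 * ((days - center) / 7) ** 2))
    daily_peaks.append(days[np.argmax(counts)])
    visits = rng.choice(days.size, size=3, replace=False)   # sparse program
    sparse_peaks.append(days[visits[np.argmax(counts[visits])]])

print("daily sampling trend: %.2f days/yr" % np.polyfit(np.arange(60), daily_peaks, 1)[0])
print("sparse sampling trend: %.2f days/yr" % np.polyfit(np.arange(60), sparse_peaks, 1)[0])
```

The daily series recovers the built-in shift of about -0.25 days per year, while the sparse series is dominated by sampling noise, mirroring Rademaker's observation.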

Read more at Science Daily

Illusion helps demystify the way vision works

For the first time, research shows that a certain kind of visual illusion, neon color spreading, works on mice. The study is also the first to combine the use of two investigative techniques called electrophysiology and optogenetics to study this illusion. Results from experiments on mice settle a long-standing debate in neuroscience about which levels of neurons within the brain are responsible for the perception of brightness.

We're all familiar with optical illusions; some are novelties, while some are all around us. Even as you look at the screen in front of you, you are being fooled into thinking that you're seeing the color white. What you're really seeing is lots of red, green and blue elements packed so tightly together that they give the impression of being white. Another example is a fast rotating wheel or propeller, which can briefly look like it's reversing direction while it's accelerating to full speed. In any case, it might be surprising to know that optical illusions are not just fun to look at but can also be a useful tool to learn more about eyes, nerves, minds and brains.

Associate Professor Masataka Watanabe from the Department of Systems Innovation at the University of Tokyo is on a mission to understand more about the nature of consciousness. It's a vast subject area so naturally there are many ways to explore it, and amongst other things, he uses optical illusions. His most recent research looked at whether a certain kind of illusion that works on humans would also work on mice. And it turns out, it does. But why is this significant?

"Knowing this kind of illusion, called a neon-color-spreading illusion, works on mice as well as humans, is useful for neuroscientists like myself, as it means that mice can serve as useful test subjects for cases where humans cannot," said Watanabe. "To really understand what goes on inside the brain during perceptual experiences, we need to use certain methods that we cannot use on people. These include electrophysiology, the recording of neural activity with electrodes, and optogenetics, where light pulses enable or disable firing of specific neurons in a living brain."

Watanabe's experiment was the first of its kind to make use of both electrophysiology and optogenetics at the same time in animal test subjects exposed to the neon-color-spreading illusion, which allowed his team to see precisely what structures within the brain are responsible for processing the illusion.

"After a visual stimulus lands on the eye, it's carried to the brain by nerves and is then received by a series of layers of neurons called V1, V2 and so on, where V1 is the first and most basic layer, and V2 and above are considered higher layers," said Watanabe. "There is a long-standing debate in neuroscience about the role higher levels play in the perception of brightness and it was not an easy thing to study. Our experiment on mice has shown us that neurons in V1 responded not just to the illusion, but also to a nonillusory version of the same kind of pattern shown. But only when the illusory version was shown to the mice did neurons in V2 also play a crucial role: that of modulating the activity of neurons in V1, thus proving that V2 neurons do in fact play a role in the perception of brightness."

Read more at Science Daily

Apr 26, 2024

Climate change could become the main driver of biodiversity decline by mid-century

Global biodiversity has declined between 2% and 11% during the 20th century due to land-use change alone, according to a large multi-model study published in Science. Projections show climate change could become the main driver of biodiversity decline by the mid-21st century.

The analysis was led by the German Centre for Integrative Biodiversity Research (iDiv) and the Martin Luther University Halle-Wittenberg (MLU) and is the largest modelling study of its kind to date. The researchers compared thirteen models for assessing the impact of land-use change and climate change on four distinct biodiversity metrics, as well as on nine ecosystem services.

GLOBAL BIODIVERSITY MAY HAVE DECLINED BY 2% TO 11% DUE TO LAND-USE CHANGE ALONE

Land-use change is considered the largest driver of biodiversity change, according to the Intergovernmental Platform on Biodiversity and Ecosystem Services (IPBES). However, scientists are divided over how much biodiversity has changed in past decades. To better answer this question, the researchers modelled the impacts of land-use change on biodiversity over the 20th century. They found global biodiversity may have declined by 2% to 11% due to land-use change alone. This span covers a range of four biodiversity metrics calculated by seven different models.

"By including all world regions in our model, we were able to fill many blind spots and address criticism of other approaches working with fragmented and potentially biased data," says first author Prof Henrique Pereira, research group head at iDiv and MLU. "Every approach has its ups and downsides. We believe our modelling approach provides the most comprehensive estimate of biodiversity trends worldwide."

MIXED TRENDS FOR ECOSYSTEM SERVICES

Using another set of five models, the researchers also calculated the simultaneous impact of land-use change on so-called ecosystem services, i.e., the benefits nature provides to humans. In the past century, they found a massive increase in provisioning ecosystem services, like food and timber production. By contrast, regulating ecosystem services, like pollination, nitrogen retention, or carbon sequestration, moderately declined.

CLIMATE AND LAND-USE CHANGE COMBINED MIGHT LEAD TO BIODIVERSITY LOSS IN ALL WORLD REGIONS


The researchers also examined how biodiversity and ecosystem services might evolve in the future. For these projections, they added climate change as a growing driver of biodiversity change to their calculations.

Climate change stands to put additional strain on biodiversity and ecosystem services, according to the findings. While land-use change remains relevant, climate change could become the most important driver of biodiversity loss by mid-century. The researchers assessed three widely used scenarios -- ranging from a sustainable-development scenario to a high-emissions scenario. For all scenarios, the impacts of land-use change and climate change combined result in biodiversity loss in all world regions.

While the overall downward trend is consistent, there are considerable variations across world regions, models, and scenarios.

PROJECTIONS ARE NOT PREDICTIONS

"The purpose of long-term scenarios is not to predict what will happen," says co-author Dr Inês Martins from the University of York. "Rather, it is to understand alternatives, and therefore avoid these trajectories, which might be least desirable, and select those that have positive outcomes. Trajectories depend on the policies we choose, and these decisions are made day by day." Martins co-led the model analyses and is an alumna of iDiv and MLU.

The authors also note that even the most sustainable scenario assessed does not deploy all the policies that could be put in place to protect biodiversity in the coming decades. For instance, bioenergy deployment, one key component of the sustainability scenario, can contribute to mitigating climate change, but can simultaneously reduce species habitats. In contrast, measures to increase the effectiveness and coverage of protected areas or large-scale rewilding were not explored in any of the scenarios.

MODELS HELP IDENTIFY EFFECTIVE POLICIES

Assessing the impacts of concrete policies on biodiversity helps identify those policies most effective for safeguarding and promoting biodiversity and ecosystem services, according to the researchers. "There are modelling uncertainties, for sure," Pereira adds. "Still, our findings clearly show that current policies are insufficient to meet international biodiversity goals. We need renewed efforts to make progress against one of the world's largest problems, which is human-caused biodiversity change."

Read more at Science Daily

Voluntary corporate emissions targets not enough to create real climate action

Companies' emissions reduction targets should not be the sole measure of corporate climate ambition, according to a new perspective paper.

Relying on emissions can favour more established companies and hinder innovation, say the authors, who suggest updating regulations to improve corporate climate action.

The paper, published today in Science, is by an international team led by Utrecht University, which includes Imperial College London researchers.

Lead author of the study Dr Yann Robiou Du Pont, from the Copernicus Institute of Sustainable Development at Utrecht University, said: "Assessing the climate ambition of companies based only on their emissions reductions may not be meaningful for emerging companies working on green innovation."

Companies can set individual climate goals, typically commitments to reduce greenhouse gas emissions from their activities -- not unlike national governments. To indicate how ambitious these voluntary commitments are, businesses can get them validated as 'Paris-aligned' under the Science Based Targets initiative (SBTi), a collaboration that started in 2015.

This validation means SBTi considers their targets to be aligned to the Paris Agreement, which aims to limit global temperature increase to well below 2°C above preindustrial levels and pursue efforts to limit it to 1.5°C.

The new paper says this approach may inadvertently favour larger existing companies, stifling innovation and skewing the playing field against emerging competitors. This is because Paris-aligned targets for larger, established companies often assume that they can simply keep their current market share of emissions, leaving no capacity for emissions from the activities of emerging companies.

For example, a new solar panel manufacturer that needs to grow its emissions over the next ten years while it scales up a new, highly efficient method of building those panels may be squeezed out of the market because, in this model, its operation would mean overshooting the Paris-aligned climate goal.

Dr Robiou Du Pont said: "These voluntary corporate targets may have been useful to achieve some progress on emissions reduction in the largest companies. But our paper shows that this approach is not sufficient to guide the corporate sector and cannot be the sole basis for regulations assessing if businesses are Paris-compliant."

To level the playing field, the authors say corporate climate targets could be based on factors other than reductions in emissions, such as emissions intensity per unit of economic or physical output. These types of targets, however, are harder to align with Paris Agreement goals, as they don't cap absolute emissions.
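A toy calculation with hypothetical numbers makes that limitation concrete: a company can meet an ambitious intensity target while its absolute emissions still rise.

```python
# Hypothetical numbers: intensity halves, output triples, emissions grow 50%.
intensity_2024, output_2024 = 10.0, 1_000   # tCO2 per unit; units per year
intensity_2035, output_2035 = 5.0, 3_000    # "50% intensity cut" target met

print("2024 emissions:", intensity_2024 * output_2024, "tCO2")   # 10000.0
print("2035 emissions:", intensity_2035 * output_2035, "tCO2")   # 15000.0
```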

The study also highlights that adopting a target doesn't necessarily cause a drop in actual emissions, as voluntary targets are just that. The authors point to evidence that corporations are already using these voluntary targets, often of questionable credibility, as justification for watering down or delaying mandatory regulations.

Co-author Professor Joeri Rogelj, from the Centre for Environmental Policy and Director of Research at the Grantham Institute at Imperial College London, said: "Companies setting their own individual targets risk complacency that we can't afford. The window to keep the planet to 1.5°C warming is rapidly closing, and even for keeping warming well below the upper Paris limit of 2°C we need concerted action to reduce greenhouse gas emissions now. Voluntary corporate emissions targets alone are not enough for rapid global decarbonization and certainly not a substitute for regulation."

The authors conclude that governments or intergovernmental organisations need to introduce legal frameworks based on a range of indicators that encourage best practices and innovation, as well as stringent requirements on transparency for any assessments.

The toolkit for building those frameworks exists, argue the authors, including carbon pricing, green subsidies and demand-side measures. Regulators should also consider the usefulness of the products that companies produce in the green transition, not only their emissions. Under a revised framework, the more efficient solar panel manufacturer would not have to constrain production, allowing for needed innovation with spillover effects in the future.

Read more at Science Daily

How do birds flock? Researchers do the math to reveal previously unknown aerodynamic phenomenon

In looking up at the sky during these early weeks of spring, you may very well see a flock of birds moving in unison as they migrate north. But how do these creatures fly in such a coordinated and seemingly effortless fashion?

Part of the answer lies in precise, and previously unknown, aerodynamic interactions, reports a team of mathematicians in a newly published study. Its breakthrough broadens our understanding of wildlife, including fish, which move in schools, and could have applications in transportation and energy.

"This area of research is important since animals are known to take advantage of the flows, such as of air or water, left by other members of a group to save on the energy needed to move or to reduce drag or resistance," explains Leif Ristroph, an associate professor at New York University's Courant Institute of Mathematical Sciences and the senior author of the paper, which appears in the journal Nature Communications. "Our work may also have applications in transportation -- like efficient propulsion through air or water -- and energy, such as more effectively harvesting power from wind, water currents, or waves."

The team's results show that the impact of aerodynamics depends on the size of the flying group -- benefiting small groups and disrupting large ones.

"The aerodynamic interactions in small bird flocks help each member to hold a certain special position relative to their leading neighbor, but larger groups are disrupted by an effect that dislodges members from these positions and may cause collisions," notes Sophie Ramananarivo, an assistant professor at École Polytechnique Paris and one of the paper's authors.

Previously, Ristroph and his colleagues uncovered how birds move in groups -- but these findings were drawn from experiments mimicking the interactions of two birds. The new Nature Communications research expanded the inquiry to account for many flyers.

To replicate the columnar formations of birds, in which they line up one directly behind the other, the researchers created mechanized flappers that act like birds' wings. The wings were 3D-printed from plastic and driven by motors to flap in water, which replicated how air flows around bird wings during flight. This "mock flock" propelled through water and could freely arrange itself within a line or queue, as seen in a video of the experiment.

The flows affected group organization in different ways -- depending on the size of the group.

For small groups of up to about four flyers, the researchers discovered an effect by which each member gets help from the aerodynamic interactions in holding its position relative to its neighbors.

"If a flyer is displaced from its position, the vortices or swirls of flow left by the leading neighbor help to push the follower back into place and hold it there," explains Ristroph, director of NYU's Applied Mathematics Laboratory, where the experiments were conducted. "This means the flyers can assemble into an orderly queue of regular spacing automatically and with no extra effort, since the physics does all the work.

"For larger groups, however, these flow interactions cause later members to be jostled around and thrown out of position, typically causing a breakdown of the flock due to collisions among members. This means that the very long groups seen in some types of birds are not at all easy to form, and the later members likely have to constantly work to hold their positions and avoid crashing into their neighbors."

The authors then deployed mathematical modeling to better understand the underlying forces driving the experimental results.

Here, they concluded that flow-mediated interactions between neighbors are, in effect, spring-like forces that hold each member in place -- just as if the cars of a train were connected by springs.

However, these "springs" act in only one direction -- a lead bird can exert force on its follower, but not vice versa -- and this non-reciprocal interaction means that later members tend to resonate or oscillate wildly.

"The oscillations look like waves that jiggle the members forwards and backwards and which travel down the group and increase in intensity, causing later members to crash together," explains Joel Newbolt, who was an NYU graduate student in physics at the time of research.

The team named these new types of waves "flonons," after the analogous concept of phonons -- vibrational waves in systems of masses linked by springs, which are used to model the motions of atoms or molecules in crystals and other materials.
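A minimal sketch of that picture, my own construction inspired by the paper's description rather than the authors' model: a queue of "flyers" coupled by one-way springs, in which a small nudge to the first follower amplifies as it travels toward the rear.

```python
import numpy as np

# Non-reciprocal chain: each follower is sprung toward a fixed spacing behind
# its leader, but exerts no force back on it. Integrated with semi-implicit
# Euler so the oscillations stay numerically stable.
n, k, dt, spacing = 8, 5.0, 0.01, 1.0
x = -spacing * np.arange(n, dtype=float)    # ideal queue positions, leader at 0
v = np.zeros(n)
x[1] += 0.05                                # perturb the first follower
peak = np.zeros(n)                          # largest displacement seen per flyer

for _ in range(3000):
    a = np.zeros(n)
    a[1:] = -k * (x[1:] - x[:-1] + spacing)    # one-way "spring" to the leader
    v += a * dt
    x += v * dt
    peak = np.maximum(peak, np.abs(x + spacing * np.arange(n)))

print(np.round(peak, 2))   # displacements grow sharply toward the back
```

Because each follower is driven at its own resonant frequency by the flyer ahead, the disturbance grows down the chain, which is the instability the authors describe for long queues.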

"Our findings therefore raise some interesting connections to material physics in which birds in an orderly flock are analogous to atoms in a regular crystal," Newbolt adds.

Read more at Science Daily

Why can't robots outrun animals?

Robotics engineers have worked for decades and invested many millions of research dollars in attempts to create a robot that can walk or run as well as an animal. And yet, it remains the case that many animals are capable of feats that would be impossible for robots that exist today.

"A wildebeest can migrate for thousands of kilometres over rough terrain, a mountain goat can climb up a literal cliff, finding footholds that don't even seem to be there, and cockroaches can lose a leg and not slow down," says Dr. Max Donelan, Professor in Simon Fraser University's Department of Biomedical Physiology and Kinesiology. "We have no robots capable of anything like this endurance, agility and robustness."

To understand why, and quantify how, robots lag behind animals, an interdisciplinary team of scientists and engineers from leading research universities completed a detailed study of various aspects of running robots, comparing them with their equivalents in animals, for a paper published in Science Robotics. The paper finds that, by the metrics engineers use, biological components performed surprisingly poorly compared to fabricated parts. Where animals excel, though, is in their integration and control of those components.

Alongside Donelan, the team comprised Drs. Sam Burden, Associate Professor in the Department of Electrical & Computer Engineering at the University of Washington; Tom Libby, Senior Research Engineer, SRI International; Kaushik Jayaram, Assistant Professor in the Paul M Rady Department of Mechanical Engineering at the University of Colorado Boulder; and Simon Sponberg, Dunn Family Associate Professor of Physics and Biological Sciences at the Georgia Institute of Technology.

The researchers each studied one of five different "subsystems" that combine to create a running robot -- Power, Frame, Actuation, Sensing, and Control -- and compared them with their biological equivalents. Previously, it was commonly accepted that animals' outperformance of robots must be due to the superiority of biological components.

"The way things turned out is that, with only minor exceptions, the engineering subsystems outperform the biological equivalents -- and sometimes radically outperformed them," says Libby. "But also what's very, very clear is that, if you compare animals to robots at the whole system level, in terms of movement, animals are amazing. And robots have yet to catch up."

More optimistically for the field of robotics, the researchers noted that, if you compare the relatively short time that robotics has had to develop its technology with the countless generations of animals that have evolved over many millions of years, the progress has actually been remarkably quick.

"It will move faster, because evolution is undirected," says Burden. "Whereas we can very much correct how we design robots and learn something in one robot and download it into every other robot, biology doesn't have that option. So there are ways that we can move much more quickly when we engineer robots than we can through evolution -- but evolution has a massive head start."

More than simply an engineering challenge, effective running robots offer countless potential uses. Whether solving 'last mile' delivery challenges in a world designed for humans that is often difficult to navigate for wheeled robots, carrying out searches in dangerous environments or handling hazardous materials, there are many potential applications for the technology.

Read more at Science Daily

Apr 25, 2024

Eruption of mega-magnetic star lights up nearby galaxy

While ESA's satellite INTEGRAL was observing the sky, it spotted a burst of gamma-rays -- high-energy photons -- coming from the nearby galaxy M82. Only a few hours later, ESA's XMM-Newton X-ray space telescope searched for an afterglow from the explosion but found none. An international team, including researchers from the University of Geneva (UNIGE), realised that the burst must have been an extra-galactic flare from a magnetar, a young neutron star with an exceptionally strong magnetic field. The discovery is published in the journal Nature.

On 15 November 2023, ESA's satellite INTEGRAL spotted a sudden explosion from a rare object. For only a tenth of a second, a short burst of energetic gamma-rays appeared in the sky. "The satellite data were received in the INTEGRAL Science Data Centre (ISDC), based at the Ecogia site of the UNIGE Astronomy Department, from where a gamma-ray burst alert was sent out to astronomers worldwide, only 13 seconds after its detection," explains Carlo Ferrigno, senior research associate in the Astronomy Department at UNIGE Faculty of Science, PI of the ISDC and co-author of the publication.

The IBAS (Integral Burst Alert System) software gave an automatic localisation coinciding with the galaxy M82, 12 million light-years away. This alert system was developed and is operated by scientists and engineers from the UNIGE in collaboration with international colleagues.

A curious signal from a nearby galaxy?

"We immediately realised that this was a special alert. Gamma-ray bursts come from far-away and anywhere in the sky, but this burst came from a bright nearby galaxy," explains Sandro Mereghetti of the National Institute for Astrophysics (INAF-IASF) in Milan, Italy, lead author of the publication and contributor of IBAS. The team immediately requested ESA's XMM-Newton space telescope to perform a follow-up observation of the burst's location as soon as possible. If this had been a short gamma-ray burst, caused by two colliding neutron stars, the collision would have created gravitational waves and have an afterglow in X-rays and visible light.

However, XMM-Newton's observations only showed the hot gas and stars in the galaxy. Using ground-based optical telescopes, including the Italian Telescopio Nazionale Galileo and the French Observatoire de Haute-Provence, they also looked for a signal in visible light, starting only a few hours after the explosion, but again did not find anything. With no signal in X-rays and visible light, and no gravitational waves measured by detectors on Earth (LIGO/VIRGO/KAGRA), the most likely explanation is that the signal came from a magnetar.

Magnetars: mega-magnetic stars, recently dead

"When stars more massive than eight times the Sun die, they explode in a supernova that leaves a black hole or neutron star behind. Neutron stars are very compact stellar remnants with more than the mass of the Sun packed into a sphere with the size of the Canton of Geneva. They rotate quickly and have strong magnetic fields." explains Volodymyr Savchenko, senior research associate in the Astronomy Department at UNIGE Faculty of Science, and co-author of the publication. Some young neutron stars have extra strong magnetic fields, more than 10,000 times that of typical neutron stars. These are called magnetars. They emit energy away in flares, and occasionally these flares are gigantic.

However, in the past 50 years of gamma-ray observations, only three giant flares have been identified as coming from magnetars in our galaxy. These outbursts are very strong: one, detected in December 2004, came from 30,000 light-years away yet was still powerful enough to affect the upper layers of Earth's atmosphere, much as solar flares, which originate far closer to us, do.

The flare detected by INTEGRAL is the first firm confirmation of a magnetar flare outside of the Milky Way. M82 is a bright galaxy where star formation takes place. In these regions, massive stars are born, live short turbulent lives and leave behind a neutron star. "The discovery of a magnetar in this region confirms that magnetars are likely young neutron stars," adds Volodymyr Savchenko. The search for more magnetars will continue in other extra-galactic star-forming regions, to understand these extraordinary astronomical objects. If astronomers can find many more, they can start to understand how often these flares happen and how neutron stars lose energy in the process.

INTEGRAL, a key instrument in a race against time


Outbursts of such short duration can only be captured serendipitously, when an observatory is already pointing in the right direction. This is what makes INTEGRAL, with its large field of view -- more than 3,000 times the area of sky covered by the Moon -- so important for these detections.

Read more at Science Daily

How light can vaporize water without the need for heat

It's the most fundamental of processes -- the evaporation of water from the surfaces of oceans and lakes, the burning off of fog in the morning sun, and the drying of briny ponds that leaves solid salt behind. Evaporation is all around us, and humans have been observing it and making use of it for as long as we have existed.

And yet, it turns out, we've been missing a major part of the picture all along.

In a series of painstakingly precise experiments, a team of researchers at MIT has demonstrated that heat isn't alone in causing water to evaporate. Light, striking the water's surface where air and water meet, can break water molecules away and float them into the air, causing evaporation in the absence of any source of heat.

The astonishing new discovery could have a wide range of significant implications. It could help explain mysterious measurements over the years of how sunlight affects clouds, and therefore affect calculations of the effects of climate change on cloud cover and precipitation. It could also lead to new ways of designing industrial processes such as solar-powered desalination or drying of materials.

The findings, and the many different lines of evidence that demonstrate the reality of the phenomenon and the details of how it works, are described in the journal PNAS, in a paper by Gang Chen, the Carl Richard Soderberg Professor of Power Engineering; postdocs Guangxin Lv and Yaodong Tu; and graduate student James Zhang.

The authors say their study suggests that the effect should happen widely in nature -- everywhere from clouds to fogs to the surfaces of oceans, soils, and plants -- and that it could also lead to new practical applications, including in energy and clean water production. "I think this has a lot of applications," Chen says. "We're exploring all these different directions. And of course, it also affects the basic science, like the effects of clouds on climate, because clouds are the most uncertain aspect of climate models."

A newfound phenomenon

The new work builds on research reported last year, which described this new "photomolecular effect" but only under very specialized conditions: on the surface of specially prepared hydrogels soaked with water. In the new study, the researchers demonstrate that the hydrogel is not necessary for the process; it occurs at any water surface exposed to light, whether it's a flat surface like a body of water or a curved surface like a droplet of cloud vapor.

Because the effect was so unexpected, the team worked to prove its existence with as many different lines of evidence as possible. In this study, they report 14 different kinds of tests and measurements they carried out to establish that water was indeed evaporating -- that is, molecules of water were being knocked loose from the water's surface and wafted into the air -- due to light alone, not heat, which was long assumed to be the only mechanism involved.

One key indicator, which showed up consistently in four different kinds of experiments under different conditions, was that as the water began to evaporate from a test container under visible light, the air temperature measured above the water's surface cooled down and then leveled off, showing that thermal energy was not the driving force behind the effect.

Other key indicators included the way the evaporation effect varied with the angle of the light, the exact color of the light, and its polarization. None of these dependencies should exist, because water hardly absorbs light at all at these wavelengths -- and yet the researchers observed them.

The effect is strongest when light hits the water surface at an angle of 45 degrees. It is also strongest with a certain type of polarization, called transverse magnetic polarization. And it peaks in green light -- which, oddly, is the color for which water is most transparent and thus interacts the least.

Chen and his co-researchers have proposed a physical mechanism that can explain the angle and polarization dependence of the effect, showing that the photons of light can impart a net force on water molecules at the water surface that is sufficient to knock them loose from the body of water. But they cannot yet account for the color dependence, which they say will require further study.

They have named this the photomolecular effect, by analogy with the photoelectric effect that was discovered by Heinrich Hertz in 1887 and finally explained by Albert Einstein in 1905. That effect was one of the first demonstrations that light also has particle characteristics, which had major implications in physics and led to a wide variety of applications, including LEDs. Just as the photoelectric effect liberates electrons from atoms in a material in response to being hit by a photon of light, the photomolecular effect shows that photons can liberate entire molecules from a liquid surface, the researchers say.

"The finding of evaporation caused by light instead of heat provides new disruptive knowledge of light-water interaction," says Xiulin Ruan, professor of mechanical engineering at Purdue University, who was not involved in the study. "It could help us gain new understanding of how sunlight interacts with cloud, fog, oceans, and other natural water bodies to affect weather and climate. It has significant potential practical applications such as high-performance water desalination driven by solar energy. This research is among the rare group of truly revolutionary discoveries which are not widely accepted by the community right away but take time, sometimes a long time, to be confirmed."

Solving a cloud conundrum


The finding may solve an 80-year-old mystery in climate science. Measurements of how clouds absorb sunlight have often shown that they are absorbing more sunlight than conventional physics dictates possible. The additional evaporation caused by this effect could account for the longstanding discrepancy, which has been a subject of dispute since such measurements are difficult to make.

"Those experiments are based on satellite data and flight data," Chen explains. "They fly an airplane on top of and below the clouds, and there are also data based on the ocean temperature and radiation balance. And they all conclude that there is more absorption by clouds than theory could calculate. However, due to the complexity of clouds and the difficulties of making such measurements, researchers have been debating whether such discrepancies are real or not. And what we discovered suggests that hey, there's another mechanism for cloud absorption, which was not accounted for, and this mechanism might explain the discrepancies."

Chen says he recently spoke about the phenomenon at an American Physical Society conference, and one physicist there who studies clouds and climate said they had never thought about this possibility, which could affect calculations of the complex effects of clouds on climate. The team conducted experiments using LEDs shining on an artificial cloud chamber, and they observed heating of the fog, which was not supposed to happen since water does not absorb in the visible spectrum. "Such heating can be explained based on the photomolecular effect more easily," he says.

Lv says that of the many lines of evidence, "the flat region in the air-side temperature distribution above hot water will be the easiest for people to reproduce." That temperature profile "is a signature" that demonstrates the effect clearly, he says.

Zhang adds: "It is quite hard to explain how this kind of flat temperature profile comes about without invoking some other mechanism" beyond the accepted theories of thermal evaporation. "It ties together what a whole lot of people are reporting in their solar desalination devices," which again show evaporation rates that cannot be explained by the thermal input.

The effect can be substantial. Under the optimum conditions of color, angle, and polarization, Lv says, "the evaporation rate is four times the thermal limit."

Already, since publication of the first paper, the team has been approached by companies that hope to harness the effect, Chen says, including for evaporating syrup and drying paper in a paper mill. The likeliest first applications will come in the areas of solar desalination systems or other industrial drying processes, he says. "Drying consumes 20 percent of all industrial energy usage," he points out.

Read more at Science Daily

Bioluminescence first evolved in animals at least 540 million years ago

Bioluminescence first evolved in animals at least 540 million years ago in a group of marine invertebrates called octocorals, according to the results of a new study from scientists with the Smithsonian's National Museum of Natural History.

The results, published today, April 23, in the Proceedings of the Royal Society B, push back the previous record for the luminous trait's oldest dated emergence in animals by nearly 300 million years, and could one day help scientists decode why the ability to produce light evolved in the first place.

Bioluminescence -- the ability of living things to produce light via chemical reactions -- has independently evolved at least 94 times in nature and is involved in a huge range of behaviors including camouflage, courtship, communication and hunting. Until now, the earliest dated origin of bioluminescence in animals was thought to be around 267 million years ago in small marine crustaceans called ostracods.

But for a trait that is literally illuminating, bioluminescence's origins have remained shadowy.

"Nobody quite knows why it first evolved in animals," said Andrea Quattrini, the museum's curator of corals and senior author on the study.

But for Quattrini and lead author Danielle DeLeo, a museum research associate and former postdoctoral fellow, to eventually tackle the larger question of why bioluminescence evolved, they needed to know when the ability first appeared in animals.

In search of the trait's earliest origins, the researchers decided to peer back into the evolutionary history of the octocorals, an evolutionarily ancient and frequently bioluminescent group of animals that includes soft corals, sea fans and sea pens. Like hard corals, octocorals are tiny colonial polyps that secrete a framework that becomes their refuge, but unlike their stony relatives, that structure is usually soft. Octocorals that glow typically only do so when bumped or otherwise disturbed, leaving the precise function of their ability to produce light a bit mysterious.

"We wanted to figure out the timing of the origin of bioluminescence, and octocorals are one of the oldest groups of animals on the planet known to bioluminesce," DeLeo said. "So, the question was when did they develop this ability?"

Not coincidentally, Quattrini and Catherine McFadden with Harvey Mudd College had completed an extremely detailed, well-supported evolutionary tree of the octocorals in 2022. Quattrini and her collaborators created this map of evolutionary relationships, or phylogeny, using genetic data from 185 species of octocorals.

With this evolutionary tree grounded in genetic evidence, DeLeo and Quattrini then situated two octocoral fossils of known ages within the tree according to their physical features. The scientists used the fossils' ages and their respective positions in the octocoral evolutionary tree to figure out roughly when octocoral lineages split apart to become two or more branches. Next, the team mapped out the branches of the phylogeny that featured living bioluminescent species.

With the evolutionary tree dated and the branches that contained luminous species labeled, the team then used a series of statistical techniques to perform an analysis called ancestral state reconstruction.

"If we know these species of octocorals living today are bioluminescent, we can use statistics to infer whether their ancestors were highly probable to be bioluminescent or not," Quattrini said. "The more living species with the shared trait, the higher the probability that as you move back in time that those ancestors likely had that trait as well."

The researchers used numerous different statistical methods for their ancestral state reconstruction, but all arrived at the same result: some 540 million years ago, the common ancestor of all octocorals was very likely bioluminescent. That is 273 million years earlier than the glowing ostracod crustaceans that previously held the title of earliest evolution of bioluminescence in animals.
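The study used model-based reconstruction across the 185-species tree; as a toy illustration of the underlying logic, here is the simplest form of ancestral-state inference, Fitch parsimony, applied to a made-up four-tip tree.

```python
# 1 = bioluminescent, 0 = not. Tree topology: ((A,B),(C,D)). Hypothetical data.
tips = {"A": {1}, "B": {1}, "C": {1}, "D": {0}}

def fitch(left, right):
    """Intersect the child state sets if possible; otherwise take the union."""
    both = left & right
    return both if both else left | right

node_ab = fitch(tips["A"], tips["B"])   # {1}
node_cd = fitch(tips["C"], tips["D"])   # {0, 1} -- ambiguous
root = fitch(node_ab, node_cd)          # {1}: a luminous root is most parsimonious
print("inferred root state:", root)
```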

DeLeo and Quattrini said that the octocorals' thousands of living representatives and the group's relatively high incidence of bioluminescence suggest the trait has played a role in its evolutionary success. While this raises the further question of what exactly octocorals are using bioluminescence for, the researchers said the fact that it has been retained for so long highlights how important this form of communication has become for their fitness and survival.

Now that the researchers know the common ancestor of all octocorals likely already had the ability to produce its own light, they are interested in a more thorough accounting of which of the group's more than 3,000 living species can still light up and which have lost the trait. This could help zero in on a set of ecological circumstances that correlate with the ability to bioluminesce and potentially illuminate its function.

To this end, DeLeo said she and some of her co-authors are working on creating a genetic test to determine if an octocoral species has functional copies of the genes underlying luciferase, an enzyme involved in bioluminescence. For species of unknown luminosity, such a test would enable researchers to get an answer one way or the other more rapidly and more easily.

Aside from shedding light on the origins of bioluminescence, this study also offers evolutionary context and insight that can inform monitoring and management of these corals today. Octocorals are threatened by climate change and resource-extraction activities, particularly fishing, oil and gas extraction and spills, and more recently by marine mineral mining.

This research supports the museum's Ocean Science Center, which aims to advance and share knowledge of the ocean with the world. DeLeo and Quattrini said there is still much more to learn before scientists can understand why the ability to produce light first evolved, and though their results place its origins deep in evolutionary time, the possibility remains that future studies will discover that bioluminescence is even more ancient.

Read more at Science Daily

Holographic displays offer a glimpse into an immersive future

Setting the stage for a new era of immersive displays, researchers are one step closer to mixing the real and virtual worlds in an ordinary pair of eyeglasses using high-definition 3D holographic images, according to a study led by Princeton University researchers.

Holographic images have real depth because they are three dimensional, whereas monitors merely simulate depth on a 2D screen. Because we see in three dimensions, holographic images could be integrated seamlessly into our normal view of the everyday world.

The result is a virtual and augmented reality display that has the potential to be truly immersive, the kind where you can move your head normally and never lose the holographic images from view. "To get a similar experience using a monitor, you would need to sit right in front of a cinema screen," said Felix Heide, assistant professor of computer science and senior author on a paper published April 22 in Nature Communications.

And you wouldn't need to wear a screen in front of your eyes to get this immersive experience. Optical elements required to create these images are tiny and could potentially fit on a regular pair of glasses. Virtual reality displays that use a monitor, as current displays do, require a full headset. And they tend to be bulky because they need to accommodate a screen and the hardware necessary to operate it.

"Holography could make virtual and augmented reality displays easily usable, wearable and ultrathin," said Heide. They could transform how we interact with our environments, everything from getting directions while driving, to monitoring a patient during surgery, to accessing plumbing instructions while doing a home repair.

One of the most important challenges is quality. Holographic images are created by a small chip-like device called a spatial light modulator. Until now, these modulators could only create images that are either small and clear or large and fuzzy. This tradeoff between image size and clarity results in a narrow field of view, too narrow to give the user an immersive experience. "If you look towards the corners of the display, the whole image may disappear," said Nathan Matsuda, research scientist at Meta and co-author on the paper.

Heide, Matsuda and Ethan Tseng, doctoral student in computer science, have created a device to improve image quality and potentially solve this problem. Along with their collaborators, they built a second optical element to work in tandem with the spatial light modulator. Their device filters the light from the spatial light modulator to expand the field of view while preserving the stability and fidelity of the image. It creates a larger image with only a minimal drop in quality.

Image quality has been a core challenge preventing the practical applications of holographic displays, said Matsuda. "The research brings us one step closer to resolving this challenge," he said.

The new optical element is like a very small custom-built piece of frosted glass, said Heide. The pattern etched into the frosted glass is the key. Designed using AI and optical techniques, the etched surface scatters light created by the spatial light modulator in a very precise way, pushing some elements of an image into frequency bands that are not easily perceived by the human eye. This improves the quality of the holographic image and expands the field of view.

Read more at Science Daily

Apr 24, 2024

Researchers find oldest undisputed evidence of Earth's magnetic field

A new study, led by the University of Oxford and MIT, has recovered a 3.7-billion-year-old record of Earth's magnetic field, and found that it appears remarkably similar to the field surrounding Earth today. The findings have been published today in the Journal of Geophysical Research.

Without its magnetic field, life on Earth would not be possible, since it shields us from harmful cosmic radiation and charged particles emitted by the Sun (the 'solar wind'). But up to now, there has been no reliable date for when the modern magnetic field was first established.

In the new study, the researchers examined an ancient sequence of iron-containing rocks from Isua, Greenland. Iron particles effectively act as tiny magnets that can record both magnetic field strength and direction when the process of crystallization locks them in place. The researchers found that rocks dating from 3.7 billion years ago captured a magnetic field strength of at least 15 microtesla, comparable to the modern magnetic field (30 microtesla).

These results provide the oldest estimate of the strength of Earth's magnetic field derived from whole rock samples, which provide a more accurate and reliable assessment than previous studies which used individual crystals.

Lead researcher Professor Claire Nichols (Department of Earth Sciences, University of Oxford) said: 'Extracting reliable records from rocks this old is extremely challenging, and it was really exciting to see primary magnetic signals begin to emerge when we analysed these samples in the lab. This is a really important step forward as we try and determine the role of the ancient magnetic field when life on Earth was first emerging.'

Whilst the magnetic field strength appears to have remained relatively constant, the solar wind is known to have been significantly stronger in the past. This suggests that the protection of Earth's surface from the solar wind has increased over time, which may have allowed life to move onto the continents and leave the protection of the oceans.

Earth's magnetic field is generated by the mixing of molten iron in the fluid outer core, driven by buoyancy forces as the inner core solidifies, which creates a dynamo. During Earth's early formation, the solid inner core had not yet formed, leaving open questions about how the early magnetic field was sustained. These new results suggest the mechanism driving Earth's early dynamo was similarly efficient to the solidification process that generates Earth's magnetic field today.

Understanding how Earth's magnetic field strength has varied over time is also key for determining when Earth's inner, solid core began to form. This will help us to understand how rapidly heat is escaping from Earth's deep interior, which is key for understanding processes such as plate tectonics.

A significant challenge in reconstructing Earth's magnetic field so far back in time is that any event which heats the rock can alter preserved signals. Rocks in the Earth's crust often have long and complex geological histories which erase previous magnetic field information. However, the Isua Supracrustal Belt has a unique geology, sitting on top of thick continental crust which protects it from extensive tectonic activity and deformation. This allowed the researchers to build a clear body of evidence supporting the existence of the magnetic field 3.7 billion years ago.

The results may also provide new insights into the role of our magnetic field in shaping the development of Earth's atmosphere as we know it, particularly regarding atmospheric escape of gases. A currently unexplained phenomenon is the loss of the unreactive gas xenon from our atmosphere more than 2.5 billion years ago. Xenon is relatively heavy and therefore unlikely to have simply drifted out of our atmosphere. Recently, scientists have begun to investigate the possibility that charged xenon particles were removed from the atmosphere by the magnetic field.

Read more at Science Daily

Asian monsoon lofts ozone-depleting substances to stratosphere

Powerful monsoon winds, strengthened by a warming climate, are lofting unexpectedly large quantities of ozone-depleting substances high into the atmosphere over East Asia, new research shows.

The study, led by the U.S. National Science Foundation National Center for Atmospheric Research (NSF NCAR) and NASA, found that the East Asian Monsoon delivers more than twice the previously reported concentration of very short-lived ozone-depleting substances into the upper troposphere and lower stratosphere.

The research team drew on airborne observations taken during a major 2022 Asian field campaign: the Asian Summer Monsoon Chemistry and Climate Impact Project (ACCLIP). The findings raise questions about the pace of the recovery of the ozone layer, which shields Earth from the Sun's harmful ultraviolet radiation.

"It was a real surprise to fly through a plume with all those very short-lived ozone-depleting substances," said NSF NCAR scientist Laura Pan, the lead author of the study. "These chemicals may have a significant impact on what will happen with the ozone layer, and it's critical to quantify them."

The study was published in the Proceedings of the National Academy of Sciences. It was funded by NSF, NASA, and NOAA, and co-authored by a large team of international scientists.

The role of monsoons

For thousands of years, people have viewed the Asian summer monsoon as important because of its impacts on local communities. Recently, however, scientists analyzing satellite observations have begun discovering that monsoon storms and winds play an additional role: carrying pollutants high in the atmosphere, where they can influence the world's climate system.

ACCLIP investigated the chemical content of air that was borne by the two primary monsoons in the region -- the South and the East Asian Monsoon -- from Earth's surface to as high up as the stratosphere. Once at that altitude, the chemicals can have far-reaching climate impacts because air in the stratosphere spreads out globally and remains for months to years, unlike the lower atmosphere where air masses turn over weekly.

The ACCLIP observations revealed that the East Asian Monsoon delivered higher levels of pollutants to the upper atmosphere than the South Asian Monsoon during 2022. The scientists measured carbon monoxide levels of up to 320 parts per billion -- a remarkably high level to be found at an altitude of 15 kilometers (about 9 miles). Carbon monoxide is often a sign of industrial pollution, and the measurements indicated that the East Asian Monsoon was closely aligned with emissions of pollutants at the surface.

Pan, Elliot Atlas of the University of Miami, and their co-authors looked into a class of chemicals known as very short-lived organic chlorine compounds, which can destroy ozone but persist only for a relatively short time in the atmosphere (months to years). In contrast, ozone-depleting chlorofluorocarbons (CFCs) remain in the atmosphere for decades to centuries or more and are therefore viewed as a far more significant threat to the ozone layer.

For that reason, the landmark 1987 Montreal Protocol on Substances that Deplete the Ozone Layer focused on phasing out CFCs and other long-lived substances. The international treaty and subsequent revisions have enabled stratospheric ozone to begin recovering. A 2022 United Nations assessment concluded that the ozone layer, including an ozone hole over the Antarctic, will be largely restored over the next several decades.

The Montreal Protocol, however, did not limit the continued manufacture and use of very short-lived ozone-depleting substances. Emissions of these chemicals have soared in South and East Asia, including highly industrialized regions of East China.

In an unfortunate coincidence, those regions lie directly under the East Asian Monsoon, which, of the world's eight regional monsoons, is the one that is predicted to strengthen the most with global warming.

The combination of the monsoon's powerful updrafts and the increasing emissions of short-lived chlorine compounds in the same region has resulted in unexpectedly high quantities of the chemicals being swept into the stratosphere.

The analysis of the aircraft measurements by Pan and her co-authors revealed high levels of five short-lived chlorine compounds: dichloromethane (CH2Cl2), chloroform (CHCl3), 1,2-dichloroethane (C2H4Cl2), tetrachloroethene (C2Cl4), and 1,2-dichloropropane (C3H6Cl2).

Pan said more research is needed to analyze the potential implications for ozone recovery. The paper also notes that scientists will need to incorporate the new findings into climate models, as stratospheric ozone has complex effects on Earth's temperature.

Read more at Science Daily

This salt battery harvests osmotic energy where the river meets the sea

Estuaries -- where freshwater rivers meet the salty sea -- are great locations for birdwatching and kayaking. In these areas, waters containing different salt concentrations mix and may be sources of sustainable, "blue" osmotic energy. Researchers in ACS Energy Letters report creating a semipermeable membrane that harvests osmotic energy from salt gradients and converts it to electricity. The new design had an output power density more than two times higher than commercial membranes in lab demonstrations.

Osmotic energy can be generated anywhere salt gradients are found, but the available technologies to capture this renewable energy have room for improvement. One method uses an array of reverse electrodialysis (RED) membranes that act as a sort of "salt battery," generating electricity from pressure differences caused by the salt gradient. To even out that gradient, positively charged ions from seawater, such as sodium, flow through the system to the freshwater, increasing the pressure on the membrane. To further increase its harvesting power, the membrane also needs to keep a low internal electrical resistance by allowing electrons to easily flow in the opposite direction of the ions. Previous research suggests that improving both the flow of ions across the RED membrane and the efficiency of electron transport would likely increase the amount of electricity captured from osmotic energy. So, Dongdong Ye, Xingzhen Qin and colleagues designed a semipermeable membrane from environmentally friendly materials that would theoretically minimize internal resistance and maximize output power.
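For a sense of scale, the open-circuit voltage available from such a salt gradient is set by the Nernst equation. Below is a minimal Python sketch for an ideal membrane permeable only to a monovalent ion; the seawater and river-water concentrations are typical textbook values, not measurements from this study.

```python
import math

# Physical constants
R = 8.314    # gas constant, J/(mol*K)
T = 298.15   # temperature, K (25 C)
F = 96485.0  # Faraday constant, C/mol

# Illustrative salt concentrations (mol/L) -- assumptions, not study values
c_sea = 0.5     # typical seawater NaCl
c_river = 0.01  # typical river water

# Nernst potential for a monovalent ion (z = 1) across an ideal
# permselective membrane; real membranes reach only a fraction of this
z = 1
E = (R * T) / (z * F) * math.log(c_sea / c_river)
print(f"Ideal membrane potential: {E * 1000:.0f} mV")  # roughly 100 mV
```

Real membranes recover only a fraction of this ideal per-membrane voltage, which is why practical devices stack many membranes in series, as in the 20-membrane array described below.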

The researchers' RED membrane prototype contained separate (i.e., decoupled) channels for ion transport and electron transport. They created this by sandwiching a negatively charged cellulose hydrogel (for ion transport) between layers of an organic, electrically conductive polymer called polyaniline (for electron transport). Initial tests confirmed their theory that decoupled transport channels resulted in higher ion conductivity and lower resistivity compared to homogenous membranes made from the same materials. In a water tank that simulated an estuary environment, their prototype achieved an output power density 2.34 times higher than a commercial RED membrane and maintained performance during 16 days of non-stop operation, demonstrating its long-term, stable performance underwater. In a final test, the team created a salt battery array from 20 of their RED membranes and generated enough electricity to individually power a calculator, LED light and stopwatch.

Read more at Science Daily

Researchers create artificial cells that act like living cells

In a new study published in Nature Chemistry, UNC-Chapel Hill researcher Ronit Freeman and her colleagues describe the steps they took to manipulate DNA and proteins -- essential building blocks of life -- to create cells that look and act like cells from the body. This accomplishment, a first in the field, has implications for efforts in regenerative medicine, drug delivery systems, and diagnostic tools.

"With this discovery, we can think of engineering fabrics or tissues that can be sensitive to changes in their environment and behave in dynamic ways," says Freeman, whose lab is in the Applied Physical Sciences Department of the UNC College of Arts and Sciences.

Cells and tissues are made of proteins that come together to perform tasks and make structures. Proteins are essential for forming the framework of a cell, called the cytoskeleton. Without it, cells wouldn't be able to function. The cytoskeleton allows cells to be flexible, both in shape and in response to their environment.

Without using natural proteins, the Freeman Lab built cells with functional cytoskeletons that can change shape and react to their surroundings. To do this, they used a new programmable peptide-DNA technology that directs peptides, the building blocks of proteins, and repurposed genetic material to work together to form a cytoskeleton.

"DNA does not normally appear in a cytoskeleton," Freeman says. "We reprogrammed sequences of DNA so that it acts as an architectural material, binding the peptides together. Once this programmed material was placed in a droplet of water, the structures took shape."

The ability to program DNA in this way means scientists can create cells to serve specific functions and even fine-tune a cell's response to external stressors. While living cells are more complex than the synthetic ones created by the Freeman Lab, they are also more unpredictable and more susceptible to hostile environments, like severe temperatures.

"The synthetic cells were stable even at 122 degrees Fahrenheit, opening up the possibility of manufacturing cells with extraordinary capabilities in environments normally unsuitable to human life," Freeman says.

Instead of creating materials that are made to last, Freeman says their materials are made to task -- perform a specific function and then modify themselves to serve a new function. Their application can be customized by adding different peptide or DNA designs to program cells in materials like fabrics or tissues. These new materials can integrate with other synthetic cell technologies, all with potential applications that could revolutionize fields like biotechnology and medicine.

Read more at Science Daily

Apr 23, 2024

To find life in the universe, look to deadly Venus

Despite surface temperatures hot enough to melt lead, lava-spewing volcanoes, and puffy clouds of sulfuric acid, uninhabitable Venus offers vital lessons about the potential for life on other planets, a new paper argues.

"We often assume that Earth is the model of habitability, but if you consider this planet in isolation, we don't know where the boundaries and limitations are," said UC Riverside astrophysicist and paper first author Stephen Kane. "Venus gives us that."

Published today in the journal Nature Astronomy, the paper compiles much of the known information about Earth and Venus. It also describes Venus as an anchor point from which scientists can better understand the conditions that preclude life on planets around other stars.

Though Venus also features a pressure cooker-like atmosphere that would instantly flatten a human, it shares some similarities with Earth: the two planets have roughly the same mass and radius. Given their proximity, it's natural to wonder why Earth turned out so differently.

Many scientists assume that insolation flux, the amount of energy Venus receives from the sun, caused a runaway greenhouse situation that ruined the planet.

"If you consider the solar energy received by Earth as 100%, Venus collects 191%. A lot of people think that's why Venus turned out differently," Kane said. "But hold on a second. Venus doesn't have a moon, which is what gives Earth things like ocean tides and influenced the amount of water here."

In addition to some of the known differences, more NASA missions to Venus would help clear up some of the unknowns. Scientists don't know the size of its core, how it got to its present, relatively slow rotation rate, how its magnetic field changed over time, or anything about the chemistry of the lower atmosphere.

"Venus doesn't have a detectable magnetic field. That could be related to the size of its core," Kane said. "Core size also give us information about how a planet cools itself. Earth has a mantle circulating heat from its core. We don't know what's happening inside Venus."

A terrestrial planet's interior also influences its atmosphere. That is the case on Earth, where our atmosphere is largely the result of volcanic outgassing.

NASA does have twin missions to Venus planned for the end of this decade, and Kane is assisting with both of them. The DAVINCI mission will probe the acid-filled atmosphere to measure noble gases and other chemical elements.

"DAVINCI will measure the atmosphere all the way from the top to the bottom. That will really help us build new climate models and predict these kinds of atmospheres elsewhere, including on Earth, as we keep increasing the amount of CO2," Kane said.

The VERITAS mission, led by NASA's Jet Propulsion Laboratory, won't land on the surface but it will allow scientists to create detailed 3D landscape reconstructions, revealing whether the planet has active plate tectonics or volcanoes.

"Currently, our maps of the planet are very incomplete. It's very different to understand how active the surface is, versus how it may have changed through time. We need both kinds of information," Kane said.

Ultimately, the paper advocates for missions like these to Venus for two main reasons. One is the ability, with better data, to use Venus to ensure inferences about life on farther-flung planets are correct.

"The sobering part of the search for life elsewhere in the universe is that we're never going to have in situ data for an exoplanet. We aren't going there, landing, or taking direct measurements of them," Kane said.

"If we think another planet has life on the surface, we might not ever know we're wrong, and we'd be dreaming about a planet with life that doesn't have it. We are only going to get that right by properly understanding the Earth-size planets we can visit, and Venus gives us that chance."

The other reason to research Venus is that it offers a preview of what Earth's future could look like.

Read more at Science Daily

World's oases threatened by desertification, even as humans expand them

Oases are important habitats and water sources for dryland regions, sustaining 10% of the world's population despite taking up about 1.5% of land area. But in many places, climate change and anthropogenic activities threaten oases' fragile existence. New research shows how the world's oases have grown and shrunk over the past 25 years as water availability patterns changed and desertification encroached on these wet refuges.

"Although the scientific community has always emphasized the importance of oases, there has not been a clear map of the global distribution of oases," said Dongwei Gui, a geoscientist at the Chinese Academy of Science who led the study. "Oasis research has both theoretical and practical significance for achieving United Nations Sustainable Development Goals and promoting sustainable development in arid regions."

The study found that oases around the world grew by more than 220,000 square kilometers (about 85,000 square miles) from 1995 to 2020, mostly due to intentional oasis expansion projects in Asia. But desertification drove the loss of 134,300 square kilometers (51,854 square miles) of oasis over the same period, also mostly in Asia, leading to a net growth of 86,500 square kilometers (about 33,400 square miles) over the study period.

The findings highlight the risk climate change and anthropogenic stressors pose to these wet sanctuaries and can inform water resource management and sustainable development in arid regions. The study was published in the AGU journal Earth's Future, which publishes interdisciplinary research on the past, present and future of our planet and its inhabitants.

The birth and death of an oasis

Oases are important sources of water for humans, plants and animals in the world's drylands, supporting a majority of productivity and life in deserts. They form when groundwater flows and settles into low-lying areas, or when surface meltwater flows downslope from adjacent mountain ranges and pools. The existence of an oasis depends primarily on having a reliable source of water that is not rainfall. Today, oases are found in 37 countries; 77% of oases are located in Asia, and 13% are found in Australia.

Gui and his co-investigators wanted to understand the global distribution and dynamic changes of oases and see how they respond to a changing environment, such as variations in climate, water resources and human activities. Using data from the European Space Agency's Climate Change Initiative Land Cover Product, the team categorized the land surface into seven categories: forest, grassland, shrub, cropland, water, urban and desert.

The researchers used satellite data to look for green, vegetated areas within dryland areas, indicating an oasis, and tracked changes over 25 years. Changes in the greenness of vegetation indicated changes in land use and oasis health, the latter of which can be influenced by both human activity and climate change. They also looked at changes in land surface type to find conversions of land use.
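The sketch below illustrates the flavor of this kind of change detection, though not the team's actual pipeline: it compares two toy land-cover grids and tallies desert pixels that became vegetated (oasis expansion) against vegetated pixels that became desert (desertification). The category codes, grids, and pixel area are invented for the example.

```python
import numpy as np

# Hypothetical land-cover codes (the study's actual classes differ):
# 0 = desert, 1 = grassland, 2 = shrub, 3 = cropland, 4 = forest
VEGETATED = [1, 2, 3, 4]

# Toy rasters for 1995 and 2020; in practice these would come from the
# ESA Climate Change Initiative Land Cover Product.
lc_1995 = np.array([[0, 0, 1],
                    [0, 3, 3],
                    [1, 0, 0]])
lc_2020 = np.array([[1, 0, 1],
                    [0, 3, 0],
                    [1, 3, 0]])

PIXEL_KM2 = 0.09  # assumed pixel area for ~300 m cells (0.3 km x 0.3 km)

veg_1995 = np.isin(lc_1995, VEGETATED)
veg_2020 = np.isin(lc_2020, VEGETATED)

expansion = (~veg_1995 & veg_2020).sum() * PIXEL_KM2        # desert -> oasis
desertified = (veg_1995 & ~veg_2020).sum() * PIXEL_KM2      # oasis -> desert

print(f"gained {expansion:.2f} km^2, lost {desertified:.2f} km^2, "
      f"net {expansion - desertified:+.2f} km^2")
```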

The researchers found that global oasis area increased by 220,800 square kilometers (85,251 square miles) over the 25-year timeframe. Most of that increase was from humans intentionally converting desert land into oases using runoff water and groundwater pumping, creating grasslands and croplands. The increase was concentrated in China, where management efforts have contributed more than 60% of the growth, Gui said. For example, more than 95% of the population in China's Xinjiang Uygur Autonomous Region lives within an oasis, motivating conservation and a 16,700 square kilometer (6,448 square mile) expansion of the oasis, Gui said.

Countering human efforts to expand oases, desertification contributed to oasis loss. Worldwide, the researchers found there was a loss of more than 134,000 square kilometers (51,738 square miles) of oasis land over the past 25 years. The researchers estimate that changes to oases have directly affected about 34 million people around the world.

Overall, between gains and losses, oases had a net growth of 86,500 square kilometers (33,397 square miles) from 1995 to 2020 -- but most gains were from the artificial expansion of oases, which may not be sustainable in the future.
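The bookkeeping is easy to verify from the reported figures, including the conversion to square miles:

```python
# Reported global oasis gains and losses, 1995-2020 (square kilometers)
gained_km2 = 220_800
lost_km2 = 134_300

net_km2 = gained_km2 - lost_km2  # 86,500 km^2, as reported
KM2_TO_MI2 = 0.3861              # 1 km^2 is about 0.3861 mi^2
print(f"Net change: {net_km2:,} km^2 ({net_km2 * KM2_TO_MI2:,.0f} mi^2)")
# -> about 33,400 square miles, matching the study's net figure
```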

Long-term oasis sustainability

The study highlighted ways to sustain healthy oases, including suggestions for improving water resource management, promoting sustainable land use and management, and encouraging water conservation and efficient use. These efforts are especially important as the climate continues to change, Gui said.

Overexploitation of dwindling groundwater can limit oasis sustainability, as can long-term glacier loss. While higher temperatures increase glacier melt, temporarily boosting oases' water supplies, "as glaciers gradually disappear, the yield of meltwater will eventually decrease, leading to the shrinkage of oases once again," Gui said.

International cooperation plays a crucial role in oasis sustainability, Gui said.

"Due to the unique mechanism of oasis formation, a river basin often nurtures multiple oases across several countries, making transboundary cooperation key to addressing water scarcity and promoting sustainable development," he said.

Read more at Science Daily

This alloy is kinky

Researchers have uncovered a remarkable metal alloy that won't crack at extreme temperatures due to kinking, or bending, of crystals in the alloy at the atomic level. The alloy, composed of niobium, tantalum, titanium, and hafnium, has shocked materials scientists with its impressive strength and toughness at both extremely hot and cold temperatures, a combination of properties that had seemed nearly impossible to achieve. In this context, strength is defined as how much force a material can withstand before it is permanently deformed from its original shape, and toughness is its resistance to fracturing (cracking). The alloy's resilience to bending and fracture across an enormous range of conditions could open the door to a novel class of materials for next-generation engines that can operate at higher efficiencies.

The team, led by Robert Ritchie at Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley, in collaboration with the groups led by professors Diran Apelian at UC Irvine and Enrique Lavernia at Texas A&M University, discovered the alloy's surprising properties and then figured out how they arise from interactions in the atomic structure. Their work is described in a study that was published April 11, 2024 in Science.

"The efficiency of converting heat to electricity or thrust is determined by the temperature at which fuel is burned -- the hotter, the better. However, the operating temperature is limited by the structural materials which must withstand it," said first author David Cook, a Ph.D. student in Ritchie's lab. "We have exhausted the ability to further optimize the materials we currently use at high temperatures, and there's a big need for novel metallic materials. That's what this alloy shows promise in."

The alloy in this study is from a new class of metals known as refractory high or medium entropy alloys (RHEAs/RMEAs). Most of the metals we see in commercial or industrial applications are alloys made of one main metal mixed with small quantities of other elements, but RHEAs and RMEAs are made by mixing near-equal quantities of metallic elements with very high melting temperatures, which gives them unique properties that scientists are still unraveling. Ritchie's group has been investigating these alloys for several years because of their potential for high-temperature applications.

"Our team has done previous work on RHEAs and RMEAs and we have found that these materials are very strong, but generally possess extremely low fracture toughness, which is why we were shocked when this alloy displayed exceptionally high toughness," said co-corresponding author Punit Kumar, a postdoctoral researcher in the group.

According to Cook, most RMEAs have a fracture toughness of less than 10 MPa√m, which makes them some of the most brittle metals on record. The best cryogenic steels, specially engineered to resist fracture, are about 20 times tougher than these materials. Yet the niobium, tantalum, titanium, and hafnium RMEA (Nb45Ta25Ti15Hf15) beat even the cryogenic steel, clocking in at over 25 times tougher than typical RMEAs at room temperature.
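Putting those multipliers on a single scale (a rough sanity check using only the figures quoted above; actual measured values vary):

```python
# Fracture toughness comparison in MPa*sqrt(m), from the multipliers above
typical_rmea = 10               # most RMEAs fall below this
cryo_steel = 20 * typical_rmea  # best cryogenic steels: ~20x tougher
nbtatihf = 25 * typical_rmea    # Nb45Ta25Ti15Hf15 at room temp: >25x

for name, k in [("typical RMEA", typical_rmea),
                ("cryogenic steel", cryo_steel),
                ("Nb45Ta25Ti15Hf15", nbtatihf)]:
    print(f"{name:>17}: ~{k} MPa*sqrt(m)")
```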

But engines don't operate at room temperature. The scientists evaluated strength and toughness at five temperatures total: -196°C (the temperature of liquid nitrogen), 25°C (room temperature), 800°C, 950°C, and 1200°C. The last temperature is about 1/5 the surface temperature of the sun.

The team found that the alloy had the highest strength in the cold and became slightly weaker as the temperature rose, but still boasted impressive figures throughout the wide range. The fracture toughness, which is calculated from how much force it takes to propagate an existing crack in a material, was high at all temperatures.

Unraveling the atomic arrangements

Almost all metallic alloys are crystalline, meaning that the atoms inside the material are arranged in repeating units. However, no crystal is perfect; they all contain defects. The most prominent mobile defect is called a dislocation, an unfinished plane of atoms in the crystal. When force is applied to a metal, it causes many dislocations to move to accommodate the shape change. For example, when you bend a paper clip, the movement of dislocations inside the metal accommodates the bending. However, dislocation movement becomes more difficult at lower temperatures, and as a result many materials become brittle in the cold because their dislocations cannot move. This is why the steel hull of the Titanic fractured when it hit an iceberg. Elements with high melting temperatures and their alloys take this to the extreme, with many remaining brittle up to even 800°C. This RMEA bucks the trend, resisting fracture even at temperatures as low as that of liquid nitrogen (-196°C).

To understand what was happening inside the remarkable metal, co-investigator Andrew Minor and his team analyzed the stressed samples, alongside unbent and uncracked control samples, using four-dimensional scanning transmission electron microscopy (4D-STEM) and scanning transmission electron microscopy (STEM) at the National Center for Electron Microscopy, part of Berkeley Lab's Molecular Foundry.

The electron microscopy data revealed that the alloy's unusual toughness comes from an unexpected side effect of a rare defect called a kink band. Kink bands form in a crystal when an applied force causes strips of the crystal to collapse on themselves and abruptly bend. The direction in which the crystal bends in these strips increases the force that dislocations feel, causing them to move more easily. On the bulk level, this phenomenon causes the material to soften (meaning that less force has to be applied to the material as it is deformed). The team knew from past research that kink bands formed easily in RMEAs, but assumed that the softening effect would make the material less tough by making it easier for a crack to spread through the lattice. But in reality, this is not the case.

"We show, for the first time, that in the presence of a sharp crack between atoms, kink bands actually resist the propagation of a crack by distributing damage away from it, preventing fracture and leading to extraordinarily high fracture toughness," said Cook.

The Nb45Ta25Ti15Hf15 alloy will need to undergo a lot more fundamental research and engineering testing before anything like a jet plane turbine or SpaceX rocket nozzle is made from it, said Ritchie, because mechanical engineers rightfully require a deep understanding of how their materials perform before they use them in the real world. However, this study indicates that the metal has potential to build the engines of the future.

Read more at Science Daily

Breakthrough rice bran nanoparticles show promise as affordable and targeted anticancer agent

Plant-derived nanoparticles have demonstrated significant anticancer effects. Researchers recently developed rice bran-derived nanoparticles (rbNPs) that efficiently suppressed cell proliferation and induced programmed cell death in cancer cells only. Furthermore, rbNPs successfully suppressed the growth of tumors in mice with aggressive adenocarcinoma in their peritoneal cavity, without any adverse effects. Given their low production costs and high efficacy, rbNPs hold great promise for developing affordable and safe anticancer agents.

Several types of conventional cancer therapies, such as radiotherapy or chemotherapy, destroy healthy cells along with cancer cells. In advanced stages of cancer, tissue loss from treatments can be substantial and even fatal. Cutting-edge cancer therapies that employ nanoparticles can specifically target cancer cells, sparing healthy tissue. Recent studies have demonstrated that plant-derived nanoparticles (pdNPs) with therapeutic effects can be an effective alternative to traditional cancer treatments. However, no pdNPs have been approved as anticancer therapeutic agents to date.

Rice bran is a byproduct generated during the rice refining process that has limited utility and low commercial value. However, it contains several compounds with anticancer properties, such as γ-oryzanol and γ-tocotrienol. To explore these therapeutic properties, a team of researchers led by Professor Makiya Nishikawa from Tokyo University of Science (TUS) in Japan developed nanoparticles from rice bran and tested their effectiveness in mouse models. Their study, published in Volume 22 of the Journal of Nanobiotechnology on 16 March 2024, was co-authored by Dr. Daisuke Sasaki, Ms. Hinako Suzuki, Associate Professor Kosuke Kusamori, and Assistant Professor Shoko Itakura from TUS.

"In recent years, an increasing number of new drug modalities are being developed. At the same time, development costs associated with novel therapies have increased dramatically, contributing to the burden of medical expenses. To address this issue, we used rice bran, an industrial waste with anticancer properties, to develop nanoparticles," explains Prof. Nishikawa.

The study evaluated the anticancer effects of rice bran-derived nanoparticles (rbNPs), which were obtained by processing and purifying a suspension of Koshihikari rice bran in water. When a cancer cell line named colon26 was treated with rbNPs, cell division was arrested and programmed cell death was induced, indicating strong anticancer effects of the nanoparticles. The observed anticancer activity of rbNPs can be attributed to γ-tocotrienol and γ-oryzanol, which are easily taken up by cancer cells, resulting in cell cycle arrest and programmed cell death. Additionally, rbNPs reduced the expression of proteins known to promote cancer recurrence and metastases, such as β-catenin (a protein in the Wnt signaling pathway involved in cell proliferation) and cyclin D1. Moreover, the rbNPs reduced the expression of β-catenin only in colon26 cells, without affecting non-cancerous cells.

"A key concern in the context of pdNPs is their low pharmacological activity compared to pharmaceutical drugs. However, rbNPs exhibited higher anticancer activity than DOXIL®, a liposomal pharmaceutical formulation of doxorubicin. Additionally, doxorubicin is cytotoxic to both cancer cells and non-cancerous cells, whereas rbNPs are specifically cytotoxic to cancer cells, suggesting that rbNPs are safer than doxorubicin," highlights Prof.Nishikawa.

To confirm the anticancer properties of rbNPs in the living body, the researchers injected rbNPs into mice with aggressive adenocarcinoma in their peritoneal cavity (the space enclosed by the diaphragm, abdominal muscles, and pelvis, which houses organs such as the intestines, liver, and kidneys). They observed significant suppression of tumor growth with no adverse effects on the mice. Additionally, the rbNPs significantly inhibited metastatic growth of murine melanoma B16-BL6 cells in a lung metastasis mouse model.

Rice bran has several attributes that make it an excellent source of therapeutic pdNPs. First, it is economical compared to many other sources of pdNPs: nearly 40% of the rice bran produced in Japan is discarded, providing a readily available supply of raw material. Second, the preparation efficiency of rbNPs is higher than that of previously reported pdNPs. Besides being practical and safe as an anticancer therapeutic, rbNPs also have very stable physicochemical properties. However, a few issues, such as establishing separation technologies at the pharmaceutical scale, defining production process controls, and evaluating efficacy and safety in human cancer cell lines and xenograft animal models, must be addressed prior to clinical trials in humans.

In conclusion, rice bran, an agricultural waste product, is a source of therapeutic pdNPs that are affordable, effective, and safe, and it has the potential to revolutionize cancer treatment in the future.

Read more at Science Daily