Dec 2, 2023

A new possible explanation for the Hubble tension

The universe is expanding. How fast it does so is described by the so-called Hubble-Lemaitre constant. But there is a dispute about how big this constant actually is: Different measurement methods provide contradictory values. This so-called "Hubble tension" poses a puzzle for cosmologists. Researchers from the Universities of Bonn and St. Andrews are now proposing a new solution: Using an alternative theory of gravity, the discrepancy in the measured values can be easily explained -- the Hubble tension disappears. The study has now been published in the Monthly Notices of the Royal Astronomical Society (MNRAS).

The expansion of the universe causes the galaxies to move away from each other.

The speed at which they do this is proportional to the distance between them.

For instance, if galaxy A is twice as far away from Earth as galaxy B, its distance from us also grows twice as fast.

The US astronomer Edwin Hubble was one of the first to recognize this connection.

In order to calculate how fast two galaxies are moving away from each other, it is therefore necessary to know how far apart they are.

However, this also requires a constant by which this distance must be multiplied.

This is the so-called Hubble-Lemaitre constant, a fundamental parameter in cosmology.

Its value can be determined, for example, by looking at the very distant regions of the universe.

This gives a speed of almost 244,000 kilometers per hour per megaparsec of distance (one megaparsec is just over three million light years).
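The article's figures can be checked with the Hubble-Lemaitre law itself, v = H0 × d, and a quick unit conversion into the km/s per megaparsec units astronomers usually quote. The short Python sketch below uses only the two values given in the article; it is an illustration, not part of the study.

```python
# The Hubble-Lemaitre law: recession velocity v = H0 * d.
# Convert the article's km/h-per-megaparsec figures into the
# km/s-per-megaparsec units astronomers usually quote.

def to_km_s_per_mpc(h0_kmh_per_mpc):
    """Convert a Hubble constant from km/h/Mpc to km/s/Mpc."""
    return h0_kmh_per_mpc / 3600.0

def recession_velocity(distance_mpc, h0_kmh_per_mpc):
    """Recession velocity (km/s) of a galaxy at distance_mpc megaparsecs."""
    return to_km_s_per_mpc(h0_kmh_per_mpc) * distance_mpc

print(round(to_km_s_per_mpc(244_000), 1))  # distant-universe value -> 67.8
print(round(to_km_s_per_mpc(264_000), 1))  # nearby (supernova) value -> 73.3
print(round(recession_velocity(100, 244_000)))  # galaxy at 100 Mpc -> 6778 km/s
```

The converted values, roughly 67.8 versus 73.3 km/s/Mpc, are the familiar form in which the Hubble tension is usually stated.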

244,000 kilometers per hour per megaparsec -- or 264,000?

"But you can also look at celestial bodies that are much closer to us -- so-called category 1a supernovae, which are a certain type of exploding star," explains Prof.

Dr. Pavel Kroupa from the Helmholtz Institute of Radiation and Nuclear Physics at the University of Bonn.

It is possible to determine the distance of a type Ia supernova to Earth very precisely.

We also know that shining objects change color when they move away from us -- and the faster they move, the stronger the change.

This is similar to an ambulance, whose siren sounds deeper as it moves away from us.

If we now calculate the speed of the type Ia supernovae from their color shift and correlate this with their distance, we arrive at a different value for the Hubble-Lemaitre constant -- namely just under 264,000 kilometers per hour per megaparsec of distance.

"The universe therefore appears to be expanding faster in our vicinity -- that is, up to a distance of around three billion light years -- than in its entirety," says Kroupa.

"And that shouldn't really be the case."

However, there has recently been an observation that could explain this.

According to this, the Earth is located in a region of space where there is relatively little matter -- comparable to an air bubble in a cake.

The density of matter is higher around the bubble. Gravitational forces emanate from this surrounding matter, which pull the galaxies in the bubble towards the edges of the cavity.

"That's why they are moving away from us faster than would actually be expected," explains Dr. Indranil Banik from St. Andrews University.

The deviations could therefore simply be explained by a local "under-density."

In fact, another research group recently measured the average speed of a large number of galaxies that are 600 million light years away from us. "It was found that these galaxies are moving away from us four times faster than the standard model of cosmology allows," explains Sergij Mazurenko from Kroupa's research group, who was involved in the current study.

Bubble in the dough of the universe

This is because the standard model does not provide for such under-densities or "bubbles" -- they should not actually exist.

Instead, matter should be evenly distributed in space. If this were the case, however, it would be difficult to explain which forces propel the galaxies to their high speed.

"The standard model is based on a theory of the nature of gravity put forward by Albert Einstein," says Kroupa.

"However, the gravitational forces may behave differently than Einstein expected." The working groups from the Universities of Bonn and St. Andrews have used a modified theory of gravity in a computer simulation.

This "modified Newtonian dynamics" (abbreviation: MOND) was proposed four decades ago by the Israeli physicist Prof.

Dr. Mordehai Milgrom. It is still considered an outsider theory today.

"In our calculations, however, MOND does accurately predict the existence of such bubbles," says Kroupa.

If one were to assume that gravity actually behaves according to Milgrom's assumptions, the Hubble tension would disappear: There would actually only be one constant for the expansion of the universe, and the observed deviations would be due to irregularities in the distribution of matter.

Read more at Science Daily

New research explores future limits of survival and livability in extreme heat conditions

Commonly associated with longer days and slower paces, this summer's record-smashing heat in Arizona demonstrated a concerning future for the planet's warmest season. From power outages endangering entire neighborhoods to heat-related deaths rising among some of the state's most vulnerable populations, the city of Phoenix found itself in national headlines. As national attention grew, one question became clear: How does anyone live there?

The consequences of extreme heat do not affect Arizona residents alone.

Extreme heat made worldwide news this year, including in November when a 23-year-old woman died of cardiorespiratory arrest at a Taylor Swift concert in Brazil where heat indexes that day exceeded 120 degrees Fahrenheit.

Jennifer Vanos, associate professor in the School of Sustainability at Arizona State University, studies extreme heat and its health impacts.

She is the lead author of a new paper published Nov. 29 in Nature Communications. Titled "A physiological approach for assessing human survivability and liveability to heat in a changing climate," the paper explores temperatures at which humans can survive.

The research demonstrates that the current estimated upper temperature and humidity limits used for human survivability may not paint an accurate picture of the impacts of a warming planet on human health.

"For the past decade or so we have been using what we call a 'wet bulb temperature' of 35 degrees Celsius, or 95 degrees Fahrenheit, as the limit for human survivability," said Vanos, also a Senior Global Futures Scientist in the Julie Ann Wrigley Global Futures Laboratory.

The wet-bulb temperature limit for human survival indicates the maximum combinations of temperature and humidity that humans can tolerate without suffering inevitable heat stroke over a fixed duration of exposure.

"The idea is that you could survive for up to six hours at that level of heat exposure," Vanos said.

"That number really oversimplifies what happens physiologically in the body when your body is exposed to that temperature, and it doesn't account for important variables like age or other vulnerability factors."

Vanos said the commonly used wet-bulb temperature for human survivability assumes the person is indoors or shaded, unclothed, completely sedentary, fully heat acclimatized and of an "average size." These assumptions do not align, in most cases, with how humanity navigates the summer season.
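The wet-bulb temperature itself combines air temperature and humidity into one number. As a rough illustration of how the 35 °C threshold plays out, the sketch below uses Stull's (2011) empirical approximation; this is an assumption for demonstration purposes only, not the physiological model in the Nature Communications paper.

```python
import math

# Rough illustration only: Stull's (2011) empirical approximation of
# wet-bulb temperature from air temperature (deg C) and relative
# humidity (%). Not the study's physiological model.

def wet_bulb_c(temp_c, rh_pct):
    """Approximate wet-bulb temperature in deg C (valid roughly 5-99% RH)."""
    return (temp_c * math.atan(0.151977 * math.sqrt(rh_pct + 8.313659))
            + math.atan(temp_c + rh_pct)
            - math.atan(rh_pct - 1.676331)
            + 0.00391838 * rh_pct ** 1.5 * math.atan(0.023101 * rh_pct)
            - 4.686035)

# 40 deg C air at 75% humidity already crosses the 35 deg C limit:
print(wet_bulb_c(40, 75) > 35.0)   # True
# 30 deg C at 50% humidity stays well below it:
print(round(wet_bulb_c(30, 50), 1))
```

The point of the paper is that real survivability depends on far more than this single number, which is exactly why such simple thresholds can understate the risk.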

The paper models scenarios that adjust for factors such as humidity, age, activity level and sun exposure, and provides a range of safe temperatures based on a series of characteristics.

"We didn't only want to better understand the conditions that people could survive in," Vanos said.

"We wanted to understand the conditions that allowed people to live their lives. If the only safe way to live in an area is to be completely sedentary, people won't want to live there. Being able to spend time outdoors and live your life without seeing a sustained rise in core temperature is a really important metric to understand today and as we move into the future."

Vanos said Gisel Guzman Echavarria, an ASU student, was instrumental in creating the figures used throughout the paper to demonstrate the research findings.

The research, funded by the National Science Foundation, was conducted by a combination of climate scientists and physiologists, a collaboration that Vanos said was crucial in understanding the intertwined nature of heat and human health.

Ollie Jay, professor and director of the Heat and Health Research Incubator at the University of Sydney, said the combined perspectives allow for a cohesive understanding of exactly how climate outcomes can impact people on the physiological and biophysical level.

"The existing wet-bulb temperature estimate of 35 degrees Celsius is used very commonly, with one example being the Intergovernmental Panel on Climate Change report," said Jay, senior author of the paper.

"These kinds of reports can shape policy efforts, but they are using a model for heat that is a very conservative estimate of what the impacts are going to be on humans. If we start using a more realistic, human-based model, the impacts are going to be more severe. They're going to be more widespread and they're going to happen sooner than we are projecting."

Vanos and Jay agree that the survivability ranges provided in the paper can give an important glimpse into the future: one that includes an increased need for cooling infrastructure, a personalized approach to heat protection and possible heat-driven migration.

Consensus needed on when global warming reaches 1.5°C

Writing in the journal Nature ahead of COP28, a team of Met Office scientists has emphasised that -- surprisingly -- there is currently no formally agreed way of defining the current level of global warming relevant to the Paris Agreement.

They have proposed a solution.

While the global average temperature in a particular year is well-known, this will not be suitable as an indicator of whether the "Paris 1.5" has been breached or not, because the Paris Agreement refers to long-term warming, not individual years.

But no alternative has yet been formally agreed.

Without an agreement on what will count as breaching the Paris 1.5, there may be confusion and delay in responding.

Professor Richard Betts MBE, of the Met Office and the University of Exeter, is the paper's lead author.

He said: "Clarity on breaching the Paris Agreement guard rails will be crucial.

"Without an agreement on what actually will count as exceeding 1.5°C, we risk distraction and confusion at precisely the time when action to avoid the worst effects of climate change becomes even more urgent."

New indicators for global warming levels

Some of the current suggested metrics rely on long-term averages -- usually over two decades -- of global annual temperature.

Professor Betts added: "Using the average global temperature over the last 20 years would mean we would have to wait ten years to confirm whether the 1.5 °C ceiling has been reached: creating a decade of otherwise preventable delay.

"Today we are recommending an indicator combining the last ten years of global temperature observations with an estimate of the projection or forecast for the next ten years.

"If adopted, this could mean a universally agreed measure of global warming that could trigger immediate action to avoid further rises."

Using this suggested approach, the researchers found that the figure for the current global warming level is around 1.26°C, with an uncertainty range of 1.13°C to 1.43°C.
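The proposed indicator -- blending the last ten years of observations with a ten-year projection -- can be sketched in a few lines. The anomaly values below are invented for illustration and are not Met Office data.

```python
# Sketch of the proposed indicator: average the last ten years of
# observed global temperature anomalies (deg C above pre-industrial)
# with a ten-year projection. All numbers are illustrative only.

observed = [1.10, 1.15, 1.02, 1.20, 1.25, 1.18, 1.28, 1.22, 1.30, 1.35]   # past 10 yr
projected = [1.32, 1.34, 1.36, 1.38, 1.40, 1.42, 1.44, 1.46, 1.48, 1.50]  # next 10 yr

def current_warming(obs, proj):
    """Blend observations and projections into a 20-year estimate
    centred on the present, smoothing out year-to-year variability."""
    combined = obs + proj
    return sum(combined) / len(combined)

print(round(current_warming(observed, projected), 2))
```

Because the 20-year window is centred on today rather than ending today, the estimate tracks the underlying warming without the decade-long lag of a purely retrospective average.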

It is more likely than not that one of the next five years will reach or even exceed 1.5°C above pre-industrial levels.

But even an anomalously warm year would not mean that we have reached the first of the Paris Agreement guard rails.

The Earth's climate system has a range of natural variability where the annual temperature fluctuates within small margins.

Professor Betts added: "Using an indicator of several years of observations and projections will smooth out the natural variation to reveal the underlying human-induced warming."

2023 global temperature

Provisional estimates of the global average surface temperature for 2023 suggest the year could be on track to be the warmest on record.

The year is likely to exceed the level reached in 2016, currently the warmest year on record.

2023 is expected to continue the run of the warmest years on record since 1850.

Beginning in 2015, the series includes years at both ends of natural climate variability.

Some years, like 2016 and 2023, will have been naturally warmer because of the influence of El Niño -- when a natural warming of parts of the tropical Pacific warms the planet temporarily by a small margin.

But the series also includes years which should have been naturally marginally cooler.

Professor Betts concluded: "The fact that the warmest years on record include both the highs and lows of natural climate variability is yet more evidence that climate change driven by human-induced greenhouse gas emissions dominates the recent climate record."

Global warming dashboard

To complement the newly proposed indicators, a new section has been added to the Met Office Climate Dashboard to illustrate the current level of global warming.

The 'Indicators of Global Warming' dashboard displays eight separate indicators as well as observed global mean temperature using Met Office HadCRUT5 data.

Human behavior guided by fast changes in dopamine levels

What happens in the human brain when we learn from positive and negative experiences? To help answer that question and better understand decision-making and human behavior, scientists are studying dopamine.

Dopamine is a neurotransmitter produced in the brain that serves as a chemical messenger, facilitating communication between nerve cells in the brain and the body. It is involved in functions such as movement, cognition and learning. While dopamine is most known for its association with positive emotions, scientists are also exploring its role in negative experiences.

Now, a new study from researchers at Wake Forest University School of Medicine shows that dopamine release in the human brain plays a crucial role in encoding both reward and punishment prediction errors. This means that dopamine is involved in the process of learning from both positive and negative experiences, allowing the brain to adjust and adapt its behavior based on the outcomes of these experiences.

The study was published today in Science Advances.

"Previously, research has shown that dopamine plays an important role in how animals learn from 'rewarding' (and possibly 'punishing') experiences. But, little work has been done to directly assess what dopamine does on fast timescales in the human brain," said Kenneth T. Kishida, Ph.D., associate professor of physiology and pharmacology and neurosurgery at Wake Forest University School of Medicine. "This is the first study in humans to examine how dopamine encodes rewards and punishments and whether dopamine reflects an 'optimal' teaching signal that is used in today's most advanced artificial intelligence research."

For the study, researchers on Kishida's team utilized fast-scan cyclic voltammetry, an electrochemical technique, paired with machine learning, to detect and measure dopamine levels in real-time (i.e., 10 measurements per second). However, this method is challenging and can only be performed during invasive procedures such as deep-brain stimulation (DBS) brain surgery. DBS is commonly employed to treat conditions such as Parkinson's disease, essential tremor, obsessive-compulsive disorder and epilepsy.

Kishida's team collaborated with Atrium Health Wake Forest Baptist neurosurgeons Stephen B. Tatter, M.D., and Adrian W. Laxton, M.D., who are also both faculty members in the Department of Neurosurgery at Wake Forest University School of Medicine, to insert a carbon fiber microelectrode deep into the brain of three participants at Atrium Health Wake Forest Baptist Medical Center who were scheduled to receive DBS to treat essential tremor.

While the participants were awake in the operating room, they played a simple computer game. As they played the game, dopamine measurements were taken in the striatum, a part of the brain that is important for cognition, decision-making, and coordinated movements.

During the game, participants' choices were either rewarded or punished with real monetary gains or losses. The game was divided into three stages in which participants learned from positive or negative feedback to make choices that maximized rewards and minimized penalties. Dopamine levels were measured continuously, once every 100 milliseconds, throughout each of the three stages of the game.
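The "optimal teaching signal" referred to above comes from reinforcement learning, where dopamine-like signals are modeled as prediction errors: the difference between the outcome received and the outcome expected. The toy sketch below illustrates that idea only; it is not the study's actual analysis.

```python
# Toy reward-prediction-error (RPE) sketch in the spirit of the
# reinforcement-learning "teaching signal" the article refers to.
# All numbers are illustrative; this is not the study's model.

def rpe(reward, expected):
    """Prediction error: positive for better-than-expected outcomes,
    negative for worse-than-expected (punishing) ones."""
    return reward - expected

def update_expectation(expected, reward, learning_rate=0.1):
    """Rescorla-Wagner-style update: nudge the expectation toward
    the observed outcome, in proportion to the prediction error."""
    return expected + learning_rate * rpe(reward, expected)

expected = 0.0
for outcome in [1.0, 1.0, -1.0, 1.0]:   # monetary gains (+) and losses (-)
    print(round(rpe(outcome, expected), 3))
    expected = update_expectation(expected, outcome)
```

A surprise loss after repeated gains produces a large negative error, which is the sense in which a single signal can teach the brain from both rewards and punishments.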

"We found that dopamine not only plays a role in signaling both positive and negative experiences in the brain, but it seems to do so in a way that is optimal when trying to learn from those outcomes. What was also interesting, is that it seems like there may be independent pathways in the brain that separately engage the dopamine system for rewarding versus punishing experiences. Our results reveal a surprising result that these two pathways may encode rewarding and punishing experiences on slightly shifted timescales separated by only 200 to 400 milliseconds in time," Kishida said.

Kishida believes that this level of detail may lead to a better understanding of how the dopamine system is affected in humans with psychiatric and neurological disorders. Kishida said additional research is needed to understand how dopamine signaling is altered in these conditions.

"Traditionally, dopamine is often referred to as 'the pleasure neurotransmitter,"' Kishida said. "However, our work provides evidence that this is not the way to think about dopamine. Instead, dopamine is a crucial part of a sophisticated system that teaches our brain and guides our behavior. That dopamine is also involved in teaching our brain about punishing experiences is an important discovery and may provide new directions in research to help us better understand the mechanisms underlying depression, addiction, and related psychiatric and neurological disorders."

Dec 1, 2023

Meteorites likely source of nitrogen for early Earth

Micrometeorites originating from icy celestial bodies in the outer Solar System may be responsible for transporting nitrogen to the near-Earth region in the early days of our solar system. That discovery was published today in Nature Astronomy by an international team of researchers, including University of Hawai'i at Manoa scientists, led by Kyoto University.

Nitrogen compounds, such as ammonium salts, are abundant in material born in regions far from the sun, but evidence of their transport to Earth's orbital region had been poorly understood.

"Our recent findings suggests the possibility that a greater amount of nitrogen compounds than previously recognized was transported near Earth, potentially serving as building blocks for life on our planet," says Hope Ishii, study co-author and affiliate faculty at the Hawai'i Institute of Geophysics and Planetology in the UH Manoa School of Ocean and Earth Science and Technology (SOEST).

Like all asteroids, Ryugu is a small, rocky object that orbits the sun.

The Japan Aerospace Exploration Agency's Hayabusa2 spacecraft explored Ryugu and brought material from its surface back to Earth in 2020.

This intriguing asteroid is rich in carbon and has undergone significant space weathering caused by micrometeorite collisions and exposure to charged ions streaming from the sun.

In this study, the scientists aimed to discover clues about the materials arriving near Earth's orbit, where Ryugu is currently located, by examining the evidence of space weathering in Ryugu samples.

Using an electron microscope, they found that the surfaces of the Ryugu samples are covered with tiny minerals composed of iron and nitrogen (iron nitride: Fe4N).

"We proposed that tiny meteorites, called micrometeorites, containing ammonia compounds were delivered from icy celestial bodies and collided with Ryugu," said Toru Matsumoto, lead author of the study and assistant professor at Kyoto University.

"The micrometeorite collisions trigger chemical reactions on magnetite and lead to the formation of the iron nitride."

Small marine creatures swimming in plastic chemicals not reproducing

Plastic waste in the water might be stopping -- or interrupting -- some shrimp-like creatures from reproducing.

In a unique study, the ability of shrimp-like creatures to reproduce successfully was found to be compromised by chemicals found in everyday plastics.

Research showed that the little critters, known as the marine amphipod Echinogammarus marinus, changed their mating behaviour when exposed to toxic plastic additives.

Until now, most research into plastic pollution has focused on visible plastics: what can get trapped in them and the dangers of ingesting large particles.

Scientists from the University of Portsmouth have taken a different approach and investigated the chemicals that are used as ingredients in plastics.

Professor Alex Ford, from the Institute of Marine Sciences at the University of Portsmouth, says: "This unsuccessful mating behaviour has serious repercussions, not only for the species being tested but potentially for the population as a whole. These animals form pairs to reproduce. Once they were exposed to a chemical, they would break apart from their mate and take much longer -- in some cases days -- to re-pair, and sometimes not at all.

"These creatures are commonly found on European shores, where they make up a substantial amount of the diet of fish and birds. If they are compromised it will have an effect on the whole food chain."

There are over 350,000 chemicals in use around the world in everyday products.

Ten thousand of these are used to enhance plastics. Chemicals can be used to make plastics more flexible, add colour, give sun protection or make plastic flameproof.

Around one third of these chemicals are known to be toxic to humans' immune, nervous or reproductive systems.

The study, published in the journal Environmental Pollution, tested four widely used chemicals found in plastics.

These plastic additives are used in a variety of common products, for example, phthalates (DEHP and DBP) which are found in medical supplies, food packaging and toys.

Triphenyl phosphate (TPHP) is mainly used as a flame retardant in products like nail polish and electronic equipment, including cables, and N-butyl benzenesulfonamide (NBBS) is used in nylon, medical devices, cooking utensils and films.

Bidemi Green-Ojo, lead author and PhD Researcher in Environmental Toxicology at the University of Portsmouth, says: "We chose these four additives because the suspected danger they pose to human health is well documented. Two of the chemicals we investigated (DBP and DEHP) are regulated and not allowed to be used in products in Europe. The other two chemicals have no current restrictions on them and are found in many household products. We wanted to test the effects these chemicals had on aquatic mating behaviour."

The shrimp-like creatures which have been studied are known to pair up and typically lock together for two days while mating.

Pairs of them were exposed to each chemical, and researchers monitored their behaviour over four days, measuring the time it took for the creatures to mate.

They found that at best it took much longer for the creatures to re-pair, and at worst they didn't re-pair.

The experiment found that all the plastic additives had the capacity to reduce the overall percentage of animals that formed pairs.

The ones which did form pairs took longer to make contact and re-pair.

Two of the chemicals caused a concentration-dependent effect on shrimps' sperm, resulting in a decline of up to 60 per cent in sperm count of those exposed to elevated levels of the chemicals.

"Although the animals we tested were exposed to much higher concentrations than you would normally find in the environment, the results indicate these chemicals can affect sperm count," explains Professor Ford.

"It is conceivable that if we did the experiment on shrimps that had been exposed for a longer period or during critical stages in their life history, it would affect their sperm levels and quality."

Bidemi Green-Ojo adds: "We must understand more about these chemicals and how they affect behaviour. Many types of behaviour -- such as feeding, fight or flight mode, and reproduction -- are essential in an animal's life, and any abnormal behaviour may reduce the chances of survival."

What makes sustainable consumption so difficult

While many people want to achieve major long-term goals -- such as improving their diet, quitting smoking or adopting a more sustainable lifestyle -- they often find it difficult to do so. Is it all down to a lack of self-discipline? No, it's not, according to social psychologist Professor Wilhelm Hofmann from Ruhr University Bochum, Germany. For a review article in Nature Reviews Psychology, Hofmann has analyzed numerous research studies and highlighted the extent to which the physical and social environment influence individual behavior. He criticizes the fact that many psychological studies still tend to focus on the individual while ignoring crucial structural factors. The article was published online on November 20, 2023.

Environmental factors have enormous impact on decisions

Traditional approaches such as self-determination theory focus on personal autonomy.

This means that an individual's freedom of choice must be preserved at all costs.

"The public policy recommendations that result from this are to make no restrictions, provide sufficient information about the identified risks and side effects of the various options and then trust that people will make the right decisions and act appropriately," says Hofmann.

But this formula doesn't work.

To illustrate this, the Bochum-based psychologist cites the example of an eco-conscious consumer who'd like to reduce their meat consumption, but occasionally also finds themselves tempted by a meat dish.

"In conventional psychology, this is regarded as a conflict within the individual," he explains.

If the person could only muster enough willpower, they would achieve their long-term goal.

According to Hofmann, this view is misguided, because decisions are very much influenced by the environment: for example, when the canteen offers five meat dishes but only one vegetarian option -- and the latter might even be more expensive.

People also wish to conform to social norms: If many of your friends and relatives drive a big car, you're more likely to want one yourself.

It's not enough to hope for individual discipline

In his article, Hofmann combines psychological research with public policy research to illustrate that psychological research has implications for other areas and should take a broader view.

In particular, he argues that we need to be more aware of the fact that people don't have the power to shape many of their own environments.

"Many people try to live in a more sustainable manner, but fail to do so in reality," says Wilhelm Hofmann.

Unsustainable options are often cheaper, more visible and more available than sustainable ones.

"Relying on individual discipline, willingness to make sacrifices and a sense of guilt won't get us very far. We need to question and change the structures that contribute to social problems such as the overuse of natural resources and make sustainable behavior more difficult. And in order to achieve this, we need sound and effective political decisions." Many people would like to see more regulation so that they no longer have to swim against the tide.

Growing awareness of the problem, combined with the realization that some social challenges and crises can't be solved through personal responsibility or free markets, is driving the desire for government intervention and solutions.

In essence, society needs to agree on good rules in order to provide individuals with the best possible support on the path to the desired change towards greater sustainability.

Greater focus on the common good

"The accelerating climate crisis is the best example of how the unlimited exercise of personal consumer freedoms leads to negative consequences for society as a whole," explains Wilhelm Hofmann.

"We've forgotten to a certain extent to look at the collective benefit, i.e. the common good, and need to recognize the importance of good regulation once again. By this I mean that we need to agree on effective and fair rules that protect us from risks and that apply to everyone equally. Such as are standard practice in road traffic, for example."

One of the largest magnetic storms in history quantified: Aurorae covered much of the night sky from the Tropics to the Polar Regions

In early November of this year, the aurora borealis was observed at surprisingly low latitudes, as far south as Italy and Texas. Such phenomena indicate the impacts of a solar coronal mass ejection on the Earth's magnetic field and atmosphere. Dramatic as this recent light show was, it was nothing compared to a huge solar storm in February 1872. The auroral display from that event ringed the globe and produced aurorae observed at sites as close to the equator as Bombay and Khartoum. An international team consisting of scientists from nine countries has now published a detailed study of this historically important event, tracing its solar origin and widespread terrestrial impacts. Telegraph communications were widely disrupted by this storm; in today's technologically dependent society, such a storm would disrupt power grids and satellite communications. Their findings confirm that such extreme storms are more common than previously thought.

In the modern world, we are increasingly dependent on technological infrastructure such as power grids, communication systems, and satellites.

However, this dependency makes us increasingly vulnerable to the effects of large geomagnetic storms.

"The longer the power supply could be cut off, the more society, especially those living in urban areas, will struggle to cope," Designated Assistant Professor Hayakawa, the lead author of the study, explains.

In the worst case, such storms could be big enough to knock out power grids, communication systems, airplanes, and satellites.

"Could we maintain our life without such infrastructure?" Hayakawa comments: "Well, let us just say that it would be extremely challenging."

Such extreme storms are rare. In recent studies, two such storms stand out: the Carrington storm in September 1859 and the New York Railroad storm in May 1921.

The new study suggests that another storm, the Chapman-Silverman storm in February 1872, should also be considered as one of these extreme events.

At the time, the storm was big enough to affect the technological infrastructure even in the tropics.

Telegraph communications on the submarine cable in the Indian Ocean between Bombay (Mumbai) and Aden were disrupted for hours.

Similar disturbances were reported on the land line between Cairo and Khartoum.

The multidisciplinary team, consisting of 22 scientists, was led by Nagoya University in Japan (Hisashi Hayakawa), the US National Solar Observatory (Edward Cliver), and the Royal Observatory of Belgium (Frédéric Clette). The researchers used historical records and modern techniques to assess the Chapman-Silverman storm from its solar origin to its terrestrial impacts.

For the solar origin, the group turned to largely forgotten sunspot records from historical archives, especially Belgian and Italian records.

For terrestrial impacts, they used geomagnetic field measurements recorded in places as diverse as Bombay (Mumbai), Tiflis (Tbilisi), and Greenwich to assess temporal evolution and storm intensity.

They also examined hundreds of accounts of visual aurora in different languages caused by the storm.

One of the more interesting aspects of the 1872 storm was that it likely originated in a medium-sized, but complex, sunspot group near the solar disk centre as confirmed by analyses of solar records from Belgium and Italy.

These findings suggest that even a medium-sized sunspot group triggered one of the most extreme magnetic storms in history.

Hayakawa and his colleagues extended their investigations of the historical aurorae by combing through records in libraries, archives, and observatories around the world.

They identified more than 700 auroral records that indicated that the night sky was illuminated by magnificent auroral displays from the polar regions to the tropics (down to ≈ 20° in latitude in both hemispheres).

"Our findings confirm the Chapman-Silverman storm in February 1872 as one of the most extreme geomagnetic storms in recent history. Its size rivalled those of the Carrington storm in September 1859 and the NY Railroad storm in May 1921," Hayakawa said.

"This means that we now know that the world has seen at least three geomagnetic superstorms in the last two centuries. Space weather events that could cause such a major impact represent a risk that cannot be discounted."

Hayakawa said: "Such extreme events are rare. On the one hand, we are fortunate to have missed such superstorms in the modern time. On the other hand, the occurrence of three such superstorms in 6 decades shows that the threat to modern society is real. Therefore, the preservation and analysis of historical records is important to assess, understand, and mitigate the impact of such events."

Read more at Science Daily

Nov 30, 2023

An astronomical waltz reveals a sextuplet of planets

An international collaboration between astronomers using the CHEOPS and TESS space satellites, including NCCR PlanetS members from the University of Bern and the University of Geneva, has found a key new system of six transiting planets orbiting a bright star in a harmonic rhythm. This rare property enabled the team to determine the planetary orbits, which initially appeared to be an unsolvable riddle.

CHEOPS is a joint mission by ESA and Switzerland, under the leadership of the University of Bern in collaboration with the University of Geneva. Thanks to a collaboration with scientists working with data from NASA's satellite TESS, the international team could uncover the planetary system orbiting the nearby star HD110067. A very distinctive feature of this system is its chain of resonances: the planets orbit their host star in perfect harmony. Part of the research team are researchers from the University of Bern and the University of Geneva who are also members of the National Center of Competence in Research (NCCR) PlanetS. The findings have just been published in Nature.

The planets in the HD110067 system revolve around the star in a very precise waltz. When the closest planet to the star makes three full revolutions around it, the second one makes exactly two during the same time. This is called a 3:2 resonance. "Amongst the over 5000 exoplanets discovered orbiting other stars than our Sun, resonances are not rare, nor are systems with several planets. What is extremely rare though, is to find systems where the resonances span such a long chain of six planets," points out Dr. Hugh Osborn, CHEOPS fellow at the University of Bern, leader of the CHEOPS observation programme involved in the study, and co-author of the publication. This is precisely the case of HD110067, whose planets form a so-called "resonant chain" in successive pairs of 3:2, 3:2, 3:2, 4:3, and 4:3 resonances, resulting in the closest planet completing six orbits while the outermost planet does one.

A seemingly unsolvable puzzle

Although multiple planets were initially detected thanks to their transits, the exact arrangement of the planets was unclear at first. However, the precise gravitational dance enabled the team of scientists to solve the puzzle of HD110067. Prof. Adrien Leleu from the University of Geneva, in charge of analysing the orbital resonances, and co-author of the study, explains: "A transit occurs when a planet, from our point of view, passes in front of its host star, blocking a minute fraction of the starlight, creating an apparent dip in its brightness." From the first observations carried out by NASA's TESS satellite, it was possible to determine that the two inner planets, called 'b' and 'c', have orbital periods of 9 and 14 days respectively. However, no conclusions could be drawn for the other four detected planets: two were seen to transit once in 2020 and once in 2022, with a large two-year gap in the data, and the other two transited only once in 2022.

The solution to the puzzle for the four remaining planets finally began to emerge thanks to observations with the CHEOPS space telescope. While TESS scans the whole sky bit by bit to find short-period exoplanets, CHEOPS is a targeted mission, focusing on a single star at a time with exquisite precision. "Our CHEOPS observations enabled us to find that the period of planet 'd' is 20.5 days. Also, it ruled out multiple possibilities for the remaining three outer planets, 'e', 'f' and 'g'," reveals Osborn.

Predicting the precise waltz of the planets

That is when the team realized that the three inner planets of HD110067 are dancing in a precise 3:2, 3:2 chain of resonances: when the innermost planet revolves nine times around the star, the second revolves six times and the third planet four times.

The team then considered the possibility that the three other planets could also be part of the chain of resonances. "This led to dozens of possibilities for their orbital period," explains Leleu, "but combining existing observational data from TESS and CHEOPS, with our model of the gravitational interactions between the planets, we could exclude all solutions but one: the 3:2, 3:2, 3:2, 4:3, 4:3 chain." The scientists could therefore predict that the outer three planets ('e', 'f' and 'g') have orbital periods of 31, 41, and 55 days.
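The arithmetic of the chain can be checked directly: starting from an assumed period of about 9.11 days for planet 'b' (consistent with the "9 days" quoted above) and multiplying through the 3:2, 3:2, 3:2, 4:3, 4:3 ratios reproduces the reported periods, and the product of the ratios is exactly 6, matching the six-orbits-to-one count. A minimal sketch (the 9.11-day starting value is an assumption):

```python
from fractions import Fraction

# Successive period ratios of the HD110067 resonant chain, as described in the article.
ratios = [Fraction(3, 2), Fraction(3, 2), Fraction(3, 2),
          Fraction(4, 3), Fraction(4, 3)]

# Assumed innermost period (planet 'b'), in days.
periods = [9.11]
for r in ratios:
    periods.append(periods[-1] * float(r))

for name, p in zip("bcdefg", periods):
    print(f"planet {name}: {p:5.1f} days")

# Product of the ratios gives the outermost/innermost period ratio.
total = Fraction(1)
for r in ratios:
    total *= r
print("outer/inner period ratio:", total)  # exactly 6
```

Running the chain forward yields periods of roughly 9, 14, 20.5, 31, 41, and 55 days, matching the values quoted in the text.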

This prediction allowed the team to schedule observations with a variety of ground-based telescopes. Further transits of planet 'f' were observed, revealing it was precisely where the resonant-chain theory predicted it to be. Finally, reanalysis of the TESS data revealed two hidden transits, one from each of planets 'f' and 'g', exactly at the times expected by the predictions, confirming the periods of all six planets. Additional CHEOPS observations of each planet, and in particular planet 'e', are scheduled for the near future.

A key system for the future

Of the handful of resonant-chain systems found so far, CHEOPS has contributed greatly to the understanding not only of HD110067 but also of TOI-178. Another well-known example of a resonant-chain system is the TRAPPIST-1 system, which hosts seven rocky planets. However, TRAPPIST-1 is a small and incredibly faint star, which makes any additional observations very difficult. HD110067, on the other hand, is more than 50 times brighter than TRAPPIST-1.

Read more at Science Daily

Landscape dynamics determine the evolution of biodiversity on Earth

The movement of rivers, mountains, oceans and sediment nutrients over geological timescales is the central driver of Earth's biodiversity, new research published today in Nature has revealed.

The research also shows that biodiversity evolves at similar rates to the pace of plate tectonics, the slow geological processes that drive the shape of continents, mountains and oceans.

"That is a rate incomparably slower than the current rates of extinction caused by human activity," said lead author Dr Tristan Salles from the School of Geosciences.

The research looks back over 500 million years of Earth's history to the period just after the Cambrian explosion of life, which established the main species types of modern life.

Dr Salles said: "Earth's surface is the living skin of our planet. Over geological time, this surface evolves with rivers fragmenting the landscape into an environmentally diverse range of habitats.

"However, these rivers not only carve canyons and form valleys, but play the role of Earth's circulatory system as the main conduits for nutrient and sediment transfer from sources (mountains) to sinks (oceans).

"While modern science has a growing understanding of global biodiversity, we tend to view this through the prism of narrow expertise," Dr Salles said. "This is like looking inside a house from just one window and thinking we understand its architecture.

"Our model connects physical, chemical and biological systems over half a billion years in five-million-year chunks at a resolution of five kilometres. This gives an unprecedented understanding of what has driven the shape and timing of species diversity," he said.

The discovery in 1994 of the ancient Wollemi pine species in a secluded valley in the Blue Mountains west of Sydney gives us a glimpse into the holistic role that time, geology, hydrology, climate and genetics play in biodiversity and species survival.

The idea that landscapes play a role in the trajectory of life on Earth can be traced back to German naturalist and polymath Alexander von Humboldt. His work inspired Charles Darwin and Alfred Wallace, who were the first to note that animal species boundaries correspond to landscape discontinuities and gradients.

"Fast forwarding nearly 200 years, our understanding of how the diversity of marine and terrestrial life was assembled over the past 540 million years is still emerging," University of Sydney PhD student Beatriz Hadler Boggiani said.

"Biodiversity patterns are well identified from the fossil record and genetic studies. Yet, many aspects of this evolution remain enigmatic, such as the 100 million years delay between the expansion of plants on continents and the rapid diversification of marine life."

In groundbreaking research, a team of scientists -- from the University of Sydney, ISTerre at the French state research organisation CNRS and the University of Grenoble Alpes in France -- has proposed a unified theory that connects the evolution of life in the marine and terrestrial realms to sediment pulses controlled by past landscapes.

"Because the evolution of the Earth's surface is set by the interplay between the geosphere and the atmosphere, it records their cumulative interactions and should, therefore, provide the context for biodiversity to evolve," said Dr Laurent Husson from University of Grenoble Alpes.

Instead of considering isolated pieces of the environmental puzzle independently, the team developed a model that combines them and simulates at high resolution the compounding effect of these forces.

"It is through calibration of this physical memory etched in the Earth's skin with genetics, fossils, climate, hydrology and tectonics by which we have investigated our hypothesis," Dr Salles said.

Using open-source scientific code published by the team in Science in March, the detailed simulation was calibrated using modern information about landscape elevations, erosion rates, major river waters and the geological transport of sediment (known as sediment flux).

This allowed the team to evaluate their predictions over 500 million years using a combination of geochemical proxies and testing different tectonic and climatic reconstructions. The geoscientists then compared the predicted sediment pulses to the evolution of life in both the marine and terrestrial realms obtained from a compilation of paleontological data.

"In a nutshell, we reconstructed Earth landforms over the Phanerozoic era, which started 540 million years ago, and looked at the correlations between the evolving river networks, sediment transfers and known distribution of marine and plant families," University of Grenoble PhD student Manon Lorcery said.

When comparing predicted sediment flux into the oceans with marine biodiversity, the analysis shows a strong, positive correlation.

On land, the authors designed a model integrating sediment cover and landscape variability to describe the capacity of the landscape to host diverse species. Here again, they found a striking correlation between their proxy and plant diversification for the past 450 million years.

In his 1864 novel A Journey to the Centre of the Earth, Jules Verne attributed a similar idea to his fictitious hero, Professor Otto Lidenbrock:

"Animal life existed upon the Earth only in the secondary period, when a sediment of soil had been deposited by the rivers and taken the place of the incandescent rocks of the primitive period."

Read more at Science Daily

Climate: Why disinformation is so persistent

Melting of glaciers, rising sea levels, extreme heat waves: the consequences of climate change are more visible than ever, and the scientific community has confirmed that humans are responsible. Yet studies show that a third of the population still doubts or disputes these facts. The cause is disinformation spread by certain vested interests. To try and prevent this phenomenon, a team from the University of Geneva (UNIGE) has developed and tested six psychological interventions on nearly 7,000 participants from twelve countries. The research, published in the journal Nature Human Behaviour, highlights the extremely persuasive nature of disinformation and the need to strengthen our efforts to combat it.

Fighting disinformation about climate change is a major challenge for society.

Although scientific consensus on human responsibility -- reaffirmed by the sixth report of the Intergovernmental Panel on Climate Change (IPCC) -- has been in place for decades, a third of the population still doubts or disputes it. This phenomenon can be explained by the disinformation spread by certain companies and lobbies over the last 50 years.

''For instance, these messages can take the form of an unfounded questioning of the scientific consensus or an overestimation of the socio-financial burden of climate policies,'' explains Tobia Spampatti, a PhD Student and Teaching and Research Assistant in the Consumer Decision and Sustainable Behavior Lab (CDSB Lab) at the Faculty of Psychology and Educational Sciences and at the Swiss Center for Affective Sciences of the UNIGE.

Many psychological factors

This phenomenon weakens the support of a part of the population for climate policies.

To combat this, Tobia Spampatti and researchers from the UNIGE developed a theoretical framework to describe the formation and updating of (anti)scientific information.

This framework, built on previous theoretical takes on the psychology of misinformation (Philippe Mueller et al. and Ulrich Ecker et al. in 2022), takes into account the source of the message, its content, its recipients, and the psychological factors that can influence their processing.

This theoretical framework aims to identify the entry points through which disinformation reaches a person's ''psyche'', and can be used to intervene, either to block disinformation or to encourage people to accept accurate information.

''As individuals, we do not process scientific messages as neutral receivers of information, but by weighing them up against our prior beliefs, desired outcomes, emotional ties and socio-cultural and ideological backgrounds.

Depending on the configuration of these psychological factors, anti-scientific beliefs can be amplified and become resistant to correction,'' explains Tobia Spampatti, first author of the study.

Six preventive strategies put to the test

On this basis, the researchers developed six psychological intervention strategies aimed at preventing climate disinformation from affecting people's climate-related beliefs and behaviors.

They were tested on 6,816 participants in twelve different countries.

Each strategy was linked to a particular theme (scientific consensus, trust in climate scientists, transparent communication, moralizing climate action, accuracy, positive emotions towards climate action). The participants were divided into eight groups: six subjected to one of these strategies, one to disinformation without prevention, and a control group.

The ''trust in climate scientists'' group, for example, received verified information demonstrating the credibility of IPCC scientists.

The "transparent communication" group, meanwhile, was presented with information on both the advantages and the disadvantages of climate mitigation actions.

Each group was then exposed to twenty pieces of false or biased information, ten on climate science and ten on climate policy.

The UNIGE scientists then measured the impact of this disinformation after the preventive interventions by asking the participants about their feelings regarding climate mitigation actions.

Low preventive effect


''We found that the protective effect of our strategies is small and disappears after the second exposure to disinformation.

Climate disinformation used in this study has a negative influence on people's belief in climate change and their sustainable behaviour'', says Tobias Brosch, Associate Professor in the CDSB Lab at the Faculty of Psychology and Educational Sciences and at the Swiss Center for Affective Sciences in the UNIGE, and final author of the study.

''Disinformation is therefore extremely persuasive, seemingly more so than scientific information.

Only the 'accuracy' group, who were asked to think in depth about the accuracy of the information they encountered online, showed a slight advantage''.

Read more at Science Daily

Brittle stars can learn just fine -- even without a brain

We humans are fixated on big brains as a proxy for smarts. But headless animals called brittle stars have no brains at all and still manage to learn through experience, new research reveals.

Relatives of starfish, brittle stars spend most of their time hiding under rocks and crevices in the ocean or burrowing in the sand.

These shy marine creatures have no brain to speak of -- just nerve cords running down each of their five wiggly arms, which join to form a nerve ring near their mouth.

"There's no processing center," said lead author Julia Notar, who did the research as part of her biology Ph.D. in professor Sönke Johnsen's lab at Duke University.

"Each of the nerve cords can act independently," Notar said.

"It's like instead of a boss, there's a committee."

In the case of brittle stars, that seems to be enough to learn by association, Notar, Johnsen and former Duke undergraduate Madeline Go report in the journal Behavioral Ecology and Sociobiology.

This type of learning involves associating different stimuli via a process called classical conditioning.

A famous example is Pavlov's dog experiments, which showed that dogs repeatedly fed at the ringing of a bell would eventually start drooling at the mere sound of a bell, even when no food was around.

Humans do this all the time. If you hear the "ding" of a smartphone over and over again with each new alert, eventually the sound starts to have a special meaning.

Just hearing someone's phone ping or buzz with the same chime as yours is enough to make you reflexively reach for your own phone in anticipation of the next text, email, or Instagram post.

Classical conditioning has been demonstrated in a handful of previous studies in starfish.

But most echinoderms -- a group of some 7,000 species that includes brittle stars and similarly brainless starfish, sea urchins and sea cucumbers -- have not been tested.

To find out if brittle stars are capable of learning, the researchers put 16 black brittle stars (Ophiocoma echinata) in individual water tanks and used a video camera to record their behavior.

Half the brittle stars were trained by dimming the lights for 30 minutes whenever the animals were fed.

Every time the lights went out, the researchers would put a morsel of shrimp -- "which they love" -- in the tanks, placed just out of reach.

The other half got just as much shrimp and also experienced a 30-minute dark period, but never at the same time -- the animals were fed under lit conditions.

Whether it was light or dark, the animals spent most of their time hiding behind the filters in their tanks, only coming out at mealtime.

But only the trained brittle stars learned to associate darkness with food.

Early in the 10-month-long experiment, the animals stayed hidden when the lights went out.

But over time, the animals made such a connection between the darkness and mealtime that they reacted as if food was on its way and crept out of hiding whenever the lights went out, even before any food was put in the tanks.

These brittle stars had learned a new association: lights out meant that food was likely to show up. They didn't need to smell or taste the shrimp to react.

Just sensing the lights go dim was enough to make them come when called for dinner.

They still remembered the lesson even after a 13-day 'break' without training, i.e., a period in which the lights were dimmed over and over again without any food.

Notar said the results are "exciting" because "classical conditioning hasn't really been shown definitively in this group of animals before."

"Knowing that brittle stars can learn means they're not just robotic scavengers like little Roombas cleaning up the ocean floor," Notar said.

"They're potentially able to expect and avoid predators or anticipate food because they're learning about their environment."

As a next step, Notar hopes to start to tease apart how they manage to learn and remember using a nervous system that is so different from our own.

"People ask me all the time, 'how do they do it?'" Notar said.

Read more at Science Daily

Nov 29, 2023

Astronomers discover disc around star in another galaxy

In a remarkable discovery, astronomers have found a disc around a young star in the Large Magellanic Cloud, a galaxy neighbouring ours. It's the first time such a disc, identical to those forming planets in our own Milky Way, has ever been found outside our galaxy. The new observations reveal a massive young star, growing and accreting matter from its surroundings and forming a rotating disc. The detection was made using the Atacama Large Millimeter/submillimeter Array (ALMA) in Chile, in which the European Southern Observatory (ESO) is a partner.

"When I first saw evidence for a rotating structure in the ALMA data I could not believe that we had detected the first extragalactic accretion disc, it was a special moment," says Anna McLeod, an associate professor at Durham University in the UK and lead author of the study published today in Nature.

"We know discs are vital to forming stars and planets in our galaxy, and here, for the first time, we're seeing direct evidence for this in another galaxy."

This study follows up on observations made with the Multi Unit Spectroscopic Explorer (MUSE) instrument on ESO's Very Large Telescope (VLT), which spotted a jet from a forming star -- the system was named HH 1177 -- deep inside a gas cloud in the Large Magellanic Cloud.

"We discovered a jet being launched from this young massive star, and its presence is a signpost for ongoing disc accretion," McLeod says.

But to confirm that such a disc was indeed present, the team needed to measure the movement of the dense gas around the star.

As matter is pulled towards a growing star, it cannot fall directly onto it; instead, it flattens into a spinning disc around the star.

Closer to the centre, the disc rotates faster, and this difference in speed is the smoking gun that shows astronomers an accretion disc is present.
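The "closer in, faster" behaviour is simply Keplerian rotation: orbital speed around a central mass M at radius r is v = sqrt(GM/r), so halving the radius raises the speed by a factor of sqrt(2). A minimal sketch, with an assumed stellar mass and illustrative radii (none of these numbers are from the study):

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
AU = 1.496e11      # astronomical unit, m

def keplerian_speed(r_au, m_star_suns):
    """Orbital speed in km/s at radius r_au around a star of m_star_suns solar masses."""
    return math.sqrt(G * m_star_suns * M_SUN / (r_au * AU)) / 1e3

# Illustrative values only: a massive young star and a range of disc radii.
for r in (10, 100, 1000):
    print(f"r = {r:5d} au -> v = {keplerian_speed(r, 15):5.1f} km/s")
```

The inner gas orbits tens of kilometres per second faster than the outer gas, and it is exactly this radial velocity gradient that shows up in the spectral-line data.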

"The frequency of light changes depending on how fast the gas emitting the light is moving towards or away from us," explains Jonathan Henshaw, a research fellow at Liverpool John Moores University in the UK, and co-author of the study.

"This is precisely the same phenomenon that occurs when the pitch of an ambulance siren changes as it passes you and the frequency of the sound goes from higher to lower."

The detailed frequency measurements from ALMA allowed the authors to distinguish the characteristic spin of a disc, confirming the detection of the first disc around an extragalactic young star.
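The frequency-to-velocity conversion Henshaw describes is, for speeds far below the speed of light, v ≈ c·Δf/f. A minimal sketch of how an observed spectral-line frequency maps to a line-of-sight velocity (the line and frequencies below are assumed, illustrative values, not taken from the study):

```python
C = 299_792.458  # speed of light, km/s

def radial_velocity(f_obs_ghz, f_rest_ghz):
    """Non-relativistic Doppler shift: line-of-sight velocity in km/s.

    Positive values mean the gas is moving away from us (redshifted,
    observed frequency below the rest frequency)."""
    return C * (f_rest_ghz - f_obs_ghz) / f_rest_ghz

# Illustrative example: a millimetre-wave line with a ~345.796 GHz rest
# frequency, observed slightly shifted.
f_rest = 345.796
print(radial_velocity(345.790, f_rest))  # receding at a few km/s
print(radial_velocity(345.802, f_rest))  # approaching at a few km/s
```

Mapping this velocity across the disc, with one side shifted towards us and the other away, is what reveals the characteristic spin.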

Read more at Science Daily

Commitments needed to solve aviation's impact on our climate

Concerted efforts and commitments are needed to solve the complex trade-offs involved in reducing the impact of aviation on the climate, according to new research.

Non-CO2 emissions from aircraft -- largely of nitrogen oxides, soot and water vapour -- are known to add to global warming effects alongside the aviation sector's other CO2 emissions.

Soot triggers the formation of contrails and 'contrail cirrus', which are line-shaped clouds produced by aircraft engine exhaust.

This causes an increase in high clouds that can warm the Earth's atmosphere.

In a comprehensive assessment of the potential solutions to limit the non-CO2 emissions produced by aircraft, scientists warn there is 'no silver bullet' and a committed and co-ordinated effort from a range of stakeholders is urgently required.

The research, published today (November 28) in the Royal Society for Chemistry's journal Environmental Science: Atmospheres, outlines aviation's non-CO2 effects on the atmosphere, both in terms of climate and air quality, and how these may change in the future, as well as the effects of future technologies and fuels.

The findings are the result of a two-year study by Manchester Metropolitan University, the University of Oxford, the University of Reading and Imperial College London.

David Lee, Professor in Atmospheric Science at Manchester Metropolitan, said: "What we highlight is the inherent uncertainties that remain in some of these very complex effects on climate from non-CO2 emissions.

"More importantly, reducing the impact of emissions on the climate is not straightforward as practically all routes forward with conventional liquid hydrocarbon fuels involve 'trade-offs', mostly at the expense of emitting more CO2, whether it be technological or operational efforts.

"These trade-offs and uncertainties mean that there are no simple silver bullets or low-hanging fruit to solve the problem. What is often forgotten is, that while the non-CO2 climate impacts of, for example, an individual flight are short lived, a substantial proportion of the emitted CO2 persists for a very long time, literally tens of millennia. This means it is a difficult balancing act if reducing non-CO2 emissions leads to an increase in CO2 emissions."

Professor Keith Shine, Regius Professor of Meteorology and Climate Science at the University of Reading, is an author of the new paper.

He said: "Given the many uncertainties in the size of aviation non-CO2 climate effects, it is premature to adopt any strategy that aims to decrease non-CO2 climate effects but, at the same time, risks increasing CO2 emissions. We must be mindful that aviation affects local air quality as well as climate. Sometimes measures that improve one will be to the detriment of the other."

Aviation is responsible for around 2.5% of the global CO2 emissions caused by human activity.

However, due to the amount of non-CO2 emissions it produces, it is responsible for around 3.5% of the change in the energy balance of the atmosphere -- known as radiative forcing -- or around 4% of the increase in global mean temperatures.

The sector is difficult to decarbonise because of its strong dependence on fossil kerosene -- jet fuel -- and the long timescales involved in developing new aircraft and replacing older fleets.

Given the aviation sector's strong growth after the COVID-19 pandemic, this contribution to climate change is set to increase at a time when other sectors are battling to reduce emissions.

In the latest assessment, the researchers argue that more work on these complex trade-offs is urgently needed in the search for solutions.

This difficulty has recently been recognised by the UK government which, through the Natural Environment Research Council (NERC), has announced a £10 million research programme to help inform policy decisions in this area.

Read more at Science Daily

How shifting climates may have shaped early elephants' trunks

Researchers have provided new insights into how ancestral elephants developed their dextrous trunks.

The study, published today as an eLife Reviewed Preprint, combines multiple analyses to reconstruct feeding behaviours in the extinct longirostrine elephantiforms -- elephant-like mammals characterised by elongated lower jaws and tusks. The work is described by the editors as fundamental to our understanding of how the elongated lower jaw and long trunks evolved in these animals during the Miocene epoch, around 11-20 million years ago. It provides compelling evidence for the diversity of these structures in longirostrine gomphotheres, and their likely evolutionary responses to global climatic changes.

The findings may also shed light on why modern day elephants are the only animals able to feed themselves using their trunks.

Longirostrine gomphotheres are part of the proboscidean family -- a group of mammals including elephants and known for their elongated and versatile trunks. Longirostrine gomphotheres are notable as they underwent a prolonged evolutionary phase characterised by an exceptionally elongated lower jaw, or mandible, which is not found in later proboscideans. It is thought that their elongated mandible and trunk may have co-evolved in this group, but this change among early to late proboscideans remains incompletely understood.

"During the Early to Middle Miocene, gomphotheres flourished across Northern China," says lead author Dr. Chunxiao Li, a postdoctoral researcher at the University of Chinese Academy of Sciences, Beijing, China. "Across species there was huge diversity in the structure of the long mandible. We sought to explain why proboscideans evolved the long mandible and why it subsequently regressed. We also wanted to explore the role of the trunk in these animals' feeding behaviours, and the environmental background for the co-evolution of their mandibles and trunks."

Li and colleagues used comparative functional and eco-morphological investigations, as well as a feeding preference analysis, to reconstruct the feeding behaviour of three major families of longirostrine gomphotheres: Amebelodontidae, Choerolophodontidae and Gomphotheriidae.

To construct the feeding behaviours and determine the relation between the mandible and trunk, the team examined the crania and lower jaws of the three groups, sourced from three different museums. The structure of the mandible and tusks differed across the three groups, indicating differences in feeding behaviours. The mandibles of Amebelodontidae were generally shovel-like and the tusks were flat and wide. Gomphotheriidae had clubbed lower tusks and a more narrow mandible, while Choerolophodontidae completely lacked mandibular tusks and their lower jaw was long and trough-like.

Next, the team conducted an analysis of the animals' enamel isotopes to determine the distribution and ecological niches of the three families. The results indicated that Choerolophodontidae lived in a relatively closed environment, whereas Platybelodon, a member of the Amebelodontidae family, lived in a more open habitat, such as grasslands. Gomphotheriidae appeared to fill a niche somewhere in between these closed and open habitats.

A Finite Element analysis helped the team determine the advantages and disadvantages of the mandible and tusk structure between each group. Their data indicated that the Choerolophodontidae mandible was specialised for cutting horizontally or slanted-growing plants, which may explain the absence of mandibular tusks. The Gomphotheriidae mandible was found to be equally suited for cutting plants growing in all directions. Platybelodon had structures specialised for cutting vertically growing plants, such as soft-stemmed herbs, which would have been more common in open environments.

The three families also showed differences in their stages of trunk evolution, which could be inferred from the narial structure -- the region surrounding the nostrils. The narial region in Choerolophodontidae suggested that they had a relatively primitive, clumsy trunk. In Gomphotheriidae, the narial region was most similar to that of modern-day elephants, suggesting they had a relatively flexible trunk. The trunk of Platybelodon may be the first example of a proboscidean trunk with the ability to coil and grasp. The evolutionary level of the trunk appeared to relate to the ability of the mandible to cut horizontally, strongly suggesting a co-evolution between the trunk and the mandible in longirostrine gomphotheres.

During the Mid-Miocene Climate Transition, which caused regional drying and the expansion of more open ecosystems, Choerolophodontidae experienced a sudden regional extinction and Gomphotheriidae numbers also declined in Northern China. The study suggests that the development of the coiling and grasping trunk in Platybelodon allowed this group to survive in greater numbers in their open environments. This may also explain why other animals with trunks, such as tapirs, never developed such dextrous trunks as elephants, as they never moved into open lands.

"Our cross-disciplinary team is dedicated to introducing multiple quantitative research methods to explore paleontology," says co-author Ji Zhang, associate professor of structural engineering at Huazhong University of Science and Technology, Wuhan, China. "Modern computational mechanics and statistics have injected new vitality into traditional fossil research."

The main limitation of this work is the lack of discussion comparing the team's results with the development of gigantism and long limbs in proboscideans from the same period, according to eLife's editors. The authors add that such analysis could add to our understanding of how changing feeding behaviours related to wider differences in the animals' body shapes and sizes during this time.

Read more at Science Daily

Understanding subjective beliefs could be vital to tailoring more effective treatments for depression and ADHD

Taking into account whether people believe they are receiving a real treatment or a fake one (placebo) could provide better insights that could help improve interventions for conditions such as depression and ADHD.

A team of psychologists, led by Professor Roi Cohen Kadosh from the University of Surrey, analysed five independent studies that covered different types of neurostimulation treatments to understand the role of patients' subjective beliefs.

These patients included both clinical patients being treated for ADHD and depression, as well as healthy adults.

The study found that patients' beliefs about whether they were receiving real or placebo treatments explained the treatment outcomes in four of the five studies.

On some occasions, the subjects' beliefs explained the treatment's results better than the actual treatment itself.

Assumptions about the treatment intensity also played a significant role in the treatment.

Professor Roi Cohen Kadosh from the University of Surrey said that the results have provided a twist that scientists must consider in future research:

"The common wisdom is that the same medical treatment would produce similar results across patients, but our latest study suggests a fascinating twist. While you'd expect uniform improvements in a group of people with depression undergoing the same neurostimulation treatment, outcomes can vary widely.

"What's truly eye-opening is that this variability could be largely influenced by the participants' own beliefs about the treatment they're receiving. In essence, if an individual believes they're receiving an effective treatment -- even when given a placebo -- that belief alone might contribute to significant improvements in their condition."

In the first study analysed, 121 participants were treated with different forms of repetitive Transcranial Magnetic Stimulation (rTMS) for depression.

The results showed that participants' perceptions about receiving real or placebo treatment mattered more than the actual type of rTMS in reducing depression.

The second study involved 52 older people with late-life depression who received either real or placebo deep rTMS.

Surrey researchers found that the effect of treatment on reducing depression scores depended on the combination of the participants' perceptions about receiving real or placebo treatment and the actual treatment they received.

In the third dataset, researchers investigated the effects of home-based Transcranial Direct Current Stimulation (tDCS) treatment on 64 adults diagnosed with ADHD.

At the end of the study, participants' beliefs about the treatment they thought they had received were also collected.

This study differed from the first two as both the subjects' beliefs and the actual treatment had a dual effect on reducing inattention scores.

In the fourth study, 150 healthy participants received varying doses of tDCS in an experiment on mind wandering.

Those who believed they had received a more potent dose reported more mind wandering, even though the actual dose was not a significant factor.

The fifth study analysed the impact of transcranial random noise stimulation on working memory.

Unlike previous studies, participants' beliefs didn't affect the results, highlighting the varying influence of beliefs in brain stimulation research.

Thus, Roi Cohen Kadosh and his team show how subjective beliefs can vary in their effect on research -- from fully explaining results beyond the actual treatment, to interacting with the treatment, to having no influence at all.

Dr Shachar Hochman, a co-author on this work from the University of Surrey, said:

"The concept that a placebo or sham treatment can mimic genuine treatment effects is well-established in science. While researchers have closely monitored this phenomenon, it has been typically catalogued separately from the in-depth analyses of the actual treatment outcomes. What sets our study apart is that we have brought together these two datasets -- subjective beliefs and objective treatment measures. This has the potential to reveal new insights into treatment efficacy."

Read more at Science Daily

Nov 28, 2023

A gamma-ray pulsar milestone inspires innovative astrophysics and applications

The U.S. Naval Research Laboratory (NRL), in conjunction with the international Fermi Large Area Telescope Collaboration, has announced the discovery of nearly 300 gamma-ray pulsars in the publication of their Third Catalog of Gamma-Ray Pulsars. This milestone comes 15 years after the launch of Fermi in 2008, when there were fewer than ten known gamma-ray pulsars.

"Work on this important catalog has been going on in our group for years," said Paul Ray, Ph.D., head of the High Energy Astrophysics and Applications Section at NRL.

"Our scientists and postdocs have been able to both discover and analyze the timing behavior and spectra of many of these newfound pulsars as part of our quest to further our understanding of these exotic stars that we are able to use as cosmic clocks."

Pulsars are formed when massive stars have burned through their fuel supply and become unable to resist the inward pull of their own gravity.

This results in the star collapsing into a dense, spinning magnetized neutron star.

Their spinning magnetic fields send out beams of gamma rays, the most energetic form of light.

As these beams sweep across the Earth, the highly sensitive Fermi gamma-ray telescope can observe their periodic pulses of energy.

With more than 15 years of data, Fermi has transformed the field of pulsar research.

"We have been very excited about how many millisecond pulsars (MSPs) we have been able to detect using these gamma rays," said Matthew Kerr, Ph.D., an NRL astrophysicist.

"We are able to study these objects that began as young pulsars in a binary system. Like a spinning top, they eventually slowed down and became inert. Over the past hundreds of millions of years, their binary companions dumped matter on to them, causing their speed to increase again, very dramatically and far faster than before, "recycling" these pulsars into MSPs. These high speed MSPs are now some of Nature's most precise timekeepers."

Scientists have been using these cosmic clocks in experiments called Pulsar Timing Arrays.

By searching for tiny deviations in the times at which the pulses arrive, scientists have been able to search for ripples in spacetime.

These ripples, known as gravitational waves, are produced when very massive objects, like pulsars, accelerate very quickly.

Very strong gravitational wave sources indicate a cataclysmic crash of dense, compact objects such as neutron stars and black holes.

Recently, several pulsar timing array collaborations, including several NRL researchers, published the first compelling evidence for very low-frequency gravitational waves, likely from the merger of supermassive black holes.

"These are such exciting results," said Thankful Cromartie, Ph.D., a National Research Council Research Associate at NRL.

"These low frequency gravitational waves allow us to peer into the centers of massive galaxies and better understand how they were formed."

The pulsar timing array results have important practical applications as well.

The spacetime distortions set a limit on how precisely we can use pulsars for critical navigation and timing.

In pulsar-based navigation, these spinning pulsars play much the same role as GPS satellites do, but we are able to use them far beyond the Earth's orbit.

"Now we know where that ultimate stability limit is," said Dr. Ray.

Fermi's gamma-ray detection abilities are also having an impact on pulsar timing array work.

"Previously, once we found an MSP we had to hand it off to radio astronomers to monitor with huge telescopes," said Dr. Kerr.

"What we have found is that Fermi is sensitive enough by itself to constrain these gravitational waves and, unlike radio waves, which are bent like the light in a prism as they travel to earth, the gamma rays shoot straight to us. This reduces potential systemic errors in measurements."

For Megan DeCesar, Ph.D., a George Mason University scientist working at NRL, the most intriguing aspect of the new work is the dramatic increase in "spider" pulsars.

"Spider pulsars are named after arachnids that eat their smaller mates," DeCesar said.

"Something similar can happen when a neutron star and its binary companion are very close to each other and the MSP "recycling" process gets a little carried away. The intense radiation and particle wind from the pulsar eats away at the surface of the other star, resulting in a puffball of evaporated material."

When compared to radio observations, Fermi is particularly adept at finding these "spiders" as, in many cases, radio waves are eclipsed as the pulsar beam passes the remnants of the companion star.

Gamma rays, however, are capable of passing right through. "While it may be that spider systems are also intrinsically brighter in gamma rays, studying them will help us to understand their origins and the bonanza of discoveries we have made with Fermi," said DeCesar.

Read more at Science Daily

Was 'witchcraft' in the Devil's Church in Koli based on acoustic resonance? The crevice cave has a unique soundscape

The national park of Koli in eastern Finland is home to a famous, 34-metre-long crevice cave known as Pirunkirkko, or Devil's Church in English. In folklore, this crevice cave was known as a place where local sages would meet to contact the spirit world. Even today, the place is visited by practitioners of shamanism, who organise drumming sessions in the cave.

A new article by Riitta Rainio, a researcher of archaeology at the University of Helsinki, and Elina Hytönen-Ng, a researcher of cultural studies at the University of Eastern Finland, investigates the acoustics of the Devil's Church and explores whether the acoustic properties of the cave could explain the beliefs associated with it, and why it was chosen as a place for activities and rituals involving sound.

The researchers found that the Devil's Church houses a distinct resonance phenomenon that amplifies and lengthens sound at a specific frequency.

This phenomenon may have significantly impacted the beliefs and experiences associated with the cave.

Resonance as a booster of healing rituals and drumming sessions

The researchers reviewed historical archives showing that several known sages and healers operated in the Koli area.

The most famous of the sages was a man known as Kinolainen, sometimes also referred to as Tossavainen, who used the Devil's Church for magical rituals.

"According to folklore, Kinolainen would take his patients to the 'church' to talk with the Devil about the causes and cures of their ailments. This kind of a healing ritual often included loud yelling, stomping, shooting and banging," Rainio says, summarising traditional records.

Hytönen-Ng also interviewed and observed a modern-day practitioner of shamanism who uses the Devil's Church for rituals.

According to the practitioner, there is a special energy in the cave, creating a strong connection to the surrounding nature and to one's own roots.

"The practitioner told in the interview that drumming sessions especially at the back of the cave have opened up 'new horizons'."

According to Rainio, acoustic measurements conducted in the corridor-like, smooth walled back of the cave show a strong resonance phenomenon.

The phenomenon is caused by a standing wave between the smooth parallel walls, generating a tone at the natural frequency of the cave, 231 Hz, that stays audible for around one second after sharp impulses, such as clapping, drumming or loud bangs.

Tones vocalised in the cave near the 231 Hz frequency are amplified and lengthened by the cave.

"We recorded the shamanic practitioner and found that they repeatedly vocalised tones at 231 Hz, which were then amplified by the cave at its natural frequency."

A rare phenomenon in the natural environment

Resonance is a common phenomenon in the built environment, especially in small rooms, but it is rare in the natural environment where smooth and solid, parallel surfaces are rarely found.

According to the researchers, it can therefore be assumed that the resonance occurring in the innermost part of the Devil's Church has been an exceptional sound phenomenon for the people living in the region centuries ago.

Similar distinct resonances in the natural environment have been measured, for example, in the Palaeolithic caves of France and Spain, occurring especially near paintings on cave walls.

Rainio and Hytönen-Ng suspect that a resonance-amplified, persistent tone has probably been audible on the background of rituals performed in the Devil's Church.

According to the researchers, the effect of this resonance may have been subtle and unconscious, yet it may have significantly shaped the beliefs and experiences associated with the cave.

"Where a researcher of acoustics hears as resonance, people of the past may have sensed the presence of a spirit, and a shamanic practitioner may feel the presence of an exceptional energy, each according to their background."

The study thus provides an example of how resonance can be used to establish concrete communication and dialogue with a physical space, site or the natural environment.

Read more at Science Daily

Macaque trials offer hope in pneumonia vaccine development

The global impact of the coronavirus pandemic has ignited a renewed focus on emerging and re-emerging infectious diseases. Researchers at Osaka Metropolitan University are making great strides in combating pneumococcal pneumonia, one of the leading causes of respiratory deaths worldwide.

Despite the existence of vaccines against pneumococcal infections such as otitis media, sinusitis, and meningitis, the prevalence of pneumococcal pneumonia remains high.

To date, around 100 serotypes of Streptococcus pneumoniae have been identified, and the increase in pneumococcal infections caused by serotypes not covered by existing vaccines has become a concern.

This situation underscores the need for a more versatile vaccine.

Building on their previous success in 2019, when they developed a mucosal vaccine that can induce antigen-specific mucosal immune responses, mainly immunoglobulin A (IgA), on the target mucosal surface, a research team led by Professor Satoshi Uematsu and Associate Professor Kosuke Fujimoto from the Department of Immunology and Genomics at the Graduate School of Medicine, Osaka Metropolitan University, has now set out to bridge the gap in pneumococcal pneumonia vaccination efficacy.

To successfully develop a novel pneumococcal vaccine, the research team combined its proprietary mucosal vaccine technology with pneumococcal surface proteins that can cover a wide range of serotypes.

Experiments conducted on mice and macaques have demonstrated the vaccine's efficacy in suppressing pneumococcal pneumonia in the target animal groups.

Read more at Science Daily

Future floods: Global warming intensifies heavy rain -- even more than expected

The intensity and frequency of extreme rainfall increase exponentially with global warming, a new study finds. The analysis by researchers from the Potsdam Institute for Climate Impact Research (PIK) shows that state-of-the-art climate models significantly underestimate how much extreme rainfall increases under global warming -- meaning that extreme rainfall could increase more quickly than climate models suggest.

"Our study confirms that the intensity and frequency of heavy rainfall extremes are increasing exponentially with every increment of global warming," explains Max Kotz, lead-author of the study published in the Journal of Climate. These changes follow the physical theory of the classic Clausius-Clapeyron relation of 1834, which established that warmer air can hold more water vapour.

"State-of-the-art climate models vary on how strongly extreme rainfall scales with global warming and that they underestimate it compared to historical observations."

"Climate impacts on society have been calculated using climate models. Now our findings suggest that these impacts could be much worse than we thought. Extreme rainfall will be heavier and more frequent. Society needs to be prepared for this," says PIK department head and author of the study Anders Levermann.

Changes in the frequency and intensity of daily rainfall extremes over land can impact social welfare, the economy and social stability, given their link to flooding but also ground-water availability, which can cause considerable loss of life and financial losses.

Stronger increases of extremes across tropical regions

The researchers at PIK analysed the intensity and frequency of daily precipitation extremes over land in 21 state-of-the-art climate simulations (CMIP-6) and compared the changes projected by CMIP-6 models to those observed historically.

The method they applied draws on pattern-filtering techniques, allowing them to separate which changes in the climate system are forced by human emissions, and which are not.

Read more at Science Daily

Nov 27, 2023

High-power fiber lasers emerge as a pioneering technology

Optical scientists have found a new way to significantly increase the power of fibre lasers while maintaining their beam quality, making them a future key defence technology against low-cost drones and for use in other applications such as remote sensing.

Researchers from the University of South Australia (UniSA), the University of Adelaide (UoA) and Yale University have demonstrated the potential of multimode optical fibre to scale up power in fibre lasers threefold to ninefold without degrading the beam quality, so that the laser can still focus on distant targets.

The breakthrough is published in Nature Communications.

Co-first author Dr Linh Nguyen, a researcher at UniSA's Future Industries Institute, says the new approach will allow the industry to continue squeezing out extremely high power from fibre lasers, make them more useful for the defence industry, and for remote sensing applications and gravitational wave detection.

"High-power fibre lasers are vital in manufacturing and defence, and becoming more so with the proliferation of cheap, unmanned aerial vehicles (drones) in modern battlefields," Dr Nguyen says.

"A swarm of cheap drones can quickly drain the missile resource, leaving military assets and vehicles with depleted firing power for more combat-critical missions. High-power fibre lasers, with their extremely low-cost-per-shot and speed of light action, are the only feasible defence solution in the long run.

"This is known as asymmetric advantage: a cheaper approach can defeat a more expensive, high-tech system by playing the large number."

In delivering an asymmetric advantage, this advanced capability has the potential to provide a strong deterrent effect, aligning well with the Defence Strategic Review and AUKUS Pillar 2 objectives.

Dr Ori Henderson-Sapir, project investigator at the UoA's Institute for Photonics and Advanced Sensing, says that Australia has a long history of developing innovative fibre optics technologies.

"Our research launches Australia into a world-leading position to develop the next generation of high-power fibre lasers, not only for defence applications, but to aid new scientific discoveries."

Read more at Science Daily

'Not dead yet': Experts identify interventions that could rescue 1.5°C

To meet the goals of the Paris Agreement and limit global heating to 1.5°C, global annual emissions will need to drop radically over the coming decades. Today [22 Nov], a new paper from climate economists at the University of Oxford says that this goal could still be within our reach. They identify key "sensitive intervention points" that could unlock significant progress towards the Paris Agreement with the least risk and highest impact. These include:

  • Investing in clean energy technologies with consistent cost declines
  • Enacting central bank policies to reduce the value of polluting assets
  • Improving climate-related financial risk disclosure.


'This is not to suggest that reaching the Paris goals will be straightforward, or easy, but like Achilles' heel, our research points to the areas that could have an outsized impact,' says lead author Dr Penny Mealy, associate at the Institute for New Economic Thinking, University of Oxford.

'We need climate policies which are pragmatic and practical, designed with an understanding of where the economy and technologies are capable of quickly transforming our economies for the better.

These are those policy areas. This is how we design policy for 1.5°C,' affirms co-author Dr Pete Barbrook-Johnson of the Smith School of Enterprise and the Environment.

The research also highlights the areas where interventions will be more difficult and less impactful, including nuclear fission, which would be slow to roll out and could have unintended consequences; and carbon capture and storage, which presents both high barriers and risks.

To reach their conclusions, the authors devised a new framework for identifying sensitive intervention points, or SIPs, that have the characteristics necessary to radically decarbonize our global economy.

SIPs include critical tipping points -- like renewable energy becoming cheaper than coal; critical points in networks -- like powerful political figures or important technologies; and critical points in time, or "windows of opportunity," that might prime existing systems for change, such as the Covid-19 pandemic.

These intervention points must be assessed by the ease with which they can be implemented, their impact potential, and the potential for creating risks.

The authors stress that, while the framework is highly applicable to climate change, it could also be applied to solving other economic and social problems.

The ratings provided for each SIP intervention were applied subjectively based on discussions with experts, literature research, and modelling.

The framework can and should be applied regularly to reassess priorities as new data and insights become available, the authors say.

Read more at Science Daily

Apology psychology: Breaking gender stereotypes leads to more effective communication

From social media to the workplace, non-stereotypical apologies can help repair trust, according to new study involving a University of Arizona researcher.

Saying "I'm sorry," especially in the workplace, can be tricky terrain. Delivering an effective apology can help resolve conflicts, restore trust and promote collaboration among coworkers.

But what works best?

A research team including a University of Arizona faculty member says that to make your next apology more effective, use language that goes against gender stereotypes.

Sarah Doyle, associate professor in the Department of Management and Organizations in the Eller College of Management, said the team wanted to find out what constitutes an effective apology in the workplace -- and whether the content of a successful apology looks different depending on the gender of the apologizer. The research was published in the Journal of Applied Psychology.

The team used past research to define "masculine" and "feminine" language, including a study from 2003 that defined masculine language as having more agency and being more assertive, confident and self-assured, and feminine language as warm, communal and nurturing. The team labeled apologies with more masculine language as "agentic," and those with more feminine language as "communal." Overall, Doyle's team found that those who "violated" gender stereotypes were seen as delivering more effective apologies.

"We found that women delivering masculine-style apologies benefited because they were seen as displaying higher levels of assertiveness and enhancing their perceived competence," Doyle said. "The men delivering apologies with more stereotypically feminine language were seen as having greater interpersonal sensitivity that enhanced their perceived benevolence or warmth."

Starting with celebrities

The team began its series of four studies by searching through a platform that is a well-known hotspot for celebrity apologies: X, formerly known as Twitter. They ultimately examined 87 apology tweets from celebrities, including rapper and singer Lizzo, comedian Kevin Hart, actor Tyler Posey and television personality Kendra Wilkinson. Public reaction to those tweets supported the idea of apologizers benefiting by violating gender stereotypes, especially for the women in the sample, Doyle said.

"The female celebrities who delivered apologies that were higher in these masculine qualities were especially likely to receive these benefits," Doyle said. "There were higher 'like' counts and the sentiments in response to those apology tweets were much more positive."

For women delivering an apology on the platform, a one-point increase in agentic language, as measured on a five-point scale, returned an average of more than 17,000 additional likes, Doyle said.

Everyday apologies


In the second study, 366 working adults participated in a scenario in which their accountant sends them an email apologizing for making a mistake on their taxes. Individuals were randomly assigned to one of four groups classified by a male or female accountant delivering a stereotypically masculine or feminine apology. Participants then rated different components of the apology and determined whether they would like to continue using the accountant. The data lined up with the results from the first study, showing, for both male and female apologizers, that the counter-stereotypical apology was more effective.

The third study involved 441 individuals participating in the same accounting scenario but asked them to respond to the accountant's apology and determine whether they wanted to keep working with them. The fourth study was similar to the third, but used a scenario involving a paperwork error by a nurse to see if using a more traditionally female occupation would change the results. The data from each study showed counter-stereotypical apologies were seen as more effective, especially for female apologizers.

Across the studies using the accounting or nursing scenarios, researchers found that, for women, delivering a counter-stereotypical apology increased the apology's perceived effectiveness by an average of 9.7%. For men, using a counter-stereotypical apology increased perceived effectiveness by an average of 8.2%.

"It's important to mention that we did not find that men and women are penalized for giving a stereotypical apology," Doyle said, "Rather, they benefit from giving a counter-stereotypical one. Thus, any apology is likely to be better than no apology at all."

Sorry to ask, but what did we learn?

Put simply, there are a lot of different ways to apologize, and it can help to think it through, Doyle said.

"I think people assume that 'I'm sorry' is a consistent and effective way to apologize, but there are a lot of different ways to say that," Doyle explained. "Not all apologies are the same, and it can help to be a little bit more deliberate about the language that you're using and the content that is included in your apology."

The research team is hoping the results can lead people to think beyond how often we apologize, and to put more focus on how we communicate.

"Much of the literature suggests women apologize too much and men don't apologize enough," Doyle said. "But I think the frequency conversation is a bit oversimplified. It's not just about whether people should apologize more or less, but how we can construct apologies differently. It's what you include in that apology that's really going to matter."

Read more at Science Daily

Ultra-processed foods and higher risk of mouth, throat and esophagus cancers

Eating more ultra-processed foods (UPFs) may be associated with a higher risk of developing cancers of the upper aerodigestive tract (including the mouth, throat and esophagus), according to a new study led by researchers from the University of Bristol and the International Agency for Research on Cancer (IARC). The authors of this international study, which analysed diet and lifestyle data on 450,111 adults who were followed for approximately 14 years, say obesity associated with the consumption of UPFs may not be the only factor to blame. The study is published today [22 November] in the European Journal of Nutrition.

Several studies have identified an association between UPF consumption and cancer, including a recent study which looked at the association between UPFs and 34 different cancers in the largest cohort study in Europe, the European Prospective Investigation into Cancer and Nutrition (EPIC) cohort.

As more evidence emerges about the associations between eating UPFs and adverse health outcomes, researchers from the Bristol Medical School and IARC wanted to explore this further.

Since many UPFs have an unhealthy nutritional profile, the team sought to establish whether the association between UPF consumption and head and neck cancer and esophageal adenocarcinoma (a cancer of the esophagus) in EPIC could be explained by an increase in body fat.

Results from the team's analyses showed that eating 10% more UPFs is associated with a 23% higher risk of head and neck cancer and a 24% higher risk of esophageal adenocarcinoma in EPIC.

Increased body fat only explained a small proportion of the statistical association between UPF consumption and the risk of these upper-aerodigestive tract cancers.

Fernanda Morales-Berstein, a Wellcome Trust PhD student at the University of Bristol and the study's lead author, explained: "UPFs have been associated with excess weight and increased body fat in several observational studies. This makes sense, as they are generally tasty, convenient and cheap, favouring the consumption of large portions and an excessive number of calories. However, it was interesting that in our study the link between eating UPFs and upper-aerodigestive tract cancer didn't seem to be greatly explained by body mass index and waist-to-hip ratio."

The authors suggest that other mechanisms could explain the association.

For example, additives including emulsifiers and artificial sweeteners which have been previously associated with disease risk, and contaminants from food packaging and the manufacturing process, may partly explain the link between UPF consumption and upper-aerodigestive tract cancer in this study.

However, Fernanda Morales-Berstein and colleagues did add caution regarding their findings and suggest that the associations between UPF consumption and upper-aerodigestive tract cancers found in the study could be affected by certain types of bias.

This would explain why they found evidence of an association between higher UPF consumption and increased risk of accidental deaths, which is highly unlikely to be causal.

George Davey Smith, Professor of Clinical Epidemiology and Director of the MRC Integrative Epidemiology Unit at the University of Bristol, and co-author on the paper, said: "UPFs are clearly associated with many adverse health outcomes, yet whether they actually cause these, or whether underlying factors such as general health-related behaviours and socioeconomic position are responsible for the link, is still unclear, as the association with accidental deaths draws attention to."

Inge Huybrechts, head of the Lifestyle Exposures and Interventions team at IARC, added: "Cohorts with long-term dietary follow-up assessments, considering also contemporary consumption habits, are needed to replicate this study's findings, as the EPIC dietary data were collected in the 1990s, when the consumption of UPFs was still relatively low. As such, associations may potentially be stronger in cohorts including recent dietary follow-up assessments."

Further research is needed to identify other mechanisms, such as food additives and contaminants, which may explain the links observed.

However, based on the finding that body fat did not greatly explain the link between UPF consumption and upper-aerodigestive tract cancer risk in this study, Fernanda Morales-Berstein suggested: "Focussing solely on weight loss treatment, such as Semaglutide, is unlikely to greatly contribute to the prevention of upper-aerodigestive tract cancers related to eating UPFs."

Dr Helen Croker, Assistant Director of Research and Policy at World Cancer Research Fund, added: "This study adds to a growing pool of evidence suggesting a link between UPFs and cancer risk. The association between a higher consumption of UPFs and an increased risk of developing upper-aerodigestive tract cancer supports our Cancer Prevention Recommendations to eat a healthy diet, rich in wholegrains, vegetables, fruit, and beans."
