Jan 6, 2024

New images reveal what Neptune and Uranus really look like

Neptune is fondly known for its rich blue colour and Uranus for its green -- but a new study reveals that the two ice giants are actually far closer in colour than typically thought.

The planets' true shades have been confirmed by research led by Professor Patrick Irwin of the University of Oxford, published today in Monthly Notices of the Royal Astronomical Society.

He and his team found that both worlds are in fact a similar shade of greenish blue, despite the commonly-held belief that Neptune is a deep azure and Uranus has a pale cyan appearance.

Astronomers have long known that most modern images of the two planets do not accurately reflect their true colours.

The misconception arose because images captured of both planets during the 20th century -- including by NASA's Voyager 2 mission, the only spacecraft to fly past these worlds -- were recorded through separate colour filters.

The single-colour images were later recombined to create composite colour images, which were not always accurately balanced to achieve a "true" colour image, and -- particularly in the case of Neptune -- were often made "too blue."

In addition, the early Neptune images from Voyager 2 were strongly contrast enhanced to better reveal the clouds, bands, and winds that shape our modern perspective of Neptune.

Professor Irwin said: "Although the familiar Voyager 2 images of Uranus were published in a form closer to 'true' colour, those of Neptune were, in fact, stretched and enhanced, and therefore made artificially too blue."

"Even though the artificially-saturated colour was known at the time amongst planetary scientists -- and the images were released with captions explaining it -- that distinction had become lost over time."

"Applying our model to the original data, we have been able to reconstitute the most accurate representation yet of the colour of both Neptune and Uranus."

In the new study, the researchers used data from the Hubble Space Telescope's Space Telescope Imaging Spectrograph (STIS) and the Multi Unit Spectroscopic Explorer (MUSE) on the European Southern Observatory's Very Large Telescope. In both instruments, each pixel records a continuous spectrum of colours.

This means that STIS and MUSE observations can be unambiguously processed to determine the true apparent colour of Uranus and Neptune.
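As a rough illustration of how a per-pixel spectrum pins down an apparent colour, the sketch below integrates a toy spectrum against crude single-Gaussian stand-ins for the CIE 1931 colour-matching functions and maps the result to sRGB. The wavelength grid, the "ice giant" spectrum, and the Gaussian fits are all simplifications for this sketch, not the study's pipeline:

```python
import numpy as np

# Crude single-Gaussian approximations to the CIE 1931 colour-matching
# functions (real pipelines use the tabulated functions).
def cmf(wl):
    x = (1.06 * np.exp(-0.5 * ((wl - 599.0) / 38.0) ** 2)
         + 0.36 * np.exp(-0.5 * ((wl - 446.0) / 19.0) ** 2))
    y = np.exp(-0.5 * ((wl - 556.0) / 47.0) ** 2)
    z = 1.78 * np.exp(-0.5 * ((wl - 449.0) / 28.0) ** 2)
    return x, y, z

def spectrum_to_rgb(wl, spectrum):
    """Integrate the spectrum against the CMFs, then map XYZ to linear sRGB."""
    xb, yb, zb = cmf(wl)
    dw = wl[1] - wl[0]
    X, Y, Z = (np.sum(spectrum * band) * dw for band in (xb, yb, zb))
    XYZ = np.array([X, Y, Z]) / max(Y, 1e-12)   # normalise to luminance
    M = np.array([[ 3.2406, -1.5372, -0.4986],  # XYZ -> linear sRGB
                  [-0.9689,  1.8758,  0.0415],
                  [ 0.0557, -0.2040,  1.0570]])
    return np.clip(M @ XYZ, 0.0, 1.0)

wl = np.linspace(380.0, 780.0, 201)
# Toy "ice giant" spectrum: bright in the blue-green, dark in the red,
# where methane absorbs -- purely invented numbers.
spectrum = np.exp(-0.5 * ((wl - 490.0) / 60.0) ** 2)
r, g, b = spectrum_to_rgb(wl, spectrum)
print(r, g, b)  # green and blue channels dominate the red one
```

Because the whole visible spectrum is captured in each pixel, no colour-balance guesswork is needed: the colour falls out of the integration, which is why such data can anchor the re-balancing of filter-based composites.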

The researchers used these data to re-balance the composite colour images recorded by the Voyager 2 camera, and also by the Hubble Space Telescope's Wide Field Camera 3 (WFC3).

This revealed that Uranus and Neptune are actually a rather similar shade of greenish blue. The main difference is that Neptune has a slight hint of additional blue, which the model reveals to be due to a thinner haze layer on that planet.

The study also provides an answer to the long-standing mystery of why Uranus's colour changes slightly during its 84-year orbit of the Sun.

The authors came to their conclusion after first comparing images of the ice giant to measurements of its brightness, which were recorded by the Lowell Observatory in Arizona from 1950 to 2016 at blue and green wavelengths.

These measurements showed that Uranus appears a little greener at its solstices (i.e. summer and winter), when one of the planet's poles is pointed towards our star. But during its equinoxes -- when the Sun is over the equator -- it has a somewhat bluer tinge.

Part of the reason for this was already known: Uranus has a highly unusual spin.

It effectively spins almost on its side during its orbit, meaning that during the planet's solstices either its north or south pole points almost directly towards the Sun and Earth.

This is important, the authors said, because any changes to the reflectivity of the polar regions would therefore have a big impact on Uranus's overall brightness when viewed from our planet.

What astronomers were less clear about was how or why this reflectivity differs.

This led the researchers to develop a model which compared the spectra of Uranus's polar regions to its equatorial regions.

It found that the polar regions are more reflective at green and red wavelengths than at blue wavelengths, partly because methane, which absorbs red light, is about half as abundant near the poles as at the equator.

However, this wasn't enough to fully explain the colour change, so the researchers added a new variable to the model in the form of a 'hood' of gradually thickening icy haze, which has previously been observed over the summer, sunlit pole as the planet moves from equinox to solstice.

Astronomers think this is likely to be made up of methane ice particles.

When simulated in the model, the ice particles further increased the reflection at green and red wavelengths at the poles, offering an explanation as to why Uranus is greener at the solstice.
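The qualitative logic -- less red-absorbing methane plus a brighter, green-and-red-scattering haze makes the sunlit pole relatively greener -- can be captured in a toy reflectance model. Every number below is invented for illustration and is not taken from the study:

```python
import numpy as np

# Three broad bands: blue, green, red. Methane absorbs increasingly toward
# the red; icy haze scatters brightly, mostly at green/red wavelengths.
bands = ["blue", "green", "red"]
methane_abs = np.array([0.5, 2.5, 6.0])   # absorption strength per band
haze_boost = np.array([0.3, 1.0, 1.0])    # haze scatters green/red more

def reflectance(methane, haze):
    # Gas absorption dims the deep atmosphere; haze adds bright scattering.
    return 0.7 * np.exp(-methane * methane_abs) + 0.2 * haze * haze_boost

equator = reflectance(methane=0.10, haze=0.1)
pole = reflectance(methane=0.05, haze=0.5)  # half the methane, thicker haze

# The green-to-blue ratio rises at the pole, so the planet looks greener
# when a sunlit pole faces Earth at solstice.
print(equator[1] / equator[0], pole[1] / pole[0])
```

With these made-up numbers the polar green-to-blue ratio exceeds the equatorial one, reproducing the direction of the seasonal colour shift described above.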

Professor Irwin said: "This is the first study to match a quantitative model to imaging data to explain why the colour of Uranus changes during its orbit."

"In this way, we have demonstrated that Uranus is greener at the solstice due to the polar regions having reduced methane abundance but also an increased thickness of brightly scattering methane ice particles."

Dr Heidi Hammel, of the Association of Universities for Research in Astronomy (AURA), who has spent decades studying Neptune and Uranus but was not involved in the study, said: "The misperception of Neptune's colour, as well as the unusual colour changes of Uranus, have bedevilled us for decades. This comprehensive study should finally put both issues to rest."

The ice giants Uranus and Neptune remain a tantalising destination for future robotic explorers, building on the legacy of Voyager in the 1980s.

Professor Leigh Fletcher, a planetary scientist from the University of Leicester and co-author of the new study, said: "A mission to explore the Uranian system -- from its bizarre seasonal atmosphere, to its diverse collection of rings and moons -- is a high priority for the space agencies in the decades to come."

However, even a long-lived planetary explorer, in orbit around Uranus, would only capture a short snapshot of a Uranian year.

Read more at Science Daily

'Juvenile T. rex' fossils are a distinct species of small tyrannosaur

A new analysis of fossils believed to be juveniles of T. rex now shows they were adults of a small tyrannosaur, with narrower jaws, longer legs, and bigger arms than T. rex. The species, Nanotyrannus lancensis, was first named decades ago but later reinterpreted as a young T. rex.

The first skull of Nanotyrannus was found in Montana in 1942, but for decades, paleontologists have gone back and forth on whether it was a separate species, or simply a juvenile of the much larger T. rex.

Dr Nick Longrich, from the Milner Centre for Evolution at the University of Bath (UK), and Dr Evan Saitta, from the University of Chicago (USA) re-analysed the fossils, looking at growth rings, the anatomy of Nanotyrannus, and a previously unrecognized fossil of a young T. rex.

Measuring the growth rings in Nanotyrannus bones, they showed that the rings became more closely packed towards the outside of the bone -- its growth was slowing. This suggests these animals were nearly full size, not fast-growing juveniles.

Modelling the growth of the fossils showed the animals would have reached a maximum of around 900-1,500 kilograms and five metres -- about 15 per cent of the size of the giant T. rex, which grew to 8,000 kilograms and nine metres or more.
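The logic of the growth-ring argument -- yearly increments that shrink imply an animal approaching its adult size -- can be illustrated with a toy extrapolation. The increments, the current mass, and the geometric-decay assumption are all invented for this sketch, not the paper's data or method:

```python
import numpy as np

# Hypothetical annual growth-ring increments (kg gained per year) for one
# individual, youngest ring last -- invented numbers.
increments = np.array([220.0, 180.0, 140.0, 105.0, 80.0, 60.0])
mass_now = 900.0  # hypothetical current body mass in kg

# Rings packed ever closer toward the bone surface => increments shrink
# roughly geometrically, with a common ratio below 1.
ratio = np.mean(increments[1:] / increments[:-1])

# Summing the remaining geometric series projects the asymptotic mass:
# future growth ~ next_increment / (1 - ratio).
next_inc = increments[-1] * ratio
asymptotic_mass = mass_now + next_inc / (1.0 - ratio)
print(f"projected adult mass of about {asymptotic_mass:.0f} kg")
```

Under these made-up numbers the projection lands near the low end of the 900-1,500 kg range quoted above; a fast-growing juvenile T. rex would instead show increments that hold steady or grow, pushing the projection far higher.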

The researchers have published their findings in Fossil Studies.

"When I saw these results I was pretty blown away," said Longrich. "I didn't expect it to be quite so conclusive.

"If they were young T. rex they should be growing like crazy, putting on hundreds of kilograms a year, but we're not seeing that.

"We tried modeling the data in a lot of different ways and we kept getting low growth rates. This is looking like the end for the hypothesis that these animals are young T. rex."

Supporting the existence of distinct species, the researchers found no evidence of fossils combining features of both Nanotyrannus and T. rex -- which would exist if the one turned into the other. Every fossil they examined could be confidently identified as one species or the other.

Neither did the patterns of growth in other tyrannosaurs fit with the hypothesis that these were young T. rex.

Dr Longrich said: "If you look at juveniles of other tyrannosaurs, they show many of the distinctive features of the adults. A very young Tarbosaurus -- a close relative of T. rex -- shows distinctive features of the adults.

"In the same way that kittens look like cats and puppies look like dogs, the juveniles of different tyrannosaurs are distinctive. And Nanotyrannus just doesn't look anything like a T. rex.

"It could be growing in a way that's completely unlike any other tyrannosaur, or any other dinosaur -- but it's more likely it's just not a T. rex."

But that raises a mystery -- if Nanotyrannus isn't a juvenile Tyrannosaurus, then why hasn't anyone ever found a young T. rex?

"That's always been one of the big questions. Well, it turns out we actually had found one," said Longrich. "But the fossil was collected years ago, stuck in a box of unidentified bones in a museum drawer, and then forgotten."

The research led Longrich and co-author Evan Saitta to a previous fossil discovery, stored in a museum in San Francisco, which they identified as a juvenile Tyrannosaurus.

That young T. rex is represented by a skull bone -- the frontal bone -- with distinctive features that ally it with Tyrannosaurus, but which aren't seen in Nanotyrannus. It comes from a small animal, one with a skull about 45 cm long and a body length of around 5 metres.

Dr Longrich said: "Yes, it's just one specimen, and just one bone, but it only takes one. T. rex skull bones are very distinctive, nothing else looks like it. Young T. rex exist, they're just incredibly rare, like juveniles of most dinosaurs."

The researchers argue these findings are strong evidence that Nanotyrannus is a separate species, one not closely related to Tyrannosaurus. It was more lightly built and longer-limbed than its thickset relative. It also had larger arms, unlike the famously short-armed T. rex.

"The arms are actually longer than those of T. rex. Even the biggest T. rex has shorter arms and smaller claws than these little Nanotyrannus. This was an animal where the arms were actually pretty formidable weapons. It's really just a completely different animal -- small, fast, agile.

"T. rex relied on size and strength, but this animal relied on speed."

The long arms and other features suggest it was only distantly related to T. rex -- and may have sat outside the family Tyrannosauridae, which T. rex is part of, in its own family of predatory dinosaurs.

The new study is the latest in a series of publications on the problem, going back decades.

Longrich said: "Nanotyrannus is highly controversial in paleontology. Not long ago, it seemed like we'd finally settled this problem, and it was a young T. rex.

"I was very skeptical about Nanotyrannus myself until about six years ago when I took a close look at the fossils and was surprised to realise we'd gotten it wrong all these years."

The authors suggest that, given how difficult it is to tell dinosaurs apart based on their often-incomplete skeletons, we may be underestimating the diversity of dinosaurs, and other fossil species.

Read more at Science Daily

The evolution of photosynthesis better documented thanks to the discovery of the oldest thylakoids in fossil cyanobacteria

Researchers at the University of Liège (ULiège) have identified microstructures in fossil cells that are 1.75 billion years old. These structures, called thylakoid membranes, are the oldest ever discovered. They push back the fossil record of thylakoids by 1.2 billion years and provide new information on the evolution of cyanobacteria which played a crucial role in the accumulation of oxygen on the early Earth. This major discovery is presented in the journal Nature.

Catherine Demoulin, Yannick Lara, Alexandre Lambion and Emmanuelle Javaux from the Early Life Traces & Evolution laboratory of the Astrobiology Research Unit at ULiège examined enigmatic microfossils called Navifusa majensis (N. majensis) in shales from the McDermott Formation in Australia, which are 1.75 billion years old, and in 1-billion-year-old formations of DR Congo and arctic Canada.

Ultrastructural analyses of fossil cells from two of the formations (Australia, Canada) revealed the presence of internal membranes with an arrangement, fine structure and dimensions allowing them to be interpreted unambiguously as thylakoid membranes, where oxygenic photosynthesis occurs.

These observations made it possible to identify N. majensis as a fossil cyanobacterium.

This discovery sheds new light on the role of cyanobacteria with thylakoid membranes in the oxygenation of the early Earth.

They played an important role in the early evolution of life and were active during the Great Oxygenation Event (GOE), around 2.4 billion years ago.

However, the chronology of the origins of oxygenic photosynthesis and the type of cyanobacteria involved (protocyanobacteria, with or without thylakoids) remain debated, and the ULiège researchers' discovery offers a new approach to clarifying these issues.

"The oldest known fossil thylakoids date back around 550 million years. The ones we have identified therefore extend the fossil record by 1.2 billion years," explains Professor Emmanuelle Javaux, paleobiologist and astrobiologist, director of the Early Life Traces & Evolution laboratory at ULiège.

"The discovery of preserved thylakoids in N. majensis provides direct evidence of a minimum age of around 1.75 billion years for the divergence between cyanobacteria with thylakoids and those without."

But the ULiège team's discovery raises the possibility of finding thylakoids in even older cyanobacterial fossils, and of testing the hypothesis that the emergence of thylakoids played a major role in the great oxygenation of the early Earth around 2.4 billion years ago.

This approach also makes it possible to examine the role of dioxygen in the evolution of complex (eukaryotic) life on our planet, including the origin and early diversification of algae, whose chloroplasts derive from cyanobacteria.

Read more at Science Daily

Jan 5, 2024

Magnetic fields in the cosmos: Dark matter could help us discover their origin

The mini-halos of dark matter scattered throughout the Cosmos could function as highly sensitive probes of primordial magnetic fields. This is what emerges from a theoretical study conducted at SISSA and published in Physical Review Letters. Magnetic fields are found everywhere in the Universe, on immense scales, yet their origin is still a subject of debate among scholars. An intriguing possibility is that magnetic fields originated near the birth of the universe itself -- that is, that they are primordial magnetic fields.

In the study, the researchers showed that if magnetic fields are indeed primordial, then they could cause an increase in dark matter density perturbations on small scales.

The ultimate effect of this process would be the formation of mini-halos of dark matter which, if detected, would hint at a primordial origin of magnetic fields.

Thus, in an apparent paradox, the invisible part of our Universe could be useful in resolving the nature of a component of the visible one.

Shedding light on the formation of magnetic fields

"Magnetic fields are ubiquitous in the Cosmos," explains Pranjal Ralegankar of SISSA, the author of the research.

"A possible theory regarding their formation suggests that those observed so far could have been produced in the early stages of our Universe. However, this proposition lacks an explanation in the standard model of physics. To shed light on this aspect and find a way to detect 'primordial' magnetic fields, with this work we propose a method that we could define as 'indirect.' Our approach is based on a question: What is the influence of magnetic fields on dark matter?" It is known that there is no direct interaction.

Still, as Ralegankar explains, "there is an indirect one that occurs through gravity."

Right from the primordial Universe

Primordial magnetic fields can enhance density perturbations of electrons and protons in the primordial Universe.

When these become too large, they influence the magnetic fields themselves.

The consequence is the suppression of fluctuations on a small scale.

Ralegankar explains: "In the study, we show something unexpected. The growth in baryon density gravitationally induces the growth of dark matter perturbations without the possibility of subsequent cancellation. This would result in their collapse on small scales, producing mini-halos of dark matter." The consequence, continues the author, is that although fluctuations in the density of baryonic matter are cancelled, they would leave traces through the mini-halos, all solely through gravitational interactions.

Read more at Science Daily

Early primates likely lived in pairs

Primates -- and this includes humans -- are thought of as highly social animals. Many species of monkeys and apes live in groups. Lemurs and other Strepsirrhines, often colloquially referred to as "wet-nosed" primates, in contrast, have long been believed to be solitary creatures, and it has often been suggested that other forms of social organization evolved later. Previous studies have therefore attempted to explain how and when pair-living evolved in primates.

More recent research, however, indicates that many nocturnal Strepsirrhines, which are more challenging to investigate, are not in fact solitary but live in pairs of males and females.

But what does this mean for the social organization forms of the ancestors of all primates?

And why do some species of monkey live in groups, while others are pair-living or solitary?

Different forms of social organization


Researchers at the Universities of Zurich and Strasbourg have now examined these questions.

For their study, Charlotte Olivier from the Hubert Curien Pluridisciplinary Institute collected detailed information on the composition of social units in primate populations in the wild.

Over several years, the researchers built a detailed database, which covered almost 500 populations from over 200 primate species, from primary field studies.

More than half of the primate species recorded in the database exhibited more than one form of social organization.

"The most common social organization was groups in which multiple females and multiple males lived together, for example chimpanzees or macaques, followed by groups with only one male and multiple females -- such as in gorillas or langurs," says last author Adrian Jaeggi from the University of Zurich.

"But one-quarter of all species lived in pairs."

Smaller ancestors coupled up

Taking into account several socioecological and life history variables such as body size, diet or habitat, the researchers calculated the probability of different forms of social organization, including for our ancestors who lived some 70 million years ago.

The calculations were based on complex statistical models developed by Jordan Martin at UZH's Institute of Evolutionary Medicine.

To reconstruct the ancestral state of primates, the researchers relied on fossils, which showed that ancestral primates were relatively small-bodied and arboreal -- factors that strongly correlate with pair-living.

"Our model shows that the ancestral social organization of primates was variable and that pair-living was by far the most likely form," says Martin.

Only about 15 percent of our ancestors were solitary, he adds.

"Living in larger groups therefore only evolved later in the history of primates."

Read more at Science Daily

Researchers rely on Earth's magnetic field to verify an event mentioned in the Old Testament

A breakthrough achieved by researchers from four Israeli universities -- Tel Aviv University, The Hebrew University of Jerusalem, Bar-Ilan University and Ariel University -- will enable archaeologists to identify burnt materials discovered in excavations and estimate their firing temperatures. Applying their method to findings from ancient Gath (Tell es-Safi in central Israel), the researchers validated the Biblical account: "About this time Hazael King of Aram went up and attacked Gath and captured it. Then he turned to attack Jerusalem" (2 Kings 12, 18). They explain that unlike previous methods, the new technique can determine whether a certain item (such as a mud brick) underwent a firing event even at relatively low temperatures, from 200°C and up. This information can be crucial for correctly interpreting the findings.

The multidisciplinary study was led by Dr. Yoav Vaknin from the Sonia & Marco Nadler Institute of Archaeology, Entin Faculty of Humanities, at Tel Aviv University, and the Palaeomagnetic Laboratory at The Hebrew University. Other contributors included: Prof. Ron Shaar from the Institute of Earth Sciences at The Hebrew University, Prof. Erez Ben-Yosef and Prof. Oded Lipschits from the Sonia & Marco Nadler Institute of Archaeology at Tel Aviv University, Prof. Aren Maeir from the Martin (Szusz) Department of Land of Israel Studies and Archaeology at Bar-Ilan University and Dr. Adi Eliyahu Behar from the Department of Land of Israel Studies and Archaeology and the Department of Chemical Sciences at Ariel University. The paper has been published in the scientific journal PLOS ONE.

Prof. Lipschits: "Throughout the Bronze and Iron Ages the main building material in most parts of the Land of Israel was mud bricks. This cheap and readily available material was used to build walls in most buildings, sometimes on top of stone foundations. That's why it's so important to understand the technology used in making these bricks."

Dr. Vaknin adds: "During the same era, dwellers of other lands, such as Mesopotamia, where stone was hard to come by, would fire mud bricks in kilns to increase their strength and durability. This technique is mentioned in the story of the Tower of Babel in the Book of Genesis: 'They said one to another, Come, let us make bricks and fire them thoroughly. So they used brick for stone' (Genesis 11, 3). Most researchers, however, believe that this technology did not reach the Land of Israel until much later, with the Roman conquest. Until that time the inhabitants used sun-dried mud bricks. Thus, when bricks are found in an archaeological excavation, several questions must be asked: First, have the bricks been fired, and if so, were they fired in a kiln prior to construction or in situ, in a destructive conflagration event? Our method can provide conclusive answers."

The new method relies on measuring the magnetic field recorded and 'locked' in the brick as it burned and cooled down. Dr. Vaknin: "The clay from which the bricks were made contains millions of ferromagnetic particles -- minerals with magnetic properties that behave like so many tiny 'compasses' or magnets. In a sun-dried mud brick the orientation of these magnets is almost random, so that they cancel out one another. Therefore, the overall magnetic signal of the brick is weak and not uniform. Heating to 200°C or more, as happens in a fire, releases the magnetic signals of these magnetic particles and, statistically, they tend to align with the earth's magnetic field at that specific time and place. When the brick cools down, these magnetic signals remain locked in their new position and the brick attains a strong and uniformly oriented magnetic field, which can be measured with a magnetometer. This is a clear indication that the brick has, in fact, been fired."
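The cancellation argument -- randomly oriented grain moments sum to almost nothing, while field-aligned ones add up to a strong net signal -- can be sketched with a small Monte Carlo. The grain count and the degree of alignment are invented for illustration:

```python
import numpy as np

# Net magnetic moment of many tiny grain "compasses" (illustrative only).
rng = np.random.default_rng(1)
n = 10000

# Sun-dried brick: random unit vectors, directions cancel statistically.
v = rng.normal(size=(n, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)
net_random = np.linalg.norm(v.sum(axis=0))

# Fired brick: moments biased toward the ambient field direction.
field = np.array([0.0, 0.0, 1.0])
v_aligned = v + 2.0 * field          # partial alignment with the field
v_aligned /= np.linalg.norm(v_aligned, axis=1, keepdims=True)
net_fired = np.linalg.norm(v_aligned.sum(axis=0))

# Per-grain averages: the fired brick's net moment is far stronger.
print(net_random / n, net_fired / n)
```

The random case scales like the square root of the grain count, so its per-grain average shrinks toward zero, while the aligned case stays of order one -- the contrast a magnetometer picks up.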

In the second stage of the procedure, the researchers gradually 'erase' the brick's magnetic field, using a process called thermal demagnetization. This involves heating the brick in a special oven in a palaeomagnetic laboratory that neutralizes the earth's magnetic field. The heat releases the magnetic signals, which once again arrange themselves randomly, canceling each other out, and the total magnetic signal becomes weak and loses its orientation.

Dr. Vaknin: "We conduct the process gradually. At first, we heat the sample to a temperature of 100°C, which releases the signals of only a small percentage of the magnetic minerals. We then cool it down and measure the remaining magnetic signal. We then repeat the procedure at temperatures of 150°C, 200°C, and so on, proceeding in small steps, up to 700°C. In this way the brick's magnetic field is gradually erased. The temperature at which the signal of each mineral is 'unlocked' is approximately the same as the temperature at which it was initially 'locked', and ultimately, the temperature at which the magnetic field is fully erased is the temperature that was reached during the original fire."

The researchers tested the technique in the laboratory: they fired mud bricks under controlled conditions of temperature and magnetic field, measured each brick's acquired magnetic field, then gradually erased it. They found that the bricks were completely demagnetized at the temperature at which they had been burned -- proving that the method works.
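The stepwise logic can be sketched as a toy simulation: each grain has a blocking temperature, only grains blocked below the firing temperature were aligned by the fire, and lab heating progressively erases them. The blocking-temperature range, the 400°C firing temperature, and the 50°C lab steps are all invented:

```python
import numpy as np

# Toy stepwise thermal demagnetization (all numbers invented).
rng = np.random.default_rng(0)
firing_temp = 400.0  # deg C at which the brick was originally burned
blocking = rng.uniform(80.0, 680.0, size=5000)  # per-grain blocking temps

# Only grains with blocking temperature below the firing temperature were
# aligned by the fire; the rest stayed randomly oriented and cancel out.
aligned = blocking < firing_temp

steps = np.arange(100.0, 701.0, 50.0)  # lab heating steps, deg C
remaining = [np.sum(aligned & (blocking > t)) for t in steps]

# The original firing temperature is recovered as the lowest lab step
# that erases the whole aligned signal.
estimate = steps[np.argmax(np.array(remaining) == 0)]
print(f"estimated firing temperature of about {estimate:.0f} deg C")
```

In this sketch the aligned signal vanishes exactly at the 400°C step, mirroring the lab validation above, where bricks demagnetized completely at the temperature at which they had been burned.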

Dr. Vaknin: "Our approach enables identifying burning which occurred at much lower temperatures than any other method. Most techniques used for identifying burnt bricks are based on actual changes in the minerals, which usually occur at temperatures higher than 500°C -- when some minerals are converted into others."

Dr. Eliyahu Behar: "One of the common methods for identifying mineralogical changes in clay (the main component of mud bricks) due to exposure to high temperatures is based on changes in the absorption of infrared radiation by the various minerals. In this study we used this method as an additional tool to verify the results of the magnetic method." Dr. Vaknin: "Our method is much more sensitive than others because it targets changes in the intensity and orientation of the magnetic signal, which occur at much lower temperatures. We can begin to detect changes in the magnetic signal at temperatures as low as 100°C, and from 200°C and up the findings are conclusive."

In addition, the method can determine the orientation in which the bricks cooled down. Dr. Vaknin: "When a brick is fired in a kiln before construction, it records the direction of the earth's magnetic field at that specific time and place. In Israel this means north and downward. But when builders take bricks from a kiln and build a wall, they lay them in random orientations, thus randomizing the recorded signals. On the other hand, when a wall is burned in-situ, as might happen when it is destroyed by an enemy, the magnetic fields of all bricks are locked in the same orientation."

After proving the method's validity, the researchers applied it to a specific archaeological dispute: was a specific brick structure discovered at Tell es-Safi -- identified as the Philistine city of Gath, home of Goliath -- built of pre-fired bricks or burned on location? The prevalent hypothesis, based on the Old Testament, historical sources, and Carbon-14 dating, attributes the destruction of the structure to the devastation of Gath by Hazael, King of Aram Damascus, around 830 BCE. However, a previous paper by researchers including Prof. Maeir, head of the Tell es-Safi excavations, proposed that the building had not burned down, but rather collapsed over decades, and that the fired bricks found in the structure had been fired in a kiln prior to construction. If this hypothesis were correct, this would be the earliest instance of brick-firing technology discovered in the Land of Israel.

To settle the dispute, the current research team applied the new method to samples from the wall at Tell es-Safi and the collapsed debris found beside it. The findings were conclusive: the magnetic fields of all bricks and collapsed debris displayed the same orientation -- north and downwards. Dr. Vaknin: "Our findings signify that the bricks burned and cooled down in-situ, right where they were found, namely in a conflagration in the structure itself, which collapsed within a few hours. Had the bricks been fired in a kiln and then laid in the wall, their magnetic orientations would have been random. Moreover, had the structure collapsed over time, not in a single fire event, the collapsed debris would have displayed random magnetic orientations. We believe that the main reason for our colleagues' mistaken interpretation was their inability to identify burning at temperatures below 500°C. Since heat rises, materials at the bottom of the building burned at relatively low temperatures, below 400°C, and consequently the former study did not identify them as burnt -- leading to the conclusion that the building had not been destroyed by fire. At the same time, bricks in upper parts of the wall, where temperatures were much higher, underwent mineralogical changes and were therefore identified as burnt -- leading the researchers to conclude that they had been fired in a kiln prior to construction. Our method allowed us to determine that all bricks in both the wall and debris had burned during the conflagration: those at the bottom burned at relatively low temperatures, and those that were found in higher layers or had fallen from the top -- at temperatures higher than 600°C."

Read more at Science Daily

Scientists engineer plant microbiome to protect crops against disease

Breakthrough could dramatically cut the use of pesticides and unlock other opportunities to bolster plant health

Scientists have engineered the microbiome of plants for the first time, boosting the prevalence of 'good' bacteria that protect the plant from disease.

The findings, published in Nature Communications by researchers from the University of Southampton, China and Austria, could substantially reduce the need for environmentally destructive pesticides.

There is growing public awareness about the significance of our microbiome -- the myriad of microorganisms that live in and around our bodies, most notably in our guts.

Our gut microbiomes influence our metabolism, our likelihood of getting ill, our immune system, and even our mood.

Plants too host a huge variety of bacteria, fungi, viruses, and other microorganisms that live in their roots, stems, and leaves.

For the past decade, scientists have been intensively researching plant microbiomes to understand how they affect a plant's health and its vulnerability to disease.

"For the first time, we've been able to change the makeup of a plant's microbiome in a targeted way, boosting the numbers of beneficial bacteria that can protect the plant from other, harmful bacteria," says Dr Tomislav Cernava, co-author of the paper and Associate Professor in Plant-Microbe Interactions at the University of Southampton.

"This breakthrough could reduce reliance on pesticides, which are harmful to the environment. We've achieved this in rice crops, but the framework we've created could be applied to other plants and unlock other opportunities to improve their microbiome. For example, microbes that increase nutrient provision to crops could reduce the need for synthetic fertilisers."

The international research team discovered that one specific gene found in the lignin biosynthesis cluster of the rice plant is involved in shaping its microbiome.

Lignin is a complex polymer found in the cell walls of plants -- the biomass of some plant species consists of more than 30 per cent lignin.

First, the researchers observed that when this gene was deactivated, there was a decrease in the population of certain beneficial bacteria, confirming its importance in the makeup of the microbiome community.

The researchers then did the opposite, over-expressing the gene so it produced more of one specific type of metabolite -- a small molecule produced by the host plant during its metabolic processes.

This increased the proportion of beneficial bacteria in the plant microbiome.

When these engineered plants were exposed to Xanthomonas oryzae -- a pathogen that causes bacterial blight in rice crops -- they were substantially more resistant to it than wild-type rice.

Bacterial blight is common in Asia and can lead to substantial loss of rice yields.

It's usually controlled by deploying polluting pesticides, so producing a crop with a protective microbiome could help bolster food security and help the environment.

Read more at Science Daily

Jan 4, 2024

Is oxygen the cosmic key to alien technology?

In the quest to understand the potential for life beyond Earth, researchers are widening their search to encompass not only biological markers, but also technological ones. While astrobiologists have long recognized the importance of oxygen for life as we know it, oxygen could also be a key to unlocking advanced technology on a planetary scale.

In a new study published in Nature Astronomy, Adam Frank, the Helen F. and Fred H. Gowen Professor of Physics and Astronomy at the University of Rochester and the author of The Little Book of Aliens (Harper, 2023), and Amedeo Balbi, an associate professor of astronomy and astrophysics at the University of Roma Tor Vergata, Italy, outline the links between atmospheric oxygen and the potential rise of advanced technology on distant planets.

"We are ready to find signatures of life on alien worlds," Frank says.

"But how do the conditions on a planet tell us about the possibilities for intelligent, technology-producing life?"

"In our paper, we explore whether any atmospheric composition would be compatible with the presence of advanced technology," Balbi says.

"We found that the atmospheric requirements may be quite stringent."

Igniting cosmic technospheres

Frank and Balbi posit that, beyond its necessity for respiration and metabolism in multicellular organisms, oxygen is crucial to developing fire -- and fire is a hallmark of a technological civilization.

They delve into the concept of "technospheres," expansive realms of advanced technology that emit telltale signs -- called "technosignatures" -- of extraterrestrial intelligence.

On Earth, the development of technology demanded easy access to open-air combustion -- the process at the heart of fire, in which something is burned by combining a fuel and an oxidant, usually oxygen.

Whether it's cooking, forging metals for structures, crafting materials for homes, or harnessing energy through burning fuels, combustion has been the driving force behind industrial societies.

Tracing back through Earth's history, the researchers found that the controlled use of fire and the subsequent metallurgical advancements were only possible when oxygen levels in the atmosphere reached or exceeded 18 percent.

This means that only planets with significant oxygen concentrations will be capable of developing advanced technospheres, and, therefore, leaving detectable technosignatures.

The oxygen bottleneck

The oxygen levels required to biologically sustain complex life and intelligence are lower than those necessary for technology, so while a species might be able to emerge in a world without oxygen, it will not be able to become a technological species, according to the researchers.

"You might be able to get biology -- you might even be able to get intelligent creatures -- in a world that doesn't have oxygen," Frank says, "but without a ready source of fire, you're never going to develop higher technology because higher technology requires fuel and melting."

Enter the "oxygen bottleneck," a term coined by the researchers to describe the critical threshold that separates worlds capable of fostering technological civilizations from those that fall short.

That is, insufficient atmospheric oxygen acts as a bottleneck that impedes the emergence of advanced technology.

"The presence of high degrees of oxygen in the atmosphere is like a bottleneck you have to get through in order to have a technological species," Frank says.

"You can have everything else work out, but if you don't have oxygen in the atmosphere, you're not going to have a technological species."

Targeting extraterrestrial hotspots

The research, which addresses a previously unexplored facet in the cosmic pursuit of intelligent life, underscores the need to prioritize planets with high oxygen levels when searching for extraterrestrial technosignatures.

"Targeting planets with high oxygen levels should be prioritized because the presence or absence of high oxygen levels in exoplanet atmospheres could be a major clue in finding potential technosignatures," Frank says.

"The implications of discovering intelligent, technological life on another planet would be huge," adds Balbi.

"Therefore, we need to be extremely cautious in interpreting possible detections. Our study suggests that we should be skeptical of potential technosignatures from a planet with insufficient atmospheric oxygen."

Read more at Science Daily

Microbial awakening restructures high-latitude food webs as permafrost thaws

Alaska is on the front lines of climate change, experiencing some of the fastest rates of warming of any place in the world. And when temperatures rise in the state's interior -- a vast high-latitude region spanning 113 million acres -- permafrost there not only thaws, releasing significant amounts of its stored carbon back into the atmosphere, where it further accelerates rising temperatures, but also decays. This decomposition has the potential to infuse above- and belowground food webs with carbon, which can affect energy flow between these critical ecological linkages and the species they support.

One of these species is the tundra vole, one of four Arctic or boreal forest animals that Philip Manlick, a research wildlife biologist with the USDA Forest Service Pacific Northwest Research Station in Juneau, Alaska, examined as part of his new study published today in the journal Nature Climate Change. Along with collaborators from the University of New Mexico and the University of Texas at Austin, Manlick used a novel technique to quantify the impacts of climate change on energy flow and carbon fluxes between plant-supported aboveground, or green, food webs and microbe-driven belowground, or brown, food webs using two species of vole, a shrew, and a spider as windows into the complex worlds.

"Understanding how energy moves through food webs helps us understand how ecosystems function and how animals might respond to stressors like climate change," Manlick said. "In Arctic and boreal ecosystems, it's well known that the climate is warming, permafrost is melting, and microbes are flourishing. But we know very little about the impacts of this process on terrestrial food webs and the animals they support."

A Novel Technique With Promise

The novel technique at the heart of the study involved measuring unique carbon isotope "fingerprints" in essential amino acids that only plants, bacteria, and fungi can produce. Animals can only acquire these molecules through their diets. This allowed these essential amino acids to serve as a biomarker that helped the researchers track how carbon was moving between green and brown food webs, which, ultimately, helped them detect changes.

"Scientists often argue about the importance of animals to ecosystem processes like carbon cycling, but when they eat resources from different food webs, they move carbon between storage pools," Manlick said. "In the future, we think this tool can be used to trace the fate of carbon through food webs to understand the functional roles of animals in ecosystem functions, like nutrient cycling."

The study analyzed bone collagen from museum specimens of tundra and red-backed voles and masked shrews collected from the Bonanza Creek Experimental Forest near Fairbanks, Alaska, in 1990 and 2021 -- a sample representing animals exposed to long-term climate warming. To study the effects of short-term climate warming on animals, the researchers sampled Arctic wolf spiders near Toolik Lake, Alaska. Some of the spiders were gathered as controls and others were exposed to 2 °C warming in outdoor compartmentalized habitats called "mesocosms," in which the scientists could increase temperature on a micro scale to simulate climate warming.

At just over 12,000 acres, and encompassing interior forest and flood-plain habitats, Bonanza Creek Experimental Forest is an ideal site for studying the impacts of climate change on boreal forests and food webs because it provides a long-term record of change in interior Alaska. It was established by the USDA Forest Service 60 years ago and has been a National Science Foundation Long-term Ecological Research site since 1987. For Manlick, the site offers an opportunity to study how these boreal forest changes are affecting the animals living there and how the animals, themselves, affect forest processes through foraging and food web dynamics.

Significant Shift in Energy Source

Through their isotope analyses, Manlick and his colleagues detected significant changes in carbon assimilation in the mammals -- notably a shift from plant-based food webs to fungal-based food webs. In other words, fungi replaced plants as the main energy source -- with small mammals, like the shrews, assimilating up to 90 percent of their total carbon intake from fungal carbon, a more than 40-percent increase over historical specimens.

The same was true for the Arctic wolf spiders. They, too, shifted from plant-based to fungal-based food webs as the main source of their energy, assimilating more than 50 percent brown carbon under warming conditions, compared to 26 percent at control sites.

"Our study presents clear evidence that climate warming alters carbon flow and food web dynamics among aboveground consumers in Arctic tundra and boreal forest ecosystems -- across species, ecosystems, and long- and short-term warming scenarios," Manlick said. "And we show that these changes are the consequence of a change from predominantly green, plant-based food webs to brown, microbe-based food webs."

What's behind the shift?

The scientists suspect brown carbon is being transferred to aboveground consumers, like the mammals and spiders, in a series of predation events known as trophic pathways. Increased warming results in increased decomposition in both permafrost on the tundra and in boreal forests; fungi feed on this decomposing plant matter and are, in turn, consumed by arthropods, mites, and earthworms that transfer the fungal carbon upward in the food web where they, in turn, are consumed by the voles, shrews, and spiders.

"Climate warming significantly alters the flow of energy through food webs, such that animals who were historically supported by plant-based food webs are now supported by fungal-based food webs derived from belowground decomposition," Manlick said.

Animals Can Alter Carbon Cycling

Manlick and his colleagues' work underscores that animals serve as a crucial link between green and brown food webs; it also shows that climate warming alters this link across species in the Arctic and in boreal forests. The potential implications of these climate-induced shifts are greater than the small size of these species might imply.

"Shifts in these interactions can have indirect effects on nutrient cycling and ecosystem function," Manlick said.

For example, if voles are getting more of their energy from belowground sources, they may be consuming fewer plants, which could increase carbon storage in aboveground ecosystems.

"Much of the current work in high latitudes has focused on 'Arctic greening,' or the idea that climate warming is leading to more plant growth and greener ecosystems. We found the exact opposite pattern -- food webs are 'browning,'" he said.

Read more at Science Daily

Evaluating the truthfulness of fake news through online searches increases the chances of believing misinformation

Conventional wisdom suggests that searching online to evaluate the veracity of misinformation would reduce belief in it. But a new study by a team of researchers shows the opposite occurs: Searching to evaluate the truthfulness of false news articles actually increases the probability of believing misinformation.

The findings, which appear in the journal Nature, offer insights into the impact of search engines' output on their users -- a relatively under-studied area.

"Our study shows that the act of searching online to evaluate news increases belief in highly popular misinformation -- and by notable amounts," says Zeve Sanderson, founding executive director of New York University's Center for Social Media and Politics (CSMaP) and one of the paper's authors.

The reason for this outcome may be explained by search-engine outputs -- in the study, the researchers found that this phenomenon is concentrated among individuals for whom search engines return lower-quality information.

"This points to the danger that 'data voids' -- areas of the information ecosystem that are dominated by low quality, or even outright false, news and information -- may be playing a consequential role in the online search process, leading to low return of credible information or, more alarming, the appearance of non-credible information at the top of search results," observes lead author Kevin Aslett, an assistant professor at the University of Central Florida and a faculty research affiliate at CSMaP.

In the newly published Nature study, Aslett, Sanderson, and their colleagues studied the impact of using online search engines to evaluate false or misleading views -- an approach encouraged by technology companies and government agencies, among others.

To do so, they recruited participants through both Qualtrics and Amazon's Mechanical Turk -- tools frequently used in behavioral science studies -- for a series of five experiments aimed at gauging the impact of a common behavior: searching online to evaluate news (SOTEN).

The first four studies tested the following aspects of online search behavior and impact:

  • The effect of SOTEN on belief in both false or misleading and true news within two days of an article's publication (false popular articles included stories on COVID-19 vaccines, the Trump impeachment proceedings, and climate events)
  • Whether SOTEN can change an individual's evaluation after they have already assessed the veracity of a news story
  • The effect of SOTEN months after publication
  • The effect of SOTEN on recent news about a salient topic with significant news coverage -- in the case of this study, news about the COVID-19 pandemic

A fifth study combined a survey with web-tracking data in order to identify the effect of exposure to both low- and high-quality search-engine results on belief in misinformation.

By collecting search results using a custom web browser plug-in, the researchers could identify how the quality of these search results may affect users' belief in the misinformation being evaluated.

The study's source credibility ratings were determined by NewsGuard, a browser extension that rates news and other information sites in order to guide users in assessing the trustworthiness of the content they come across online.

Across the five studies, the authors found that the act of searching online to evaluate news led to a statistically significant increase in belief in misinformation.

This occurred whether it was shortly after the publication of misinformation or months later.

This finding suggests that the passage of time -- and ostensibly opportunities for fact checks to enter the information ecosystem -- does not lessen the impact of SOTEN on increasing the likelihood of believing false news stories to be true.

Moreover, the fifth study showed that this phenomenon is concentrated among individuals for whom search engines return lower-quality information.

"The findings highlight the need for media literacy programs to ground recommendations in empirically tested interventions, and for search engines to invest in solutions to the challenges identified by this research," concludes Joshua A. Tucker, professor of politics and co-director of CSMaP, another of the paper's authors.

Read more at Science Daily

Human beliefs about drugs could have dose-dependent effects on the brain

Mount Sinai researchers have shown for the first time that a person's beliefs related to drugs can influence their own brain activity and behavioral responses in a way comparable to the dose-dependent effects of pharmacology.

The implications of the study, which directly focused on beliefs about nicotine, are profound.

They range from elucidating how the neural mechanisms underlying beliefs may play a key role in addiction, to optimizing pharmacological and nonpharmacological treatments by leveraging the power of human beliefs.

The study was published in the journal Nature Mental Health.

"Beliefs can have a powerful influence on our behavior, yet their effects are considered imprecise and rarely examined by quantitative neuroscience methods," says Xiaosi Gu, PhD, Associate Professor of Psychiatry and Neuroscience at the Icahn School of Medicine at Mount Sinai and senior author of the study.

"We set out to investigate if human beliefs can modulate brain activities in a dose-dependent manner similar to what drugs do, and found a high level of precision in how beliefs can influence the human brain. This finding could be crucial for advancing our knowledge about the role of beliefs in addiction as well as a broad range of disorders and their treatments."

To explore this dynamic, the Mount Sinai team, led by Ofer Perl, PhD, a postdoctoral fellow in Dr. Gu's lab when the study was conducted, instructed nicotine-dependent study participants to believe that an electronic cigarette they were about to vape contained either low, medium, or high strengths of nicotine, when in fact the level remained constant.

Participants then underwent functional magnetic resonance imaging (fMRI) while performing a decision-making task known to engage neural circuits activated by nicotine.

The scientists found that the thalamus, an important binding site for nicotine in the brain, showed a dose-dependent response to the subject's beliefs about nicotine strength, providing compelling evidence to support the relationship between subjective beliefs and biological substrates in the human brain.

This effect was previously thought to apply only to pharmacologic agents.

A similar dose-dependent effect of beliefs was also found in the functional connectivity between the thalamus and the ventromedial prefrontal cortex, a brain region that is considered important for decision-making and belief states.

"Our findings provide a mechanistic explanation for the well-known variations in individual responses to drugs," notes Dr. Gu, "and suggest that subjective beliefs could be a direct target for the treatment of substance use disorders. They could also advance our understanding of how cognitive interventions, such as psychotherapy, work at the neurobiological level in general for a wide range of psychiatric conditions beyond addiction."

Dr. Gu, who is one of the world's foremost researchers in the emerging field of computational psychiatry, cites another way in which her team's research could inform clinical care.

"The finding that human beliefs about drugs play such a pivotal role suggests that we could potentially enhance patients' responses to pharmacological treatments by leveraging these beliefs," she explains.

Significantly, the work of the Mount Sinai team can also be viewed in a much broader context: harnessing beliefs in a systematic manner to better serve mental health treatment and research in general.

Read more at Science Daily

Jan 3, 2024

Sodium's high-pressure transformation can tell us about the interiors of stars, planets

Travel deep enough below Earth's surface or toward the center of the Sun, and matter changes on an atomic level.

The mounting pressure within stars and planets can cause metals to become nonconducting insulators.

Sodium has been shown to transform from a shiny, gray-colored metal into a transparent, glass-like insulator when squeezed hard enough.

Now, a University at Buffalo-led study has revealed the chemical bonding behind this particular high-pressure phenomenon.

While it's been theorized that high pressure essentially squeezes sodium's electrons out into the spaces between atoms, researchers' quantum chemical calculations show that these electrons still very much belong to the surrounding atoms and are chemically bonded to each other.

"We're answering a very simple question of why sodium becomes an insulator, but predicting how other elements and chemical compounds behave at very high pressures will potentially give insight into bigger-picture questions," says Eva Zurek, Ph.D., professor of chemistry in the UB College of Arts and Sciences and co-author of the study, which was published in Angewandte Chemie, a journal of the German Chemical Society.

"What's the interior of a star like? How are planets' magnetic fields generated, if indeed any exist? And how do stars and planets evolve? This type of research moves us closer to answering these questions."

The study confirms and builds upon the theoretical predictions of the late renowned physicist Neil Ashcroft, whose memory the study is dedicated to.

It was once thought that materials always become metallic under high pressure -- like the metallic hydrogen theorized to make up Jupiter's core -- but Ashcroft and Jeffrey Neaton's seminal paper two decades ago found that some materials, like sodium, can actually become insulators or semiconductors when squeezed.

They theorized that sodium's core electrons, thought to be inert, would interact with each other and the outer valence electrons when under extreme pressure.

"Our work now goes beyond the physics picture painted by Ashcroft and Neaton, connecting it with chemical concepts of bonding," says the UB-led study's lead author, Stefano Racioppi, Ph.D., a postdoctoral researcher in the UB Department of Chemistry.

Pressures found below Earth's crust can be difficult to replicate in a lab, so using supercomputers in UB's Center for Computational Research, the team ran calculations on how electrons behave in sodium atoms when under high pressure.

The electrons become trapped within the interstitial regions between atoms, a configuration known as an electride state.

This causes sodium's physical transformation from shiny metal to transparent insulator, as free-flowing electrons absorb and retransmit light but trapped electrons simply allow the light to pass through.

However, researchers' calculations showed for the first time that the emergence of the electride state can be explained through chemical bonding.

The high pressure causes electrons to occupy new orbitals within their respective atoms.

These orbitals then overlap with each other to form chemical bonds, causing localized charge concentrations in the interstitial regions.

While previous studies offered an intuitive theory that high pressure squeezed electrons out of atoms, the new calculations found that the electrons are still part of surrounding atoms.

"We realized that these are not just isolated electrons that decided to leave the atoms. Instead, the electrons are shared between the atoms in a chemical bond," Racioppi says.

Read more at Science Daily

Targeted household cleaning can reduce toxic chemicals post-wildfire

After the last embers of a campfire dim, the musky smell of smoke remains. Whiffs of that distinct smoky smell may serve as a pleasant reminder of the evening prior, but in the wake of a wildfire, that smell comes with ongoing health risks.

Wildfire smoke is certainly more pervasive than a small campfire, and the remnants can linger for days, weeks and months inside homes and businesses.

New research from Portland State's Elliott Gall, associate professor in Mechanical and Materials Engineering, examined how long harmful chemicals found in wildfire smoke can persist and the most effective ways to remove them with everyday household cleaners.

Wildfires create compounds called polycyclic aromatic hydrocarbons (PAHs), which are formed in the combustion process at high temperatures.

These compounds are highly toxic.

"They are associated with a wide variety of long-term adverse health consequences like cancer, potential complications in pregnancy and lung disease," Gall said.

"So if these compounds are depositing or sticking onto surfaces, there are different routes of exposure people should be aware of. By now, most people in Portland are probably thinking about how to clean their air during a wildfire smoke event, but they might not be thinking about other routes of exposure after the air clears."

Public messaging is fairly consistent on what to do during a fire to reduce exposure to smoke -- close windows and doors, run an air purifier and consider wearing a mask -- but messaging is limited about what to do post-wildfire.

Gall's study published in Environmental Science & Technology looked at the accumulation and retention of PAHs over a period of four months on three different indoor materials: glass, cotton and air filters.

"We looked at a limited number of materials and we intentionally included some that are common in indoor environments," Gall said.

Initial findings showed that levels of PAHs remained elevated for weeks after exposure.

After materials were loaded with PAHs from wildfire smoke, it took 37 days for PAHs to decrease by 74% for air filters, 81% for cotton and 88% for glass.

That reduction is significant, but it takes time, meaning increased health risks from prolonged exposure.
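As an illustrative back-of-the-envelope check (not part of the study), one can ask what those 37-day reductions would imply if PAH loss were roughly first-order. The material names and percentages come from the article; the first-order assumption and the code are mine:

```python
import math

def decay_constant(reduction, days):
    """Return k such that the remaining fraction after `days` is exp(-k * days)."""
    return -math.log(1 - reduction) / days

# Reported 37-day reductions per material (from the article).
for material, reduction in [("air filter", 0.74), ("cotton", 0.81), ("glass", 0.88)]:
    k = decay_constant(reduction, 37)
    half_life = math.log(2) / k
    print(f"{material}: k = {k:.3f}/day, half-life ~ {half_life:.1f} days")
```

Under that assumption, even the fastest-clearing surface (glass) would take roughly 12 days to shed half its PAH load on its own, which is consistent with the article's point that passive decay is slow compared with active cleaning.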

However, laundering cotton materials just once after exposure to smoke reduced PAHs on the material by 80%. Using a commercial glass cleaner on glass items like windows and cups reduced PAHs by between 60% and 70%.

Unlike glass and cotton, air filters can't be cleaned and need to be replaced after an extreme smoke event.

"Even if there's potentially some more life in them, over time PAHs can partition off the filter and be emitted back into your space," Gall said.

"While it may be a slow process, our study shows partitioning of PAHs from filters and other materials loaded with smoke may result in concentrations of concern in air. And while that partitioning is occurring, dermal contact and ingestion of PAHs from the materials may be important. One example might be holding and drinking from a glass that was exposed to wildfire smoke."

Gall said it was important to consider the effect of cleaning solutions available to the average person.

The findings also open the door to additional questions.

Read more at Science Daily

First dive survey of Lake Tahoe's lakebed finds high amounts of plastic and other litter

Plastic litter is a growing problem around the world, and new research shows that the bottom of Lake Tahoe is no exception. In one of the first studies to utilize scuba divers to collect litter from a lakebed, 673 plastic items were counted from just a small fraction of the lake.

In the study, published in the November issue of the journal Applied Spectroscopy, researchers from DRI and the UC Davis Tahoe Environmental Research Center teamed up with the nonprofit Clean Up the Lake to take a close look at the litter.

First, scientists broke it down into categories based on use (such as food containers and water bottles), followed by the chemical composition of the plastic.

The knowledge gained can help scientists better understand the source of large pieces of litter in the lake, as well as whether they're a significant source of microplastics as larger pieces break down and degrade.

Previous research found that the waters of Lake Tahoe contain high levels of microplastics, defined as plastics smaller than a pencil eraser.

"There's very little work on submerged plastic litter in lakes," said Monica Arienzo, Ph.D., associate research professor of hydrology at DRI and one of the study's lead authors.

"And I think that's a real issue, because when we think about how plastics may be moving in freshwater systems, there's a good chance that they'll end up in a lake."

To collect the litter, research divers swam transects along the lakebed near Lake Tahoe Nevada State Park and Zephyr Cove, covering 9.3 kilometers.

They found an average of 83 pieces of plastic litter per kilometer, with the lakebed near Hidden Beach and South Sand Harbor showing significantly more (140 items/km and 124 items/km, respectively). No stretches of the lakebed surveyed were free of plastic litter.
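The per-kilometer figures above are simple densities: items counted divided by transect length. A minimal sketch of that calculation, using hypothetical transect tallies (the counts and lengths below are invented for illustration; only the resulting hotspot densities echo the article's figures):

```python
# Hypothetical dive-transect tallies: (items counted, km surveyed).
# These numbers are illustrative, not the study's raw data.
transects = {
    "Hidden Beach": (70, 0.5),
    "South Sand Harbor": (62, 0.5),
    "Zephyr Cove": (30, 0.8),
}

def density(items, km):
    """Litter density in items per kilometer of transect."""
    return items / km

for site, (items, km) in transects.items():
    print(f"{site}: {density(items, km):.0f} items/km")
```

Note that an average of per-transect densities can differ from the pooled total divided by the total distance, since transects of different lengths weight the average differently.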

The most common plastic litter categories were food containers, bottles, plastic bags, and toys, along with many items that couldn't be categorized.

"There's a lot of education we can do, as well as continuing to work on reducing the use of those plastics," Arienzo says.

"Because we have to start thinking about turning that plastic pipe off."

Arienzo and co-author Julia Davidson, then an undergraduate student working in Arienzo's lab, also identified the types of plastic that made up 516 of the litter samples.

Using an instrument that uses infrared light to fingerprint and identify the material, they found that the six most common plastics were polyvinyl chloride (PVC), polystyrene, polyester/polyethylene terephthalate, polyethylene, polypropylene, and polyamide.

Collecting this information can contribute to Arienzo's ongoing microplastics research in the region, helping to identify the sources of the small plastic fragments.

"When we study microplastics, we only have the chemical information, or the plastic type," Davidson says.

"We don't know where it came from -- a plastic bag, toy, or otherwise -- because it's just a tiny piece of plastic. But now we can use this litter data to point to the dominant types of plastics and compare them to microplastic data."

The study can help inform efforts by Tahoe-area communities to address plastic litter, such as South Lake Tahoe's 2022 ban on single-use plastic bottles and Truckee's ban on single-use food containers.

The research also highlights ways that scientists can work with nonprofits to collect data that can address local environmental concerns.

Read more at Science Daily

Evolution might stop humans from solving climate change

Central features of human evolution may stop our species from resolving global environmental problems like climate change, says a new study led by the University of Maine.

Humans have come to dominate the planet with tools and systems to exploit natural resources that were refined over thousands of years through the process of cultural adaptation to the environment. University of Maine evolutionary biologist Tim Waring wanted to know how this process of cultural adaptation to the environment might influence the goal of solving global environmental problems. What he found was counterintuitive.

The project sought to understand three core questions: how human evolution has operated in the context of environmental resources, how human evolution has contributed to the multiple global environmental crises and how global environmental limits might change the outcomes of human evolution in the future.

Waring's team outlined their findings in a new paper published in Philosophical Transactions of the Royal Society B. Other authors of the study include Zach Wood, a UMaine alumnus, and Eörs Szathmáry, a professor at Eötvös Loránd University in Budapest, Hungary.

Human expansion

The study explored how human societies' use of the environment changed over our evolutionary history. The research team investigated changes in the ecological niche of human populations, including factors such as the natural resources they used, how intensively they were used, what systems and methods emerged to use those resources and the environmental impacts that resulted from their usage.

This effort revealed a set of common patterns. Over the last 100,000 years, human groups have progressively used more types of resources, with more intensity, at greater scales and with greater environmental impacts. Those groups often then spread to new environments with new resources.

The global human expansion was facilitated by the process of cultural adaptation to the environment. This leads to the accumulation of adaptive cultural traits -- social systems and technology to help exploit and control environmental resources such as agricultural practices, fishing methods, irrigation infrastructure, energy technology and social systems for managing each of these.

"Human evolution is mostly driven by cultural change, which is faster than genetic evolution. That greater speed of adaptation has made it possible for humans to colonize all habitable land worldwide," says Waring, associate professor with the UMaine Senator George J. Mitchell Center for Sustainability Solutions and the School of Economics.

Moreover, this process accelerates because of a positive feedback process: as groups get larger, they accumulate adaptive cultural traits more rapidly, which provides more resources and enables faster growth.
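The feedback loop described above can be sketched as a toy simulation. This is my own illustration, not a model from the paper, and all coefficients are arbitrary assumptions chosen only to show the dynamic: larger groups accumulate cultural traits faster, traits unlock more resources, and resources accelerate growth.

```python
# Toy model (illustrative only; coefficients are arbitrary assumptions)
# of the positive feedback: group size -> trait accumulation ->
# resources -> faster growth.
group_size = 100.0  # people in the group
traits = 1.0        # stock of adaptive cultural traits

for generation in range(5):
    traits += 0.001 * group_size           # bigger groups accumulate traits faster
    resources = traits * 50.0              # traits unlock more resources
    group_size *= 1.0 + 0.002 * resources  # more resources enable faster growth
    print(f"gen {generation}: size={group_size:.0f}, traits={traits:.2f}")
```

Each generation's growth multiplier is larger than the last, so group size grows faster than exponentially, which is the runaway character of the feedback the study describes.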

"For the last 100,000 years, this has been good news for our species as a whole," Waring says, "but this expansion has depended on large amounts of available resources and space."

Today, humans have run out of space. We have reached the physical limits of the biosphere and laid claim to most of the resources it has to offer. Our expansion is also catching up with us. Our cultural adaptations, particularly the industrial use of fossil fuels, have created dangerous global environmental problems that jeopardize our safety and access to future resources.

Global limits

To see what these findings mean for solving global challenges like climate change, the research team looked at when and how sustainable human systems emerged in the past. Waring and his colleagues found two general patterns. First, sustainable systems tend to grow and spread only after groups have struggled or failed to maintain their resources in the first place. For example, the U.S. regulated industrial sulfur dioxide and nitrogen oxide emissions in 1990, but only after we had determined that they caused acid rain and acidified many water bodies in the Northeast. This delayed action presents a major problem today as we threaten other global limits. For climate change, humans need to solve the problem before we cause a crash.

Second, researchers also found evidence that strong systems of environmental protection tend to address problems within existing societies, not between them. For example, managing regional water systems requires regional cooperation, regional infrastructure and technology, and these arise through regional cultural evolution. The presence of societies of the right scale is, therefore, a critical limiting factor.

Tackling the climate crisis effectively will probably require new worldwide regulatory, economic and social systems -- ones that generate greater cooperation and authority than existing systems like the Paris Agreement. To establish and operate those systems, humans need a functional social system for the planet, which we don't have.

"One problem is that we don't have a coordinated global society which could implement these systems," says Waring. "We only have sub-global groups, which probably won't suffice. But you can imagine cooperative treaties to address these shared challenges. So, that's the easy problem."

The other problem is much worse, Waring says. In a world filled with sub-global groups, cultural evolution among these groups will tend to solve the wrong problems, benefitting the interests of nations and corporations and delaying action on shared priorities. Cultural evolution among groups would tend to exacerbate resource competition and could lead to direct conflict between groups and even global human dieback.

"This means global challenges like climate change are much harder to solve than previously considered," says Waring. "It's not just that they are the hardest thing our species has ever done. They absolutely are. The bigger problem is that central features in human evolution are likely working against our ability to solve them. To solve global collective challenges we have to swim upstream."

Looking forward

Waring and his colleagues think that their analysis can help navigate the future of human evolution on a limited Earth. Their paper is the first to propose that human evolution may work against collective solutions to global environmental problems, and further research is needed to develop and test this theory.

Waring's team proposes several applied research efforts to better understand the drivers of cultural evolution and search for ways to reduce global environmental competition, given how human evolution works. For example, research is needed to document the patterns and strength of human cultural evolution in the past and present. Studies could focus on the past processes that led to the human domination of the biosphere, and on the ways cultural adaptation to the environment is occurring today.

But if the general outline proves to be correct, and human evolution tends to oppose collective solutions to global environmental problems, as the authors suggest, then some very pressing questions need to be answered. This includes whether we can use this knowledge to improve the global response to climate change.

"There is hope, of course, that humans may solve climate change. We have built cooperative governance before, although never like this: in a rush, at a global scale," Waring says.

The growth of international environmental policy provides some hope. Successful examples include the Montreal Protocol to limit ozone-depleting gases, and the global moratorium on commercial whaling.

New efforts should include fostering more intentional, peaceful and ethical systems of mutual self-limitation, particularly through market regulations and enforceable treaties that bind human groups across the planet ever more tightly into a functional unit.

But that model may not work for climate change.

"Our paper explains why and how building cooperative governance at the global scale is different, and helps researchers and policymakers be more clear-headed about how to work toward global solutions," says Waring.

This new research could lead to a novel policy mechanism to address the climate crisis: modifying the process of adaptive change among corporations and nations may be a powerful way to address global environmental risks.

As for whether humans can continue to survive on a limited planet, Waring says, "We don't have any solutions for this idea of a long-term evolutionary trap, as we barely understand the problem."

Read more at Science Daily

Jan 2, 2024

Astronomers detect seismic ripples in ancient galactic disk

A new snapshot of an ancient, far-off galaxy could help scientists understand how it formed and the origins of our own Milky Way.

At more than 12 billion years old, BRI 1335-0417 is the oldest and most distant known spiral galaxy in our universe.

Lead author Dr Takafumi Tsukui said a state-of-the-art telescope called ALMA allowed them to look at this ancient galaxy in much greater detail.

"Specifically, we were interested in how gas was moving into and throughout the galaxy," Dr Tsukui said.

"Gas is a key ingredient for forming stars and can give us important clues about how a galaxy is actually fuelling its star formation."

In this case, the researchers were able to not only capture the motion of the gas around BRI 1335-0417, but also reveal a seismic wave forming -- a first in this type of early galaxy.

The galaxy's disk, a flattened mass of rotating stars, gas and dust, moves in a way not dissimilar to ripples spreading on a pond after a stone is thrown in.

"The vertically oscillating motion of the disk is due to an external source: either new gas streaming into the galaxy or contact with other, smaller galaxies," Dr Tsukui said.

"Both possibilities would bombard the galaxy with new fuel for star formation.

"Additionally, our study revealed a bar-like structure in the disk. Galactic bars can disrupt gas and transport it towards the galaxy's centre. The bar discovered in BRI 1335-0417 is the most distant known structure of this kind.

"Together, these results show the dynamic growth of a young galaxy."

Because BRI 1335-0417 is so far away, its light takes more than 12 billion years to reach Earth.

The images seen through a telescope in the present day are a throwback to the galaxy's early days -- when the Universe was just 10 per cent of its current age.
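The "10 per cent" figure follows from simple lookback-time arithmetic. A quick sketch, using an assumed universe age of about 13.8 billion years and an assumed light-travel time of about 12.4 billion years for BRI 1335-0417 (illustrative round numbers, not values from the study):

```python
# Back-of-envelope check of "the Universe was just 10 per cent of its
# current age" when the light we see left BRI 1335-0417.
AGE_OF_UNIVERSE_GYR = 13.8  # assumed present-day age of the universe
LOOKBACK_TIME_GYR = 12.4    # assumed light-travel time for this galaxy

age_when_observed = AGE_OF_UNIVERSE_GYR - LOOKBACK_TIME_GYR
fraction = age_when_observed / AGE_OF_UNIVERSE_GYR

print(f"Universe age at emission: {age_when_observed:.1f} Gyr "
      f"({fraction:.0%} of its current age)")
```

With these assumed figures the universe was about 1.4 billion years old when the light was emitted, roughly 10% of its current age, consistent with the article's description.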

"Early galaxies have been found to form stars at a much faster rate than modern galaxies. This is true for BRI 1335-0417, which, despite having a similar mass to our Milky Way, forms stars at a rate a few hundred times faster," co-author Associate Professor Emily Wisnioski said.

"We wanted to understand how gas is supplied to keep up with this rapid rate of star formation.

"Spiral structures are rare in the early Universe, and exactly how they form remains unknown. This study gives us crucial information on the most likely scenarios."

Read more at Science Daily

Reducing inequality is essential in tackling climate crisis, researchers argue

In a report just published in the journal Nature Climate Change, researchers argue that tackling inequality is vital in moving the world towards Net-Zero -- because inequality constrains who can feasibly adopt low-carbon behaviours.

They say that changes are needed across society if we are to mitigate climate change effectively.

Although wealthy people have very large carbon footprints, they often have the means to reduce their carbon footprint more easily than those on lower incomes.

The researchers say there is a lack of political recognition of the barriers that can make it difficult for people to switch to more climate-friendly behaviours.

They suggest that policymakers provide equal opportunities for low-carbon behaviours across all income brackets of society.

The report defines inequality in various ways: in terms of wealth and income, political influence, free time, and access to low-carbon options such as public transport and housing insulation subsidies.

"It's increasingly acknowledged that there's inequality in terms of who causes climate change and who suffers the consequences, but there's far less attention being paid to the effect of inequality in changing behaviours to reduce carbon emissions," said Dr Charlotte Kukowski, a postdoctoral researcher in the University of Cambridge Departments of Psychology and Zoology, and first author of the report.

She added: "People on lower incomes can be more restricted in the things they can do to help reduce their carbon footprint, in terms of the cost and time associated with doing things differently."

The researchers found that deep-rooted inequalities can restrict people's capacity to switch to lower-carbon behaviours in many ways.

For example:

  • Insulating a house: this can be costly in the UK, and government subsidies are generally only available for homeowners; renters have little control over the houses they live in. The UK has large numbers of old, badly insulated houses that require more energy to heat than new-build homes. The researchers call for government schemes that make it more feasible for people in lower income groups to reduce the carbon emissions of their homes.

  • Cooking more meat-free meals: plant-based meat alternatives currently tend to be less affordable than the animal products they are intended to replace, even though eating more plant-based foods instead of meat and animal-derived products is one of the most effective changes an individual can make to reduce their carbon footprint.

  • Buying an electric car or an electric bike: this involves a substantial upfront cost, and people who aren't in permanent employment often can't benefit from tax breaks or financing available through employer schemes. Other low-carbon transport options, such as using public transport instead of a private car, are made less feasible for many by poor services, particularly in rural areas.

Sometimes the lower-carbon options are more expensive -- and this makes them less accessible to people on lower incomes.

"If you have more money you're likely to cause more carbon emissions, but you're also more likely to have greater ability to change the things you do and reduce those emissions," said Dr Emma Garnett, a postdoctoral researcher at the University of Oxford and second author of the report.

She added: "Interventions targeting high-emitting individuals are urgently needed, but also many areas where there are lower-carbon choices -- like food and transport -- need everyone to be involved."

The researchers say that campaigns to encourage people to switch to lower-carbon behaviours have tended to focus on providing information.

While this is important in helping people understand the issues, there can still be many barriers to making changes.

Read more at Science Daily

First step towards synthetic CO2 fixation in living cells

Synthetic biology offers the opportunity to build biochemical pathways for the capture and conversion of carbon dioxide (CO2). Researchers at the Max Planck Institute for Terrestrial Microbiology have developed a synthetic biochemical cycle that directly converts CO2 into the central building block acetyl-CoA. The researchers were able to implement each of the cycle's three modules in the bacterium E. coli, a major step towards realizing synthetic CO2-fixation pathways within living cells.

Developing new ways to capture and convert CO2 is key to tackling the climate emergency.

Synthetic biology opens avenues for designing new-to-nature CO2-fixation pathways that capture CO2 more efficiently than those developed by nature.

However, realizing those new-to-nature pathways in different in vitro and in vivo systems is still a fundamental challenge.

Now, researchers in Tobias Erb's group have designed and constructed a new synthetic CO2-fixation pathway, the so-called THETA cycle.

It contains several central metabolites as intermediates and yields the central building block, acetyl-CoA, as its output.

This characteristic makes it possible to divide the cycle into modules and integrate them into the central metabolism of E. coli.

The entire THETA cycle involves 17 biocatalysts, and was designed around the two fastest CO2-fixing enzymes known to date: crotonyl-CoA carboxylase/reductase and phosphoenolpyruvate carboxylase.

The researchers found these powerful biocatalysts in bacteria.

Although each of the carboxylases can capture CO2 more than 10 times faster than RubisCO, the CO2-fixing enzyme in chloroplasts, evolution itself has not brought these capable enzymes together in natural photosynthesis.

The THETA cycle converts two CO2 molecules into one acetyl-CoA in one cycle.

Acetyl-CoA is a central metabolite in almost all cellular metabolism and serves as the building block for a wide array of vital biomolecules, including biofuels, biomaterials, and pharmaceuticals, making it a compound of great interest in biotechnological applications.

Upon constructing the cycle in test tubes, the researchers could confirm its functionality.

Then the training began: through rational and machine learning-guided optimization over several rounds of experiments, the team was able to improve the acetyl-CoA yield by a factor of 100.

To test its in vivo feasibility, the researchers set out to incorporate the cycle into living cells step by step.

To this end, the researchers divided the THETA cycle into three modules, each of which was successfully implemented into the bacterium E. coli. The functionality of these modules was verified through growth-coupled selection and/or isotopic labelling.

"What is special about this cycle is that it contains several intermediates that serve as central metabolites in the bacterium's metabolism. This overlap offers the opportunity to develop a modular approach for its implementation," explains Shanshan Luo, lead author of the study.

"We were able to demonstrate the functionality of the three individual modules in E. coli. However, we have not yet succeeded in closing the entire cycle so that E. coli can grow completely with CO2," she adds.

Closing the THETA cycle is still a major challenge, as all of the 17 reactions need to be synchronized with the natural metabolism of E. coli, which naturally involves hundreds to thousands of reactions.

However, demonstrating the whole cycle in vivo is not the only goal, the researcher emphasizes.

"Our cycle has the potential to become a versatile platform for producing valuable compounds directly from CO2 by extending its output molecule, acetyl-CoA," says Shanshan Luo.

Read more at Science Daily

Ants recognize infected wounds and treat them with antibiotics

The Matabele ants (Megaponera analis), which are widespread south of the Sahara, have a narrow diet: They only eat termites. Their hunting expeditions are dangerous because termite soldiers defend their conspecifics -- and use their powerful mandibles to do so. It is therefore common for the ants to be injured while hunting.

If the wounds become infected, there is a significant survival risk.

However, Matabele ants have developed a sophisticated healthcare system: they can distinguish between non-infected and infected wounds and treat the latter efficiently with antibiotics they produce themselves.

This is reported by a team led by Dr Erik Frank from Julius-Maximilians-Universität (JMU) Würzburg and Professor Laurent Keller from the University of Lausanne in the journal Nature Communications.

Treatment Drastically Reduces Mortality

"Chemical analyses in cooperation with JMU Professor Thomas Schmitt have shown that the hydrocarbon profile of the ant cuticle changes as a result of a wound infection," says Erik Frank.

It is precisely this change that the ants are able to recognise and thus diagnose the infection status of injured nestmates.

For treatment, they then apply antimicrobial compounds and proteins to the infected wounds.

They take these antibiotics from the metapleural gland, which is located on the side of their thorax.

Its secretion contains 112 components, half of which have an antimicrobial or wound-healing effect.

And the therapy is highly effective: the mortality rate of infected individuals is reduced by 90 per cent, as the research group discovered.

Analysis of Ant Antibiotics is Planned

"With the exception of humans, I know of no other living creature that can carry out such sophisticated medical wound treatments," says Erik Frank.

Laurent Keller also adds that these findings "have medical implications because the primary pathogen in ants' wounds, Pseudomonas aeruginosa, is also a leading cause of infection in humans, with several strains being resistant to antibiotics."

Are Matabele ants really unique in this respect? The Würzburg researcher now wants to explore wound care behaviours in other ant species and other social animals.

He also wants to identify and analyse the antibiotics used by Matabele ants in cooperation with chemistry research groups.

This may lead to the discovery of new antibiotics that could also be used in humans.

Read more at Science Daily