May 4, 2024

Astronomers' simulations support dark matter theory

Computer simulations by astronomers, including researchers at the University of California, Irvine, support the idea that dark matter exists -- matter that no one has yet directly detected but that many physicists think must be there to explain several aspects of the observable universe.

The work addresses a fundamental debate in astrophysics -- does invisible dark matter need to exist to explain how the universe works the way it does, or can physicists explain how things work based solely on the matter we can directly observe? Currently, many physicists think something like dark matter must exist to explain the motions of stars and galaxies.

"Our paper shows how we can use real, observed relationships as a basis to test two different models to describe the universe," said Francisco Mercado, lead author and recent Ph.D. graduate from the UC Irvine Department of Physics & Astronomy who is now a postdoctoral scholar at Pomona College. "We put forth a powerful test to discriminate between the two models."

The test involved running computer simulations with both types of matter -- normal and dark -- to explain the presence of intriguing features measured in real galaxies. The team reported their results in Monthly Notices of the Royal Astronomical Society.

The features the team found in galaxies "are expected to appear in a universe with dark matter but would be difficult to explain in a universe without it," said Mercado. "We show that such features appear in observations of many real galaxies. If we take these data at face value, this reaffirms the position of the dark matter model as the one that best describes the universe we live in."

These features, Mercado noted, describe patterns in the motions of stars and gas in galaxies that seem to be possible only in a universe with dark matter.

"Observed galaxies seem to obey a tight relationship between the matter we see and the inferred dark matter we detect, so much so that some have suggested that what we call dark matter is really evidence that our theory of gravity is wrong," said co-author James Bullock, professor of physics at UCI and dean of the UCI School of Physical Sciences. "What we showed is that not only does dark matter predict the relationship, but for many galaxies it can explain what we see more naturally than modified gravity. I come away even more convinced that dark matter is the right model."

The features also appear in observations made by proponents of a dark matter-free universe. "The observations we examined -- the very observations where we found these features -- were conducted by adherents of dark matter-free theories," said co-author Jorge Moreno, associate professor of physics and astronomy at Pomona College. "Despite their obvious presence, little-to-no analysis was performed on these features by that community. It took folks like us, scientists working with both regular and dark matter, to start the conversation."

Moreno added that he expects debate within his research community to follow in the wake of the study, but that there may be room for common ground, as the team also found that such features only appear in their simulations when there is both dark matter and normal matter in the universe.

"As stars are born and die, they explode into supernovae, which can shape the centers of galaxies, naturally explaining the existence of these features," said Moreno. "Simply put, the features we examined in observations require both the existence of dark matter and the incorporation of normal-matter physics."

Now that the dark matter model of the universe appears to be the leading one, the next step, Mercado explained, is to see if it remains consistent across a dark matter universe.

Read more at Science Daily

Rock solid evidence: Angola geology reveals prehistoric split between South America and Africa

An SMU-led research team has found that ancient rocks and fossils from long-extinct marine reptiles in Angola clearly show a key part of Earth's past -- the splitting of South America and Africa and the subsequent formation of the South Atlantic Ocean.

Given the easily visualized "jigsaw-puzzle fit" of their coastlines, it has long been known that the western coast of Africa and the eastern coast of South America once nestled together in the supercontinent Gondwana -- which broke off from the larger landmass of Pangea.

The research team says the southern coast of Angola, where they dug up the samples, arguably provides the most complete geological record ever documented on land of the two continents moving apart and the opening of the South Atlantic Ocean. The rocks and fossils found there date from 130 million to 71 million years ago.

"There are places that you can go to in South America, for instance, where you can see this part of the split or that part of it, but in Angola, it's all laid out in one place," said Louis L. Jacobs, SMU professor emeritus of Earth Sciences and president of ISEM. Jacobs is the lead author of a study published in The Geological Society, London, Special Publications.

"Before this, there was not a place known to go and see the rocks on the surface that really reflected the opening of the South Atlantic Ocean, because they're now in the ocean or eroded away," Jacobs said.

Angola rocks and fossils tell the whole story

Africa and South America started to split around 140 million years ago, causing gashes in Earth's crust called rifts to open up along pre-existing weaknesses. As the tectonic plates beneath South America and Africa moved apart, magma from the Earth's mantle rose to the surface, creating a new oceanic crust and pushing the continents away from each other. And eventually, the South Atlantic Ocean filled the void between these two newly-formed continents.

Scientists have previously found evidence of these events through geophysics and well cores drilled through the ocean floor.

But these tell-tale signs have never been found in one place, or been so clearly visible for anyone to see, said study co-author Michael J. Polcyn, research associate in the Huffington Department of Earth Sciences and senior research fellow, ISEM at SMU.

"It's one thing for a geophysicist to be able to look at seismic data and make inferences from that," he said. "It's quite another thing to be able to take a school field trip out to the rock formations, or outcrops, and say this is when the lava was spreading from eastern South America. Or this was when it was a continuous land."

Essentially, Angola presents the opportunity for someone to easily walk through each phase of this geologically significant chapter in Earth's history.

"That gives Angola major bragging rights," Jacobs said.

Jacobs, Polcyn and Diana P. Vineyard -- who is a research associate at SMU -- worked with an international team of paleontologists, geologists and others to analyze both the rock formations they found in eight different locations on the coast and the fossils within them.

Fieldwork in Angola's Namibe Province began in 2005. At that time, the research team recognized particular types of sediments, which gave them a good indication of what the western coast of Africa had been like at various stages millions of years ago. For instance, fields of lava revealed volcanic outpourings, and faults or breaks showed where the continents were being rifted apart. Sediments and salt deposits showed ocean flooding and evaporation, while overlying oceanic sediments and marine reptiles showed completion of the South Atlantic Ocean.

Paleontologists, meanwhile, discovered fossils in Angola from large marine reptiles that lived late in the Cretaceous Period, after the South Atlantic Ocean had fully opened and as it continued to widen.

By bringing together experts from a wide range of fields, "we were able to document when there was no ocean at all, to when there was a fresh enough ocean for those reptiles to thrive and have enough to eat," Vineyard said.

Many of the ancient fossils are currently on display at the Smithsonian's National Museum of Natural History "Sea Monsters Unearthed: Life in Angola's Ancient Seas" exhibit, which was co-produced with SMU -- a nationally-ranked Dallas-based private university.

Read more at Science Daily

Did a magnetic field collapse trigger the emergence of animals?

The Ediacaran Period, spanning from about 635 to 541 million years ago, was a pivotal time in Earth's history. It marked a transformative era during which complex, multicellular organisms emerged, setting the stage for the Cambrian explosion of life.

But how did this surge of life unfold and what factors on Earth may have contributed to it?

Researchers from the University of Rochester have uncovered compelling evidence that Earth's magnetic field was in a highly unusual state when the macroscopic animals of the Ediacaran Period diversified and thrived. Their study, published in Communications Earth & Environment, raises the question of whether these fluctuations in Earth's ancient magnetic field led to shifts in oxygen levels that may have been crucial to the proliferation of life forms millions of years ago.

According to John Tarduno, the William Kenan, Jr. Professor in the Department of Earth and Environmental Sciences, some of the most remarkable life forms of the Ediacaran Period were the Ediacaran fauna. These organisms were notable for their resemblance to early animals -- some even reached more than a meter (three feet) in size and were mobile, indicating they probably needed more oxygen than earlier life forms did.

"Previous ideas for the appearance of the spectacular Ediacaran fauna have included genetic or ecologic driving factors, but the close timing with the ultra-low geomagnetic field motivated us to revisit environmental issues, and, in particular, atmospheric and ocean oxygenation," says Tarduno, who is also the Dean of Research in the School of Arts & Sciences and the School of Engineering and Applied Sciences.

Earth's magnetic mysteries

About 1,800 miles below us, liquid iron churns in Earth's outer core, creating the planet's protective magnetic field. Though invisible, the magnetic field is essential for life on Earth because it shields the planet from solar wind -- streams of radiation from the sun. But Earth's magnetic field wasn't always as strong as it is today.

Researchers have proposed that an unusually low magnetic field might have contributed to the rise of animal life. However, it has been challenging to examine the link because of limited data about the strength of the magnetic field during this time.

Tarduno and his team used innovative strategies and techniques to examine the strength of the magnetic field by studying magnetism locked in ancient feldspar and pyroxene crystals from the rock anorthosite. The crystals contain magnetic particles that preserve magnetization from the time the minerals were formed. By dating the rocks, researchers can construct a timeline of the development of Earth's magnetic field.

Leveraging cutting-edge tools, including a CO2 laser and the lab's superconducting quantum interference device (SQUID) magnetometer, the team precisely analyzed the crystals and the magnetism locked within them.

A weak magnetic field

Their data indicate that Earth's magnetic field at times during the Ediacaran Period was the weakest known to date -- up to 30 times weaker than the magnetic field today -- and that the ultra-low field strength lasted for at least 26 million years.

A weak magnetic field makes it easier for charged particles from the sun to strip away lightweight atoms such as hydrogen from the atmosphere, causing them to escape into space. If hydrogen loss is significant, more oxygen may remain in the atmosphere instead of reacting with hydrogen to form water vapor. These reactions can lead to a buildup of oxygen over time.

The research conducted by Tarduno and his team suggests that during the Ediacaran Period, the ultraweak magnetic field caused a loss of hydrogen over at least tens of millions of years. This loss may have led to increased oxygenation of the atmosphere and surface ocean, enabling more advanced life forms to emerge.

Tarduno and his research team previously discovered that the geomagnetic field recovered its strength during the subsequent Cambrian Period, when most animal groups begin to appear in the fossil record; with the protective magnetic field reestablished, life could thrive.

"If the extraordinarily weak field had remained after the Ediacaran, Earth might look very different from the water-rich planet it is today: water loss might have gradually dried Earth," Tarduno says.

Core dynamics and evolution

The work suggests that understanding planetary interiors is crucial in contemplating the potential of life beyond Earth.

"It's fascinating to think that processes in Earth's core could be linked ultimately to evolution," Tarduno says. "As we think about the possibility of life elsewhere, we also need to consider how the interiors of planets form and develop."

Read more at Science Daily

May 3, 2024

Webb telescope probably didn't find life on an exoplanet -- yet

Recent reports of NASA's James Webb Space Telescope finding signs of life on a distant planet understandably sparked excitement. A new study challenges this finding, but also outlines how the telescope might verify the presence of the life-produced gas.

The UC Riverside study, published in the Astrophysical Journal Letters, may be a disappointment to extraterrestrial enthusiasts but does not rule out the near-future possibility of discovery.

In 2023 there were tantalizing reports of a biosignature gas in the atmosphere of planet K2-18b, which seemed to have several conditions that would make life possible.

Many exoplanets, meaning planets orbiting other stars, are not easily comparable to Earth. Their temperatures, atmospheres, and climates make it hard to imagine Earth-type life on them.

However, K2-18b is a bit different. "This planet gets almost the same amount of solar radiation as Earth. And if atmosphere is removed as a factor, K2-18b has a temperature close to Earth's, which is also an ideal situation in which to find life," said UCR project scientist and paper author Shang-Min Tsai.

K2-18b's atmosphere is mainly hydrogen, unlike Earth's nitrogen-dominated atmosphere. But there was speculation that K2-18b, like Earth, has water oceans. That would make K2-18b a potentially "Hycean" world: one that combines a hydrogen-rich atmosphere with water oceans.

Last year, a Cambridge team used JWST to reveal methane and carbon dioxide in the atmosphere of K2-18b -- other molecules that could point to signs of life.

"What was icing on the cake, in terms of the search for life, is that last year these researchers reported a tentative detection of dimethyl sulfide, or DMS, in the atmosphere of that planet, which is produced by ocean phytoplankton on Earth," Tsai said. DMS is the main source of airborne sulfur on our planet and may play a role in cloud formation.

Because the telescope data were inconclusive, the UCR researchers wanted to understand whether enough DMS could accumulate to detectable levels on K2-18b, which is about 120 light years away from Earth. As with any planet that far away, obtaining physical samples of atmospheric chemicals is impossible.

"The DMS signal from the Webb telescope was not very strong and only showed up in certain ways when analyzing the data," Tsai said. "We wanted to know if we could be sure of what seemed like a hint about DMS."

Based on computer models that account for the physics and chemistry of DMS, as well as the hydrogen-based atmosphere, the researchers found that it is unlikely the data show the presence of DMS. "The signal strongly overlaps with methane, and we think that picking out DMS from methane is beyond this instrument's capability," Tsai said.

However, the researchers believe it is possible for DMS to accumulate to detectable levels. For that to happen, plankton or some other life form would have to produce 20 times more DMS than is present on Earth.

Detecting life on exoplanets is a daunting task, given their distance from Earth. To find DMS, the Webb telescope would need to use an instrument better able to detect infrared wavelengths in the atmosphere than the one used last year. Fortunately, the telescope will use such an instrument later this year, revealing definitively whether DMS exists on K2-18b.

"The best biosignatures on an exoplanet may differ significantly from those we find most abundant on Earth today. On a planet with a hydrogen-rich atmosphere, we may be more likely to find DMS made by life instead of oxygen made by plants and bacteria as on Earth," said UCR astrobiologist Eddie Schwieterman, a senior author of the study.

Given the complexities of searching far-flung planets for signs of life, some may wonder what keeps the researchers motivated.

Read more at Science Daily

'Gap' in carbon removal: Countries' plans to remove CO2 not enough

New research involving the University of East Anglia (UEA) suggests that countries' current plans to remove CO2 from the atmosphere will not be enough to comply with the 1.5 °C warming limit set out under the Paris Agreement.

Since 2010, the United Nations environmental organisation UNEP has taken an annual measurement of the emissions gap -- the difference between countries' climate protection pledges and what is necessary to limit global heating to 1.5 °C, or at least below 2 °C.

The UNEP Emissions Gap Reports are clear: climate policy needs more ambition. This new study now explicitly applies this analytical concept to carbon dioxide removal (CDR) -- the removal of the most important greenhouse gas, CO2, from the atmosphere.

The study, published today in the journal Nature Climate Change, was led by the Berlin-based Mercator Research Institute on Global Commons and Climate Change (MCC) and involved an international team of scientists.

"In the Emissions Gap Reports, carbon removals are only accounted for indirectly," said lead author Dr William Lamb, of the MCC Applied Sustainability Science working group.

"After all, the usual benchmark for climate protection pledges is net emissions, i.e. emissions minus removals. We are now making transparent the specific ambition gap in scaling up removals.

"This planetary waste management will soon place completely new requirements on policymakers and may even become a central pillar of climate protection in the second half of the century."

Co-author Dr Naomi Vaughan, of the Tyndall Centre for Climate Change Research at UEA, added: "Carbon dioxide removal methods have a small but vital role to play in achieving net zero and limiting the impacts of climate change.

"Our analysis shows that countries need more awareness, ambition and action on scaling up CDR methods together with deep emissions reductions to achieve the aspirations of the Paris Agreement."

According to the study, if national targets are fully implemented, annual human-induced carbon removals could increase by a maximum of 0.5 gigatonnes of CO2 (500 million tonnes) by 2030, and by a maximum of 1.9 gigatonnes by 2050.

This contrasts with the 5.1 gigatonne increase required in a 'focus scenario', which the research team depicts as typical from the latest Intergovernmental Panel on Climate Change (IPCC) assessment report.

There, global heating, calculated over the entire course of this century, is limited to 1.5 °C, and a particularly rapid expansion of renewable energies and reduction of fossil emissions is depicted as the core climate protection strategy.

But, the focus scenario still relies on scaling up carbon removals. The gap for the year 2050 is therefore at least 3.2 gigatonnes of CO2 (5.1 minus a maximum of 1.9).
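
The gap figure quoted above is simple subtraction; a minimal sketch of the arithmetic, using the article's numbers (the function and argument names are illustrative, not from the study):

```python
def cdr_gap(required_increase_gt: float, pledged_increase_gt: float) -> float:
    """Ambition gap in carbon dioxide removal, in gigatonnes of CO2 per year:
    the increase a scenario requires minus the increase that fully
    implemented national targets would deliver."""
    return required_increase_gt - pledged_increase_gt

# Focus scenario for 2050: 5.1 Gt required vs. at most 1.9 Gt pledged
gap_2050 = cdr_gap(5.1, 1.9)  # about 3.2 Gt CO2 per year
```

The same subtraction applies to the alternative low-energy-demand scenario discussed below, just with smaller required-increase figures.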

An alternative focus scenario, also derived from the IPCC, assumes a significant reduction in global energy demand, due to politically initiated behaviour changes as the core element of climate protection strategy.

Here, carbon removals would increase by a more modest amount: 2.5 gigatonnes in 2050. Fully implemented national targets would be close to sufficient when compared to this scenario, with a gap in 2050 of 0.4 gigatonnes.

The research team points out the problem of sustainability limits in scaling up carbon removals; the associated demand for land, for example, could jeopardise biodiversity and food security. Nevertheless, there is still plenty of room for designing fair and sustainable land management policies.

In addition, novel carbon removal options, such as air filter systems, or 'enhanced rock weathering', have hardly been promoted by politicians to date.

They currently remove only 0.002 gigatonnes of CO2 per year from the atmosphere, compared with 3 gigatonnes through conventional options such as afforestation, and they are unlikely to increase significantly by 2030. According to the scenarios, however, they must become more prevalent than conventional options by 2100.

Since only 40 countries have so far quantified their removal plans in their long-term low emissions development strategies, the study also draws on other national documents and best-guess assumptions.

"The calculation should certainly be refined," said Dr Lamb. "But our proposal using the focus scenarios further opens the discourse on how much carbon removal is necessary to meet the Paris Agreement.

"This much is clear: without a rapid reduction in emissions towards zero, across all sectors, the 1.5 °C limit will not be met under any circumstances."

Read more at Science Daily

Significant new discovery in teleportation research -- Noise can improve the quality of quantum teleportation

In teleportation, the state of a quantum particle, or qubit, is transferred from one location to another without sending the particle itself. This transfer requires quantum resources, such as entanglement between an additional pair of qubits. In an ideal case, the transfer and teleportation of the qubit state can be done perfectly. However, real-world systems are vulnerable to noise and disturbances -- and this reduces and limits the quality of the teleportation.
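
As background, the ideal, noise-free protocol described above can be sketched as a small statevector simulation. This is an illustrative toy model only, assuming a perfect Bell pair and perfect gates; it does not reproduce the paper's hybrid polarisation-frequency scheme or its noise model, and all names are illustrative:

```python
import numpy as np

# Single-qubit operators; qubit 0 carries the state to send,
# qubits 1 and 2 hold the shared Bell pair (qubit 2 is Bob's).
I = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])
H = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def teleport_outcomes(psi):
    """Teleport |psi> through an ideal Bell pair and return Bob's corrected
    qubit for each of the four equally likely Bell-measurement outcomes."""
    bell = np.array([1., 0., 0., 1.]) / np.sqrt(2)      # |Phi+> on qubits 1,2
    state = np.kron(psi, bell)                          # 8-dim statevector
    # Bell-basis measurement of qubits 0,1 = CNOT(0 -> 1), then H on qubit 0
    state = np.kron(np.kron(H, I), I) @ np.kron(CNOT, I) @ state
    results = {}
    for m0 in (0, 1):
        for m1 in (0, 1):
            # Project qubits 0,1 onto outcome (m0, m1); qubit 2 remains
            base = 4 * m0 + 2 * m1
            bob = state[base:base + 2]
            bob = bob / np.linalg.norm(bob)
            # Bob's classical-bit-conditioned correction: apply X^m1, then Z^m0
            correction = np.linalg.matrix_power(Z, m0) @ np.linalg.matrix_power(X, m1)
            results[(m0, m1)] = correction @ bob
    return results
```

In this ideal case, Bob's corrected qubit equals the input state for every measurement outcome -- the "perfect" transfer the paragraph mentions. The result reported in the article concerns recovering this behavior when noise is present.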

Researchers from the University of Turku, Finland, and the University of Science and Technology of China, Hefei, have now proposed a theoretical idea and performed corresponding experiments to overcome this problem. In other words, the new approach enables high-quality teleportation despite the presence of noise.

"The work is based on an idea of distributing entanglement -- prior to running the teleportation protocol -- beyond the used qubits, i.e., exploiting the hybrid entanglement between different physical degrees of freedom," says Professor Jyrki Piilo from the University of Turku.

Conventionally, the polarisation of photons has been used for the entanglement of qubits in teleportation, while the current approach exploits the hybrid entanglement between the photons' polarisation and frequency.

"This allows for a significant change in how the noise influences the protocol, and as a matter of fact our discovery reverses the role of the noise from being harmful to being beneficial to teleportation," Piilo describes.

With conventional qubit entanglement in the presence of noise, the teleportation protocol does not work. In a case where there is initially hybrid entanglement and no noise, the teleportation does not work either.

"However, when we have hybrid entanglement and add noise, the teleportation and quantum state transfer occur in almost perfect manner," says Dr Olli Siltanen whose doctoral dissertation presented the theoretical part of the current research.

In general, the discovery enables almost ideal teleportation despite the presence of a certain type of noise when using photons for teleportation.

"While we have done numerous experiments on different facets of quantum physics with photons in our laboratory, it was very thrilling and rewarding to see this very challenging teleportation experiment successfully completed," says Dr Zhao-Di Liu from the University of Science and Technology of China, Hefei.

"This is a significant proof-of-principle experiment in the context of one of the most important quantum protocols," says Professor Chuan-Feng Li from the University of Science and Technology of China, Hefei.

Read more at Science Daily

Wild orangutan treats wound with pain-relieving plant

Although there is evidence of certain self-medication behaviors in animals, until now animals had never been observed treating their wounds with healing plants. Now, biologists from the Max Planck Institute of Animal Behavior, Germany, and Universitas Nasional, Indonesia, have observed this in a male Sumatran orangutan who sustained a facial wound. He ate and repeatedly applied sap from a climbing plant with anti-inflammatory and pain-relieving properties commonly used in traditional medicine. He also covered the entire wound with the green plant mesh. Medical wound treatment may thus have arisen in a common ancestor shared by humans and orangutans.

While sickness-related and avoidance behaviors can regularly be observed in non-human animals, self-medication in the form of ingesting specific plant parts is widespread in animals but exhibited at low frequencies. The closest relatives to humans, the great apes, are known to ingest specific plants to treat parasite infection and to rub plant material on their skin to treat sore muscles. Recently, a chimpanzee group in Gabon was observed applying insects to wounds. However, the efficacy of this behavior is still unknown. Wound treatment with a biologically active substance had so far not been documented.

In a study published in Scientific Reports, cognitive and evolutionary biologists from the Max Planck Institute of Animal Behavior, Konstanz, Germany, and Universitas Nasional, Indonesia, report evidence of active wound treatment with a healing plant in a wild male Sumatran orangutan. The study, led by Caroline Schuppli and Isabelle Laumer, took place at the Suaq Balimbing research site in Indonesia, a protected rainforest area home to approximately 150 critically endangered Sumatran orangutans. "During daily observations of the orangutans, we noticed that a male named Rakus had sustained a facial wound, most likely during a fight with a neighboring male," says Isabelle Laumer (MPI-AB), first author of the study.

Three days after the injury Rakus selectively ripped off leaves of a liana with the common name Akar Kuning (Fibraurea tinctoria), chewed on them, and then repeatedly applied the resulting juice precisely onto the facial wound for several minutes. As a last step, he fully covered the wound with the chewed leaves.

Says Laumer: "This and related liana species that can be found in tropical forests of Southeast Asia are known for their analgesic and antipyretic effects and are used in traditional medicine to treat various diseases, such as malaria. Analyses of plant chemical compounds show the presence of furanoditerpenoids and protoberberine alkaloids, which are known to have antibacterial, anti-inflammatory, anti-fungal, antioxidant, and other biological activities of relevance to wound healing."

Observations over the following days did not show any signs of the wound becoming infected and after five days the wound was already closed. "Interestingly, Rakus also rested more than usual when being wounded. Sleep positively affects wound healing as growth hormone release, protein synthesis and cell division are increased during sleep," she explains.

Like all self-medication behavior in non-human animals, the case reported in this study raises questions about how intentional these behaviors are and how they emerge. "The behavior of Rakus appeared to be intentional as he selectively treated his facial wound on his right flange, and no other body parts, with the plant juice. The behavior was also repeated several times, not only with the plant juice but also later with more solid plant material until the wound was fully covered. The entire process took a considerable amount of time," says Laumer.

"It is possible, that wound treatment with Fibraurea tinctoria by the orangutans at Suaq emerges through individual innovation," says Caroline Schuppli, senior author of the study. "Orangutans at the site rarely eat the plant. However, individuals may accidentally touch their wounds while feeding on this plant and thus unintentionally apply the plant's juice to their wounds. As Fibraurea tinctoria has potent analgesic effects, individuals may feel an immediate pain release, causing them to repeat the behavior several times."

Since the behavior has not been observed before, it may be that wound treatment with Fibraurea tinctoria has so far been absent from the behavioral repertoire of the Suaq orangutan population. Like all adult males in the area, Rakus was not born in Suaq, and his origin is unknown. "Orangutan males disperse from their natal area during or after puberty over long distances, either to establish a new home range in another area or to move between others' home ranges," explains Schuppli. "Therefore, it is possible that the behavior is shown by more individuals in his natal population outside the Suaq research area."

Read more at Science Daily

May 2, 2024

A 'cosmic glitch' in gravity

A group of researchers at the University of Waterloo and the University of British Columbia has discovered a potential "cosmic glitch" in the universe's gravity, explaining its strange behaviour on a cosmic scale.

For the last 100 years, physicists have relied upon Albert Einstein's theory of "general relativity" to explain how gravity works throughout the universe. General relativity, proven accurate by countless tests and observations, suggests that gravity impacts not simply three physical dimensions but also a fourth dimension: time.

"This model of gravity has been essential for everything from theorizing the Big Bang to photographing black holes," said Robin Wen, the lead author on the project and a recent Waterloo Mathematical Physics graduate.

"But when we try to understand gravity on a cosmic scale, at the scale of galaxy clusters and beyond, we encounter apparent inconsistencies with the predictions of general relativity. It's almost as if gravity itself stops perfectly matching Einstein's theory. We are calling this inconsistency a 'cosmic glitch': gravity becomes around one per cent weaker when dealing with distances in the billions of light years."

For more than twenty years, physicists and astronomers have been trying to create a mathematical model that explains the apparent inconsistencies of the theory of general relativity. Many of those efforts have taken place at Waterloo, which has a long history of cutting-edge gravitational research resulting from ongoing interdisciplinary collaboration between applied mathematicians and astrophysicists.

"Almost a century ago, astronomers discovered that our universe is expanding," said Niayesh Afshordi, a professor of astrophysics at the University of Waterloo and researcher at the Perimeter Institute.

"The farther away galaxies are, the faster they are moving, to the point that they seem to be moving at nearly the speed of light, the maximum allowed by Einstein's theory. Our finding suggests that, on those very scales, Einstein's theory may also be insufficient."

The research team's new model of a "cosmic glitch" modifies and extends Einstein's mathematical formulas in a way that resolves the inconsistency of some of the cosmological measurements without affecting existing successful uses of general relativity.

"Think of it as being like a footnote to Einstein's theory," Wen said. "Once you reach a cosmic scale, terms and conditions apply."

Read more at Science Daily

Climate change and mercury pollution stressed plants for millions of years

The link between massive flood basalt volcanism and the end-Triassic (201 million years ago) mass extinction is commonly accepted. However, exactly how volcanism led to the collapse of ecosystems and the extinction of entire families of organisms is difficult to establish. Extreme climate change from the release of carbon dioxide, degradation of the ozone layer due to the injection of damaging chemicals, and the emission of toxic pollutants are all seen as contributing factors. One toxic element stands out: mercury (Hg). As one of the most toxic elements on Earth, mercury is emitted from volcanoes in gaseous form and thus has the capacity to spread worldwide. A new study in Nature Communications adds compelling new evidence for the combined effects of global warming and widespread mercury pollution, which continued to stress plants long after volcanic activity had ceased.

An international team of Dutch, Chinese, Danish, British, and Czech scientists studied sediments from northern Germany in a drill core (Schandelah-1) that spans the uppermost Triassic to lower Jurassic, examining microfossils and geochemical signals. A study of pollen and spore abundances revealed a profusion of fern spores showing a range of malformations, from abnormalities in wall structure to evidence of botched meiotic divisions, leading to unseparated, dwarfed, and fused fern spores. "Seeing the sheer amount and different types of malformed fern spores in sediment samples from a coastal lagoon, dating back 201 million years, is truly astonishing. It means a great many ferns must have been under stress," explains Remco Bos, a PhD candidate at Utrecht University and lead author of the study. "It is also not something we see regularly during other periods that also contain many fern fossils, making it a true signal connected to the end-Triassic mass extinction event."

Deforestation and ferns

The results from Bos and co-authors confirm earlier work by co-authors Sofie Lindström (University of Copenhagen), Hamed Sanei (Aarhus University), and Bas van de Schootbrugge (Utrecht University), who previously produced similar data obtained from cores from Denmark and from nearby outcrops in Sweden. According to Sofie Lindström: "Ferns replaced trees across the extinction interval in response to dramatic environmental changes likely driven by heat stress, strongly increased monsoonal rainfall, and increased forest fire activity. Palynological results show that a pioneering fern vegetation spread across vast swaths of coastal lowlands in Northwestern Europe from Sweden and Denmark to Germany, France, Luxembourg, and Austria in response to widespread deforestation." Ferns are hardy plants, often colonizing disturbed environments, including newly formed volcanic islands or landscapes devastated by volcanism or wildfires. "What is extraordinary here is that the ferns that produced all these malformed spores in all these different sites did not go extinct. While other plants went extinct, ferns were apparently robust enough to continue, which could also be related to their different mercury tolerance."

Climate variability

In this new study, Bos and co-authors show that the ferns, which took advantage of the dieback of forests, were themselves subjected to stress from Hg pollution well beyond the immediate extinction interval. "We found four more intervals with high levels of Hg concentrations and high numbers of malformed spores in the 1.3 to 2 million years following the extinction interval," explains Remco Bos. This interval, known as the Hettangian, was a time of continuing adverse conditions in the oceans, with generally low diversity among marine invertebrates such as ammonites and bivalves. On land, however, vegetation appears to have recovered more quickly. "We now show that this forest ecosystem continued to be perturbed repeatedly for at least 1.3 million years, but perhaps as long as 2 million years," Bos explains.

The four additional episodes of high Hg concentrations and abundant fern spore malformations were unlikely to be connected to later phases of Central Atlantic Magmatic Province volcanism. Instead, Bos and co-authors show that these periods correspond closely to the long eccentricity cycle, the major variation in the shape of Earth's orbit that moves Earth closer to or farther from the Sun every 405,000 years. During eccentricity maxima, Earth's closest approach to the Sun allows more sunlight to reach Earth's surface. As the Earth's atmosphere was already supercharged with carbon dioxide from the large-scale volcanism, this cyclic modulation of the climate system repeatedly triggered forest dieback, allowing for the renewed spread of pioneer ferns. As shown by the correlation with high Hg contents, malformations in fern spores during these episodes were also the result of mercury poisoning. But where did this Hg come from?
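
The orbital geometry behind this 405,000-year pacing can be sketched with the standard ellipse relations. A minimal example (using illustrative endpoint eccentricity values of roughly 0.005 and 0.058, approximately the range Earth's orbit spans over the cycle):

```python
import math

# Perihelion distance and annual-mean insolation vs. orbital eccentricity.
# The eccentricity endpoints below are illustrative approximations of the
# range Earth traverses over the ~405,000-year long eccentricity cycle.

AU = 1.0  # Earth's semi-major axis, astronomical units

def perihelion(a: float, e: float) -> float:
    """Closest approach to the Sun for an orbit with semi-major axis a."""
    return a * (1 - e)

def mean_insolation_factor(e: float) -> float:
    """Annual-mean insolation relative to a circular orbit: 1/sqrt(1 - e^2)."""
    return 1.0 / math.sqrt(1 - e ** 2)

for e in (0.005, 0.058):
    print(f"e = {e}: perihelion {perihelion(AU, e):.4f} AU, "
          f"mean insolation x{mean_insolation_factor(e):.5f}")
```

The absolute insolation change is small, but in an atmosphere already supercharged with volcanic carbon dioxide, even a small cyclic nudge could repeatedly tip stressed ecosystems into dieback.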

Hg-isotopes

A crucial data set was generated at Tianjin University (China) by Wang Zheng, a co-corresponding author and geochemist specializing in metal isotope studies, especially Hg isotopes. Mercury has several stable isotopes that behave differently in the environment. During natural processes -- for example, expulsion during volcanism, deposition from the atmosphere, and uptake by organisms -- Hg isotopes become fractionated, enriching one pool in heavier isotopes and others in lighter isotopes. Sediments with elevated levels of Hg and malformed spores also show clear variations in Hg isotopes. "Based on the Hg-isotope variations we were able to link an initial pulse in Hg enrichment at the Triassic-Jurassic boundary to the emission of mercury from flood basalt volcanism," Wang Zheng explains. "However, the four other pulses in mercury had a different isotopic composition, indicating they were mainly driven by Hg input from soil erosion and photochemical reduction."
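
Isotope fractionation of this kind is conventionally reported in per-mil "delta" notation relative to a reference standard. A minimal sketch of the convention (the ratio values below are hypothetical, chosen only to illustrate the calculation, not data from the study):

```python
# Per-mil delta notation for isotope ratios, as used in Hg-isotope studies:
#   delta = (R_sample / R_standard - 1) * 1000   (parts per thousand)
# All ratio values below are hypothetical, for illustration only.

def delta_per_mil(r_sample: float, r_standard: float) -> float:
    """Isotope composition of a sample relative to a standard, in per mil."""
    return (r_sample / r_standard - 1.0) * 1000.0

r_std = 0.29799       # hypothetical reference isotope ratio
r_volcanic = 0.29770  # hypothetical sediment dominated by volcanic Hg
r_soil = 0.29820      # hypothetical sediment dominated by soil-derived Hg

print(f"volcanic-sourced: {delta_per_mil(r_volcanic, r_std):+.2f} per mil")
print(f"soil-sourced:     {delta_per_mil(r_soil, r_std):+.2f} per mil")
```

Distinct delta values for different sources are what let the team attribute the boundary pulse to volcanism and the four later pulses to soil erosion and photochemical reduction.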

Climate change and toxic pollution

The combined geochemical and microfossil data thus paint a picture of a much more complex and drawn-out sequence of events, starting with massive volcanism driving climate change and releasing toxic pollutants, followed by episodic pulses of disturbance in the aftermath of the extinction event lasting for at least 1.3 million years. Dr. Tomas Navratil from the Czech Academy of Sciences, a co-author on the paper and a specialist in modern-day mercury pollution, agrees with this scenario. "Our work on polluted sites in the Czech Republic does show evidence for episodic remobilization of mercury from forest soils, especially during hot summers and in places more exposed to sunlight, where photochemical reduction re-releases previously stored mercury to the atmosphere."

Read more at Science Daily

75,000-year-old female Neanderthal from cave where species buried their dead

A new documentary has recreated the face of a 75,000-year-old female Neanderthal whose flattened skull was discovered and rebuilt from hundreds of bone fragments by a team of archaeologists and conservators led by the University of Cambridge.

The team excavated the female Neanderthal in 2018 from inside a cave in Iraqi Kurdistan where the species had repeatedly returned to lay their dead to rest. The cave was made famous by work in the late 1950s that unearthed several Neanderthals which appeared to have been buried in succession.

'Secrets of the Neanderthals', produced by BBC Studios Science Unit, is released on Netflix worldwide. The documentary follows the team led by the universities of Cambridge and Liverpool John Moores as they return to Shanidar Cave to continue excavations.

"The skulls of Neanderthals and humans look very different," said Dr Emma Pomeroy, a palaeo-anthropologist from Cambridge's Department of Archaeology, who features in the new film.

"Neanderthal skulls have huge brow ridges and lack chins, with a projecting midface that results in more prominent noses. But the recreated face suggests those differences were not so stark in life.

"It's perhaps easier to see how interbreeding occurred between our species, to the extent that almost everyone alive today still has Neanderthal DNA."

Neanderthals are thought to have died out around 40,000 years ago, and the discoveries of new remains are few and far between. The Neanderthal featured in the documentary is the first from the cave for over fifty years, and perhaps the best preserved individual to be found this century.

While earlier finds were numbered, this one is called Shanidar Z, although researchers think it may be the top half of an individual excavated in 1960.

The head had been crushed, possibly by rockfall, relatively soon after death -- after the brain decomposed but before the cranium filled with dirt -- and then compacted further by tens of thousands of years of sediment.

When archaeologists found it, the skull was flattened to around two centimetres thick.

The team carefully exposed the remains, including an articulated skeleton almost to the waist, and used a glue-like consolidant to strengthen the bones and surrounding sediment. They removed Shanidar Z in dozens of small foil-wrapped blocks from under seven and a half metres of soil and rock within the heart of the cave.

In the Cambridge lab, researchers took micro-CT scans of each block before gradually diluting the glue and using the scans to guide extraction of bone fragments. Lead conservator Dr Lucía López-Polín pieced over 200 bits of skull together freehand to return it to its original shape, including upper and lower jaws.

"Each skull fragment is gently cleaned while glue and consolidant are re-added to stabilise the bone, which can be very soft, similar in consistency to a biscuit dunked in tea," said Pomeroy. "It's like a high stakes 3D jigsaw puzzle. A single block can take over a fortnight to process."

The team even referred to forensic science -- studies on how bones shift after blunt force trauma and during decomposition -- to help them understand if remains had been buried, and the ways in which teeth had pinged from jawbones.

The rebuilt skull was surface scanned and 3D-printed, forming the basis of a reconstructed head created by world-leading palaeoartists and identical twins Adrie and Alfons Kennis, who built up layers of fabricated muscle and skin to reveal a face.

New analysis strongly suggests that Shanidar Z was an older female, perhaps in her mid-forties according to researchers -- a significant age to reach so deep in prehistory.

Without pelvic bones, the team relied on sequencing tooth enamel proteins to determine her sex. Teeth were also used to gauge her age through levels of wear and tear -- with some front teeth worn down to the root. At around five feet tall, and with some of the smallest adult arm bones in the Neanderthal fossil record, her physique also implies a female.

While remnants of at least ten separate Neanderthals have now come from the cave, Shanidar Z is the fifth to be found in a cluster of bodies buried at a similar time in the same location: right behind a huge vertical rock, over two metres tall at the time, which sits in the centre of the cave.

The rock had come down from the ceiling long before the bodies were interred. Researchers say it may have served as a landmark for Neanderthals to identify a particular site for repeated burials.

"Neanderthals have had a bad press ever since the first ones were found over 150 years ago," said Professor Graeme Barker from Cambridge's McDonald Institute for Archaeological Research, who leads the excavations at the cave.

"Our discoveries show that the Shanidar Neanderthals may have been thinking about death and its aftermath in ways not so very different from their closest evolutionary cousins -- ourselves."

The other four bodies in the cluster were discovered by archaeologist Ralph Solecki in 1960. One was surrounded by clumps of ancient pollen. Solecki and pollen specialist Arlette Leroi-Gourhan argued the finds were evidence of funerary rituals where the deceased was laid to rest on a bed of flowers.

This archaeological work was among the first to suggest Neanderthals were far more sophisticated than the primitive creatures many had assumed, based on their stocky frames and ape-like brows.

Decades later, the Cambridge-led team retraced Solecki's dig, aiming to use the latest techniques to retrieve more evidence bearing on his contentious claims, as well as on the environment and activities of the Neanderthals and later modern humans who lived there. It was during this work that they uncovered Shanidar Z.

"Shanidar Cave was used first by Neanderthals and then by our own species, so it provides an ideal laboratory to tackle one of the biggest questions of human evolution," said Barker.

"Why did Neanderthals disappear from the stage around the same time as Homo sapiens spread over regions where Neanderthals had lived successfully for almost half a million years?"

A study led by Professor Chris Hunt of Liverpool John Moores University now suggests the pollen was left by bees burrowing into the cave floor. However, remains from Shanidar Cave still show signs of an empathetic species. For example, one male had a paralysed arm, deafness and head trauma that likely rendered him partially blind, yet had lived a long time, so must have been cared for.

Site analysis suggests that Shanidar Z was laid to rest in a gully formed by running water that had been further hollowed out by hand to accommodate the body. Posture indicates she had been leant against the side, with her left hand curled under her head, and a rock behind the head like a small cushion, which may have been placed there.

While Shanidar Z was buried within a similar timeframe as other bodies in the cluster, researchers cannot say how contemporaneous they are, only that they all date to around 75,000 years ago.

In fact, while filming onsite for the new documentary in 2022, the team found remains of yet another individual in the same burial cluster, uncovering the left shoulder blade, some ribs and a fairly complete right hand.

In the sediments several feet above, Solecki had found another three Neanderthals dating to around 50,000 years ago, and the current team has recovered more of their remains.

Further research since Shanidar Z was found has detected microscopic traces of charred food in the soil around the older body cluster. These carbonised bits of wild seeds, nuts, and grasses suggest not only that Neanderthals prepared food -- soaking and pounding pulses -- and then cooked it, but that they did so in the presence of their dead.

"The body of Shanidar Z was within arm's reach of living individuals cooking with fire and eating," said Pomeroy. "For these Neanderthals, there does not appear to be that clear separation between life and death."

"We can see that Neanderthals are coming back to one particular spot to bury their dead. This could be decades or even thousands of years apart. Is it just a coincidence, or is it intentional, and if so what brings them back?"

Read more at Science Daily

Sleep resets brain connections -- but only for first few hours

During sleep, the brain weakens the new connections between neurons that had been forged while awake -- but only during the first half of a night's sleep, according to a new study in fish by UCL scientists.

The researchers say their findings, published in Nature, provide insight into the role of sleep, but still leave an open question around what function the latter half of a night's sleep serves.

The researchers say the study supports the Synaptic Homeostasis Hypothesis, a key theory on the purpose of sleep which proposes that sleeping acts as a reset for the brain.

Lead author Professor Jason Rihel (UCL Cell & Developmental Biology) said: "When we are awake, the connections between brain cells get stronger and more complex. If this activity were to continue unabated, it would be energetically unsustainable. Too many active connections between brain cells could prevent new connections from being made the following day.

"While the function of sleep remains mysterious, it may be serving as an 'off-line' period when those connections can be weakened across the brain, in preparation for us to learn new things the following day."

For the study, the scientists used optically translucent zebrafish, with genes enabling synapses (structures that communicate between brain cells) to be easily imaged. The research team monitored the fish over several sleep-wake cycles.

The researchers found that brain cells gain more connections during waking hours and then lose them during sleep. They found that this was dependent on how much sleep pressure (need for sleep) the animal had built up before being allowed to rest; if the scientists deprived the fish of sleep for a few extra hours, the connections continued to increase until the animal was able to sleep.

Professor Rihel added: "If the patterns we observed hold true in humans, our findings suggest that this remodelling of synapses might be less effective during a mid-day nap, when sleep pressure is still low, rather than at night, when we really need the sleep."

The researchers also found that these rearrangements of connections between neurons mostly happened in the first half of the animal's nightly sleep. This mirrors the pattern of slow-wave activity, the part of the sleep cycle that is strongest at the beginning of the night.

Read more at Science Daily

May 1, 2024

Earth-like environment likely on ancient Mars

A research team using the ChemCam instrument onboard NASA's Curiosity rover discovered higher-than-usual amounts of manganese in lakebed rocks within Gale Crater on Mars, which indicates that the sediments were formed in a river or delta, or near the shoreline of an ancient lake. The results were published today in the Journal of Geophysical Research: Planets.

"It is difficult for manganese oxide to form on the surface of Mars, so we didn't expect to find it in such high concentrations in a shoreline deposit," said Patrick Gasda, of Los Alamos National Laboratory's Space Science and Applications group and lead author on the study. "On Earth, these types of deposits happen all the time because of the high oxygen in our atmosphere produced by photosynthetic life, and from microbes that help catalyze those manganese oxidation reactions.

"On Mars, we don't have evidence for life, and the mechanism to produce oxygen in Mars's ancient atmosphere is unclear, so how the manganese oxide was formed and concentrated here is really puzzling. These findings point to larger processes occurring in the Martian atmosphere or surface water and shows that more work needs to be done to understand oxidation on Mars," Gasda added.

ChemCam, which was developed at Los Alamos and CNES (the French space agency), uses a laser to form a plasma on the surface of a rock and collects the light from that plasma to quantify the elemental composition of the rock.

The sedimentary rocks explored by the rover are a mix of sands, silts, and muds. The sandy rocks are more porous, and groundwater can more easily pass through sands compared to the muds that make up most of the lakebed rocks in the Gale Crater. The research team looked at how manganese could have been enriched in these sands -- for example, by percolation of groundwater through the sands on the shore of a lake or mouth of a delta -- and what oxidant could be responsible for the precipitation of manganese in the rocks.

On Earth, manganese becomes enriched because of oxygen in the atmosphere, and this process is often sped up by the presence of microbes. Microbes on Earth can use the many oxidation states of manganese as energy for metabolism; if life was present on ancient Mars, the increased amounts of manganese in these rocks along the lake shore would have been a helpful energy source for life.

Read more at Science Daily

Revised dating of the Liujiang skeleton renews understanding of human occupation of China

The emergence of Homo sapiens in Eastern Asia has long been a subject of intense research interest, with the scarcity of well-preserved and dated human fossils posing significant challenges.

Tongtianyan cave, located in the Liujiang District of Liuzhou City, Southern China, has been a focal point of this research, housing one of the most significant fossil finds of Homo sapiens. However, the age of the fossils found within has been a matter of debate -- until now.

In a new international study in Nature Communications, with contributions by Griffith University, researchers have provided new age estimates and revised provenance information for the Liujiang human fossils, shedding light on the presence of Homo sapiens in the region.

Using advanced dating techniques, including U-series dating of the human fossils and radiocarbon and optically stimulated luminescence dating of fossil-bearing sediments, the study revealed new ages ranging from approximately 33,000 to 23,000 years ago. Previous studies had reported ages of up to 227,000 years for the skeleton.
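
As one illustration of the dating principle involved, an uncalibrated radiocarbon age follows directly from exponential decay of 14C (half-life 5,730 years). This is a sketch only: the study's actual ages come from calibrated radiocarbon, luminescence, and U-series methods.

```python
import math

# Uncalibrated radiocarbon age from the surviving fraction of 14C:
#   t = -(t_half / ln 2) * ln(fraction_remaining)
# Sketch of the decay principle only, not the study's calibrated workflow.

T_HALF_C14 = 5_730.0  # half-life of 14C, years

def radiocarbon_age(fraction_remaining: float) -> float:
    """Age in years implied by the surviving fraction of 14C."""
    return -(T_HALF_C14 / math.log(2)) * math.log(fraction_remaining)

def fraction_at(age_years: float) -> float:
    """Fraction of the original 14C surviving after the given age."""
    return math.exp(-math.log(2) * age_years / T_HALF_C14)

# At ~30,000 years, only a few percent of the original 14C survives:
f = fraction_at(30_000)
print(f"fraction remaining: {f:.3%}; recovered age: {radiocarbon_age(f):.0f} yr")
```

The tiny surviving fraction at these ages is why ages in the 23,000-33,000-year range sit near the practical edge of radiocarbon dating, and why complementary methods such as U-series and luminescence dating matter.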

"These revised age estimates align with dates from other human fossils in northern China, suggesting a geographically widespread presence of H. sapiens across Eastern Asia after 40,000 years ago," said Professor Michael Petraglia, study co-author and Director of Griffith's Australian Research Centre for Human Evolution.

Dr Junyi Ge, of the Chinese Academy of Sciences, and lead author of the study, said: "This finding holds significant implications for understanding human dispersals and adaptations in the region. It challenges previous interpretations and provides insights into the occupation history of China."

The Liujiang skeletal remains, discovered in 1958, have long been considered among the most significant human fossils from Eastern Asia.

With their excellent preservation, the cranial, dental, and postcranial remains have been the subjects of extensive biological and morphological comparisons across Eurasia.

Dr Qingfeng Shao, of the Nanjing Normal University added: "The findings of this study overturn earlier age estimates and palaeoanthropological interpretations, emphasising the need for robust dating methods and proper provenance documentation in the study of human evolution."

Read more at Science Daily

Every breath you take: Study models the journey of inhaled plastic particle pollution

With recent studies having established the presence of nano- and microplastic particles in the respiratory systems of both human and bird populations, a new University of Technology Sydney (UTS) study has modelled what happens when people breathe in different kinds of plastic particles and where they end up.

Led by Senior Lecturer of Mechanical Engineering Dr Suvash Saha, the UTS research team has used computational fluid-particle dynamics (CFPD) to study the transfer and deposition of particles of different sizes and shapes depending on the rate of breathing.

The results of the modelling, published in the journal Environmental Advances, have pinpointed hotspots in the human respiratory system where plastic particles can accumulate, from the nasal cavity and larynx and into the lungs.

Dr Saha said evidence was mounting of the significant impact of nano- and microplastics on respiratory health, and the UTS study would provide essential insights for the development of targeted strategies to mitigate potential risks and ensure effective health interventions.

"Experimental evidence has strongly suggested that these plastic particles amplify human susceptibility to a spectrum of lung disorders, including chronic obstructive pulmonary disease, fibrosis, dyspnea (shortness of breath), asthma, and the formation of what are called frosted glass nodules," Dr Saha said.

"Plastic particle air pollution is now pervasive and inhalation ranks as the second most likely pathway for human exposure.

"The primary types are intentionally manufactured, including a wide array of cosmetics and personal care products such as toothpaste.

"The secondary ones are fragments derived from the degradation of larger plastic products, such as water bottles, food containers and clothes.

"Extensive investigations have identified synthetic textiles as a principal source of indoor airborne plastic particles, while the outdoor environment presents a multitude of sources encompassing contaminated aerosols from the ocean to particles originating from wastewater treatment."

Dr Saha said the UTS team's modelling found that breathing rate along with particle size and shape determined where in the respiratory system plastic particles would be deposited.

"Faster breathing rates led to heightened deposition in the upper respiratory tract, particularly for larger microplastics, whereas slower breathing facilitated deeper penetration and deposition of smaller nanoplastic particles," he said.

"Particle shape was another factor, with non-spherical microplastic particles showing a propensity for deeper lung penetration compared to spherical microplastics and nanoplastics, potentially leading to different health outcomes.

Read more at Science Daily

Scientists work out the effects of exercise at the cellular level

The health benefits of exercise are well known but new research shows that the body's response to exercise is more complex and far-reaching than previously thought. In a study on rats, a team of scientists from across the United States has found that physical activity causes many cellular and molecular changes in all 19 of the organs they studied in the animals.

Exercise lowers the risk of many diseases, but scientists still don't fully understand how exercise changes the body on a molecular level. Most studies have focused on a single organ, sex, or time point, and only include one or two data types.

To take a more comprehensive look at the biology of exercise, scientists with the Molecular Transducers of Physical Activity Consortium (MoTrPAC) used an array of techniques in the lab to analyze molecular changes in rats as the animals were put through their paces over weeks of intense exercise. Their findings appear in Nature.

The team studied a range of tissues from the animals, such as the heart, brain, and lungs. They found that each of the organs they looked at changed with exercise, helping the body to regulate the immune system, respond to stress, and control pathways connected to inflammatory liver disease, heart disease, and tissue injury.

The data provide potential clues into many different human health conditions; for example, the researchers found a possible explanation for why the liver becomes less fatty during exercise, which could help in the development of new treatments for non-alcoholic fatty liver disease.

The team hopes that their findings could one day be used to tailor exercise to an individual's health status or to develop treatments that mimic the effects of physical activity for people who are unable to exercise. They have already started studies on people to track the molecular effects of exercise.

Launched in 2016, MoTrPAC draws together scientists from the Broad Institute of MIT and Harvard, Stanford University, the National Institutes of Health, and other institutions to shed light on the biological processes that underlie the health benefits of exercise. The Broad project was originally conceived of by Steve Carr, senior director of Broad's Proteomics Platform; Clary Clish, senior director of Broad's Metabolomics Platform; Robert Gerszten, a senior associate member at the Broad and chief of cardiovascular medicine at Beth Israel Deaconess Medical Center; and Christopher Newgard, a professor of nutrition at Duke University.

Co-first authors on the study include Pierre Jean-Beltran, a postdoctoral researcher in Carr's group at Broad when the study began, as well as David Amar and Nicole Gay of Stanford. Courtney Dennis and Julian Avila, both researchers in Clish's group, were also co-authors on the manuscript.

"It took a village of scientists with distinct scientific backgrounds to generate and integrate the massive amount of high quality data produced," said Carr, a co-senior author of the study. "This is the first whole-organism map looking at the effects of training in multiple different organs. The resource produced will be enormously valuable, and has already produced many potentially novel biological insights for further exploration."

The team has made all of the animal data available in an online public repository. Other scientists can use this site to download, for example, information about the proteins changing in abundance in the lungs of female rats after eight weeks of regular exercise on a treadmill, or the RNA response to exercise in all organs of male and female rats over time.

Whole-body analysis

Conducting such a large and detailed study required a lot of planning. "The amount of coordination that all of the labs involved in this study had to do was phenomenal," said Clish.

In partnership with Sue Bodine at the Carver College of Medicine at the University of Iowa, whose group collected tissue samples from animals after up to eight weeks of training, other members of the MoTrPAC team divided the samples up so that each lab -- Carr's team analyzing proteins, Clish's studying metabolites, and others -- would examine virtually identical samples.

"A lot of large-scale studies only focus on one or two data types," said Natalie Clark, a computational scientist in Carr's group. "But here we have a breadth of many different experiments on the same tissues, and that's given us a global overview of how all of these different molecular layers contribute to exercise response."

In all, the teams performed nearly 10,000 assays to make about 15 million measurements on blood and 18 solid tissues. They found that exercise impacted thousands of molecules, with the most extreme changes in the adrenal gland, which produces hormones that regulate many important processes such as immunity, metabolism, and blood pressure. The researchers uncovered sex differences in several organs, particularly related to the immune response over time. Most immune-signaling molecules unique to females showed changes in levels between one and two weeks of training, whereas those in males showed differences between four and eight weeks.

Some responses were consistent across sexes and organs. For example, the researchers found that heat-shock proteins, which are produced by cells in response to stress, were regulated in the same ways across different tissues. But other insights were tissue-specific. To their surprise, Carr's team found exercise-driven increases in the liver: in acetylation of mitochondrial proteins involved in energy production, and in a phosphorylation signal that regulates energy storage. These changes could help the liver become less fatty and less prone to disease with exercise, and could give researchers a target for future treatments of non-alcoholic fatty liver disease.

"Even though the liver is not directly involved in exercise, it still undergoes changes that could improve health. No one speculated that we'd see these acetylation and phosphorylation changes in the liver after exercise training," said Jean-Beltran. "This highlights why we deploy all of these different molecular modalities -- exercise is a very complex process, and this is just the tip of the iceberg."

"Two or three generations of research associates matured on this consortium project and learned what it means to carefully design a study and process samples," added Hasmik Keshishian, a senior group leader in Carr's group and co-author of the study. "Now we are seeing the results of our work: biologically insightful findings that are yielding from the high quality data we and others have generated.That's really fulfilling."

Read more at Science Daily

Apr 30, 2024

NASA's Webb maps weather on planet 280 light-years away

An international team of researchers has successfully used NASA's James Webb Space Telescope to map the weather on the hot gas-giant exoplanet WASP-43 b.

Precise brightness measurements over a broad spectrum of mid-infrared light, combined with 3D climate models and previous observations from other telescopes, suggest the presence of thick, high clouds covering the nightside, clear skies on the dayside, and equatorial winds upwards of 5,000 miles per hour mixing atmospheric gases around the planet.

The investigation is just the latest demonstration of the exoplanet science now possible with Webb's extraordinary ability to measure temperature variations and detect atmospheric gases trillions of miles away.

Tidally Locked "Hot Jupiter"


WASP-43 b is a "hot Jupiter" type of exoplanet: similar in size to Jupiter, made primarily of hydrogen and helium, and much hotter than any of the giant planets in our own solar system. Although its star is smaller and cooler than the Sun, WASP-43 b orbits at a distance of just 1.3 million miles -- less than 1/25th the distance between Mercury and the Sun.

With such a tight orbit, the planet is tidally locked, with one side continuously illuminated and the other in permanent darkness. Although the nightside never receives any direct radiation from the star, strong eastward winds transport heat around from the dayside.

Since its discovery in 2011, WASP-43 b has been observed with numerous telescopes, including NASA's Hubble and now-retired Spitzer space telescopes.

"With Hubble, we could clearly see that there is water vapor on the dayside. Both Hubble and Spitzer suggested there might be clouds on the nightside," explained Taylor Bell, researcher from the Bay Area Environmental Research Institute and lead author of a study published today in Nature Astronomy. "But we needed more precise measurements from Webb to really begin mapping the temperature, cloud cover, winds, and more detailed atmospheric composition all the way around the planet."

Mapping Temperature and Inferring Weather

Although WASP-43 b is too small, dim, and close to its star for a telescope to see directly, its short orbital period of just 19.5 hours makes it ideal for phase curve spectroscopy, a technique that involves measuring tiny changes in brightness of the star-planet system as the planet orbits the star.

Since the amount of mid-infrared light given off by an object depends largely on how hot it is, the brightness data captured by Webb can then be used to calculate the planet's temperature.
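The step from measured brightness to temperature can be sketched by inverting Planck's law: a blackbody's spectral radiance at a given wavelength determines a unique "brightness temperature." The following is a simplified illustration of that idea, not the team's actual analysis pipeline; the 8-micron wavelength and the temperature used in the round trip are assumptions chosen for the example.

```python
import math

H = 6.626e-34   # Planck constant (J s)
C = 2.998e8     # speed of light (m/s)
KB = 1.381e-23  # Boltzmann constant (J/K)

def planck_radiance(wavelength_m, temp_k):
    """Spectral radiance B_lambda of a blackbody (W m^-3 sr^-1)."""
    a = 2.0 * H * C**2 / wavelength_m**5
    b = H * C / (wavelength_m * KB * temp_k)
    return a / (math.exp(b) - 1.0)

def brightness_temperature(wavelength_m, radiance):
    """Invert Planck's law: temperature of a blackbody emitting this radiance."""
    a = 2.0 * H * C**2 / wavelength_m**5
    return (H * C / (wavelength_m * KB)) / math.log(1.0 + a / radiance)

# Round trip at 8 microns (mid-infrared), near the reported dayside temperature:
wl = 8e-6
rad = planck_radiance(wl, 1523.0)            # ~1,250 deg C expressed in kelvin
print(brightness_temperature(wl, rad))       # recovers ~1523 K
```

In practice the planet is not a perfect blackbody and the measurement spans a broad band rather than a single wavelength, but this monotonic brightness-temperature relationship is what lets phase-resolved photometry be read as a temperature map.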

The team used Webb's MIRI (Mid-Infrared Instrument) to measure light from the WASP-43 system every 10 seconds for more than 24 hours. "By observing over an entire orbit, we were able to calculate the temperature of different sides of the planet as they rotate into view," explained Bell. "From that, we could construct a rough map of temperature across the planet."

The measurements show that the dayside has an average temperature of nearly 2,300 degrees Fahrenheit (1,250 degrees Celsius) -- hot enough to forge iron. Meanwhile, the nightside is significantly cooler at 1,100 degrees Fahrenheit (600 degrees Celsius). The data also helps locate the hottest spot on the planet (the "hotspot"), which is shifted slightly eastward from the point that receives the most stellar radiation, where the star is highest in the planet's sky. This shift occurs because of supersonic winds, which move heated air eastward.

"The fact that we can map temperature in this way is a real testament to Webb's sensitivity and stability," said Michael Roman, a co-author from the University of Leicester in the U.K.

To interpret the map, the team used complex 3D atmospheric models like those used to understand weather and climate on Earth. The analysis shows that the nightside is probably covered in a thick, high layer of clouds that prevent some of the infrared light from escaping to space. As a result, the nightside -- while very hot -- looks dimmer and cooler than it would if there were no clouds.

Missing Methane and High Winds

The broad spectrum of mid-infrared light captured by Webb also made it possible to measure the amount of water vapor (H2O) and methane (CH4) around the planet. "Webb has given us an opportunity to figure out exactly which molecules we're seeing and put some limits on the abundances," said Joanna Barstow, a co-author from the Open University in the U.K.

The spectra show clear signs of water vapor on the nightside as well as the dayside of the planet, providing additional information about how thick the clouds are and how high they extend in the atmosphere.

Surprisingly, the data also shows a distinct lack of methane anywhere in the atmosphere. Although the dayside is too hot for methane to exist (most of the carbon should be in the form of carbon monoxide), methane should be stable and detectable on the cooler nightside.

"The fact that we don't see methane tells us that WASP-43 b must have wind speeds reaching something like 5,000 miles per hour," explained Barstow. "If winds move gas around from the dayside to the nightside and back again fast enough, there isn't enough time for the expected chemical reactions to produce detectable amounts of methane on the nightside."
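A rough order-of-magnitude check shows why such winds can homogenize the chemistry. If gas is swept from dayside to nightside in about a day, and the chemical reactions that would build up methane on the cool nightside are slower than that, methane never accumulates to detectable levels. The planet's radius here is an assumption (roughly Jupiter-sized, as the article states), so this is only a back-of-the-envelope sketch:

```python
import math

R_JUP_M = 7.15e7          # Jupiter's equatorial radius (m)
radius_m = 1.0 * R_JUP_M  # assume WASP-43 b is roughly Jupiter-sized
wind_ms = 5000 * 0.44704  # 5,000 mph converted to m/s (~2.2 km/s)

# Time for wind to carry gas from the dayside to the nightside,
# i.e. half the planet's circumference at the equator:
t_mix_s = math.pi * radius_m / wind_ms
print(t_mix_s / 3600)     # ~28 hours
```

A mixing time of order a day, comparable to the planet's 19.5-hour orbit, is short enough that the atmosphere is continually stirred faster than nightside methane chemistry can keep up.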

The team thinks that because of this wind-driven mixing, the atmospheric chemistry is the same all the way around the planet, which wasn't apparent from past work with Hubble and Spitzer.

Read more at Science Daily

How can forests be reforested in a climate-friendly way?

Europe's forests have already been severely affected by climate change. Thousands of hectares of trees have already died due to drought and bark beetles. Scientists from the University of Vienna and the Technical University of Munich (TUM) have now investigated which trees can be used for reforestation. Their findings: only a few tree species are fit for the future, such as English oak in the UK. However, mixed forests are important for the survival of forests, otherwise the forest ecosystem as a whole could be weakened. The results of the study were recently published in the renowned journal Nature Ecology & Evolution.

Although European forests are naturally home to a mix of trees, the number of tree species is lower than in climatically comparable areas of North America or East Asia. In the future, even fewer species will be available to the forestry industry, as scientists led by Johannes Wessely and Stefan Dullinger from the University of Vienna have shown in their new study. Depending on the region, between a third and a half of the tree species found there today will no longer be able to cope with future conditions. "This is an enormous decline," says lead author Johannes Wessely, "especially when you consider that only some of the species are of interest for forestry."

The scientists examined 69 of the more common of the just over 100 European tree species, assessing their prospects across Europe over the 21st century. On average, only nine of these 69 species per location are fit for the future in Europe, compared to four in the UK. "Trees that are planted now for reforestation must survive under both current and future conditions. This is difficult because they have to withstand the cold and frost of the next few years as well as a much warmer climate at the end of the 21st century. There is only a very small overlap," says Wessely. In the UK, these climate-fit species include, for example, the English oak. Which tree species will suit which region of Europe in the future varies greatly overall.

Forest ecosystem at risk due to restriction of species

However, even with the selected set of future-proof trees, a major problem remains: the average of nine species is not enough for a species-rich mixed forest. "Mixed forests consisting of many tree species are an important measure to make forests more robust against disturbances such as bark beetles. In some places in Europe, however, we could run out of tree species to establish such colorful mixed forests," explains last author Rupert Seidl from the Technical University of Munich (TUM).

Not all trees offer important properties


Trees store carbon, provide a habitat or food source for animals or can be processed into timber -- these are all important properties of forests. But not all trees fulfill these functions equally; only an average of three of the nine climate-fit tree species can do this.

"Our work clearly shows how severely the vitality of forests is affected by climate change. We cannot rely solely on a new mix of tree species; rapid measures to mitigate climate change are essential for the sustainable protection of our forests," says Wessely.

Read more at Science Daily

The double-fanged adolescence of saber-toothed cats

The fearsome, saber-like teeth of Smilodon fatalis -- California's state fossil -- are familiar to anyone who has ever visited Los Angeles' La Brea Tar Pits, a sticky trap from which more than 2,000 saber-toothed cat skulls have been excavated over more than a century.

Though few of the recovered skulls had sabers attached, a handful exhibited a peculiar feature: the tooth socket for the saber was occupied by two teeth, with the permanent tooth slotted into a groove in the baby tooth.

Paleontologist Jack Tseng, associate professor of integrative biology at the University of California, Berkeley, doesn't think the double fangs were a fluke.

Nine years ago, he joined a few colleagues in speculating that the baby tooth helped to stabilize the permanent tooth against sideways breakage as it erupted. The researchers interpreted growth data for the saber-toothed cat to imply that the two teeth existed side by side for up to 30 months during the animal's adolescence, after which the baby tooth fell out.

In a new paper accepted for publication in the journal The Anatomical Record, Tseng provides the first evidence that the saber tooth alone would have been increasingly vulnerable to lateral breakage during eruption, but that a baby or milk tooth alongside it would have made it much more stable. The evidence consists of computer modeling of saber-tooth strength and stiffness against sideways bending, and actual testing and breaking of plastic models of saber teeth.

"This new study is a confirmation -- a physical and simulation test -- of an idea some collaborators and I published a couple of years ago: that the timing of the eruption of the sabers has been tweaked to allow a double-fang stage," said Tseng, who is a curator in the UC Museum of Paleontology. "Imagine a timeline where you have the milk canine coming out, and when they finish erupting, the permanent canine comes out and overtakes the milk canine, eventually pushing it out. What if this milk tooth, for the 30 or so months that it was inside the mouth right next to this permanent tooth, was a mechanical buttress?"

He speculates that the baby canine -- one of the deciduous teeth all mammals grow and lose by adulthood -- remained in place long after the permanent saber tooth erupted, protecting the saber while the maturing cats learned how to hunt without damaging it. Eventually, the baby tooth would fall out and the adult would lose the saber support, presumably having learned how to be careful with its saber. Paleontologists still do not know how saber-toothed animals like Smilodon hunted prey without breaking their unwieldy sabers.

"The double-fang stage is probably worth a rethinking now that I've shown there's this potential insurance policy, this larger range of protection," he said. "It allows the equivalent of our teenagers to experiment, to take risks, essentially to learn how to be a full-grown, fully fledged predator. I think that this refines, though it doesn't solve, thinking about the growth of saber tooth use and hunting through a mechanical lens."

The study also has implications for how saber-toothed cats and other saber-toothed animals hunted as adults, presumably using their predatory skills and strong muscles to compensate for vulnerable canines.

Beam theory

Thanks to the wealth of saber-toothed cat fossils, which includes many thousands of skeletal parts in addition to skulls, unearthed from the La Brea Tar Pits, scientists know a lot more about Smilodon fatalis than about any other saber-toothed animal, even though at least five separate lineages of saber-toothed animals evolved around the world. Smilodon roamed widely across North America and into Central America, going extinct about 10,000 years ago.

Yet paleontologists are still confounded by the fact that adult animals with thin-bladed knives for canines apparently avoided breaking them frequently despite the sideways forces likely generated during biting. One study of the La Brea predator fossils found that during periods of animal scarcity, saber-toothed cats did break their teeth more often than in times of plenty, perhaps because of altered feeding strategies.

The double-fanged specimens from La Brea, which have been considered rare cases of individuals with delayed loss of the baby tooth, gave Tseng a different idea -- that they had an evolutionary purpose. To test his hypothesis, he used beam theory -- a type of engineering analysis employed widely to model structures ranging from bridges to building materials -- to model real-life saber teeth. He combined this with finite element analysis, which uses computer models to simulate the sideways forces a saber tooth could withstand before breaking.

"According to beam theory, when you bend a blade-like structure laterally sideways in the direction of their narrower dimension, they are quite a lot weaker compared to the main direction of strength," Tseng said. "Prior interpretations of how saber tooths may have hunted use this as a constraint. No matter how they use their teeth, they could not have bent them a lot in a lateral direction."

He found that while the saber's bending strength -- how much force it can withstand before breaking -- remained about the same throughout its elongation, the saber's stiffness -- its resistance to deflection under a given force -- decreased with increasing length. In essence, as the tooth got longer, it was easier to bend, increasing the chance of breakage.

By adding a supportive baby tooth in the beam theory model, however, the stiffness of the permanent saber kept pace with the bending strength, reducing the chance of breaking.

"During the time period when the permanent tooth is erupting alongside the milk one, it is around the time when you switch from maximum width to the relatively narrower width, when that tooth will be getting weaker," Tseng said. "When you add an additional width back into the beam theory equation to account for the baby saber, the overall stiffness more closely aligned with theoretical optimal."
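The beam-theory argument can be illustrated with a toy calculation. Modeling the erupting saber as a cantilever beam with a rectangular cross-section, lateral stiffness scales as the inverse cube of length but as the cube of side-to-side width, so a milk tooth that effectively widens the section can offset the stiffness lost as the saber lengthens. The dimensions and elastic modulus below are illustrative assumptions for the sketch, not measurements from Tseng's paper:

```python
def lateral_stiffness(length_m, width_m, depth_m, e_modulus_pa):
    """Cantilever lateral stiffness k = 3EI/L^3 for a rectangular
    cross-section bent sideways (I = depth * width^3 / 12)."""
    i_lateral = depth_m * width_m**3 / 12.0
    return 3.0 * e_modulus_pa * i_lateral / length_m**3

E = 20e9        # illustrative Young's modulus for dentine (~20 GPa)
DEPTH = 0.03    # front-to-back blade depth, 3 cm (illustrative)
WIDTH = 0.008   # side-to-side width, 8 mm (illustrative)

# Stiffness falls steeply as the erupting saber lengthens:
short_tooth = lateral_stiffness(0.06, WIDTH, DEPTH, E)
long_tooth = lateral_stiffness(0.12, WIDTH, DEPTH, E)
print(short_tooth / long_tooth)   # doubling length cuts stiffness 8x (1/L^3)

# A milk tooth alongside effectively widens the section, restoring stiffness:
buttressed = lateral_stiffness(0.12, WIDTH * 2, DEPTH, E)
print(buttressed / long_tooth)    # doubling width raises stiffness 8x (w^3)
```

The cubic scalings in both directions are why a relatively modest buttress alongside the blade can compensate for the stiffness lost during eruption, which is the core of the double-fang hypothesis.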

Though not reported in the paper, he also 3D-printed resin replicas of saber teeth and tested their bending strength and stiffness on a machine designed to measure tensile strength. The results of these tests mirrored the conclusions from the computer simulations. He is hoping to 3D-print replicas from more life-like dental material to more accurately simulate the strength of real teeth.

Tseng noted that the same canine stabilization system may have evolved in other saber-toothed animals. While no examples of double fangs in other species have been found in the fossil record, some skulls have been found with adult teeth elsewhere in the jaws but milk teeth where the saber would erupt.

"What we do see is milk canines preserved on specimens with otherwise adult dentition, which suggests a prolonged retention of those milk canines while the adult tooth, the sabers, are either about to erupt or erupting," he said.

Read more at Science Daily

A virus could help save billions of gallons of wastewater produced by fracking

An estimated 168 billion gallons of wastewater -- or produced water -- is generated annually by the Permian Basin fracking industry, according to a 2022 report by the Texas Produced Water Consortium. The major waste stream has proved both difficult and costly to treat because of the chemical complexity of the water.

In a new study published in the journal Water, researchers at The University of Texas at El Paso have identified a novel means of treating the wastewater generated by oil and gas production: bacteriophages.

Ramón Antonio Sánchez, a doctoral candidate within UTEP's chemistry program, is the first author on the publication, detailing how bacteriophages, viruses that are often highly specific and lethal to a single species of bacteria, can be used as a rapid and cost-effective method to treat produced water on an industrial scale.

Sánchez said if the work is successful, it would give the oil and gas industry a means of treating, reusing and recycling produced water, rather than the current industry practice of disposing the majority of produced water by injecting it into the ground post oil exploration.

The research focuses on two of the most prominent bacteria found within produced water across the oil and gas industry -- Pseudomonas aeruginosa and Bacillus megaterium. P. aeruginosa has the ability to corrode stainless steel and presents a challenge for the longevity of pipelines and other metal-based infrastructure, while B. megaterium can decompose hydrocarbons -- the basis for oil.

Sánchez, along with one of his collaborators, Zacariah Hildenbrand, Ph.D., a UTEP alum, was inspired to use bacteriophages based on their applications in the medical industry, where they are used to combat infections caused by multi-drug-resistant bacteria.

"Since the bacteria are living organisms, over time they developed a resistance, in the form of a less penetrable membrane, to traditional disinfectants," Sánchez explained. "But the bacteriophages, which are viruses themselves, attach to specific receptors on the surface of the host cell and evolve alongside the bacteria they are trying to infect, meaning that any resistance acquired by the bacteria triggers the modification of bacteriophages to keep the infection going."

The team's experiments with bacteriophages have been effective, achieving the inactivation of both P. aeruginosa and B. megaterium in laboratory settings. For Sánchez, who graduates this spring with his Ph.D., the work will continue in the industry where his focus will be on replicating his laboratory results out in the field. He will also try to expand the number of microorganisms that can be treated in produced water by securing a larger catalog of bacteriophages.

Read more at Science Daily

Apr 29, 2024

Probing the effects of interplanetary space on asteroid Ryugu

Samples reveal evidence of changes experienced by the surface of asteroid Ryugu, some probably due to micrometeoroid bombardment.

Analyzing samples retrieved from the asteroid Ryugu by the Japan Aerospace Exploration Agency's (JAXA) Hayabusa2 spacecraft has revealed new insights into the magnetic and physical bombardment environment of interplanetary space. The results of the study, carried out by Professor Yuki Kimura at Hokkaido University and co-workers at 13 other institutions in Japan, are published in the journal Nature Communications.

The investigations used electron waves penetrating the samples to reveal details of their structure and magnetic and electric properties, a technique called electron holography.

Hayabusa2 reached asteroid Ryugu on 27 June 2018, collected samples during two delicate touchdowns, and then returned the jettisoned samples to Earth in December 2020. The spacecraft is now continuing its journey through space, with plans for it to observe two other asteroids in 2029 and 2031.

One advantage of collecting samples directly from an asteroid is that it allows researchers to examine long-term effects of its exposure to the environment of space. The 'solar wind' of high energy particles from the sun and bombardment by micrometeoroids cause changes known as space-weathering. It is impossible to study these changes precisely using most of the meteorite samples that land naturally on Earth, partly due to their origin from the internal parts of an asteroid, and also due to the effects of their fiery descent through the atmosphere.

"The signatures of space weathering we have detected directly will give us a better understanding of some of the phenomena occurring in the Solar System," says Kimura. He explains that the strength of the magnetic field in the early solar system decreased as planets formed, and measuring the remnant magnetization on asteroids can reveal information about the magnetic field in the very early stages of the solar system.

Kimura adds, "In future work, our results could also help to reveal the relative ages of surfaces on airless bodies and assist in the accurate interpretation of remote sensing data obtained from these bodies."

One particularly interesting finding was that small mineral grains called framboids, composed of magnetite, a form of iron oxide, had completely lost their normal magnetic properties. The researchers suggest this was due to collision with high velocity micrometeoroids between 2 and 20 micrometers in diameter. The framboids were surrounded by thousands of metallic iron nanoparticles. Future studies of these nanoparticles will hopefully reveal insights into the magnetic field that the asteroid has experienced over long periods of time.

Read more at Science Daily

Scientists capture X-rays from upward positive lightning

Globally, lightning is responsible for over 4,000 fatalities and billions of dollars in damage every year; Switzerland itself weathers up to 150,000 strikes annually. Understanding exactly how lightning forms is key for reducing risk, but because lightning phenomena occur on sub-millisecond timescales, direct measurements are extremely difficult to obtain.

Now, researchers from the Electromagnetic Compatibility Lab, led by Farhad Rachidi, in EPFL's School of Engineering have for the first time directly measured an elusive phenomenon that explains a lot about the birth of a lightning bolt: X-ray radiation. In a collaborative study with the University of Applied Sciences of Western Switzerland and Uppsala University in Sweden, they recorded lightning strikes at the Säntis tower in northeastern Switzerland, identifying X-rays associated with the beginning of upward positive flashes. These flashes start with negatively charged tendrils (leaders) that ascend stepwise from a high-altitude object, before connecting with a thundercloud, transferring positive charge to the ground.

"At sea level, upward flashes are rare, but could become the dominant type at high altitudes. They also have the potential to be more damaging, because in an upward flash, lightning remains in contact with a structure for longer than it does during a downward flash, giving it more time to transfer electrical charge," explains Electromagnetic Compatibility Lab PhD candidate Toma Oregel-Chaumont.

Although X-ray emissions have previously been observed from other types of lightning, this is the first time they have been captured from upward positive flashes. Oregel-Chaumont, the first author on a recent Nature Scientific Reports paper describing the observations, says that they offer valuable insights into how lightning -- and upward lightning in particular -- forms.

"The actual mechanism by which lightning initiates and propagates is still a mystery. The observation of upward lightning from tall structures like the Säntis tower makes it possible to correlate X-ray measurements with other simultaneously measured quantities, like high-speed video observations and electric currents."

A unique observation opportunity

It's perhaps not surprising that the novel observations were made in Switzerland, as the Säntis tower offers unique and ideal measurement conditions. The 124-meter tower is perched atop a high peak of the Appenzell Alps, making it a prime lightning target. There is a clear line of sight from neighboring peaks, and the expansive research facility is packed with high-speed cameras, X-ray detectors, electric field sensors, and current-measuring devices.

Crucially, the speed and sensitivity of this equipment allowed the team to see a difference between negative leader steps that emitted X-rays and those that did not, supporting a theory of lightning formation known as the cold runaway electron model. In a nutshell, the association of X-rays with very rapid electric field changes supports the idea that sudden increases in the air's electric field cause ambient electrons to "run away" and become a plasma: lightning.

"As a physicist, I like to be able to understand the theory behind observations, but this information is also important for understanding lightning from an engineering perspective: More and more high-altitude structures, like wind turbines and aircraft, are being built from composite materials. These are less conductive than metals like aluminum, so they heat up more, making them vulnerable to damage from upward lightning," Oregel-Chaumont says.

Read more at Science Daily

After 25 years, researchers uncover genetic cause of rare neurological disease

Spinocerebellar ataxia 4 is a devastating progressive movement disease that can begin as early as the late teens. Now, a multinational research team led by University of Utah researchers has conclusively identified the genetic difference that causes the disease, bringing answers to families and opening the door to future treatments.

Some families call it a trial of faith. Others just call it a curse. The progressive neurological disease known as spinocerebellar ataxia 4 (SCA4) is a rare condition, but its effects on patients and their families can be severe. For most people, the first sign is difficulty walking and balancing, which gets worse as time progresses. The symptoms usually start in a person's forties or fifties but can begin as early as the late teens. There is no known cure. And, until now, there was no known cause.

Now, after 25 years of uncertainty, a multinational study led by Stefan Pulst, M.D., Dr. med., professor and chair of neurology, and K. Pattie Figueroa, a project manager in neurology, both in the Spencer Fox Eccles School of Medicine at University of Utah, has conclusively identified the genetic difference that causes SCA4, bringing answers to families and opening the door to future treatments. Their results are published in the peer-reviewed journal Nature Genetics.

Solving a genetic enigma


SCA4's pattern of inheritance had long made it clear that the disease was genetic, and previous research had located the gene responsible to a specific region of one chromosome. But that region proved extraordinarily difficult for researchers to analyze: full of repeated segments that look like parts of other chromosomes, and with an unusual chemical makeup that makes most genetic tests fail.

To pinpoint the change that causes SCA4, Figueroa and Pulst, along with the rest of the research team, used a recently developed advanced sequencing technology. By comparing DNA from affected and unaffected people from several Utah families, they found that in SCA4 patients, a section in a gene called ZFHX3 is much longer than it should be, containing an extra-long string of repetitive DNA.

Isolated human cells that have the extra-long version of ZFHX3 show signs of being sick -- they don't seem able to recycle proteins as well as they should, and some of them contain clumps of stuck-together protein.

"This mutation is a toxic expanded repeat and we think that it actually jams up how a cell deals with unfolded or misfolded proteins," says Pulst, the last author on the study. Healthy cells need to constantly break down non-functional proteins. Using cells from SCA4 patients, the group showed that the SCA4-causing mutation gums up the works of cells' protein-recycling machinery in a way that could poison nerve cells.

Hope for the future


Intriguingly, something similar seems to be happening in another form of ataxia, SCA2, which also interferes with protein recycling. The researchers are currently testing a potential therapy for SCA2 in clinical trials, and the similarities between the two conditions raise the possibility that the treatment might benefit patients with SCA4 as well.

Finding the genetic change that leads to SCA4 is essential to develop better treatments, Pulst says. "The only step to really improve the life of patients with inherited disease is to find out what the primary cause is. We now can attack the effects of this mutation potentially at multiple levels."

But while treatments will take a long time to develop, simply knowing the cause of the disease can be incredibly valuable for families affected by SCA4, says Figueroa, the first author on the study. People in affected families can learn whether they have the disease-causing genetic change or not, which can help inform life decisions such as family planning. "They can come and get tested and they can have an answer, for better or for worse," Figueroa says.

The researchers emphasize that their discoveries would not have been possible without the generosity of SCA4 patients and their families, whose sharing of family records and biological samples allowed them to compare the DNA of affected and unaffected individuals. "Different branches of the family opened up not just their homes but their history to us," Figueroa says. Family records were complete enough that the researchers were able to trace the origins of the disease in Utah back through history to a pioneer couple who moved to Salt Lake Valley in the 1840s.

Since meeting so many families with the disease, studying SCA4 has become a personal quest, Figueroa adds. "I've been working on SCA4 directly since 2010 when the first family approached me, and once you go to their homes and get to know them, they're no longer the number on the DNA vial. These are people you see every day… You can't walk away. This is not just science. This is somebody's life."

Read more at Science Daily

T. Rex not as smart as previously claimed

Dinosaurs were about as smart as modern reptiles, not as intelligent as monkeys, as previous research had claimed.

An international team of palaeontologists, behavioural scientists and neurologists have re-examined brain size and structure in dinosaurs and concluded they behaved more like crocodiles and lizards.

In a study published last year, it was claimed that dinosaurs like T. rex had an exceptionally high number of neurons and were substantially more intelligent than assumed. It was claimed that these high neuron counts could directly inform on intelligence, metabolism and life history, and that T. rex was rather monkey-like in some of its habits. Cultural transmission of knowledge as well as tool use were cited as examples of cognitive traits that it might have possessed.

However, the new study, published today in The Anatomical Record, involving the University of Bristol's Hady George, Dr Darren Naish (University of Southampton) and led by Dr Kai Caspar (Heinrich Heine University) with Dr Cristian Gutierrez-Ibanez (University of Alberta) and Dr Grant Hurlburt (Royal Ontario Museum), takes a closer look at techniques used to predict both brain size and neuron numbers in dinosaur brains. The team found that previous assumptions about brain size in dinosaurs, and the number of neurons their brains contained, were unreliable.

The research follows decades of analysis in which palaeontologists and biologists have examined dinosaur brain size and anatomy, and used these data to infer behaviour and lifestyle. Information on dinosaur brains comes from mineral infillings of the brain cavity, termed endocasts, as well as the shapes of the cavities themselves.

The team found that dinosaur brain sizes had been overestimated -- especially that of the forebrain -- and thus neuron counts as well. In addition, they show that neuron count estimates are not a reliable guide to intelligence.

To reliably reconstruct the biology of long-extinct species, the team argues, researchers should look at multiple lines of evidence, including skeletal anatomy, bone histology, the behaviour of living relatives, and trace fossils. "Determining the intelligence of dinosaurs and other extinct animals is best done using many lines of evidence ranging from gross anatomy to fossil footprints instead of relying on neuron number estimates alone," explained Hady from Bristol's School of Earth Sciences.

Dr Kai Caspar explained: "We argue that it's not good practice to predict intelligence in extinct species when neuron counts reconstructed from endocasts are all we have to go on."

"Neuron counts are not good predictors of cognitive performance, and using them to predict intelligence in long-extinct species can lead to highly misleading interpretations," added Dr Ornella Bertrand (Institut Català de Paleontologia Miquel Crusafont).

Read more at Science Daily