Nov 27, 2021

How people understand other people

To successfully cooperate or compete with other people in everyday life, it is important to know what the other person thinks, feels, or wants. Dr. Julia Wolf, Dr. Sabrina Coninx and Professor Albert Newen from the Institute of Philosophy II at Ruhr-Universität Bochum have explored which strategies people use to understand other people. For a long time, it was assumed that people relied exclusively on a single strategy: mindreading. This means that people infer the mental states of others solely based on their behaviour. In more recent accounts, however, this ability has been relegated to the background. The Bochum-based team now argues that although people use a number of different strategies, mindreading also plays an important role. They present their findings in the journal Erkenntnis, published online on 10 November 2021.

Strategies for understanding others

In recent years, researchers have criticised mindreading for being too complicated and demanding to be a common strategy for understanding other people. Julia Wolf provides an example of mindreading: "When I see someone running towards a bus, I infer that this person has the desire to catch the bus," she says. "In doing so, I can either picture myself in their situation, or draw on my knowledge of general principles regarding the behaviour of others."

However, in order to recognise the feelings, desires, and needs of others, people may take a different approach. They can directly perceive that a person is stressed based on physical features and other contextual cues. But they can also predict what a person will do next based on learned behavioural rules, without having to infer and attribute a mental state. "When someone gets on the bus, I can predict that they will show their ticket at the entrance without thinking about what is making them do so," says Sabrina Coninx.

People combine different strategies

Today, researchers assume that people combine several strategies to understand others. "We argue that mindreading is more than an unreliable and rarely used backup strategy in this context -- it plays a major role in social cognition," Albert Newen sums up the findings. The authors identify three criteria that could be used to test the importance of mindreading: how frequently it is used, how central it is, and how reliable it is.

While more empirical research is needed to answer the frequency question, the Bochum-based team thinks that there are good reasons to think that mindreading is central to social understanding. "It enables us to develop an individual understanding of others that goes beyond the here and now," explains Julia Wolf. "This plays a crucial role in building and maintaining long-term relationships."

Read more at Science Daily

New ultrahard diamond glass synthesized

Carnegie's Yingwei Fei and Lin Wang were part of an international research team that synthesized a new ultrahard form of carbon glass with a wealth of potential practical applications for devices and electronics. It is the hardest known glass with the highest thermal conductivity among all glass materials. Their findings are published in Nature.

Function follows form when it comes to understanding the properties of a material. How its atoms are chemically bonded to each other, and their resulting structural arrangement, determines a material's physical qualities -- both those that are observable by the naked eye and those that are only revealed by scientific probing.

Carbon is unrivaled in its ability to form stable structures -- alone and in combination with other elements. Some forms of carbon are highly organized, with repeating crystalline lattices. Others are more disordered, a quality termed amorphous.

The type of bond holding a carbon-based material together determines its hardness. For example, soft graphite has two-dimensional bonds and hard diamond has three-dimensional bonds.

"The synthesis of an amorphous carbon material with three-dimensional bonds has been a long-standing goal," explained Fei. "The trick is to find the right starting material to transform with the application of pressure."

"For decades Carnegie researchers have been at the forefront of the field, using laboratory techniques to generate extreme pressures to produce novel materials or mimic the conditions found deep inside planets," added Carnegie Earth and Planets Laboratory Director Richard Carlson.

Because of its extremely high melting point, it's impossible to use diamond as the starting point to synthesize diamond-like glass. However, the research team, led by Jilin University's Bingbing Liu and Mingguang Yao -- a former Carnegie visiting scholar -- made their breakthrough by using a form of carbon composed of 60 atoms arranged to form a hollow ball. Informally called a buckyball, this Nobel Prize-winning material was heated just enough to collapse its soccer-ball-like structure and induce disorder before the carbon could crystallize into diamond under pressure.

The team used a large-volume multi-anvil press to synthesize the diamond-like glass. The glass is sufficiently large for characterization. Its properties were confirmed using a variety of advanced, high-resolution techniques for probing atomic structure.

Read more at Science Daily

Nov 26, 2021

Analysis of Mars’s wind-induced vibrations sheds light on the planet’s subsurface properties

Seismic data collected in Elysium Planitia, the second largest volcanic region on Mars, suggest the presence of a shallow sedimentary layer sandwiched between lava flows beneath the planet's surface. These findings were gained in the framework of NASA's InSight mission (Interior Exploration using Seismic Investigations, Geodesy and Heat Transport), in which several international research partners, including the University of Cologne, collaborate. The paper 'The shallow structure of Mars at the InSight landing site from inversion of ambient vibrations' was published in Nature Communications on 23 November.

Geophysicist Dr Cédric Schmelzbach from ETH Zurich and colleagues, including the earthquake specialists Dr Brigitte Knapmeyer-Endrun and doctoral researcher Sebastian Carrasco (MSc) from the University of Cologne's Seismic Observatory in Bensberg, used seismic data to analyse the composition of the Elysium Planitia region. The authors examined the shallow subsurface to around 200 metres in depth. Right beneath the surface, they discovered an approximately three-metre-thick regolith layer of dominantly sandy material above a 15-metre layer of coarse, blocky ejecta -- rocky blocks that were ejected after a meteorite impact and fell back to the surface.

Below these top layers, they identified around 150 metres of basaltic rocks, i.e., cooled and solidified lava flows, which is largely consistent with the expected subsurface structure. However, between these lava flows, starting at a depth of about 30 metres, the authors identified an additional layer 30 to 40 metres thick with low seismic velocity, suggesting it contains weak sedimentary materials relative to the stronger basalt layers.

To date the shallower lava flows, the authors used crater counts from existing literature. Established knowledge about the impact rate of meteorites allows geologists to date rocks: surfaces with many impact craters from meteorites are older than ones with fewer craters. Also, craters with larger diameters extend into the lower layer, allowing the scientists to date the deep rock, while smaller ones allow them to date the shallower rock layers.

They found that the shallower lava flows are approximately 1.7 billion years old, forming during the Amazonian period -- a geological era on Mars characterized by low rates of meteorite and asteroid impacts and by cold, hyper-arid conditions, which began approximately 3 billion years ago. In contrast, the deeper basalt layer below the sediments formed much earlier, approximately 3.6 billion years ago during the Hesperian period, which was characterized by widespread volcanic activity.

The authors propose that the intermediate layer with low seismic velocities could be composed of sedimentary deposits sandwiched between the Hesperian and Amazonian basalts, or within the Amazonian basalts themselves. These results provide the first opportunity to compare seismic ground-truth measurements of the shallow subsurface to prior predictions based on orbital geological mapping. Prior to the landing, Dr Knapmeyer-Endrun had already developed models of the velocity structure of the shallow subsurface at the InSight landing site based on terrestrial analogues. The actual measurements now indicate additional layering as well as more porous rocks in general.

'While the results help to better understand the geological processes in Elysium Planitia, comparison with pre-landing models is also valuable for future landed missions, since it can help to refine predictions,' Knapmeyer-Endrun remarked. Knowledge of the properties of the shallow subsurface is required to assess, for example, its load-bearing capacity and trafficability for rovers. Besides, details on the layering in the shallow subsurface help to understand where it might still contain ground water or ice. Within the framework of his doctoral research at the University of Cologne, Sebastian Carrasco will continue to analyse the effect of the shallow structure of Elysium Planitia on marsquake recordings.

Read more at Science Daily

Researchers reveal how to turn a global warming liability into a profitable food security solution

Like a mirage on the horizon, an innovative process for converting a potent greenhouse gas into a food security solution has been stalled by economic uncertainty. Now, a first-of-its-kind Stanford University analysis evaluates the market potential of the approach, in which bacteria fed captured methane grow into protein-rich fishmeal. The study, published Nov. 22 in Nature Sustainability, finds production costs involving methane captured from certain sources in the U.S. are lower than the market price for conventional fishmeal. It also highlights feasible cost reductions that could make the approach profitable using other methane sources and capable of meeting all global fishmeal demand.

"Industrial sources in the U.S. are emitting a truly staggering amount of methane, which is uneconomical to capture and use with current applications," said study lead author Sahar El Abbadi, who conducted the research as a graduate student in civil and environmental engineering.

"Our goal is to flip that paradigm, using biotechnology to create a high-value product," added El Abbadi, who is now a lecturer in the Civic, Liberal and Global Education program at Stanford.

Two problems, one solution

Although carbon dioxide is more abundant in the atmosphere, methane's global warming potential is about 85 times as great over a 20-year period and at least 25 times as great a century after its release. Methane also threatens air quality by increasing the concentration of tropospheric ozone, exposure to which causes an estimated 1 million premature deaths annually worldwide due to respiratory illnesses. Methane's relative concentration has grown more than twice as fast as that of carbon dioxide since the beginning of the Industrial Revolution due in great part to human-driven emissions.
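The warming figures above can be made concrete with the standard CO2-equivalence calculation, in which a methane release is multiplied by its global warming potential (GWP) for a chosen time horizon. This is a minimal sketch using the factors cited in the article (85 over 20 years, 25 over 100 years); the function name is illustrative, not from the study:

```python
# Global warming potential (GWP) factors for methane, as cited in the article
GWP_CH4 = {"20yr": 85, "100yr": 25}

def co2_equivalent(tonnes_ch4: float, horizon: str = "100yr") -> float:
    """Convert a methane release to tonnes of CO2 equivalent."""
    return tonnes_ch4 * GWP_CH4[horizon]

# One tonne of methane counts as 85 t CO2e over 20 years, 25 t CO2e over 100
print(co2_equivalent(1.0, "20yr"), co2_equivalent(1.0, "100yr"))
```

The same emission thus looks more than three times as damaging when judged on the shorter horizon, which is why methane capture is attractive for near-term climate goals.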

A potential solution lies in methane-consuming bacteria called methanotrophs. These bacteria can be grown in a chilled, water-filled bioreactor fed pressurized methane, oxygen and nutrients such as nitrogen, phosphorus and trace metals. The protein-rich biomass that results can be used as fishmeal in aquaculture feed, offsetting demand for fishmeal made from small fish or plant-based feeds that require land, water and fertilizer.

"While some companies are doing this already with pipeline natural gas as feedstock, a preferable feedstock would be methane emitted at large landfills, wastewater treatment plants and oil and gas facilities," said study co-author Craig Criddle, a professor of civil and environmental engineering in Stanford's School of Engineering. "This would result in multiple benefits, including lower levels of a potent greenhouse gas in the atmosphere, more stable ecosystems and positive financial outcomes."

Consumption of seafood, an important global source of protein and micronutrients, has increased more than fourfold since 1960. As a result, wild fish stocks are badly depleted, and fish farms now provide about half of all the animal-sourced seafood we eat. The challenge will only grow as global demand for aquatic animals, plants and algae will likely double by 2050, according to a comprehensive review of the sector led by researchers at Stanford and other institutions.

While methane-fed methanotrophs can provide feed for farmed fish, the economics of the approach have been unclear, even as prices of conventional fishmeal have nearly tripled in real terms since 2000. To clarify the approach's potential to meet demand profitably, the Stanford researchers modeled scenarios in which methane is sourced from relatively large wastewater treatment plants, landfills, and oil and gas facilities, as well as natural gas purchased from the commercial natural gas grid. Their analysis looked at a range of variables, including the cost of electricity and labor availability.

Toward turning a profit

In the scenarios involving methane captured from landfills and oil and gas facilities, the analysis found methanotrophic fishmeal production costs -- $1,546 and $1,531 per ton, respectively -- were lower than the 10-year average market price of $1,600. For the scenario in which methane was captured from wastewater treatment plants, production costs were slightly higher -- $1,645 per ton -- than the average market price of fishmeal. The scenario in which methane was purchased from the commercial grid led to the most expensive fishmeal production costs -- $1,783 per ton -- due to the cost of purchasing natural gas.

For every scenario, electricity was the largest expense, accounting for over 45 percent of total cost on average. In states such as Mississippi and Texas with low electricity prices, production costs came down over 20 percent, making it possible to produce fishmeal from methane for $1,214 per ton, or $386 less per ton than conventional fishmeal production. Electricity costs could be reduced further, the researchers say, by designing reactors that better transfer heat to require less cooling, and switching electric-powered applications to those powered by so-called stranded gas that would otherwise be wasted or unused, which can also reduce reliance on grid electricity for remote locations. In scenarios involving methane from wastewater treatment plants, the wastewater itself could be used to provide nitrogen and phosphorus, as well as cooling.
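The cost comparison above reduces to simple per-ton arithmetic. This sketch computes each scenario's margin against the 10-year average market price, using the figures quoted in the article (the variable names are illustrative):

```python
MARKET_PRICE = 1600  # 10-year average fishmeal market price, $/ton

# Modeled methanotroph fishmeal production costs, $/ton, by methane source
PRODUCTION_COST = {
    "landfill": 1546,
    "oil_and_gas": 1531,
    "wastewater": 1645,
    "natural_gas_grid": 1783,
}

# Positive margin = cheaper than conventional fishmeal
margins = {src: MARKET_PRICE - cost for src, cost in PRODUCTION_COST.items()}
print(margins)

# With cheap electricity (e.g. Mississippi or Texas), the study's cost
# falls to $1,214/ton, a $386/ton saving over the market price
low_power_margin = MARKET_PRICE - 1214
```

Only the landfill and oil-and-gas scenarios clear the market price at baseline, which is why the study emphasizes electricity-cost reductions as the lever for the remaining scenarios.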

If efficiencies like these could bring down the production cost for a methanotroph-based fishmeal by 20 percent, the process could profitably supply total global demand for fishmeal with methane captured in the U.S. alone, according to the study. Similarly, the process could replace soybean and animal feeds if further cost reductions were achieved.

"Despite decades of trying, the energy industry has had trouble finding a good use for stranded natural gas," said study co-author Evan David Sherwin, a postdoctoral researcher in energy resources engineering at Stanford. "Once we started looking at the energy and food systems together, it became clear that we could solve at least two longstanding problems at once."

Read more at Science Daily

Prehistoric mums may have cared for kids better than we thought

A new study from The Australian National University (ANU) has revealed that the death rate of babies in ancient societies is not a reflection of poor healthcare, disease and other factors, but is instead an indication of the number of babies born in that era.

The findings shed new light on the history of our ancestors and debunk old assumptions that infant mortality rates were consistently high in ancient populations.

The study also opens up the possibility that mothers from early human societies may have been much more capable of caring for their children than previously thought.

"It has long been assumed that if there are a lot of deceased babies in a burial sample, then infant mortality must have been high," lead author Dr Clare McFadden, from the ANU School of Archaeology and Anthropology, said.

"Many have assumed that infant mortality was very high in the past in the absence of modern healthcare.

"When we look at these burial samples, it actually tells us more about the number of babies that were born and tells us very little about the number of babies that were dying, which is counterintuitive to past perceptions."

The researchers examined United Nations (UN) data from the past decade for 97 countries that looked at infant mortality, fertility and the number of deaths that occurred during infancy. The analysis revealed that fertility had a much greater influence on the proportion of deceased infants than the infant mortality rate.
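The fertility effect described above can be illustrated with a toy accounting model (the numbers here are hypothetical, not from the study or the UN data): holding the infant mortality rate fixed, a higher birth rate alone raises the share of infants among all deaths, and that share is what a burial sample records.

```python
def infant_share_of_deaths(birth_rate, infant_mortality, other_deaths):
    """Fraction of all deaths in a year that are infant deaths.

    birth_rate       -- births per 1,000 people per year
    infant_mortality -- fraction of newborns dying in their first year
    other_deaths     -- non-infant deaths per 1,000 people per year
    """
    infant_deaths = birth_rate * infant_mortality
    return infant_deaths / (infant_deaths + other_deaths)

# Same infant mortality rate (15%), different fertility:
low_fertility = infant_share_of_deaths(30, 0.15, other_deaths=30)
high_fertility = infant_share_of_deaths(60, 0.15, other_deaths=30)
```

Doubling the birth rate here lifts the infant share of burials from roughly 13 per cent to roughly 23 per cent with no change in infant mortality at all, which is the counterintuitive point the researchers make.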

Because there is very little known about early human societies, the UN data helped the researchers make interpretations about humans from the past 10,000 years.

"Archaeology has often looked at the proportion of deceased infants to learn something about infant mortality. There was an assumption that as many as 40 per cent of all babies born in prehistoric populations died within the first year of their lives," Dr McFadden said.

After analysing the UN data, Dr McFadden found no evidence to support this assumption.

"Burial samples show no proof that a lot of babies were dying, but they do tell us a lot of babies were being born," she said.

"If mothers during that time were having a lot of babies, then it seems reasonable to suggest they were capable of caring for their young children."

The ANU findings could help researchers understand more about humans that inhabited the Earth tens of thousands of years ago and in particular, how mothers in ancient societies cared for and interacted with their children.

Dr McFadden said as we piece together more clues about the history of humans, it's important we "bring some humanity" back to our ancestors.

"Artistic representations and popular culture tend to view our ancestors as these archaic and incapable people, and we forget their emotional experience and responses such as the desire to provide care and feelings of grief date back tens of thousands of years, so adding this emotional and empathetic aspect to the human narrative is really important," she said.

The researchers would also like to see greater emphasis placed on the stories of women in past populations, which they say have long been neglected in favour of male stories.

"We hear a lot of stories about conflict involving males and even narratives around colonisation and expansion of populations tend to have a focus on men and I think it's really important to be telling these stories of women in the past and what the female experience was like, including the roles they played in the community and as a mother," Dr McFadden said.

Read more at Science Daily

Digital teaching: Opportunity or challenge?

In the past two years, due to the COVID-19 pandemic and the associated lockdowns, not only the importance of interpersonal contact but also the indispensability of face-to-face teaching has been repeatedly addressed and discussed. Never before have lecturers had to change and redesign their teaching methods in such a short period of time. Poor Internet connections, difficulties in the technical implementation and the lack of personal exchange made the transfer of knowledge and teaching content more difficult.

For a high-quality university education, face-to-face teaching is considered an extremely important core component. However, as the Study and Teaching Commission of the Deutsche Gesellschaft für Psychologie (German Psychological Society, DGPs) demonstrates in a report for Psychologische Rundschau, digital teaching also brings unexpected advantages and opportunities. Therein, Dr. Anne Gärtner from Technische Universität Dresden explains that digital teaching offers students as well as lecturers new, unprecedented opportunities and brings a completely renewed form of teaching and learning to life: "On the one hand, the flexibility in terms of time and space in work organization is one of the greatest advantages of digital teaching, as not only time but also costs can be saved, for example, by eliminating travel. Lecturers have greater autonomy and can decide for themselves how to manage their time and organize their seminars and lectures. In addition, recorded teaching material can be reused."

Students feel similarly: digital teaching allows them to learn at their own personal pace and repeat recorded lectures as often as necessary. Nevertheless, "face-to-face teaching and digital formats should not be played off against each other," says Dr. Gärtner. "Digital teaching should be seen as a complementary means to further improve the quality of teaching, and the importance of face-to-face teaching should not be forgotten." After all, even if online learning brings more advantages than initially expected, the lack of contact between lecturers and students leaves many gaps that cannot be "filled" online. One obvious disadvantage, for example, is the requirement of a stable Internet connection and the necessary technical equipment. Since digital teaching and its technical implementation were still uncharted territory for many, there was accordingly an increased workload, especially in the early days. In addition, one of the main disadvantages is undoubtedly the difficulty of remaining disciplined, focused and motivated in front of one's computer all by oneself over a long period. For students in particular, this requires significantly more self-discipline and organization than in face-to-face courses.

For Dr. Gärtner personally, the biggest disadvantage was not knowing whether she could actually reach her students in her online courses: "However, it turned out that my online seminars and lectures have been very well attended so far, and interaction and exchange have been possible, albeit in a somewhat different form. As well as that has worked out -- my digital seminar was even awarded a teaching prize, which I was particularly pleased about -- I still hope that I will soon be able to discuss things with students together again in the seminar room and conduct exciting experiments in the lab," explains the psychologist.

Read more at Science Daily

Nov 25, 2021

Hubble witnesses shock wave of colliding gases in Running Man Nebula

A jet from a newly formed star flares into the shining depths of reflection nebula NGC 1977 in this Hubble image. The jet (the orange object at the bottom center of the image) is being emitted by the young star Parengo 2042, which is embedded in a disk of debris that could give rise to planets. The star powers a pulsing jet of plasma that stretches over two light-years through space, bending to the north in this image. The jet's gas is ionized by the radiation of a nearby star, 42 Orionis, causing it to glow. This makes it particularly useful to researchers because its outflow remains visible under the ionizing radiation of nearby stars. Typically the outflow of jets like this would only be visible as it collided with surrounding material, creating bright shock waves that vanish as they cool.

In this image, red and orange colors indicate the jet and glowing gas of related shocks. The glowing blue ripples that seem to be flowing away from the jet to the right of the image are bow shocks facing the star 42 Orionis (not shown). Bow shocks happen in space when streams of gas collide, and are named after the crescent-shaped waves made by a ship as it moves through water.

The bright western lobe of the jet is cocooned in a series of orange arcs that diminish in size with increasing distance from the star, forming a cone or spindle shape. These arcs may trace the ionized outer rim of a disk of debris around the star with a radius of 500 times the distance between the Sun and Earth and a sizable (170 astronomical units) hole in the center of the disk. The spindle-like shape may trace the surface of an outflow of material away from the disk, which is estimated to be losing approximately a hundred-millionth of the Sun's mass every year.

Read more at Science Daily

One year on this giant, blistering hot planet is just 16 hours long

The hunt for planets beyond our solar system has turned up more than 4,000 far-flung worlds, orbiting stars thousands of light years from Earth. These extrasolar planets are a veritable menagerie, from rocky super-Earths and miniature Neptunes to colossal gas giants.

Among the more confounding planets discovered to date are "hot Jupiters" -- massive balls of gas that are about the size of our own Jovian planet but that zing around their stars in less than 10 days, in contrast to Jupiter's plodding, 12-year orbit. Scientists have discovered about 400 hot Jupiters to date. But exactly how these weighty whirlers came to be remains one of the biggest unsolved mysteries in planetary science.

Now, astronomers have discovered one of the most extreme ultrahot Jupiters -- a gas giant that is about five times Jupiter's mass and blitzes around its star in just 16 hours. The planet's orbit is the shortest of any known gas giant to date.

Due to its extremely tight orbit and proximity to its star, the planet's day side is estimated to be at around 3,500 Kelvin, or close to 6,000 degrees Fahrenheit -- about as hot as a small star. This makes the planet, designated TOI-2109b, the second hottest detected so far.

Judging from its properties, astronomers believe that TOI-2109b is in the process of "orbital decay," or spiraling into its star, like bathwater circling the drain. Its extremely short orbit is predicted to cause the planet to spiral toward its star faster than other hot Jupiters.

The discovery, which was made initially by NASA's Transiting Exoplanet Survey Satellite (TESS), an MIT-led mission, presents a unique opportunity for astronomers to study how planets behave as they are drawn in and swallowed by their star.

"In one or two years, if we are lucky, we may be able to detect how the planet moves closer to its star," says Ian Wong, lead author of the discovery, who was a postdoc at MIT during the study and has since moved to NASA Goddard Space Flight Center. "In our lifetime we will not see the planet fall into its star. But give it another 10 million years, and this planet might not be there."

The discovery is reported today in the Astronomical Journal and is the result of the work of a large collaboration that included members of MIT's TESS science team and researchers from around the world.

Transit track

On May 13, 2020, NASA's TESS satellite began observing TOI-2109, a star located in the southern portion of the Hercules constellation, about 855 light years from Earth. The star was identified by the mission as the 2,109th "TESS Object of Interest," for the possibility that it might host an orbiting planet.

Over nearly a month, the spacecraft collected measurements of the star's light, which the TESS science team then analyzed for transits -- periodic dips in starlight that might indicate a planet passing in front of and briefly blocking a small fraction of the star's light. The data from TESS confirmed that the star indeed hosts an object that transits about every 16 hours.

The team notified the wider astronomy community, and shortly after, multiple ground-based telescopes followed up over the next year to observe the star more closely over a range of frequency bands. These observations, combined with TESS' initial detection, confirmed the transiting object as an orbiting planet, which was designated TOI-2109b.

"Everything was consistent with it being a planet, and we realized we had something very interesting and relatively rare," says study co-author Avi Shporer, a research scientist at MIT's Kavli Institute for Astrophysics and Space Research.

Day and night

By analyzing measurements over various optical and infrared wavelengths, the team determined that TOI-2109b is about five times as massive as Jupiter, about 35 percent larger, and extremely close to its star, at a distance of about 1.5 million miles. Mercury, by comparison, is around 36 million miles from the Sun.

The planet's star is roughly 50 percent larger in size and mass compared to our Sun. From the observed properties of the system, the researchers estimated that TOI-2109b is spiraling into its star at a rate of 10 to 750 milliseconds per year -- faster than any hot Jupiter yet observed.

Given the planet's dimensions and proximity to its star, the researchers determined TOI-2109b to be an ultrahot Jupiter, with the shortest orbit of any known gas giant. Like most hot Jupiters, the planet appears to be tidally locked, with a perpetual day and night side, similar to the Moon with respect to the Earth. From the month-long TESS observations, the team was able to witness the planet's varying brightness as it orbits its star. By observing the planet pass behind its star (known as a secondary eclipse) at both optical and infrared wavelengths, the researchers estimated that the day side reaches temperatures of more than 3,500 Kelvin.

"Meanwhile, the planet's night side brightness is below the sensitivity of the TESS data, which raises questions about what is really happening there," Shporer says. "Is the temperature there very cold, or does the planet somehow take heat on the day side and transfer it to the night side? We're at the beginning of trying to answer this question for these ultrahot Jupiters."

The researchers hope to observe TOI-2109b with more powerful tools in the near future, including the Hubble Space Telescope and the soon-to-launch James Webb Space Telescope. More detailed observations could illuminate the conditions hot Jupiters undergo as they fall into their star.

"Ultrahot Jupiters such as TOI-2109b constitute the most extreme subclass of exoplanet," Wong says. "We have only just started to understand some of the unique physical and chemical processes that occur in their atmospheres -- processes that have no analogs in our own solar system."

Read more at Science Daily

Collapse of ancient Liangzhu culture caused by climate change

Referred to as "China's Venice of the Stone Age," the Liangzhu excavation site in eastern China is considered one of the most significant testimonies of early Chinese advanced civilisation. More than 5000 years ago, the city already had an elaborate water management system. What led to its sudden collapse has long been controversial. Massive flooding triggered by anomalously intense monsoon rains caused the collapse, as an international team with Innsbruck geologist and climate researcher Christoph Spötl has now shown in the journal Science Advances.

The archaeological ruins of Liangzhu City lie in the Yangtze Delta, about 160 kilometres southwest of Shanghai. There, a highly advanced culture blossomed about 5300 years ago, which is considered to be one of the earliest examples of monumental water culture. The oldest evidence of large hydraulic engineering structures in China originates from this late Neolithic cultural site. The walled city had a complex system of navigable canals, dams and water reservoirs. This system made it possible to cultivate very large agricultural areas throughout the year.

In the history of human civilisation, this is one of the first examples of highly developed communities based on a water infrastructure. Metals, however, were still unknown in this culture. Thousands of elaborately crafted jade burial objects were found during excavations. Long undiscovered and underestimated in its historical significance, the archaeological site is now considered a well-preserved record of Chinese civilisation dating back more than 5000 years. Liangzhu was declared a UNESCO World Heritage Site in 2019.

However, the advanced civilisation of this city, which was inhabited for almost 1000 years, came to an abrupt end, and what caused it has remained controversial. "A thin layer of clay was found on the preserved ruins, which points to a possible connection between the demise of the advanced civilisation and floods of the Yangtze River or floods from the East China Sea. No evidence could be found for human causes such as warlike conflicts," explains Christoph Spötl, head of the Quaternary Research Group at the Department of Geology. "However, no clear conclusions on the cause were possible from the mud layer itself."

Dripstones store the answer

Caves and their deposits, such as dripstones, are among the most important climate archives in existence. They allow the reconstruction of climatic conditions above the caves up to several hundred thousand years into the past. Since it was still unclear what caused the sudden collapse of the Liangzhu culture, the research team searched for suitable archives in order to investigate a possible climatic cause. Geologist Haiwei Zhang from Xi'an Jiaotong University, who spent a year at the University of Innsbruck as a visiting researcher in 2017, took samples of stalagmites from two caves, Shennong and Jiulong, located southwest of the excavation site. "These caves have been well explored for years. They lie in the same area affected by the Southeast Asian monsoon as the Yangtze delta, and their stalagmites provide a precise insight into the time of the collapse of the Liangzhu culture, which, according to archaeological findings, happened about 4300 years ago," Spötl explains.

Data from the stalagmites show that between 4345 and 4324 years ago there was a period of extremely high precipitation. Evidence for this was provided by the carbon isotope records measured at the University of Innsbruck; the precise dating was done by uranium-thorium analyses at Xi'an Jiaotong University, with a measurement accuracy of ± 30 years. "This is amazingly precise in light of the temporal dimension," says the geologist. "The massive monsoon rains probably led to such severe flooding of the Yangtze and its branches that even the sophisticated dams and canals could no longer withstand these masses of water, destroying Liangzhu City and forcing people to flee." The very humid climatic conditions continued intermittently for another 300 years, as the geologists show from the cave data.

From Science Daily

Morning exposure to deep red light improves declining eyesight

Just three minutes of exposure to deep red light once a week, when delivered in the morning, can significantly improve declining eyesight, finds a pioneering new study by UCL researchers.

Published in Scientific Reports, the study builds on the team's previous work*, which showed that daily three-minute exposure to longwave deep red light 'switched on' the energy-producing mitochondria in cells of the human retina, helping boost naturally declining vision.

For this latest study, scientists wanted to establish what effect a single three-minute exposure would have, while also using much lower energy levels than their previous studies. Furthermore, building on separate UCL research in flies** that found mitochondria display 'shifting workloads' depending on the time of day, the team compared morning exposure to afternoon exposure.

In summary, researchers found there was, on average, a 17% improvement in participants' colour contrast vision when exposed to three minutes of 670 nanometre (long wavelength) deep red light in the morning and the effects of this single exposure lasted for at least a week. However, when the same test was conducted in the afternoon, no improvement was seen.

Scientists say the benefits of deep red light, highlighted by the findings, mark a breakthrough for eye health and should lead to affordable home-based eye therapies, helping the millions of people globally with naturally declining vision.

Lead author, Professor Glen Jeffery (UCL Institute of Ophthalmology), said: "We demonstrate that one single exposure to long wave deep red light in the morning can significantly improve declining vision, which is a major health and wellbeing issue, affecting millions of people globally.

"This simple intervention applied at the population level would significantly impact on quality of life as people age and would likely result in reduced social costs that arise from problems associated with reduced vision."

Naturally declining vision and mitochondria

In humans around 40 years old, cells in the eye's retina begin to age, and this ageing is driven, in part, by the decline of the cells' mitochondria, whose role is to produce energy (in the form of ATP) and support cell function.

Mitochondrial density is greatest in the retina's photoreceptor cells, which have high energy demands. As a result, the retina ages faster than other organs, with a 70% ATP reduction over life, causing a significant decline in photoreceptor function as they lack the energy to perform their normal role.

In studying the effects of deep red light in humans, researchers built on their previous findings in mice, bumblebees and fruit flies, which all found significant improvements in the function of the retina's photoreceptors when their eyes were exposed to 670 nanometre (long wavelength) deep red light.

"Mitochondria have specific sensitivities to long wavelength light influencing their performance: longer wavelengths spanning 650 to 900nm improve mitochondrial performance to increase energy production," said Professor Jeffery.

Morning and afternoon studies

The retina's photoreceptor population is formed of cones, which mediate colour vision, and rods, which adapt vision in low/dim light. This study focused on cones*** and observed colour contrast sensitivity, along the protan axis (measuring red-green contrast) and the tritan axis (blue-yellow).

All the participants were aged between 34 and 70, had no ocular disease, completed a questionnaire regarding eye health prior to testing, and had normal colour vision (cone function). This was assessed using a 'Chroma Test': identifying coloured letters that had very low contrast and appeared increasingly blurred, a measure known as colour contrast sensitivity.

Using a provided LED device, all 20 participants (13 female and 7 male) were exposed to three minutes of 670nm deep red light in the morning between 8am and 9am. Their colour vision was then tested again three hours post exposure, and 10 of the participants were also tested one week post exposure.

On average there was a 'significant' 17% improvement in colour vision, which lasted a week in tested participants; in some older participants there was a 20% improvement, also lasting a week.

A few months on from the first test (ensuring any positive effects of the deep red light had been 'washed out'), six of the 20 participants (three female, three male) carried out the same test in the afternoon, between 12pm and 1pm. When their colour vision was then tested again, it showed zero improvement.

Professor Jeffery said: "Using a simple LED device once a week recharges the energy system that has declined in the retina cells, rather like re-charging a battery.

"And morning exposure is absolutely key to achieving improvements in declining vision: as we have previously seen in flies, mitochondria have shifting work patterns and do not respond in the same way to light in the afternoon -- this study confirms this."

For this study, the light energy emitted by the LED torch was just 8mW/cm2, rather than the 40mW/cm2 used in the team's previous studies. This dims the light but does not affect the wavelength. While both energy levels are perfectly safe for the human eye, reducing the energy further is an additional benefit.

Home-based affordable eye therapies

With a paucity of affordable deep red-light eye therapies available, Professor Jeffery has been working for no commercial gain with Planet Lighting UK, a small company in Wales, and others, with the aim of producing 670nm infra-red eyewear at an affordable cost, in contrast to some other LED devices designed to improve vision that are available in the US for over $20,000.

"The technology is simple and very safe; the energy delivered by 670nm long wave light is not that much greater than that found in natural environmental light," Professor Jeffery said.

"Given its simplicity, I am confident an easy-to-use device can be made available at an affordable cost to the general public.

"In the near future, a once a week three-minute exposure to deep red light could be done while making a coffee, or on the commute listening to a podcast, and such a simple addition could transform eye care and vision around the world."

Study limitations

Despite the clarity of the results, researchers say some of the data are "noisy." While positive effects are clear for individuals following 670nm exposure, the magnitude of improvement can vary markedly between people of similar ages. Some caution is therefore needed in interpreting the data: there may be other variables between individuals, not yet identified, that influence the degree of improvement, and pinning these down would require a larger sample size.

Read more at Science Daily

Only alcohol -- not caffeine, diet or lack of sleep -- might trigger heart rhythm condition

New research from UC San Francisco that tested possible triggers of a common heart condition, including caffeine, sleep deprivation and sleeping on the left side, found that only alcohol use was consistently associated with more episodes of the heart arrhythmia.

The authors conclude that people might be able to reduce their risk of atrial fibrillation (AF) by avoiding certain triggers.

The study is published in JAMA Cardiology and was presented November 14, 2021, at the annual Scientific Sessions of the American Heart Association.

Researchers were surprised to find that although most of the things that participants thought would be related to their AF were not, those in the intervention group still experienced less arrhythmia than the people in a comparison group that was not self-monitoring.

"This suggests that those personalized assessments revealed actionable results," said lead author Gregory Marcus, MD, professor of medicine in the Division of Cardiology at UCSF. "Although caffeine was the most commonly selected trigger for testing, we found no evidence of a near-term relationship between caffeine consumption and atrial fibrillation. In contrast, alcohol consumption most consistently exhibited heightened risks of atrial fibrillation."

Atrial fibrillation contributes to more than 150,000 deaths in the United States each year, reports the federal Centers for Disease Control and Prevention, with the death rate on the rise for more than 20 years.

To learn more about what patients felt was especially important to study about the disease, researchers held a brainstorming session in 2014. Patients said researching individual triggers for AF was their top priority, giving rise to the I-STOP-AFib study, which enabled individuals to test any presumed AF trigger. About 450 people participated, more than half of whom (58 percent) were men, and the overwhelming majority of whom were white (92 percent).

Participants in the randomized clinical trial utilized a mobile electrocardiogram recording device along with a phone app to log potential triggers like drinking alcohol and caffeine, sleeping on the left side or not getting enough sleep, eating a large meal, a cold drink, or sticking to a particular diet, engaging in exercise, or anything else they thought was relevant to their AF. Although participants were most likely to select caffeine as a trigger, there was no association with AF. Recent research from UCSF has similarly failed to demonstrate a relationship between caffeine and arrhythmias -- on the contrary, investigators found it may have a protective effect.

The new study demonstrated that consumption of alcohol was the only trigger that consistently resulted in significantly more self-reported AF episodes.

The individualized testing method, known as n-of-1, did not validate participant-selected triggers for AF. But trial participants did report fewer AF episodes than those in the control group, and the data suggest that behaviors like avoiding alcohol could lessen the chances of having an AF episode.

Read more at Science Daily

Nov 24, 2021

High-speed propeller star is fastest spinning white dwarf

A white dwarf star that completes a full rotation once every 25 seconds is the fastest spinning confirmed white dwarf, according to a team of astronomers led by the University of Warwick.

They have established the spin period of the star for the first time, confirming it as an extremely rare example of a magnetic propeller system: the white dwarf is pulling gaseous plasma from a nearby companion star and flinging it into space at around 3000 kilometres per second.

Reported today (22 November) in the journal Monthly Notices of the Royal Astronomical Society: Letters, it is only the second magnetic propeller white dwarf to have been identified in over seventy years thanks to a combination of powerful and sensitive instruments that allowed scientists to catch a glimpse of the speeding star.

The study was led by the University of Warwick with the University of Sheffield, and funded by the Science and Technology Facilities Council (STFC), part of UK Research and Innovation, and the Leverhulme Trust.

A white dwarf is a star that has burnt up all of its fuel and shed its outer layers, now undergoing a process of shrinking and cooling over millions of years. The star that the Warwick team observed, named LAMOST J024048.51+195226.9 -- or J0240+1952 for short, is the size of the Earth but is thought to be at least 200,000 times more massive. It is part of a binary star system and its immense gravity is pulling material from its larger companion star in the form of plasma.

In the past, this plasma was falling onto the white dwarf's equator at high speed, providing the energy that has given it this dizzyingly fast spin. Put into context, one rotation of the planet Earth takes 24 hours, while the equivalent on J0240+1952 is a mere 25 seconds. That's almost 20% faster than the confirmed white dwarf with the most comparable spin rate, which completes a rotation in just over 29 seconds.
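The "almost 20%" figure follows from comparing spin rates, which are the reciprocals of the periods. A minimal sketch of the arithmetic, assuming a period of 29.6 seconds for the runner-up (an assumed value; the article says only "just over 29 seconds"):

```python
# Spin rate is rotations per second, i.e. the reciprocal of the period.
period_j0240 = 25.0      # seconds per rotation (from the article)
period_runner_up = 29.6  # seconds per rotation (assumed value)

# The ratio of spin rates equals the inverse ratio of the periods.
speedup = (1 / period_j0240) / (1 / period_runner_up) - 1
print(f"J0240+1952 spins about {speedup:.0%} faster")  # roughly 18%, i.e. "almost 20%"
```

Any comparison period between about 29 and 30 seconds lands in the same "almost 20%" range, so the article's rounding is consistent.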

However, at some point in its evolutionary history J0240+1952 developed a strong magnetic field. The magnetic field acts as a protective barrier, causing most of the falling plasma to be propelled away from the white dwarf. The remainder flows towards the star's magnetic poles, gathering in bright spots on the surface of the star; as these rotate in and out of view, they cause pulsations in the light that the astronomers observe from Earth, which they then used to measure the rotation of the entire star.

Lead author Dr Ingrid Pelisoli of the University of Warwick Department of Physics said: "J0240+1952 will have completed several rotations in the short amount of time that people take to read about it; it is really incredible. The rotation is so fast that the white dwarf must have an above average mass just to stay together and not be torn apart.

"It is pulling material from its companion star due to its gravitational effect, but as that gets closer to the white dwarf the magnetic field starts to dominate. This type of gas is highly conducting and picks up a lot of speed from this process, which propels it away from the star and out into space."

J0240+1952 is one of only two stars with this magnetic propeller system discovered in over seventy years. Although material being flung out of the star was first observed in 2020, astronomers had not been able to confirm the rapid spin that is a main ingredient of a magnetic propeller, as the pulsations are too fast and dim for other telescopes to detect.

To visualise the star at that speed for the first time, the University of Warwick team used the highly sensitive HiPERCAM instrument, jointly operated by Warwick and the University of Sheffield with funding from the European Research Council. This was specially mounted on the largest functioning optical telescope in the world, the 10 metre diameter Gran Telescopio Canarias in La Palma, to capture as much light as possible.

"These kinds of studies are possible thanks to the unique combination of the fast imaging capability of HiPERCAM with the largest collecting area in the world provided by GTC," said Antonio Cabrera, Head of GTC Science Operations.

Co-author Professor Tom Marsh from the University of Warwick Department of Physics adds: "It's only the second time that we have found one of these magnetic propeller systems, so we now know it's not a unique occurrence. It establishes that the magnetic propeller mechanism is a generic property that operates in these binaries, if the circumstances are right."

Read more at Science Daily

Ultrashort-pulse lasers kill bacterial superbugs, spores

Life-threatening bacteria are becoming ever more resistant to antibiotics, making the search for alternatives to antibiotics an increasingly urgent challenge. For certain applications, one alternative may be a special type of laser.

Researchers at Washington University School of Medicine in St. Louis have found that lasers that emit ultrashort pulses of light can kill multidrug-resistant bacteria and hardy bacterial spores. The findings, available online in the Journal of Biophotonics, open up the possibility of using such lasers to destroy bacteria that are hard to kill by other means. The researchers previously have shown that such lasers don't damage human cells, making it possible to envision using the lasers to sterilize wounds or disinfect blood products.

"The ultrashort-pulse laser technology uniquely inactivates pathogens while preserving human proteins and cells," said first author Shaw-Wei (David) Tsen, MD, PhD, an instructor of radiology at Washington University's Mallinckrodt Institute of Radiology (MIR). "Imagine if, prior to closing a surgical wound, we could scan a laser beam across the site and further reduce the chances of infection. I can see this technology being used soon to disinfect biological products in vitro, and even to treat bloodstream infections in the future by putting patients on dialysis and passing the blood through a laser treatment device."

Tsen and senior author Samuel Achilefu, PhD, the Michel M. Ter-Pogossian Professor of Radiology and director of MIR's Biophotonics Research Center, have been exploring the germicidal properties of ultrashort-pulse lasers for years. They have shown that such lasers can inactivate viruses and ordinary bacteria without harming human cells. In the new study, conducted in collaboration with Shelley Haydel, PhD, a professor of microbiology at Arizona State University, they extended their exploration to antibiotic-resistant bacteria and bacterial spores.

The researchers trained their lasers on multidrug-resistant Staphylococcus aureus (MRSA), which causes infections of the skin, lungs and other organs, and extended-spectrum beta-lactamase-producing Escherichia coli (E. coli), which causes urinary tract infections, diarrhea and wound infections. Apart from their shared ability to make people miserable, MRSA and E. coli are very different types of bacteria, representing two distant branches of the bacterial kingdom. The researchers also looked at spores of the bacterium Bacillus cereus, which causes food poisoning and food spoilage. Bacillus spores can withstand boiling and cooking.

In all cases, the lasers killed more than 99.9% of the target organisms, reducing their numbers by more than 1,000 times.
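The two figures in that sentence are equivalent statements of the same result, linked through the surviving fraction. A quick sketch (the ">99.9%" kill rate is the only input taken from the article):

```python
import math

# A kill rate and a fold-reduction are two views of the surviving fraction.
kill_rate = 0.999                          # ">99.9% of the target organisms"
surviving_fraction = 1 - kill_rate         # 0.1% of organisms remain
reduction_factor = 1 / surviving_fraction  # ~1,000-fold fewer organisms

# Microbiologists often quote this as a "log reduction": ~3 logs here.
log_reduction = math.log10(reduction_factor)
```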

Viruses and bacteria contain densely packed protein structures that can be excited by an ultrashort-pulse laser. The laser kills by causing these protein structures to vibrate until some of their molecular bonds break. The broken ends quickly reattach to whatever they can find, which in many cases is not what they had been attached to before. The result is a mess of incorrect linkages inside and between proteins, and that mess causes normal protein function in microorganisms to grind to a halt.

"We previously published a paper in which we showed that the laser power matters," Tsen said. "At a certain laser power, we're inactivating viruses. As you increase the power, you start inactivating bacteria. But it takes even higher power than that, and we're talking orders of magnitude, to start killing human cells. So there is a therapeutic window where we can tune the laser parameters such that we can kill pathogens without affecting the human cells."

Heat, radiation and chemicals such as bleach are effective at sterilizing objects, but most are too damaging to be used on people or biological products. By inactivating all kinds of bacteria and viruses without damaging cells, ultrashort-pulse lasers could provide a new approach to making blood products and other biological products safer.

Read more at Science Daily

Microbes can provide sustainable hydrocarbons for the petrochemical industry

If the petrochemical industry is ever to wean itself off oil and gas, it has to find sustainably-sourced chemicals that slip effortlessly into existing processes for making products such as fuels, lubricants and plastics.

Making those chemicals biologically is the obvious option, but microbial products are different from fossil fuel hydrocarbons in two key ways: They contain too much oxygen, and they have too many other atoms hanging off the carbons. In order for microbial hydrocarbons to work in existing synthetic processes, they often have to be de-oxygenated -- in chemical parlance, reduced -- and stripped of extraneous chemical groups, all of which takes energy.

A team of chemists from the University of California, Berkeley, and the University of Minnesota has now engineered microbes to make hydrocarbon chains that can be deoxygenated more easily and using less energy -- basically just the sugar glucose that the bacteria eat, plus a little heat.

The process allows microbial production of a broad range of chemicals currently made from oil and gas -- in particular, products like lubricants made from medium-chain hydrocarbons, which contain between eight and 10 carbon atoms in the chain.

"Part of the issue with trying to move to something like glucose as a feedstock for making molecules or to drive the chemical industry is that the fossil fuel structures of petrochemicals are so different -- they're usually fully reduced, with no oxygen substitutions," said Michelle Chang, UC Berkeley professor of chemistry and of chemical and biomolecular engineering. "Bacteria know how to make all these complex molecules that have all these functional groups sticking out from them, like all natural products, but making petrochemicals that we're used to using as precursors for the chemical industry is a bit of a challenge for them."

"This process is one step towards deoxygenating these microbial products, and it allows us to start making things that can replace petrochemicals, using just glucose from plant biomass, which is more sustainable and renewable," she said. "That way we can get away from petrochemicals and other fossil fuels."

The bacteria were engineered to make hydrocarbon chains of medium length, which has not been achieved before, though others have developed microbial processes for making shorter and longer chains, up to about 20 carbons. But the process can be readily adapted to make chains of other lengths, Chang said, including short-chain hydrocarbons used as precursors to the most popular plastics, such as polyethylene.

She and her colleagues published their results this week in the journal Nature Chemistry.

A bioprocess to make olefins

Fossil hydrocarbons are simple linear chains of carbon atoms with a hydrogen atom attached to each carbon. But the chemical processes optimized for turning these into high-value products don't easily allow substitution by microbially produced precursors that are oxygenated and have carbon atoms decorated with lots of other atoms and small molecules.

To get bacteria to produce something that can replace these fossil fuel precursors, Chang and her team, including co-first authors Zhen Wang and Heng Song, former UC Berkeley postdoctoral fellows, searched databases for enzymes from other bacteria that can synthesize medium-chain hydrocarbons. They also sought an enzyme that could add a special chemical group, carboxylic acid, at one end of the hydrocarbon, turning it into what's called a fatty acid.

All told, the researchers inserted five separate genes into E. coli bacteria, forcing the bacteria to ferment glucose and produce the desired medium-chain fatty acid. The added enzymatic reactions were independent of, or orthogonal to, the bacteria's own enzyme pathways, which worked better than trying to tweak the bacteria's complex metabolic network.

"We identified new enzymes that could actually make these mid-size hydrocarbon chains and that were orthogonal, so separate from fatty acid biosynthesis by the bacteria. That allows us to run it separately, and it uses less energy than it would if you use the native synthase pathway," Chang said. "The cells consume enough glucose to survive, but then alongside that, you have your pathway chewing through all the sugar to get higher conversions and a high yield."

That final step to create a medium-chain fatty acid primed the product for easy conversion by catalytic reaction to olefins, which are precursors to polymers and lubricants.

The UC Berkeley group collaborated with the Minnesota group led by Paul Dauenhauer, which showed that a simple, acid-based catalytic reaction called a Lewis acid catalysis (after famed UC Berkeley chemist Gilbert Newton Lewis) easily removed the carboxylic acid from the final microbial products -- 3-hydroxyoctanoic and 3-hydroxydecanoic acids -- to produce the olefins heptene and nonene, respectively. Lewis acid catalysis uses much less energy than the redox reactions typically needed to remove oxygen from natural products to produce pure hydrocarbons.

"The biorenewable molecules that Professor Chang's group made were perfect raw materials for catalytic refining," said Dauenhauer, who refers to these precursor molecules as bio-petroleum. "These molecules contained just enough oxygen that we could readily convert them to larger, more useful molecules using metal nanoparticle catalysts. This allowed us to tune the distribution of molecular products as needed, just like conventional petroleum products, except this time we were using renewable resources."

Heptene, with seven carbons, and nonene, with nine, can be employed directly as lubricants, cracked to smaller hydrocarbons and used as precursors to plastic polymers, such as polyethylene or polypropylene, or linked to form even longer hydrocarbons, like those in waxes and diesel fuel.

"This is a general process for making target compounds, no matter what chain length they are," Chang said. "And you don't have to engineer an enzyme system every time you want to change a functional group or the chain length or how branched it is."

Despite their feat of metabolic engineering, Chang noted that the long-term and more sustainable goal would be to completely redesign processes for synthesizing industrial hydrocarbons, including plastics, so that they are optimized to use the types of chemicals that microbes normally produce, rather than altering microbial products to fit into existing synthetic processes.

"There's a lot of interest in the question, 'What if we look at entirely new polymer structures?'," she said. "Can we make monomers from glucose by fermentation for plastics with similar properties to the plastics that we use today, but not the same structures as polyethylene or polypropylene, which are not easy to recycle?"

Read more at Science Daily

How moles change into melanoma

Moles and melanomas are both skin tumors that arise from the same cell type: melanocytes. The difference is that moles are usually harmless, while melanomas are cancerous and often deadly without treatment. In a study published today in the journal eLife, Robert Judson-Torres, PhD, Huntsman Cancer Institute (HCI) researcher and University of Utah (U of U) assistant professor of dermatology and oncological sciences, explains how common moles and melanomas form and why moles can change into melanoma.

Melanocytes are cells that give color to the skin to protect it from the sun's rays. A specific change to the DNA sequence of melanocytes, known as the BRAFV600E mutation, is found in over 75% of moles; the same change is also found in 50% of melanomas and is common in cancers such as colon and lung cancer. It was thought that when melanocytes carry only the BRAFV600E mutation, the cell stops dividing, resulting in a mole, whereas melanocytes with additional mutations alongside BRAFV600E divide uncontrollably, turning into melanoma. This model is called "oncogene-induced senescence."

"A number of studies have challenged this model in recent years," says Judson-Torres. "These studies have provided excellent data to suggest that the oncogene-induced senescence model does not explain mole formation but what they have all lacked is an alternative explanation -- which has remained elusive."

With help from collaborators across HCI and the University of California San Francisco, the study team took moles and melanomas donated by patients and used transcriptomic profiling and digital holographic cytometry. Transcriptomic profiling lets researchers determine molecular differences between moles and melanomas. Digital holographic cytometry helps researchers track changes in human cells.

"We discovered a new molecular mechanism that explains how moles form, how melanomas form, and why moles sometimes become melanomas," says Judson-Torres.

The study shows that melanocytes which turn into melanoma do not need additional mutations; instead, they are affected by environmental signaling, in which cells receive signals from the skin environment around them that give them direction. Depending on the environment, melanocytes express different genes, telling them either to divide uncontrollably or to stop dividing altogether.

"Origins of melanoma being dependent on environmental signals gives a new outlook in prevention and treatment," says Judson-Torres. "It also plays a role in trying to combat melanoma by preventing and targeting genetic mutations. We might also be able to combat melanoma by changing the environment."

These findings create a foundation for researching potential melanoma biomarkers, allowing doctors to detect cancerous changes in the blood at earlier stages. The researchers are also interested in using these data to better understand potential topical agents to reduce the risk of melanoma, delay development, or stop recurrence, and to detect melanoma early.

Read more at Science Daily

Nov 23, 2021

Scientist reveals cause of lost magnetism at meteorite site

A University of Alaska Fairbanks scientist has discovered a method for detecting and better defining meteorite impact sites that have long lost their tell-tale craters. The discovery could further the study of not only Earth's geology but also that of other bodies in our solar system.

The key, according to work by associate research professor Gunther Kletetschka at the UAF Geophysical Institute, is in the greatly reduced level of natural remanent magnetization of rock that has been subjected to the intense forces from a meteor as it nears and then strikes the surface.

Rocks unaltered by human-made or extraterrestrial forces have 2% to 3% natural remanent magnetization, meaning they contain that proportion of magnetic mineral grains -- usually magnetite or hematite or both. Kletetschka found that samples collected at the Santa Fe Impact Structure in New Mexico contained less than 0.1% magnetism.
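Taken together, those two measurements bound how strongly the impact depleted the rocks' magnetization. A rough sketch of the arithmetic, using the article's figures (the depletion factor itself is not stated in the text):

```python
# Magnetization levels quoted in the article, expressed as fractions.
normal_magnetization_low = 0.02     # 2%, the conservative end of the 2-3% range
santa_fe_magnetization_max = 0.001  # "less than 0.1%" at the impact site

# Even against the low end of normal, the Santa Fe rocks retain at most
# one twentieth of the expected magnetization: a depletion of at least ~20-fold.
min_depletion_factor = normal_magnetization_low / santa_fe_magnetization_max
```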

Kletetschka determined that plasma created at the moment of impact and a change in the behavior of electrons in the rocks' atoms are the reasons for the minimal magnetism.

Kletetschka reported his findings in a paper published Wednesday in the journal Scientific Reports.

The Santa Fe Impact Structure was discovered in 2005 and is estimated to be about 1.2 billion years old. The site consists of easily recognized shatter cones, which are rocks with fantail features and radiating fracture lines. Shatter cones are believed to only form when a rock is subjected to a high-pressure, high-velocity shock wave such as from a meteor or nuclear explosion.

Kletetschka's work will now allow researchers to determine an impact site before shatter cones are discovered and to better define the extent of known impact sites that have lost their craters due to erosion.

"When you have an impact, it's at a tremendous velocity," Kletetschka said. "And as soon as there is a contact with that velocity, there is a change of the kinetic energy into heat and vapor and plasma. A lot of people understand that there is heat, maybe some melting and evaporation, but people don't think about plasma."

Plasma is a gas in which atoms have been broken into free-floating negative electrons and positive ions.

"We were able to detect in the rocks that a plasma was created during the impact," he said.

Earth's magnetic field lines penetrate everything on the planet. Magnetic stability in rocks can be knocked out temporarily by a shock wave, just as it is when an object is struck with a hammer. The stability normally returns immediately after the shock wave passes.

At Santa Fe, the meteorite's impact sent a massive shock wave through the rocks, as expected. Kletetschka found that the shock wave altered the characteristics of atoms in the rocks by modifying the orbits of certain electrons, leading to their loss of magnetism.

The modification of the atoms would allow for a quick remagnetization of the rocks, but Kletetschka also found that the meteorite impact had weakened the magnetic field in the area. There was no way for the rocks to regain their 2% to 3% magnetism even though they had the capability to do so.

That's because of the presence of plasma in the rocks at the impact surface and below. The plasma increased the rocks' electrical conductivity as they converted to vapor and molten rock at the leading edge of the shock wave, temporarily weakening the ambient magnetic field.

Read more at Science Daily

One in five galaxies in the early universe could still be hidden behind cosmic dust

Astronomers at the University of Copenhagen's Cosmic Dawn Center have discovered two previously invisible galaxies 29 billion light-years away. Their discovery suggests that up to one in five such distant galaxies remain hidden from our telescopes, camouflaged by cosmic dust. The new knowledge changes perceptions of our universe's evolution since the Big Bang.

Researchers at the University of Copenhagen's Niels Bohr Institute have just discovered two previously invisible galaxies 29 billion light-years away from Earth. The two galaxies have been invisible to the optical lens of the Hubble Space Telescope, hidden behind a thick layer of cosmic dust that surrounds them.

But with the help of the giant ALMA radio telescopes (Atacama Large Millimeter Array) in Chile's Atacama Desert, which can capture radio waves emitted from the coldest, darkest depths of the universe, the two invisible galaxies suddenly appeared.

"We were looking at a sample of very distant galaxies, which we already knew existed from the Hubble Space Telescope. And then we noticed that two of them had a neighbor that we didn't expect to be there at all. As both of these neighboring galaxies are surrounded by dust, some of their light is blocked, making them invisible to Hubble," explains Associate Professor Pascal Oesch of the Cosmic Dawn Center at the Niels Bohr Institute.

The study has just been published in the scientific journal, Nature.


10-20 percent of the universe's galaxies are missing

The new discovery suggests that the very early universe contains many more galaxies than previously assumed. They simply lie hidden behind dust consisting of small particles from stars. However, they can now be detected thanks to the highly sensitive ALMA telescope and the method used by the researchers.

Facts about the research:
 

  • The two hidden galaxies are so far called REBELS-12-2 and REBELS-29-2.
  • The light from the two invisible galaxies has travelled about 13 billion years to reach us.
  • The galaxies are now located 29 billion light-years away due to the universe's expansion.
  • Researchers used the ALMA telescope, which is based on radio signals.
  • The ALMA Telescope combines the light of all its 66 antennae to create high-resolution images and spectra of the sky.

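The jump from 13 billion years of light travel to a present-day distance of 29 billion light-years is a consequence of cosmic expansion, and it can be reproduced with a short comoving-distance integral. The sketch below assumes a flat ΛCDM cosmology with Planck-like parameters and a redshift of roughly z = 7 for the hidden galaxies; none of these numbers are taken from the study itself.

```python
import math

# Flat LambdaCDM comoving distance: why light that travelled ~13 billion
# years comes from a galaxy that is now ~29 billion light-years away.
# All parameters below are assumed (Planck-like), not from the paper.
H0 = 67.7               # Hubble constant, km/s/Mpc (assumed)
OMEGA_M = 0.31          # matter density parameter (assumed)
OMEGA_L = 0.69          # dark-energy density parameter (assumed)
C = 299792.458          # speed of light, km/s
MPC_TO_GLY = 3.2616e-3  # 1 Mpc = 3.2616 million light-years

def comoving_distance_gly(z, steps=100000):
    """Comoving distance to redshift z, in billions of light-years."""
    hubble_dist = C / H0  # Hubble distance in Mpc
    dz = z / steps
    # Trapezoidal integration of dz' / E(z'), where E(z) = H(z)/H0.
    total = 0.0
    for i in range(steps + 1):
        zi = i * dz
        e = math.sqrt(OMEGA_M * (1 + zi) ** 3 + OMEGA_L)
        weight = 0.5 if i in (0, steps) else 1.0
        total += weight / e
    return hubble_dist * total * dz * MPC_TO_GLY

# At z ~ 7 (an assumed round number), the result lands near 29 Gly.
print(round(comoving_distance_gly(7.0), 1))
```

With these assumed parameters the integral gives roughly 29 billion light-years, matching the figure quoted above.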

By comparing these new galaxies with previously known sources in the very early universe, approximately 13 billion years ago, the researchers estimate that between 10 and 20 percent of such early galaxies may still remain hidden behind curtains of cosmic dust.

"Our discovery demonstrates that up to one in five of the earliest galaxies may have been missing from our map of the heavens. Before we can start to understand when and how galaxies formed in the Universe, we first need a proper accounting," says Pascal Oesch.

New super telescope will find the missing galaxies

To help with that task, NASA, ESA and the Canadian Space Agency have built a new super telescope, the James Webb Space Telescope, which is expected to be launched into orbit on the 18th of December 2021.

With its strength and improved technology, the telescope will gaze even deeper into the universe and contribute new knowledge about its origins. This will, among many other things, help Cosmic Dawn researchers at the Niels Bohr Institute see through the cosmic dust.

Read more at Science Daily

Hurricanes expected to linger over Northeast cities, causing greater damage

By the late 21st century, northeastern U.S. cities will see worsening hurricane outcomes, with storms arriving more quickly but slowing down once they've made landfall. As storms linger longer over the East Coast, they will cause greater damage along the heavily populated corridor, according to a new study.

In the new study, climate scientist Andra Garner at Rowan University analyzed more than 35,000 computer-simulated storms. To assess likely storm outcomes in the future, Garner and her collaborators compared where storms formed, how fast they moved and where they ended from the pre-industrial period through the end of the 21st century.

The researchers found that future East Coast hurricanes will likely cause greater damage than storms of the past. The research predicted that a greater number of future hurricanes will form near the East Coast, and those storms will reach the Northeast corridor more quickly. The simulated storms slow to a crawl as they approach the East Coast, allowing them to produce more wind, rain, floods, and related damage in the Northeast region. The longest-lived tropical storms are predicted to last twice as long as storms today.

The study was published in Earth's Future, which publishes interdisciplinary research on the past, present and future of our planet and its inhabitants.

The changes in storm speed will be driven by changes in atmospheric patterns over the Atlantic, prompted by warmer air temperatures. While Garner and her colleagues note that more research remains to be done to fully understand the relationship between a warming climate and changing storm tracks, they noted that potential northward shifts in the region where Northern and Southern Hemisphere trade winds meet or slowing environmental wind speeds could be to blame.

"When you think of a hurricane moving along the East Coast, there are larger scale wind patterns that generally help push them back out to sea," Garner said. "We see those winds slowing down over time." Without those winds, the hurricanes can overstay their welcome on the coast.

Garner, whose previous work focused on the devastating East Coast effects of storms like Hurricane Sandy, particularly in the Mid-Atlantic, said the concern raised by the new study is that more storms capable of producing damage levels similar to Sandy are likely.

And the longer storms linger, the worse they can be, she said.

"Think of Hurricane Harvey in 2017 sitting over Texas, and Hurricane Dorian in 2019 over the Bahamas," she said. "That prolonged exposure can worsen the impacts."

From 2010 to 2020, U.S. coastlines were hit by 19 tropical cyclones that qualified as billion-dollar disasters, generating approximately $480 billion in damages, adjusted for inflation. If storms sit over coasts for longer stretches, that economic damage is likely to increase as well. For the authors, that provides clear economic motivation to stem rising greenhouse gas emissions.

"The work produced yet more evidence of a dire need to cut emissions of greenhouse gases now to stop the climate warming," Garner said.

Read more at Science Daily

Snow cover critical for revegetation following forest fires

With wildfires devastating mountain ecosystems across the western United States, successful forest revegetation hinges on, among other factors, an adequate lasting snowpack, according to research by the University of Nevada, Reno and Oregon State University.

"Our study illustrated that summer precipitation, snow cover and elevation were all important drivers of revegetation success," said Anne Nolin, a hydrologist and geography professor at the University of Nevada, Reno and formerly at Oregon State University. "In particular, we found that snow cover was a critical explanatory variable for revegetation in the Oregon and Washington Cascades. This could help inform revegetation management practices following severe wildfires."

Climate change has already increased the fraction of winter precipitation that falls as rain rather than snow, reduced the spring snow water equivalent -- a metric for how much water snow contains -- and caused snowmelt to begin earlier in the spring than it used to, Nolin explained. Pacific Northwest snowpacks have seen the greatest declines of any seasonal snow region in the West.

The research, led by Nolin, examined the 260,000-square-mile Columbia River Basin in the Pacific Northwest. She teamed with co-author Andrew Wilson, a graduate research assistant in OSU's College of Earth, Ocean, and Atmospheric Sciences, and co-author Kevin Bladon of OSU's College of Forestry for the study.

The NASA-supported study featured before-and-after vegetation analyses for two dozen high-severity wildfires. The fires occurred over a 10-year period among the four distinct subregions of the Columbia River Basin. There are many short- and long-term effects from these fires, including erosion, debris flows and water quality issues, which can affect the health of aquatic ecosystems and downstream community water supply, highlighting the importance of understanding post-fire forest rehabilitation.

In their paper, "Assessing the Role of Snow Cover for Post-Wildfire Revegetation Across the Pacific Northwest," published in the Journal of Geophysical Research -- Biogeosciences, the team shows that, given the trends of increasing wildfire activity, lower snowpacks, and earlier snow disappearance dates across the Pacific Northwest, forests will likely experience more frequent drought conditions, which will negatively impact post-wildfire vegetation recovery and carry a number of knock-on effects for the ecosystem.

"This knowledge may be used to facilitate adaptive post-fire management policies and decisions to ensure long-term forest health," Nolin, who is also director of the University of Nevada, Reno's Graduate Program of Hydrological Sciences, said. "For example, depending on the sub-region and species composition, reseeding efforts following low snow winters might employ more drought tolerant species or, replanting could be delayed one to two years until snowmelt and soil moisture conditions are more favorable for seedling propagation.

"However, climate change projections and shifting wildfire regimes have increased concerns about post-fire regeneration and, as such it is imperative that we broaden our understanding of the role of snowpacks in post-wildfire forest regeneration. The snowpacks' role in aiding revegetation will become increasingly important across the West. And where snowpacks have declined, there likely will be ecosystem transitions that look like a shift from forest to non-forest and from evergreen to deciduous vegetation."

Wildfires continue to burn more area each year across many regions of the planet, including the Pacific Northwest. The Columbia River Basin, the Pacific Northwest's largest watershed, contains a variety of fire-prone landscapes that have seen almost 900 fires since 2010, serves as critical habitat for more than 700 species and is a water source for seven states.

"As wildfire activity continues to increase and intensify in the Northwest, understanding what shapes revegetation on severely burned forested landscapes is vital for guiding management decisions," co-author Bladon said.

After the occurrence of a wildfire, revegetation over the burned area is critical to maintain or re-establish ecosystem functions from forests such as biodiversity, erosion control, water purification and habitat provision.

"Snow matters to regrowing vegetation following fire, and with double impacts of declining snowpacks and increasing wildfires it is critical that we understand how these changes are affecting Pacific Northwest forests," Nolin said. "Positive relationships between snow cover and summer precipitation with post-fire greening suggest that active post-fire revegetation efforts will help facilitate recovery, especially during years when severe wildfires are followed by early snowmelt years or below average summer precipitation."

In the study, summer precipitation consistently appeared as the most important variable driving post-fire revegetation across all four subregions. Snow cover frequency, along with elevation, were shown to be secondary but significantly influential explanatory variables for revegetation in the Oregon and Washington Cascades.

More than 80% of wildfires in the western United States from 2000 to 2012 burned within a seasonal snow zone, a time period that overlaps with the years studied by the scientists.

"Variables such as snow cover frequency, pre-fire forest composition, and elevation were also shown to be significantly influential for revegetation in the Oregon and Washington Cascades," Bladon added.

Wildfire season length in the western U.S. overall has increased by roughly 25 days in recent decades, including a massive increase in the Northwest from the mid-1970s, when it was 23 days, to 116 days in the early 2000s. That's attributable mainly to warmer temperatures and drier conditions in the spring and summer.

"Snow cover has a strong influence on postfire vegetation greening, but the influence varied depending on subregion and dominant prefire conifer species, with the biggest impacts at low to moderate elevations in the Washington Cascades, the Oregon Cascades and western Montana Rockies," Nolin said. "And with current climate change projections, snowpacks' role in aiding revegetation will become increasingly important across the West."

Bladon suggests fire can be looked at as an opportunity for forests to reassemble into ecosystems better suited to survive warmer winters, longer fire seasons and more drought stress.

Read more at Science Daily

Nov 22, 2021

Scientists solve 50-year-old mystery behind plant growth

A team of researchers led by UC Riverside has demonstrated for the first time one way that a small molecule turns a single cell into something as large as a tree. For half a century, scientists have known that all plants depend on this molecule, auxin, to grow. Until now, they didn't understand exactly how auxin sets growth in motion.

The word auxin is derived from the Greek word "auxein," meaning "to grow." There are two main pathways that auxin uses to orchestrate plant growth, and one of them is now described in a new Nature journal article.

Plant cells are encased in shell-like cell walls, whose primary layer has three major components: cellulose, hemicellulose, and pectin.

"Cellulose works like rebar in a high rise, providing a broad base of strength. It's reinforced by hemicellulose chains and sealed in by pectin," said UCR botany professor and research team leader Zhenbiao Yang.

These components define the shape of plant cells, resulting in sometimes-surprising formations like the puzzle-piece-shaped leaf epidermis cells that Yang has been studying for the last two decades. These shapes help tightly glue cells together and provide physical strength for plants against elements such as the wind. With everything locked so tightly by the cell walls, how is movement and growth possible?

One theory posits that when plants are ready to grow, auxin causes their cells to become acidic, loosening the bonds between components and allowing the walls to soften and expand. This theory was proposed half a century ago, but how auxin activates the acidification remained a mystery until now.

Yang's team discovered auxin creates that acidity by triggering the pumping of protons into the cell walls, lowering their pH levels. The lower pH activates a protein, expansin, appropriately named because it breaks down links between cellulose and hemicellulose, allowing the cells to expand.

The pumping of protons into the cell wall also drives water uptake into the cell, building inner pressure. If the cell wall is loose enough and there is enough pressure inside the cell, it will expand.

"Like a balloon, expansion depends on how thick the outsides are, versus how much air you're blowing in," Yang explained. "Lowering the pH in a cell wall can allow water outside of a cell to move in, fueling turgor pressure and expansion."

There are two known mechanisms by which auxin regulates growth. One is the pH lowering that Yang's team described. Another is auxin's ability to turn on gene expression in the nucleus of the plants' cells, which in turn increases the amount of expansin and other growth-regulating factors in the cell.

The latter mechanism also lowers the pH of the cell and facilitates growth. UC San Diego professor of cell biology Mark Estelle is a leading authority in this field. He discovered and researches this other mechanism.

"Dr. Yang's recent work represents a significant advance in our understanding of how auxin regulates cell expansion. It's been known that acidification of the extracellular space promotes cell expansion but it wasn't known how this happens," Estelle said. "It's exciting to see an old problem being solved."

It is an understatement to say that auxin simply "contributes" to plant growth. It is essential to nearly every aspect of a plant's growth and development, including aspects that are important to agriculture such as fruit, seed and root development, shoot branching, and leaf formation. Even the plant's correct responses to gravity and light depend on auxin to ensure roots head down while the shoots grow up toward light.

Read more at Science Daily

Antarctic ice-sheet destabilized within a decade

After the natural warming that followed the last Ice Age, there were repeated periods when masses of icebergs broke off from Antarctica into the Southern Ocean. A new data-model study led by the University of Bonn (Germany) now shows that it took only a decade to initiate this tipping point in the climate system, and that ice mass loss then continued for many centuries. Accompanying modeling studies suggest that today's accelerating Antarctic ice mass loss also represents such a tipping point, which could lead to irreversible and long-lasting ice retreat and global sea level rise. The study has now been published in the journal Nature Communications.

To understand what the consequences of current and future human-induced climate warming may be, it helps to take a look at the past: what did sea-level changes look like during times of natural climate warming? In a recent study, an international research team led by Dr. Michael Weber from the Institute of Geosciences at the University of Bonn investigated this question. In doing so, they focused on the Antarctic Ice Sheet as the largest remaining ice sheet on Earth.

There, they searched for evidence of icebergs that broke off the Antarctic continent, floated in the surrounding ocean and melted down in the major gateway to lower latitudes called "Iceberg Alley." In the process, the icebergs released encapsulated debris that accumulated on the ocean floor. The team took sediment cores from the deep ocean in 3.5 km water depth from the area, dated the natural climate archive and counted the ice-rafted debris.

The scientists identified eight phases with high amounts of debris, which they interpret as retreat phases of the Antarctic Ice Sheet after the Last Glacial Maximum, about 19,000 to 9,000 years ago, when the climate warmed and Antarctica repeatedly shed masses of icebergs into the ocean. The result of the new data-model study: each such phase destabilized the ice sheet within a decade and contributed to global sea-level rise for centuries to a millennium. The subsequent re-stabilization was equally rapid, occurring within a decade.

The research team found three other independent pieces of evidence for such post-glacial tipping points: Model experiments showing the melting of the entire Antarctic ice sheet, a West Antarctic ice core documenting ice-sheet elevation draw-down and drill cores revealing a step-wise ice-sheet retreat across the Ross Sea shelf.

Today's ice mass loss could be the start of a long-lasting period

The results are also relevant for ice retreat observed today: "Our findings are consistent with a growing body of evidence suggesting the acceleration of Antarctic ice-mass loss in recent decades may mark the beginning of a self-sustaining and irreversible period of ice sheet retreat and substantial global sea level rise," says study leader Dr. Michael Weber from the University of Bonn.

Combining the sediment record with computer models of ice sheet behaviour, the team showed that each episode of increased iceberg calving reflected increased loss of ice from the interior of the ice sheet, not just changes in the already-floating ice shelves. "We found that iceberg calving events on multi-year time scales were synchronous with discharge of grounded ice from the Antarctic Ice Sheet," said Prof. Nick Golledge from the University of Wellington (New Zealand), who led the ice-sheet modelling.

Dr. Zoë Thomas, a co-author of the study from the University of New South Wales in Sydney, Australia, then applied statistical methods to the model outputs to see if early warning signs could be detected for tipping points in the ice sheet system. Her analyses confirmed that tipping points did indeed exist. "If it just takes one decade to tip a system like this, that's actually quite scary because if the Antarctic Ice Sheet behaves in future like it did in the past, we must be experiencing the tipping right now," Thomas said.

Read more at Science Daily

Scientists capture humor’s earliest emergence

Young children's ability to laugh and make jokes has been mapped by age for the first time using data from a new study involving nearly 700 children from birth to 4 years of age, from around the world. The findings, led by University of Bristol researchers and published in Behavior Research Methods, identify the earliest age at which humour emerges and how it typically builds over the first years of life.

Researchers from Bristol's School of Education sought to determine what types of humour are present in early development and the ages at which different types of humour emerge. The team created the 20-question Early Humour Survey (EHS) and asked the parents of 671 children aged 0 to 47 months from the UK, US, Australia, and Canada, to complete the five-minute survey about their child's humour development.

The team found the earliest reported age at which some children appreciated humour was 1 month, with an estimated 50% of children appreciating humour by 2 months, and 50% producing humour by 11 months. The team also showed that once children produced humour, they produced it often, with half of the children having joked in the last 3 hours.

Of the children surveyed, the team identified 21 different types of humour. Children under one year of age appreciated physical, visual and auditory forms of humour. This included hide and reveal games (e.g., peekaboo), tickling, funny faces, bodily humour (e.g., putting your head through your legs), funny voices and noises, chasing, and misusing objects (e.g., putting a cup on your head).

One-year-olds appreciated several types of humour that involved getting a reaction from others. This included teasing, showing hidden body parts (e.g., taking off clothes), scaring others, and taboo topics (e.g., toilet humour). They also found it funny to act like something else (e.g., an animal).

Two-year-olds' humour reflected language development, including mislabelling, playing with concepts (e.g., dogs say moo), and nonsense words. Children in this age group were also found to demonstrate a mean streak as they appreciated making fun of others and aggressive humour (e.g., pushing someone).

Finally, 3-year-olds were found to play with social rules (e.g., saying naughty words to be funny), and showed the beginnings of understanding tricks and puns.

Dr Elena Hoicka, Associate Professor in Bristol's School of Education and the study's lead author, said: "Our results highlight that humour is a complex, developing process in the first four years of life. Given its universality and importance in so many aspects of children's and adults' lives, it is important that we develop tools to determine how humour first develops so that we can further understand not only the emergence of humour itself, but how humour may help young children function cognitively, socially, and in terms of mental health."

Read more at Science Daily

Reading the mind of a worm

It sounds like a party trick: scientists can now look at the brain activity of a tiny worm and tell you which chemical the animal smelled a few seconds before. But the findings of a new study, led by Salk Associate Professor Sreekanth Chalasani, are more than just a novelty; they help the scientists better understand how the brain functions and integrates information.

"We found some unexpected things when we started looking at the effect of these sensory stimuli on individual cells and connections within the worms' brains," says Chalasani, member of the Molecular Neurobiology Laboratory and senior author of the new work, published in the journal PLOS Computational Biology on November 9, 2021.

Chalasani is interested in how, at a cellular level, the brain processes information from the outside world. Researchers can't simultaneously track the activity of each of the 86 billion brain cells in a living human -- but they can do this in the microscopic worm Caenorhabditis elegans, which has only 302 neurons. Chalasani explains that in a simple animal like C. elegans, researchers can monitor individual neurons as the animal is carrying out actions. That level of resolution is not currently possible in humans or even mice.

Chalasani's team set out to study how C. elegans neurons react to smelling each of five different chemicals: benzaldehyde, diacetyl, isoamyl alcohol, 2-nonanone, and sodium chloride. Previous studies have shown that C. elegans can differentiate these chemicals, which, to humans, smell roughly like almond, buttered popcorn, banana, cheese, and salt. And while researchers know the identities of the small handful of sensory neurons that directly sense these stimuli, Chalasani's group was more interested in how the rest of the brain reacts.

The researchers engineered C. elegans so that each of their 302 neurons contained a fluorescent sensor that would light up when the neuron was active. Then, they watched under a microscope as they exposed 48 different worms to repeated bursts of the five chemicals. On average, 50 or 60 neurons activated in response to each chemical.

By looking at basic properties of the datasets -- such as how many cells were active at each time point -- Chalasani and his colleagues couldn't immediately differentiate between the different chemicals. So, they turned to a mathematical approach called graph theory, which analyzes the collective interactions between pairs of cells: When one cell is activated, how does the activity of other cells change in response?
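The article does not spell out the graph-theoretic machinery, but the core idea of asking how pairs of cells co-vary can be sketched as a simple correlation graph: each neuron is a node, and an edge links any pair whose activity traces are strongly correlated. The traces, neuron names, and threshold below are invented for illustration, not data from the study.

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length activity traces."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def correlation_graph(traces, threshold=0.8):
    """Return edges (i, j) between neurons whose traces correlate above threshold."""
    ids = list(traces)
    edges = []
    for i in range(len(ids)):
        for j in range(i + 1, len(ids)):
            if abs(pearson(traces[ids[i]], traces[ids[j]])) >= threshold:
                edges.append((ids[i], ids[j]))
    return edges

# Toy traces for three hypothetical neurons: A and B rise together, C does not.
traces = {
    "A": [0.1, 0.2, 0.8, 0.9, 0.7],
    "B": [0.0, 0.3, 0.7, 1.0, 0.8],
    "C": [0.9, 0.1, 0.5, 0.2, 0.6],
}
print(correlation_graph(traces))  # → [('A', 'B')]
```

The actual analysis was surely richer (for example, tracking how this graph changes over time after a stimulus), but this captures the basic move from individual activity to pairwise structure.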

This approach revealed that whenever C. elegans was exposed to sodium chloride (salt), there was first a burst of activity in one set of neurons -- likely the sensory neurons -- but then about 30 seconds later, triplets of other neurons began to strongly coordinate their activities. These same distinct triplets weren't seen after the other stimuli, letting the researchers accurately identify -- based only on the brain patterns -- when a worm had been exposed to salt.

"C. elegans seems to have attached a high value to sensing salt, using a completely different circuit configuration in the brain to respond," says Chalasani. "This might be because salt often represents bacteria, which is food for the worm."

The researchers next used a machine-learning algorithm to pinpoint other, more subtle, differences in how the brain responded to each of the five chemicals. The algorithm was able to learn to differentiate the neural response to salt and benzaldehyde but often confused the other three chemicals.
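The article does not name the machine-learning algorithm used. As a hedged illustration, one of the simplest classifiers that could be applied to population response vectors is nearest-centroid: average the training vectors for each stimulus, then assign a new response to the label with the closest average. The response vectors below are hypothetical.

```python
def nearest_centroid_fit(samples, labels):
    """Average the response vectors for each stimulus label (the centroid)."""
    sums, counts = {}, {}
    for vec, lab in zip(samples, labels):
        acc = sums.setdefault(lab, [0.0] * len(vec))
        for k, v in enumerate(vec):
            acc[k] += v
        counts[lab] = counts.get(lab, 0) + 1
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

def nearest_centroid_predict(centroids, vec):
    """Return the label whose centroid is closest (squared Euclidean distance)."""
    def dist(lab):
        return sum((a - b) ** 2 for a, b in zip(centroids[lab], vec))
    return min(centroids, key=dist)

# Hypothetical 2-neuron response vectors for two stimuli.
train = [[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.8]]
labels = ["salt", "salt", "benzaldehyde", "benzaldehyde"]
centroids = nearest_centroid_fit(train, labels)
print(nearest_centroid_predict(centroids, [0.95, 0.15]))  # → salt
```

A classifier like this succeeds when the stimulus classes occupy well-separated regions of the population response space, which is consistent with salt and benzaldehyde being easy to tell apart while the other three chemicals overlap.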

"Whatever analysis we've done, it's a start but we're still only getting a partial answer as to how the brain discriminates these things," says Chalasani.

Still, he points out that the way the team approached the study -- looking at the brain's network-wide response to a stimulus, and applying graph theory, rather than just focusing on a small set of sensory neurons and whether they're activated -- paves the way toward more complex and holistic studies of how brains react to stimuli.

The researchers' ultimate goal, of course, isn't to read the minds of microscopic worms, but to gain a deeper understanding of how humans encode information in the brain and what happens when this goes awry in sensory processing disorders and related conditions like anxiety, attention deficit hyperactivity disorders (ADHD), autism spectrum disorders and others.

Read more at Science Daily

Nov 21, 2021

Study confirms that Gabon is the largest stronghold for critically endangered African forest elephants

The most comprehensive survey conducted of elephant numbers in the Central African nation of Gabon since the late 1980s has found elephants occurring in higher numbers than previously thought.

The study, which was conducted by the Wildlife Conservation Society (WCS), Gabon's National Park Agency (ANPN) and Vulcan using a new non-invasive survey technique, estimates that 95,000 forest elephants (Loxodonta cyclotis) now live in Gabon, confirming it as the principal stronghold for this species, which is considered Critically Endangered by IUCN. The technical improvements enabled a more accurate estimate than previous methods confined to dung counts.

The findings provide hope for the future of the species and the impact that conservation-focused policies can have in encouraging wildlife protection if effectively implemented. The study's results, published in the journal Global Ecology and Conservation, mark the first-known DNA-based assessment of a free-ranging large mammal in Africa.

Said Emma Stokes, WCS Regional Director for Central Africa and a co-author of the study: "These results underscore the importance of Gabon as a critical stronghold for forest elephants -- containing some 60-70 percent of Africa's forest elephants. Gabon, together with the northern Republic of Congo, probably hold as many as 85 percent of remaining forest elephants -- in large and relatively stable populations. With significant declines in forest elephants reported across much of the rest of the Congo Basin, these two nations will determine the future for the forest elephant in Africa."

President Ali Bongo Ondimba of Gabon said: "Managing forests, protecting our parks and fighting organised criminal and terrorist groups, who plunder our natural resources, is not easy. It requires constant vigilance, technical knowhow, logistical capacity, sustainable funding and, most importantly, courageous, dedicated, incorruptible forest managers."

Lee White, Minister of Water and Forests, the Sea, the Environment charged with Climate Planning and Land Use Plan, said: "These results demonstrate that Gabon, under the active leadership of President Ali Bongo Ondimba, has been able to buck the trend of forest elephant decline. This is down to the courage and dedication of our national parks rangers, who are very much in the line of fire. In Africa there is a clear link between healthy elephant numbers and natural resource governance -- most countries that have lost their elephant populations have also experienced civil war and instability."

Christian Tchemambela, ANPN Executive Secretary, said: "The data on the forest elephant population is now clearer. This is a big step forward. Our teams have developed more reliable methods, particularly in our genetics laboratory. This scientific evidence can now be used to support decision-making."

The team used spatial capture-recapture (SCR) techniques based on non-invasive molecular sampling from dung. Unlike savannah elephants (Loxodonta africana), which can be accurately counted by aerial surveys, forest elephants often live deep in rainforests, making their populations more difficult to estimate. African forest elephants were recently recognised by IUCN as a separate species from savannah elephants.
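The core idea behind capture-recapture is simple: individuals identified by their DNA genotype in dung samples count as "captures", and the overlap between sampling occasions reveals how much of the population was missed. The sketch below illustrates this with the classic non-spatial Chapman estimator (a bias-corrected Lincoln-Petersen estimate); the actual study used far more sophisticated spatial models, and the genotype IDs and numbers here are invented for illustration only.

```python
# Toy illustration of the capture-recapture principle behind the survey:
# an individual is "captured" when its DNA genotype turns up in a dung
# sample. This is the simple non-spatial Chapman estimator, NOT the full
# spatial capture-recapture (SCR) model used in the Gabon study.

def chapman_estimate(occasion1, occasion2):
    """Estimate population size from two sampling occasions.

    occasion1, occasion2: sets of individual IDs (e.g. DNA genotypes)
    detected on each occasion. Returns the Chapman estimator,
    N = (n1 + 1)(n2 + 1) / (m + 1) - 1, where m is the number of
    individuals detected on both occasions.
    """
    n1 = len(occasion1)             # individuals detected on occasion 1
    n2 = len(occasion2)             # individuals detected on occasion 2
    m = len(occasion1 & occasion2)  # "recaptures": detected both times
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Hypothetical example: 40 genotypes found in survey 1, 35 in survey 2,
# 14 of which appear in both surveys.
first = {f"geno_{i}" for i in range(40)}          # geno_0 .. geno_39
second = {f"geno_{i}" for i in range(26, 61)}     # geno_26 .. geno_60
print(round(chapman_estimate(first, second)))     # estimated population size
```

The intuition: the smaller the overlap between the two occasions relative to the sample sizes, the larger the unseen population must be. Real SCR models extend this by also using *where* each sample was found to estimate density across the landscape.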

The study found that Gabon has not only more forest elephants than any other country but also the largest intact habitat in the species' range, with elephants found over more than 250,000 square kilometers (96,500 square miles), which represents some 90 percent of the country.

Although elephants were found across Gabon, the highest elephant densities were found in relatively flat areas where there was more suitable elephant habitat available (i.e. forests or forest-savannah mosaics). The lowest elephant densities were found in more hilly terrain or in areas with less suitable elephant habitat (i.e. near cities, or roads).

The team found low elephant densities immediately adjacent to international borders. Neither protected areas nor human pressure in Gabon strongly predicts elephant densities, which means that good elephant habitat is not necessarily restricted to protected areas.

Said the lead author of the study, Alice Laguardia of WCS: "Our findings offer a useful nationwide baseline and status update for forest elephants that will inform adaptive management and stewardship of one of the last remaining forest elephant strongholds. These results are of interest to local, national, and international decision-makers concerned with the conservation of this species and its habitat, with the important ecological role of forest elephants in the climate-regulation potential of forests, and with forest elephants as a useful indicator for healthy, intact and well-governed forests."

But it was not all good news: the researchers report that pockets of low elephant density left by recent poaching events have yet to recover.

Across Central Africa, forest elephants have been decimated by ivory poachers in recent years. Gabon is unique in the region in having elephants still distributed at relatively high densities across most of the country. With the exception of Gabon and parts of northern Congo, most other countries have experienced catastrophic declines in forest elephant populations over the last two decades: a WCS-led study released in 2014 documented a 65 percent decline in forest elephant numbers between 2002 and 2013.

Elephants can act as a useful indicator of healthy forests, and good management of forests both inside and outside protected areas can benefit both climate and biodiversity. A 2019 study linked forest elephant presence to significantly greater carbon storage: their browsing effectively thins the forest of smaller trees, promoting the growth of larger, more carbon-rich trees.

"We believe conservation efforts must be founded on solid data," said Lara Littlefield, Executive Director of Partnerships & Programs at Vulcan. "The 2016 Great Elephant Census of savannah elephants led to support for programs that benefitted threatened populations and communities. We are confident this important new study will lead to targeted efforts to protect endangered forest elephants."

Read more at Science Daily