Feb 17, 2023

Oldest spinosaur brains revealed

Researchers from the University of Southampton and Ohio University have reconstructed the brains and inner ears of two British spinosaurs, helping uncover how these large predatory dinosaurs interacted with their environment.

Spinosaurs are an unusual group of theropod dinosaurs, equipped with long, crocodile-like jaws and conical teeth. These adaptations helped them live a semi-aquatic lifestyle that involved stalking riverbanks in search of prey, among which were large fish. This way of life was very different from that of more familiar theropods, like Allosaurus and Tyrannosaurus.

To better understand the evolution of spinosaur brains and senses, the team scanned fossils of Baryonyx from Surrey and Ceratosuchops from the Isle of Wight. These two are the oldest spinosaurs for which braincase material is known. The huge creatures would have been roaming the planet about 125 million years ago. The braincases of both specimens are well preserved, and the team digitally reconstructed the internal soft tissues that had long rotted away.

The researchers found the olfactory bulbs, which process smells, weren't particularly developed, and the ear was probably attuned to low frequency sounds. Those parts of the brain involved in keeping the head stable and the gaze fixed on prey were possibly less developed than they were in later, more specialised spinosaurs.

Findings are due to be published in the Journal of Anatomy.

"Despite their unusual ecology, it seems the brains and senses of these early spinosaurs retained many aspects in common with other large-bodied theropods -- there is no evidence that their semi-aquatic lifestyles are reflected in the way their brains are organised," said University of Southampton PhD student Chris Barker, who led the study.

One interpretation of this evidence is that the theropod ancestors of spinosaurs already possessed brains and sensory adaptations suited for part-time fish catching, and that 'all' spinosaurs needed to do to become specialised for a semi-aquatic existence was evolve an unusual snout and teeth.

"Because the skulls of all spinosaurs are so specialised for fish-catching, it's surprising to see such 'non-specialised' brains," said contributing author Dr Darren Naish. "But the results are still significant. It's exciting to get so much information on sensory abilities -- on hearing, sense of smell, balance and so on -- from British dinosaurs. Using cutting-edged technology, we basically obtained all the brain-related information we possibly could from these fossils," Dr Naish said.

Over the last few years, the EvoPalaeo Lab at the University of Southampton has conducted substantial research on new spinosaurs from the Isle of Wight. Ceratosuchops itself was only announced by the team in 2021, and its discovery was followed up by the publication of another new spinosaur -- the gigantic White Rock spinosaur -- in 2022. The braincase of Ceratosuchops was scanned at the µ-VIS X-ray Imaging Centre at the University of Southampton, home to some of the most powerful CT scanners in the country, and a model of its brain will be on display alongside its bones at Dinosaur Isle Museum in Sandown, on the Isle of Wight.

"This new research is just the latest in what amounts to a revolution in palaeontology due to advances in CT-based imaging of fossils," said co-author Lawrence M. Witmer, professor of anatomy at the Ohio University Heritage College of Osteopathic Medicine, who has been CT scanning dinosaurs -- including Baryonyx -- for over 25 years. "We're now in a position to be able to assess the cognitive and sensory capabilities of extinct animals and explore how the brain evolved in behaviourally extreme dinosaurs like spinosaurs."

Read more at Science Daily

Tadpole playing around black hole

A peculiar cloud of gas, nicknamed the Tadpole due to its shape, appears to be revolving around a space devoid of any bright objects. This suggests that the Tadpole is orbiting a dark object, most likely a black hole 100,000 times more massive than the Sun. Future observations will help determine what is responsible for the shape and motion of the Tadpole.

A team of Japanese researchers led by Miyuki Kaneko at Keio University used data from the James Clerk Maxwell Telescope, operated by the East Asian Observatory, and NAOJ's Nobeyama 45-m Radio Telescope to identify an unusual cloud of gas about 27,000 light-years away in the constellation Sagittarius. The curved "Tadpole" shape of the molecular gas cloud strongly suggests that it is being stretched as it orbits around a massive compact object. The only problem is, at the center of the Tadpole's orbit, there are no bright objects which could be massive enough to gravitationally hold the Tadpole. The best candidate for this massive compact invisible object is a black hole.
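As a rough illustration of the physics behind that inference (not a calculation from the study itself), the mass enclosed by an orbit follows from the orbital radius and speed via the circular-orbit relation v^2 = GM/r. In the sketch below, the radius and speed are hypothetical placeholders chosen only to show the method.

```python
# Rough enclosed-mass estimate from orbital motion, using the
# circular-orbit relation v^2 = G * M / r. The orbital radius and
# speed below are HYPOTHETICAL placeholders, not values from the study.
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
LIGHT_YEAR = 9.461e15  # metres per light-year

r = 0.3 * LIGHT_YEAR   # assumed orbital radius of the gas cloud
v = 1.0e5              # assumed orbital speed, m/s (100 km/s)

M = v**2 * r / G       # mass enclosed by the orbit
print(f"Enclosed mass ~ {M / M_SUN:.1e} solar masses")
```

With values of this order, the enclosed mass comes out around 10^5 solar masses, which is how a compact, invisible object of roughly that mass can be inferred without ever seeing it directly.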

Because black holes don't emit light, the only way to detect them is when they interact with other objects. This leaves astronomers in the dark about just how many black holes, and with what range of masses, might be lurking in the Milky Way.

Now the team plans to use ALMA (Atacama Large Millimeter/submillimeter Array) to search for faint signs of a black hole, or other object, at the gravitational center of the Tadpole's orbit.

From Science Daily

Does ice in the Universe contain the molecules making up the building blocks of life in planetary systems?

The James Webb Space Telescope -- the most precise telescope ever built -- was decisive in the discovery of the frozen forms of a whole series of molecules, such as carbon dioxide, ammonia, methane and methanol, as well as even more complex molecules, frozen out as ices on the surfaces of small dust grains.

The dust grains grow in size when they become part of the discs of gas and dust that form around young stars. This means that the researchers could study many of the molecules that go into the formation of new exoplanets.

Researchers at the Niels Bohr Institute, University of Copenhagen, combined the discoveries from JWST with data from the Atacama Large Millimeter/submillimeter Array (ALMA), which observes at other wavelengths than JWST, while researchers from Aarhus University contributed the necessary laboratory investigations.

"With the application of observations, e.g. from ALMA, it is possible for us to directly observe the dust grains themselves, and it is also possible to see the same molecules as in the gas observed in the ice" Lars Kristensen, associate Professor at the Niels Bohr Institute (NBI), explains.

"Using the combined data set gives us a unique insight into the complex interactions between gas, ice and dust in areas where stars and planets form" according to Jes Jørgensen, Professor at NBI.

"This way we can map the location of the molecules in the area both before and after they have been frozen out onto the dust grains and we can follow their path from the cold molecular cloud to the emerging planetary systems around young stars."

The ice content of the molecular cloud was a decisive discovery

The ices were detected and measured by studying how starlight from beyond the molecular cloud was absorbed by icy molecules at specific infrared wavelengths visible to Webb.

This process leaves behind chemical fingerprints known as absorption spectra which can be compared with laboratory data to identify which ices are present in the molecular cloud.
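As a minimal sketch of that comparison step (synthetic data only, not the IceAge pipeline), the observed starlight can be converted into optical depths and fitted with laboratory ice templates; the band positions below are roughly where water and CO2 ices absorb, but the shapes and amplitudes are made up.

```python
import numpy as np

wavelength = np.linspace(2.5, 5.0, 500)  # microns

def gaussian_band(center, width, depth):
    """Toy stand-in for a laboratory ice absorption band."""
    return depth * np.exp(-0.5 * ((wavelength - center) / width) ** 2)

# Hypothetical lab optical-depth templates for two ices.
templates = {
    "H2O ice": gaussian_band(3.0, 0.15, 1.0),
    "CO2 ice": gaussian_band(4.27, 0.03, 1.0),
}

# Synthetic "observed" optical depth, tau = -ln(F_observed / F_continuum),
# built from the templates plus noise.
rng = np.random.default_rng(0)
observed_tau = 0.8 * templates["H2O ice"] + 0.3 * templates["CO2 ice"]
observed_tau = observed_tau + rng.normal(0, 0.01, wavelength.size)

# Least-squares fit of template amplitudes to the observed spectrum.
A = np.column_stack(list(templates.values()))
coeffs, *_ = np.linalg.lstsq(A, observed_tau, rcond=None)
for name, c in zip(templates, coeffs):
    print(f"{name}: fitted amplitude {c:.2f}")
```

The fitted amplitudes recover the mixture the spectrum was built from, which is the essence of identifying which ices are present and in what proportions.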

In this study, the team targeted ices buried in a particularly cold, dense and difficult-to-investigate region of the Chamaeleon I molecular cloud, a region approximately 600 light-years from Earth which is currently in the process of forming dozens of young stars.

Star formation goes hand in hand with planet formation, and for the researchers in the IceAge collaboration the broader aim is basically to identify the role ice plays in gathering the molecules necessary to form life.

"This study confirms that interstellar grains of dust are catalysts for the forming of complex molecules in the very diffuse gas in these clouds, something we see in the lab as well," Sergio Ioppolo explains, associate professor at Aarhus University, contributing with some of the experiments in the lab that were compared with the observations.

The sensitivity of JWST was a necessary precondition for the discovery

"We simply couldn't have observed these ices without Webb," elaborated Klaus Pontoppidan, JWST project scientist at the Space Telescope Science Institute, Baltimore, USA, who was involved in this research.

"The ices show up as dips against a continuum of background starlight. In regions that are this cold and dense, much of the light from the background star is blocked and Webb's exquisite sensitivity was necessary to detect the starlight and therefore identify the ices in the molecular cloud."

The IceAge team has already planned more observations with both Webb and other telescopes.

"These observations together with further laboratory studies will tell us which mixture of ices -- and therefore which elements -- can eventually be delivered to the surfaces of terrestrial exoplanets or incorporated into the atmospheres of giant gas or ice planets.

Read more at Science Daily

Newly discovered virus can kill resistant bacteria

The Danish creeks, Odense Å and Lindved Å, have surprised researchers and students at SDU by containing previously unknown virus species.

"We have found five new species that we believe are unknown to science," said associate professor Clare Kirkpatrick, who studies bacterial stress-response at the Department of Biochemistry and Molecular Biology at University of Southern Denmark.

The somewhat surprising discovery was made during the coronavirus pandemic, when some of Kirkpatrick's students could not carry out their normal microbe studies in the laboratory and therefore went on field trips to local creeks to see if they had any interesting microbes to offer.

The fact that viruses exist in nature is not surprising; they are the world's most widespread biological entity. They are everywhere and part of all kinds of microbial cycles and ecosystems. But the fact that five potentially new species have turned up in local creeks did surprise Clare Kirkpatrick.

While four of the five have not yet had their genomes sequenced, one species has now been fully sequenced, scientifically described, named and published in Microbiology Resource Announcements. Its name is Fyn8.

Many viruses are so-called bacteriophages (or phages), meaning that they kill bacteria, and Fyn8 is no exception. It can attack and kill the bacteria Pseudomonas aeruginosa.

Pseudomonas aeruginosa is a bacterium found naturally in soil and water. It is normally harmless towards healthy people, but like many other bacteria it has developed resistance to antibiotics and is found in hospitals.

For example, patients with wounds (like burn patients) and ventilator patients are at risk of getting an infection that cannot be fought with antibiotics.

The researchers have no doubt that Fyn8 can effectively kill Pseudomonas aeruginosa:

"We could see it with the naked eye: Clear holes appeared in the layer of Pseudomonas aeruginosa bacteria in our petri dishes, where Fyn8 had infected the bacterial cells, killed them, multiplied and proceeded to attack the next."

Considering that the world is facing a resistance crisis, in which more people may soon die from infections with resistant bacteria than from cancer, the new finding is of course interesting and raises the question: Can phages help us in the fight against resistant bacteria?

Research in this field has been uncommon until recently, both in academic research institutions and in pharmaceutical companies. In the past, and in other parts of the world, however, there has been some research, and phages have also been used to treat infections, in Eastern European countries in particular.

The phages were discovered at the beginning of the 20th century by researchers who had their bacterial cultures destroyed by virus infections.

The benefits of that discovery were obvious, but antibiotics, not phages, became the most widespread cure against bacterial infections.

One reason was perhaps that antibiotics were easy to produce and easy to use, while the phages were difficult to isolate and give to patients.

Another reason was probably also that an antibiotic dose could kill many different bacteria, while a phage only matches with a single bacterial species.

"But today it is relatively easy to make precision medicine for the individual patient. First you find out what exact bacteria a patient is infected with -- and then you can treat the patient with exactly the phage that will kill the bacteria," explained Clare Kirkpatrick.

She adds that this strategy works even on bacteria which are resistant to all known antibiotics.

Read more at Science Daily

How to pull carbon dioxide out of seawater

As carbon dioxide continues to build up in the Earth's atmosphere, research teams around the world have spent years seeking ways to remove the gas efficiently from the air. Meanwhile, the world's number one "sink" for carbon dioxide from the atmosphere is the ocean, which soaks up some 30 to 40 percent of all of the gas produced by human activities.

Recently, the possibility of removing carbon dioxide directly from ocean water has emerged as another promising possibility for mitigating CO2 emissions, one that could potentially someday even lead to overall net negative emissions. But, like air capture systems, the idea has not yet led to any widespread use, though there are a few companies attempting to enter this area.

Now, a team of researchers at MIT says they may have found the key to a truly efficient and inexpensive removal mechanism. The findings were reported this week in the journal Energy & Environmental Science, in a paper by MIT professors T. Alan Hatton and Kripa Varanasi, postdoc Seoni Kim, and graduate students Michael Nitzsche, Simon Rufer, and Jack Lake.

The existing methods for removing carbon dioxide from seawater apply a voltage across a stack of membranes to acidify a feed stream by water splitting. This converts bicarbonates in the water to molecules of CO2, which can then be removed under vacuum. Hatton, who is the Ralph Landau Professor of Chemical Engineering, notes that the membranes are expensive, and chemicals are required to drive the overall electrode reactions at either end of the stack, adding further to the expense and complexity of the processes. "We wanted to avoid the need for introducing chemicals to the anode and cathode half cells and to avoid the use of membranes if at all possible," he says.

The team came up with a reversible process consisting of membrane-free electrochemical cells. Reactive electrodes are used to release protons to the seawater fed to the cells, driving the release of the dissolved carbon dioxide from the water. The process is cyclic: It first acidifies the water to convert dissolved inorganic bicarbonates to molecular carbon dioxide, which is collected as a gas under vacuum. Then, the water is fed to a second set of cells with a reversed voltage, to recover the protons and turn the acidic water back to alkaline before releasing it back to the sea. Periodically, the roles of the two cells are reversed once one set of electrodes is depleted of protons (during acidification) and the other has been regenerated during alkalization.
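The chemistry underlying the acidification step can be illustrated with a simple carbonate speciation calculation: as pH drops, the equilibrium shifts from bicarbonate towards dissolved CO2, which is what the vacuum then strips out. The sketch below uses standard freshwater equilibrium constants for simplicity; seawater values differ somewhat, and none of the numbers come from the MIT paper.

```python
# Carbonate speciation versus pH (freshwater pKa values, for illustration).
pK1, pK2 = 6.35, 10.33

def speciation(pH):
    """Fractions of dissolved CO2, bicarbonate and carbonate at a given pH."""
    h = 10.0 ** (-pH)
    k1, k2 = 10.0 ** (-pK1), 10.0 ** (-pK2)
    denom = h * h + h * k1 + k1 * k2
    co2 = h * h / denom    # dissolved CO2 (plus carbonic acid)
    hco3 = h * k1 / denom  # bicarbonate
    co3 = k1 * k2 / denom  # carbonate
    return co2, hco3, co3

for pH in (8.1, 6.0, 4.0):  # roughly: seawater, mildly acidified, acidified
    co2, hco3, co3 = speciation(pH)
    print(f"pH {pH}: CO2 {co2:.0%}, HCO3- {hco3:.0%}, CO3-- {co3:.0%}")
```

At a typical seawater pH of about 8.1, almost all of the inorganic carbon sits as bicarbonate; by pH 4, nearly all of it is dissolved CO2, ready to be pulled off as gas.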

This removal of carbon dioxide and reinjection of alkaline water could slowly start to reverse, at least locally, the acidification of the oceans that has been caused by carbon dioxide buildup, which in turn has threatened coral reefs and shellfish, says Varanasi, a professor of mechanical engineering. The reinjection of alkaline water could be done through dispersed outlets or far offshore to avoid a local spike of alkalinity that could disrupt ecosystems, they say.

"We're not going to be able to treat the entire planet's emissions," Varanasi says. But the reinjection might be done in some cases in places such as fish farms, which tend to acidify the water, so this could be a way of helping to counter that effect.

Once the carbon dioxide is removed from the water, it still needs to be disposed of, as with other carbon removal processes. For example, it can be buried in deep geologic formations under the sea floor, or it can be chemically converted into a compound like ethanol, which can be used as a transportation fuel, or into other specialty chemicals. "You can certainly consider using the captured CO2 as a feedstock for chemicals or materials production, but you're not going to be able to use all of it as a feedstock," says Hatton. "You'll run out of markets for all the products you produce, so no matter what, a significant amount of the captured CO2 will need to be buried underground."

Initially at least, the idea would be to couple such systems with existing or planned infrastructure that already processes seawater, such as desalination plants. "This system is scalable so that we could integrate it potentially into existing processes that are already processing ocean water or in contact with ocean water," Varanasi says. There, the carbon dioxide removal could be a simple add-on to existing processes, which already return vast amounts of water to the sea, and it would not require consumables like chemical additives or membranes.

"With desalination plants, you're already pumping all the water, so why not co-locate there?" Varanasi says. "A bunch of capital costs associated with the way you move the water, and the permitting, all that could already be taken care of."

The system could also be implemented by ships that would process water as they travel, in order to help mitigate the significant contribution of ship traffic to overall emissions. There are already international mandates to lower shipping's emissions, and "this could help shipping companies offset some of their emissions, and turn ships into ocean scrubbers," Varanasi says.

The system could also be implemented at locations such as offshore drilling platforms, or at aquaculture farms. Eventually, it could lead to a deployment of free-standing carbon removal plants distributed globally.

The process could be more efficient than air-capture systems, Hatton says, because the concentration of carbon dioxide in seawater is more than 100 times greater than it is in air. In direct air-capture systems it is first necessary to capture and concentrate the gas before recovering it. "The oceans are large carbon sinks, however, so the capture step has already kind of been done for you," he says. "There's no capture step, only release." That means the volumes of material that need to be handled are much smaller, potentially simplifying the whole process and reducing the footprint requirements.

The research is continuing, with one goal being to find an alternative to the present step that requires a vacuum to remove the separated carbon dioxide from the water. Another need is to identify operating strategies to prevent precipitation of minerals that can foul the electrodes in the alkalinization cell, an inherent issue that reduces the overall efficiency in all reported approaches. Hatton notes that significant progress has been made on these issues, but that it is still too early to report on them. The team expects that the system could be ready for a practical demonstration project within about two years.

Read more at Science Daily

Feb 15, 2023

Four classes of planetary systems

In our solar system, everything seems to be in order: The smaller rocky planets, such as Venus, Earth or Mars, orbit relatively close to our star. The large gas and ice giants, such as Jupiter, Saturn or Neptune, on the other hand, move in wide orbits around the sun. In two studies published in the scientific journal Astronomy & Astrophysics, researchers from the Universities of Bern and Geneva and the National Centre of Competence in Research (NCCR) PlanetS show that our planetary system is unusual in this respect.

Like peas in a pod

"More than a decade ago, astronomers noticed, based on observations with the then groundbreaking Kepler telescope, that planets in other systems usually resemble their respective neighbours in size and mass – like peas in a pod," says study lead author Lokesh Mishra, researcher at the University of Bern and Geneva, as well as the NCCR PlanetS. But for a long time it was unclear whether this finding was due to limitations of observational methods. "It was not possible to determine whether the planets in any individual system were similar enough to fall into the class of the ‘peas in a pod’ systems, or whether they were rather different – just like in our solar system," says Mishra.

Therefore, the researcher developed a framework to determine the differences and similarities between planets of the same systems. And in doing so, he discovered that there are not two, but four such system architectures.

Four classes of planetary systems

"We call these four classes 'similar', 'ordered', 'anti-ordered' and 'mixed'," says Mishra. Planetary systems in which the masses of neighbouring planets are similar to each other, have similar architecture. Ordered planetary systems are those, in which the mass of the planets tends to increase with distance from the star – just as in our solar system. If, on the other hand, the mass of the planets roughly decreases with distance from the star, researchers speak of an anti-ordered architecture of the system. And mixed architectures occur, when the planetary masses in a system vary greatly from planet to planet.

"This framework can also be applied to any other measurements, such as radius, density or water fractions," says study co-author Yann Alibert, Professor of Planetary Science at the University of Bern and the NCCR PlanetS. "Now, for the first time, we have a tool to study planetary systems as a whole and compare them with other systems."

The findings also raise questions: Which architecture is the most common? Which factors control the emergence of an architecture type? Which factors do not play a role? Some of these, the researchers can answer.

A bridge spanning billions of years

"Our results show that 'similar' planetary systems are the most common type of architecture. About eight out of ten planetary systems around stars visible in the night sky have a 'similar' architecture," says Mishra. "This also explains why evidence of this architecture was found in the first few months of the Kepler mission." What surprised the team was that the "ordered" architecture – the one that also includes the solar system – seems to be the rarest class.

According to Mishra, there are indications that both the mass of the gas and dust disk from which the planets emerge, as well as the abundance of heavy elements in the respective star play a role. "From rather small, low-mass disks and stars with few heavy elements, 'similar' planetary systems emerge. Large, massive disks with many heavy elements in the star give rise to more ordered and anti-ordered systems. Mixed systems emerge from medium-sized disks. Dynamic interactions between planets – such as collisions or ejections – influence the final architecture," Mishra explains.

Read more at Science Daily

How does biodiversity change globally? Detecting accurate trends may be currently unfeasible

Existing data are too biased to provide a reliable picture of the global average of local species richness trends. This is the conclusion of an international research team led by the German Centre for Integrative Biodiversity Research (iDiv) and the Martin Luther University Halle-Wittenberg (MLU). The authors recommend prioritising local and regional assessments of biodiversity change instead of attempting to quantify global change and advocate standardised monitoring programmes, supported by models that take measurement errors and spatial biases into account. The study was published in the journal Ecography.

The global loss of biodiversity has been recognised by society and politicians as one of the most urgent challenges facing humanity in the coming generations. At the World Biodiversity Conference COP15 that recently took place in Montréal, the member states of the UN Convention on Biological Diversity (CBD) adopted new goals and rules accordingly to slow down and eventually reverse this decline. In order to be able to measure the successes of this new agreement, one of these targets calls for improved biodiversity monitoring to record and evaluate trends.

While there are many different ways to measure biodiversity, the most common is species richness at the local scale. However, although species are being lost at alarming rates at the global level, this does not always reflect what is occurring at the local scale. Previous global syntheses have indicated conflicting results on the extent and even the direction in which local species richness is changing. "There has been a heated debate in the scientific community on why major global syntheses so far have not found negative trends of local species richness," states Prof Henrique Pereira, head of the Biodiversity and Conservation Research Group at iDiv and MLU and last author of the study. "We show that the declines in local species richness are likely to be much smaller than many anticipated and that, in those conditions, even minor spatial biases and errors in monitoring lead to the lack of detection of global trends."

In order to create a global picture of what is occurring at the local scale, all available observation data must be compiled and evaluated across time. "The occurrence of species is recorded locally all over the world by many different people and organisations," says first author Dr Jose Valdez, a postdoctoral researcher at iDiv and MLU. "The problem with the data is that they were and are recorded under completely different conditions and mostly not under standardised rules. If you then pile them together, the errors and deviations add up, making the result very inaccurate."

The researchers were able to show that the monitoring results are significantly influenced by various factors, such as the time intervals between sampling, the size of the sampling sites, or small errors in counting the number of species at a site. A significant problem in recording global biodiversity trends is also the regional imbalance. For example, most of the data is collected in world regions such as Europe and the United States, particularly in habitats such as temperate deciduous and mixed forests. The underrepresentation of tropical regions and habitats, the areas with the highest species richness and also the largest losses, can lead to a significantly distorted impression of the global biodiversity status.

To find out whether and how these biases can be compensated for, the researchers simulated thousands of monitoring networks that varied in the above-mentioned factors. The basis for this was provided by the PREDICTS projections of local species richness trends, based on a model developed with a globally comprehensive compilation of data from over 32,000 sites worldwide and over 51,000 species. The researchers found that global changes in biodiversity could theoretically be detected with hundreds of perfectly sampled sites within a decade, or with thousands of sites within a three-year period.

Changes in species richness on a global scale are only detectable with an unrealistically large number of sampling sites

However, perfect sampling does not exist in reality. Studies show that monitoring data typically contain 10% to 30% errors due to species being missed or misidentified during sampling. The researchers found that adding even very small measurement errors of up to 5% drastically reduced the ability to detect any global change. With more realistic errors and further sources of imprecision, detecting the average global trend may simply be impossible.
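A toy power simulation makes the point concrete: even when every site shares the same small true decline, modest multiplicative measurement error can collapse the chance of detecting it. All parameters below are illustrative and are not the study's actual settings.

```python
import numpy as np

rng = np.random.default_rng(42)

def detects_decline(n_sites, years=10, richness=50, trend=-0.001, error=0.05):
    """True if a one-sided t-test on per-site richness changes finds the decline."""
    start = rng.poisson(richness, n_sites).astype(float)
    end = start * (1 + trend) ** years
    # Multiplicative measurement error applied to both surveys.
    start_obs = start * rng.normal(1, error, n_sites)
    end_obs = end * rng.normal(1, error, n_sites)
    diff = end_obs - start_obs
    t = diff.mean() / (diff.std(ddof=1) / np.sqrt(n_sites))
    return t < -1.645  # one-sided 5% significance threshold

for error in (0.0, 0.05):
    power = np.mean([detects_decline(100, error=error) for _ in range(500)])
    print(f"measurement error {error:.0%}: decline detected in {power:.0%} of runs")
```

With no measurement error, 100 perfectly sampled sites detect the simulated decline essentially every time; with 5% error, the detection rate drops sharply, mirroring the study's conclusion.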

"Our results demonstrate that capturing accurate trends in local species richness would require monitoring an unfeasibly large number of perfectly sampled sites," adds Jose Valdez. "However, the question is whether this would even be useful or meaningful for effective and responsive biodiversity conservation. Conservation strategies and measures are coordinated and implemented not on a global level, but at local and national scales. Measuring biodiversity trends at these smaller scales is not only more practical but also helps in understanding the drivers of biodiversity loss and assessing the progress of conservation policies."

"A substantial increase of biodiversity monitoring is needed, combined with analysis that uses models to fill in data gaps," says Henrique Pereira. The authors advise establishing a representative network of sampling sites around the world that provides independent, integrated, and regularly updated biodiversity data. Such an approach is currently being developed for the European Union with the EuropaBON project.

Read more at Science Daily

Oral bacteria may increase heart disease risk

Infection with a bacterium that causes gum disease and bad breath may increase the risk of heart disease, shows a study published today in eLife.

The study suggests another potential risk factor that physicians might screen for to identify individuals at risk of heart disease. It may also indicate that treatments for colonisation or infection with the oral bacterium Fusobacterium nucleatum could help reduce heart disease risk.

A combination of genetic and environmental risk factors contributes to heart disease, which is responsible for about one-third of all deaths worldwide. A build-up of plaque in the arteries that supply the heart with blood causes coronary heart disease -- the most common type of heart disease -- and can also lead to blockages that cause heart attacks. Previous studies have linked certain infections to an increased risk of plaque build-up.

"Although enormous progress has been made in understanding how coronary heart disease develops, our understanding of how infections, inflammation, and genetic risk factors contribute is still incomplete," says lead author Flavia Hodel, former PhD student at the School of Life Sciences of EPFL, Switzerland. "We wanted to help fill some of the gaps in our understanding of coronary heart disease by taking a more comprehensive look at the role of infections."

Hodel and colleagues analysed genetic information, health data, and blood samples from a subset of 3,459 people who participated in the CoLaus|PsyCoLaus Study -- a Swiss population-based cohort. Of the 3,459 participants, around 6% experienced a heart attack or another harmful cardiovascular event during the 12-year follow-up period. The team tested participants' blood samples for the presence of antibodies against 15 different viruses, six bacteria, and one parasite.

Once the authors adjusted the results for known cardiovascular risk factors, they found that antibodies against F. nucleatum, a sign of previous or current infection by the bacterium, were linked with a slightly increased risk of a cardiovascular event.

"F. nucleatum might contribute to cardiovascular risk through increased systemic inflammation due to bacterial presence in the mouth, or through direct colonisation of the arterial walls or plaque lining the arterial walls," Hodel explains.

The authors also confirmed that individuals with high genetic risk scores for coronary heart disease are at elevated risk for cardiovascular events, as previous studies have shown.

If future studies confirm the link between F. nucleatum and heart disease, the authors say it may lead to new approaches to identifying those at risk or preventing cardiovascular events.

Read more at Science Daily

Antibiotic consumption is currently not the main driver of aminoglycoside resistance spread, study suggests

The spread of antibiotic resistance, where infectious bacteria are able to defeat the drugs intended to kill them, may not be primarily driven by antibiotic consumption, according to a study published today in eLife.

Rather, the study suggests that the prevalence of antibiotic resistance across Europe between 1997 and 2018 is mostly explained by exchanges between ecosystems, and human exchanges such as merchandise imports or travel.

The results support the idea that interventional strategies based on reducing antibiotic use should be complemented by a stronger control of exchanges, especially between ecosystems.

Antibiotic resistance represents one of the largest threats to global public health, food security and global development faced today. Due to the spread of antibiotic resistance, a growing number of infections, such as pneumonia and tuberculosis, are becoming harder to treat, leading to longer hospital stays, greater costs and increased mortality.

"Many public health agencies have recommended reducing antibiotic use in response to the challenges caused by resistance," explains co-author Léa Pradier, a former PhD student at University of Montpellier, France. Pradier conducted the study alongside Stéphanie Bedhomme, a researcher at CNRS,. "However, there are cases where developed countries have reduced their antibiotic consumption and not halted the spread of antibiotic resistance genes across bacterial populations, implying other factors are at play," continues Pradier.

To explain this, Pradier and Bedhomme set out to describe the genetic, geographical and ecological distribution of resistances to a class of antibiotics called aminoglycosides, and from this information, quantify the relative contribution of different factors driving the spread of antibiotic resistance. Aminoglycosides have limited clinical use in humans, but are often a last resort for treating multi-resistant infections. They are also commonly used in the treatment of farmyard animals, meaning that resistance to them poses a significant threat to global food security.

They utilised a computational approach to screen the genetic information of over 160,000 bacterial genomes, looking for genes encoding aminoglycoside-modifying enzymes (AMEs) -- the most common mechanism of aminoglycoside resistance. They detected AME genes in around a quarter of the genomes screened, and in samples from all continents (excluding Antarctica) and all biomes investigated. The majority of AME-gene-carrying bacteria were found in clinical samples (55.3%), human samples (22.1%) and farm samples (12.3%).

Pradier and Bedhomme then focused on the distribution of AME genes across Europe from 1997 to 2018, the period for which the most detailed data were available. During this period, aminoglycoside usage remained relatively constant, but was highly variable between countries. Comparing the prevalence of AME genes between countries with different aminoglycoside usage over time, the team determined that aminoglycoside consumption was only a minor explanatory factor, with few positive or directional effects on AME gene prevalence.

Instead, the dataset implies that human exchanges through trade and migration, and exchanges between biomes, explain most of the spread and maintenance of antibiotic resistance when modelled over time, space and ecology. AME genes can be carried over continents by plant and animal products, and international trade and travellers, and may then spread to local strains of bacteria through a process called horizontal gene transfer -- the movement of genetic information between organisms. The pool of AME genes sampled from plants, wild animals and soil had the strongest overlap with other communities, suggesting these biomes are major hubs for AME gene propagation, either by horizontal resistance gene transfer or by resistant bacteria movement.

The findings suggest that the largest cause of AME gene spread is through the movement of antibiotic-resistant bacteria between ecosystems and biomes. This spread is aided by mobile genetic elements, which increase the likelihood for a genome to carry several copies of the same AME gene. This increases the expression of transferred AME genes and allows bacteria to evolve new antibiotic resistance functions through the duplicated sequences.

These findings are preliminary, as they are limited by the use of publicly available data rather than a dedicated sampling method. In addition, because the genetic data were sourced from many different research projects, the sample is biased towards industrialised countries and biomes of clinical interest, leaving some locations and biomes over-represented.

Read more at Science Daily

Feb 14, 2023

Coral reefs in the Eastern Pacific could survive into the 2060s

Scientists at the University of Miami Rosenstiel School of Marine, Atmospheric, and Earth Science found that some reefs in the tropical Pacific Ocean could maintain high coral cover into the second half of this century by shuffling the symbiotic algae they host. The findings offer a ray of hope in an often-dire picture of the future of coral reefs worldwide.

While global warming is causing the loss of coral reefs globally, scientists believe that some corals are increasing their tolerance to heat by changing the symbiotic algae communities they host, which through photosynthesis provide them with the energy they need to live.

"Our results suggest that some reefs in the eastern tropical Pacific, which includes the Pacific coasts of Panama, Costa Rica, Mexico, and Colombia, might be able to maintain high coral cover through the 2060s," said coral biologist Ana Palacio-Castro, lead author of the study, alumna of the Rosenstiel School, and a postdoctoral associate at the school's Cooperative Institute for Marine and Atmospheric Studies. "However, while this may be seen as good news for these reefs, their survival may not continue past that date unless we reduce global greenhouse gas emissions and curtail global warming on a larger scale."

Shallow coral reefs in the eastern tropical Pacific Ocean are predominantly built by branching corals in the genus Pocillopora, which are extremely important for the reefs in the region. The microscopic algae they host in their tissue harvest light to help the coral produce the energy it needs to grow. The loss of these symbiotic algae causes the coral to turn white, or bleach, and a bleached coral struggles to meet its energy needs, which can often prove fatal.

To better understand how corals improved their tolerance to heat stress, the researchers examined over 40 years' worth of coral reef-monitoring data from Panama, one of the longest datasets of its kind in the world. They analyzed temperature, coral cover, bleaching and mortality data spanning three ocean heatwaves -- in 1982-1983, 1997-1998, and 2015-2016 -- along with algal symbiont community data for the last two.

The analysis showed that the 1982-1983 heatwave significantly reduced coral cover on the reef, but the effects of the 1997-1998 and 2015-2016 El Niño events were milder, especially for corals in the genus Pocillopora -- sometimes known as cauliflower coral -- the predominant reef-building coral in the eastern tropical Pacific. They also confirmed that during strong ocean heatwaves, the heat-tolerant alga Durusdinium glynnii becomes increasingly common in this particular lineage of corals, allowing them to better withstand periods of elevated temperatures. When combined with climate projections of future heat stress, the reefs that were predominantly composed of Pocillopora corals and that hosted this heat-tolerant alga were found to be better equipped to survive and maintain high levels of coral cover well into the second half of the current century, indicating that some reef systems may be more resilient to warming than previously thought.

"This study shows that there are some unusual reefs that may be able to survive for several decades as a result of their ability to shuffle symbionts," said Andrew Baker, professor of marine biology and ecology at the Rosenstiel School, and senior author of the study. "While we don't think that most reefs will be able to survive in this way, it does suggest that vestiges of our current reefs may persist for longer than we previously thought, although potentially with many fewer species. Coral reefs are incredibly valuable natural assets, providing coastal protection and fisheries benefits, and supporting many local communities. We can still make a difference by protecting them."

Read more at Science Daily

Researchers solve a 150-year-old mystery: Aetosaur find involves juveniles

Aetosaurs had a small head and a crocodile-like body. The land dwellers were up to six meters long and widely distributed geographically. They died out about 204 million years ago, at the end of the Triassic. In Kaltental near Stuttgart, Germany, an assemblage of 24 Aetosaurus ferratus individuals, only between 20 and 82 centimeters long, was discovered in 1877. Since then, scientists have been puzzling over whether they were juveniles or small adults. A team led by Elżbieta M. Teschner from the University of Bonn has now solved the mystery: Bone examination of two specimens shows that they are juveniles. The results have now been published in the Journal of Vertebrate Paleontology.

Reptiles of the species Aetosaurus ferratus were discovered in a quarry near Kaltental, now a district of Stuttgart, and were first described nearly 150 years ago. The assemblage of about 24 individuals was dated to be about 215 million years old. "What was striking was that the total body length was only between 20 and 82 centimeters," says Elżbieta M. Teschner, who is pursuing a doctorate in paleontology at the University of Bonn while also conducting research at the University of Opole (Poland). "Interestingly, they were also the only fossils found in the area," she adds.

Oscar Fraas provided the first description of the skeletons in 1877 and suggested that they had washed up together. Sixteen years ago, Rainer R. Schoch of the State Natural History Museum in Stuttgart published a more detailed morphological study. Based on features visible to the naked eye, he determined that they must be juveniles. Together with Julia B. Desojo, an Argentine paleontologist from CONICET at the Museo de La Plata, they later described the skull of a larger skeleton of another aetosaur species (Paratypothorax andressorum). The find, more than 50 kilometers from Kaltental, could potentially be the adult form of the small aetosaur species known from the assemblage, they surmised.

Paleohistology enables age determination

The assumption only recently became certainty: With the help of the science of tissue growth (paleohistology) it has now become possible to examine the bones of the Kaltental find. "Long bones are a good model for calculating the age of animals because they deposit growth rings during their life that can be counted -- similar to the growth rings in tree trunks," says Dorota Konietzko-Meier, paleontologist from the University of Bonn. Based on this method, the relative individual age of the studied specimens could be determined.

Read more at Science Daily

Leonardo da Vinci's forgotten experiments explored gravity as a form of acceleration

Engineers from Caltech have discovered that Leonardo da Vinci's understanding of gravity -- though not wholly accurate -- was centuries ahead of his time.

In an article published in the journal Leonardo, the researchers draw upon a fresh look at one of da Vinci's notebooks to show that the famed polymath had devised experiments to demonstrate that gravity is a form of acceleration -- and that he further modeled the gravitational constant to around 97 percent accuracy.

Da Vinci, who lived from 1452 to 1519, was well ahead of the curve in exploring these concepts. It wasn't until 1604 that Galileo Galilei would theorize that the distance covered by a falling object was proportional to the square of time elapsed and not until the late 17th century that Sir Isaac Newton would expand on that to develop a law of universal gravitation, describing how objects are attracted to one another. Da Vinci's primary hurdle was being limited by the tools at his disposal. For example, he lacked a means of precisely measuring time as objects fell.

Da Vinci's experiments were first spotted by Mory Gharib, the Hans W. Liepmann Professor of Aeronautics and Medical Engineering, in the Codex Arundel, a collection of papers written by da Vinci that cover science, art, and personal topics. In early 2017, Gharib was exploring da Vinci's techniques of flow visualization to discuss with students he was teaching in a graduate course when he noticed a series of sketches showing triangles generated by sand-like particles pouring out from a jar in the newly released Codex Arundel, which can be viewed online courtesy of the British Library.

"What caught my eye was when he wrote 'Equatione di Moti' on the hypotenuse of one of his sketched triangles -- the one that was an isosceles right triangle," says Gharib, lead author of the Leonardo paper. "I became interested to see what Leonardo meant by that phrase."

To analyze the notes, Gharib worked with colleagues Chris Roh, at the time a postdoctoral researcher at Caltech and now an assistant professor at Cornell University, as well as Flavio Noca of the University of Applied Sciences and Arts Western Switzerland in Geneva. Noca provided translations of da Vinci's Italian notes (written in his famous left-handed mirror writing that reads from right to left) as the trio pored over the manuscript's diagrams.

In the papers, da Vinci describes an experiment in which a water pitcher would be moved along a straight path parallel to the ground, dumping out either water or a granular material (most likely sand) along the way. His notes make it clear that he was aware the water or sand would not fall at a constant velocity but rather would accelerate. He also noted that the material stops accelerating horizontally once it is no longer influenced by the pitcher, and that its acceleration is then purely downward, due to gravity.

If the pitcher moves at a constant speed, the line created by falling material is vertical, so no triangle forms. If the pitcher accelerates at a constant rate, the line created by the collection of falling material makes a straight but slanted line, which then forms a triangle. And, as da Vinci pointed out in a key diagram, if the pitcher's motion is accelerated at the same rate that gravity accelerates the falling material, it creates an equilateral triangle -- which is what Gharib originally noticed that da Vinci had highlighted with the note "Equatione di Moti," or "equalization (equivalence) of motions."

Da Vinci sought to mathematically describe that acceleration. It is here, according to the study's authors, that he didn't quite hit the mark. To explore da Vinci's process, the team used computer modeling to run his water vase experiment. Doing so revealed da Vinci's error.

"What we saw is that Leonardo wrestled with this, but he modeled it as the falling object's distance was proportional to 2 to the t power [with t representing time] instead proportional to t squared," Roh says. "It's wrong, but we later found out that he used this sort of wrong equation in the correct way." In his notes, da Vinci illustrated an object falling for up to four intervals of time -- a period through which graphs of both types of equations line up closely.

Read more at Science Daily

Can hearing loss be reversed? Research reveals clues that could regrow the cells that help us hear

Taking a bite of an apple is considered a healthy choice. But have you ever thought about putting in earplugs before your favorite band takes the stage?

Just like your future body will thank you for the apple, your future ears (specifically your cochlear hair cells) will thank you for protecting them. The most common type of hearing loss is progressive, because these hair cells -- the primary cells that detect sound waves -- cannot regenerate if damaged or lost. People who have repeated exposure to loud noises, like military personnel, construction workers, and musicians, are most at risk for this type of hearing loss. But it can happen to anyone over time (even concertgoers).

On the other hand, birds and fish can regenerate these hair cells, and now researchers at the Del Monte Institute for Neuroscience are getting closer to identifying the mechanisms that may promote this type of regeneration in mammals, as explained in research recently published in Frontiers in Cellular Neuroscience.

"We know from our previous work that expression of an active growth gene, called ERBB2, was able to activate the growth of new hair cells (in mammals), but we didn't fully understand why," said Patricia White, PhD, professor of Neuroscience and Otolaryngology at the University of Rochester Medical Center. The 2018 study led by Jingyuan Zhang, PhD, a postdoctoral fellow in the White lab at the time, found that activating the growth gene ERBB2 pathway triggered a cascading series of cellular events by which cochlear support cells began to multiply and activate other neighboring stem cells to become new sensory hair cells.

"This new study tells us how that activation is happening -- a significant advance toward the ultimate goal of generating new cochlear hair cells in mammals," said White.

Using single-cell RNA sequencing in mice, researchers compared cells with an overactive growth gene (ERBB2 signaling) with similar cells that lacked such signaling. They found the growth gene -- ERBB2 -- promoted stem cell-like development by initiating the expression of multiple proteins -- including SPP1, a protein that signals through the CD44 receptor. The CD44 receptor is known to be present in cochlear-supporting cells. This increase in cellular response promoted mitosis in the supporting cells, a key event for regeneration.

"When we checked this process in adult mice, we were able to show that ERBB2 expression drove the protein expression of SPP1 that is necessary to activate CD44 and grow new hair cells," said Dorota Piekna-Przybylska, PhD, a staff scientist in the White Lab and first author of the study. "This discovery has made it clear that regeneration is not only restricted to the early stages of development. We believe we can use these findings to drive regeneration in adults."

"We plan to further investigation of this phenomenon from a mechanistic perspective to determine whether it can improve auditory function after damage in mammals. That is the ultimate goal," said White.

Read more at Science Daily

Feb 13, 2023

New models explain canyons on Pluto moon

In 2015, when NASA's New Horizons spacecraft encountered the Pluto-Charon system, the Southwest Research Institute-led science team discovered interesting, geologically active objects instead of the inert icy orbs previously envisioned. An SwRI scientist has revisited the data to explore the source of cryovolcanic flows and an obvious belt of fractures on Pluto's large moon Charon. These new models suggest that when the moon's internal ocean froze, it may have formed the deep, elongated depressions along its girth but was less likely to lead to cryovolcanoes erupting with ice, water and other materials in its northern hemisphere.

"A combination of geological interpretations and thermal-orbital evolution models implies that Charon had a subsurface liquid ocean that eventually froze," said SwRI's Dr. Alyssa Rhoden, a specialist in the geophysics of icy satellites, particularly those containing oceans, and the evolution of giant planet satellite systems. She authored a new paper on the source of Charon's surface features in Icarus. "When an internal ocean freezes, it expands, creating large stresses in its icy shell and pressurizing the water below. We suspected this was the source of Charon's large canyons and cryovolcanic flows."

New ice forming on the inner layer of the existing ice shell can also stress the surface structure. To better understand the evolution of the moon's interior and surface, Rhoden modeled how fractures formed in Charon's ice shell as the ocean beneath it froze. The team modeled oceans of water, ammonia or a mixture of the two, reflecting open questions about the ocean's makeup. Ammonia can act as an antifreeze and prolong the life of the ocean; however, the results did not differ substantially.

When fractures penetrate the entire ice shell and tap the subsurface ocean, the liquid, pressurized by the increase in volume of the newly frozen ice, can be pushed through the fractures to erupt onto the surface. Models sought to identify the conditions that could create fractures that fully penetrate Charon's icy shell, linking its surface and subsurface water to allow ocean-sourced cryovolcanism. However, based on current models of Charon's interior evolution, ice shells were far too thick to be fully cracked by the stresses associated with ocean freezing.

The timing of the ocean freeze is also important. The synchronous and circular orbits of Pluto and Charon stabilized relatively early, so tidal heating only occurred during the first million years.

"Either Charon's ice shell was less than 6 miles (10 km) thick when the flows occurred, as opposed to the more than 60 miles or 100 km indicated, or the surface was not in direct communication with the ocean as part of the eruptive process," Rhoden said. "If Charon's ice shell had been thin enough to be fully cracked, it would imply substantially more ocean freezing than is indicated by the canyons identified on Charon's encounter hemisphere."

Fractures in the ice shell may be the initiation points of these canyons along the global tectonic belt of ridges that traverse the face of Charon, separating the northern and southern geological regions of the moon. If additional large extensional features were identified on the hemisphere not imaged by New Horizons, or compositional analysis could prove that Charon's cryovolcanism originated from the ocean, it would support the idea that its ocean was substantially thicker than expected.

Read more at Science Daily

Deep-sea black carbon comes from hydrothermal vents

Hydrothermal vents have been identified as a previously undiscovered source of dissolved black carbon in the oceans, furthering the understanding of the role of oceans as a carbon sink.

The ocean is one of the largest dynamic carbon sinks in the world, and is susceptible to increased carbon emissions from human activities. There are even proposals to use the ocean to sequester carbon in an effort to reduce carbon emissions. However, many of the processes by which the ocean functions as a carbon sink are not fully understood.

Associate Professor Youhei Yamashita and grad student Yutaro Mori at Hokkaido University, along with Professor Hiroshi Ogawa at AORI, The University of Tokyo, have revealed conclusive evidence that hydrothermal vents are a previously unknown source of dissolved black carbon in the deep ocean. Their discoveries were published in the journal Science Advances.

"One of the largest carbon pools on the Earth's surface is the dissolved organic carbon in the ocean," explains Ogawa. "We were interested in a portion of this pool, known as dissolved black carbon (DBC), which cannot be utilized by organisms. The source of DBC in the deep sea was unknown, although hydrothermal vents were suspected to be involved."

The researchers analyzed the distribution of DBC in the ocean basins of the North Pacific Ocean and Eastern South Pacific Ocean, and compared the data with previously reported concentrations of a helium isotope that is associated with hydrothermal vent emissions, as well as oxygen utilization in these areas.

Their findings showed that hydrothermal vents were an important source of DBC in the Pacific Ocean. This hydrothermal DBC is most likely formed due to the mixing of the hot fluids from hydrothermal vents with cold seawater, and is transported over long distances -- up to thousands of kilometers away.

Read more at Science Daily

A more healthful, gluten-free flour made from sweet potatoes

Orange, starchy sweet potatoes are great mashed, cut into fries or just roasted whole. But you likely haven't considered grinding them into a flour and baking them into your next batch of cookies -- or at least, not yet! Recent research published in ACS Food Science & Technology has reported the best method to turn sweet potatoes into gluten-free flours that are packed with antioxidants and perfect for thickening or baking.

Wheat flour has been used for tens of thousands of years, and likely isn't going away anytime soon. But for those who face gluten intolerance or have celiac disease, the gluten proteins in wheat flour can lead to stomach pain, nausea and even intestinal damage. Several gluten-free options are either already available or in development, including those made from banana peels, almonds and various grains. But an up-and-coming contender is derived from sweet potatoes, as the hearty tuber is packed with antioxidants and nutrients, along with a slightly sweet flavor and hint of color.

Before it can become a common ingredient in store-bought baked goods, the best practices for processing the flour need to be established. Though previous studies have investigated a variety of parameters, including the way the potatoes are dried and milled, none have yet determined how these different steps could interact with one another to produce flours best suited for certain products. So, Ofelia Rouzaud-Sández and colleagues wanted to investigate how two drying temperatures and grinding processes affected the properties of orange sweet potato flour.

To create their flours, the team prepared samples of orange sweet potatoes (Ipomoea batatas) dried at either 122 or 176 F (50 or 80 C), then ground them once or twice. They investigated many parameters for each sample, comparing them to store-bought sweet potato flour and a traditional wheat one. Regardless of drying temperature, grinding once damaged just enough of the starch to make it ideal for fermented products, such as gluten-free breads. Grinding twice further disrupted the starch's crystallinity, producing thickening agents ideal for porridges or sauces. When baked into a loaf of bread, the high-temperature-dried, single-ground sample featured higher antioxidant capacity than both the store-bought version and the wheat flour. The researchers say that these findings could help expand the applications for orange sweet potato flour, both for home cooks and the packaged food industry.
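The study's design is a simple two-by-two factorial and can be laid out explicitly. In the sketch below, the drying temperatures and grind counts come from the article, while the suitability labels paraphrase its findings; this is an illustration, not the study's data table.

```python
# Sketch of the 2x2 factorial design: two drying temperatures crossed
# with one or two grinding passes. Suitability labels paraphrase the
# article's findings; they are not measurements from the paper.
from itertools import product

drying_temps_f = (122, 176)   # degrees F (50 and 80 degrees C)
grind_passes = (1, 2)

def suitability(passes):
    # Per the article: one grind damages just enough starch for fermented
    # products; two grinds disrupt crystallinity further, suiting thickeners.
    if passes == 1:
        return "fermented products, e.g. gluten-free bread"
    return "thickening agents for porridges or sauces"

for temp, passes in product(drying_temps_f, grind_passes):
    print(f"dried at {temp} F, ground x{passes}: {suitability(passes)}")
```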

From Science Daily

The relationship between ghosting and closure

Odds are, you know someone who has been ghosted. And according to a new study from the University of Georgia, it can be a haunting experience.

A recent study conducted by researcher and corresponding author Christina Leckfor and University of Mississippi researcher Natasha Wood found nearly two-thirds of participants have ghosted -- ended a relationship by ignoring the other person, without offering a clear explanation -- and have been ghosted.

And as online dating, dating apps and other social technologies grow in popularity, so does the likelihood that someone is left on read after a few dates. Yet despite its frequent occurrence, little is known about why people ghost or the psychological effects of this social phenomenon.

"Ghosting is becoming a common strategy, and it creates an ambiguous situation where one party doesn't really know what's going on," said Leckfor, a doctoral student in the UGA Department of Psychology. "We were interested in what individual differences or personal characteristics might influence a person's intentions to use ghosting. We also wanted to know if people with a high need for closure were less likely to use ghosting, or if they would hurt more after being ghosted."

On the receiving end of a breakup, ghosting was a negative experience for almost all participants. But for individuals who yearn for closure, the negative effects of ghosting are even more profound.

To gauge the effect of a breakup, study participants reflected on a past relationship, either a time they were ghosted or directly rejected, and then answered questions about their psychological needs satisfaction -- feelings of belonging, self-esteem, control and meaningful existence. Ghosted participants had some of the lowest needs satisfaction, meaning they were hit hardest by the rejection, and those who wanted closure reported even lower needs satisfaction.
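In statistical terms, the "magnifying effect" described below is an interaction: the impact of being ghosted on needs satisfaction grows with a person's need for closure. Here is a minimal sketch of how such a moderation test might look, using simulated data; the variable names and effect sizes are invented, not the study's.

```python
# Sketch: does need for closure moderate the effect of being ghosted on
# needs satisfaction? Simulated data; all coefficients are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
ghosted = rng.integers(0, 2, n)    # 1 = ghosted, 0 = directly rejected
closure = rng.normal(0, 1, n)      # standardized need for closure
# Build in the "magnifying" pattern: ghosting hurts more at high closure.
needs = 5 - 0.8*ghosted - 0.3*closure - 0.4*ghosted*closure + rng.normal(0, 1, n)

df = pd.DataFrame({"ghosted": ghosted, "closure": closure, "needs": needs})
fit = smf.ols("needs ~ ghosted * closure", data=df).fit()
print(fit.summary().tables[1])  # a negative ghosted:closure term = magnification
```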

"For recipients, desire for closure has this magnifying effect. When someone with a high need for closure recalled a time where they were ghosted or directly rejected, it hurt more than if they had a low need for closure," Leckfor said. "But they also felt more positive after recalling times when they were acknowledged by their partner."

In contrast, when someone considered initiating a breakup, the connection between closure and ghosting varied.

"We actually found that people who had a higher need for closure were slightly more likely to intend to use ghosting to end a relationship," Leckfor said. "Even though things may be ambiguous on the recipient side, the person who is ghosting sees it as a distinct end to the relationship. Those results weren't definitive in our study, but they pose an interesting avenue for future research."

And ghosting's not just for dating apps anymore. More than half of the study participants wrote about a time when they were ghosted by a friend, rather than a romantic partner.

"The individuals who were ghosted by a friend reported feeling just as bad about the relationship as those who wrote about a time when they were ghosted by a romantic partner," Leckfor said. "In psychology in general, a lot of literature regarding adult relationships focuses on romantic relationships. This [research] shows that friendships are really important to study as well."

It also relates back to the role of technology in our relationships. There have been several studies on how people initiate, maintain and end relationships without technology, but as more human connectivity moves to social media, dating apps, texting or Zoom, those relationships can change. And individual traits, such as a need for closure, will factor into how we use those technologies.

Read more at Science Daily

Feb 12, 2023

Footprints of galactic immigration uncovered in Andromeda galaxy

Over the course of billions of years, galaxies grow and evolve by forging new stars and merging with other galaxies through aptly named "galactic immigration" events. Astronomers try to uncover the histories of these immigration events by studying the motions of individual stars throughout a galaxy and its extended halo of stars and dark matter. Until now, however, such cosmic archaeology has only been possible in our own galaxy, the Milky Way.

An international team of researchers has uncovered striking new evidence of a large galactic immigration event in the Andromeda Galaxy, the Milky Way's nearest large galactic neighbor. The observations were made with the DOE's Dark Energy Spectroscopic Instrument (DESI) on the Nicholas U. Mayall 4-meter Telescope at Kitt Peak National Observatory, a Program of NSF's NOIRLab.

By measuring the motions of nearly 7500 stars in the inner halo of the Andromeda Galaxy, also known as Messier 31 (M31), the team discovered telltale patterns in the positions and motions of stars that revealed how these stars began their lives as part of another galaxy that merged with M31 about 2 billion years ago. While such patterns have long been predicted by theory, they have never been seen with such clarity in any galaxy.
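Surveys like this one measure each star's line-of-sight motion from the Doppler shift of known spectral lines. A back-of-the-envelope sketch of that arithmetic follows; the observed wavelength is invented for illustration, not a DESI measurement.

```python
# Sketch: line-of-sight velocity from the Doppler shift of a spectral line.
# Non-relativistic approximation; positive velocity means receding.
C_KM_S = 299_792.458  # speed of light in km/s

def radial_velocity(lambda_obs_nm, lambda_rest_nm):
    return C_KM_S * (lambda_obs_nm - lambda_rest_nm) / lambda_rest_nm

# A calcium II triplet line (rest 854.2 nm) hypothetically observed at 853.4 nm:
print(f"{radial_velocity(853.4, 854.2):+.0f} km/s")  # about -281 km/s, approaching
```

Mapping thousands of such velocities across a galaxy's halo is what reveals coherent groups of stars that still move together, the leftovers of a merger.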

"Our new observations of the Milky Way's nearest large galactic neighbor, the Andromeda Galaxy, reveal evidence of a galactic immigration event in exquisite detail," explained Arjun Dey, astronomer at NSF's NOIRLab and the lead author of the paper presenting this research. "Although the night sky may seem unchanging, the Universe is a dynamic place. Galaxies like M31 and our Milky Way are constructed from the building blocks of many smaller galaxies over cosmic history. "

"We have never before seen this so clearly in the motions of stars, nor had we seen some of the structures that result from this merger," said Sergey Koposov, an astrophysicist at the University of Edinburgh and coauthor of the paper. "Our emerging picture is that the history of the Andromeda Galaxy is similar to that of our own Galaxy, the Milky Way. The inner halos of both galaxies are dominated by a single immigration event."

This research sheds light on not only the history of our galactic neighbors but also the history of our own galaxy. Most of the stars in the Milky Way's halo were formed in another galaxy and later migrated into our own in a galactic merger 8-10 billion years ago. Studying the relics of a similar, but more recent, galaxy merger in M31 gives astronomers a window onto one of the major events in the Milky Way's past.

To trace the history of migration in M31, the team turned to DESI. DESI was constructed to map tens of millions of galaxies and quasars in the nearby Universe in order to measure the effect of dark energy on the expansion of the Universe. It is the most powerful multi-object survey spectrograph in the world, and is capable of measuring the spectra of more than 100,000 galaxies a night. DESI's world-class capabilities can also be put to use closer to home, however, and the instrument was crucial to the team's survey of M31.

"This science could not have been done at any other facility in the world. DESI's amazing efficiency, throughput, and field of view make it the best system in the world to carry out a survey of the stars in the Andromeda Galaxy," said Dey. "In only a few hours of observing time, DESI was able to surpass more than a decade of spectroscopy with much larger telescopes."

Even though the Mayall Telescope was completed 50 years ago (it achieved first light in 1973), it remains a world-class astronomical facility thanks to continued upgrades and state-of-the-art instrumentation. "Fifty years sounds like a long time, and naïvely one might think that's the natural lifetime of a facility," said co-author Joan R. Najita, also at NOIRLab. "But with renewal and reuse, a venerable telescope like the Mayall can continue to make amazing discoveries despite being relatively small by today's standards."

The research was carried out in collaboration with two Harvard University undergraduates, Gabriel Maxemin and Joshua Josephy-Zack, who connected with the project through the Radcliffe Institute for Advanced Study. Najita was a Radcliffe Fellow from 2021 to 2022.

The team now plans to use the unparalleled capabilities of DESI and the Mayall Telescope to explore more of M31's outlying stars, with the aim of revealing its structure and immigration history in unprecedented detail.

Read more at Science Daily

Computer model IDs roles of individual genes in early embryonic development

Computer software developed at Washington University School of Medicine in St. Louis can predict what happens to complex gene networks when individual genes are missing or dialed up more than usual. Such genetic networks play key roles in early embryonic development, guiding stem cells to form specific cell types that then build tissues and organs. Mapping the roles of single genes in these networks is key to understanding healthy development and finding ways to regrow damaged cells and tissues. Likewise, understanding genetic errors could provide insight into birth defects, miscarriage or even cancer.

Such genetic experiments -- typically conducted in the laboratory in animal models such as mice and zebrafish -- have been a mainstay of developmental biology research for decades. Much can be learned about a gene's function in animal studies in which a gene is missing or overexpressed, but these experiments are also expensive and time-consuming.

In contrast, the newly developed software called CellOracle -- described Feb. 8 in the journal Nature -- can model hundreds of genetic experiments in a matter of minutes, helping scientists identify key genes that play important roles in development but that may have been missed by older, slower techniques. CellOracle is open source, with the code and information about the software freely available online.

"The scientific community has collected enough data from animal experiments that we now can do more than observe biology happening -- we can build computer models of how genes interact with each other and predict what will happen when one gene is missing," said senior author Samantha A. Morris, PhD, an associate professor of developmental biology and of genetics. "And we can do this without any experimental intervention. Once we identify an important gene, we still need to do the lab experiments to verify the finding. But this computational method helps scientists narrow down which genes are most important."

CellOracle, which was included in a recent technology feature in the journal Nature, is one of a number of relatively new software systems designed to model gene regulation in cells. But rather than simply identifying these networks, CellOracle is unique in its ability to let researchers test what happens when a network is disrupted in a specific way.

Morris and her team harnessed the well-known developmental processes of blood cell formation in mice and humans and embryonic development in zebrafish to validate that CellOracle works properly. Their studies, in collaboration with the lab of co-author and zebrafish development expert Lilianna Solnica-Krezel, PhD, the Alan A. and Edith L. Wolff Distinguished Professor and head of the Department of Developmental Biology, also uncovered new roles for certain genes in zebrafish development that had not previously been identified.

And in a related paper online in the journal Stem Cell Reports, Morris and her colleagues used CellOracle to predict what happens when certain genes are dialed up beyond their usual expression levels.

"We found that if we dialed up two specific genes, we can transform skin cells into a type of cell that can repair damaged intestine and liver," Morris said. "In terms of regenerative medicine, these predictive tools are valuable in modeling how we can reprogram cells into becoming the types of cells that can promote healing after injury or disease."

According to Morris, most laboratory methods for converting stem cells into different cell types, such as blood cells or liver cells, are inefficient: maybe 2% of the cells arrive at the desired destination. Tools like CellOracle can help scientists identify which factors should be added to the cocktail to guide more cells into the desired cell type, such as those capable of repairing the gut and liver.

At present, CellOracle can model cell identity in more than 10 different species, including humans, mice, zebrafish, yeast, chickens, guinea pigs, rats, fruit flies, roundworms, the Arabidopsis plant and two species of frog.

Read more at Science Daily

Whiskers help nectar-eating 'acro bats' hover like hummingbirds

From dragonflies to hummingbirds, hovering flight is among the most complex and captivating forms of animal movement -- a physiological feat of size, musculature and wing development.

For nectar-feeding bats that hover as they feed from flowers, this aerial maneuver also depends on extra-long whiskers unlike those of most other bat species, according to a Dartmouth College-led study in the journal Proceedings of the Royal Society B. The researchers used high-speed cameras to capture how the stiff hairs jutting forward from the face of nectar-eating bats provide enhanced spatial information that guides the animals as they swoop in to quickly feed -- within a second or less -- on succulent flowers without landing.

"The whiskers of nectar-feeding bats are critical sensory organs that provide high-quality input the brain works with to optimize hovering. It's a cool junction between sensory biology and bio-kinematics, between form and function," said lead author Eran Amichai, a postdoctoral researcher in biological sciences at Dartmouth who studies echolocation in bats. Co-authors are postdoctoral fellow David Boerma from the American Museum of Natural history, animal behavioralist Rachel Page at the Smithsonian Tropical Research Institute in Panama, Sharon Swartz, a professor of biology and engineering at Brown University, and Hannah ter Hofstede, a past assistant professor of biological sciences at Dartmouth now at the University of Windsor in Canada.

The researchers worked at the Smithsonian Tropical Research Institute recording Pallas's long-tongued bats -- a South and Central American bat that has the fastest metabolism of any mammal -- as they drank from hand-blown glass flowers designed for the study to replicate the plants the animals feed from. High-speed infrared cameras captured photos and video of the bats as they descended upon the glass flowers and navigated their muzzles and tongues into the "bloom" to eat the nectar. Feedings typically lasted between half a second and one second.

The researchers found that bats with clipped whiskers were less agile and accurate during feeding and flight than animals with untouched whiskers. The animals with clipped whiskers were held for a few days until the hairs regrew, then released back into the jungle. "Clipping the whiskers doesn't reduce the bats' ability to feed, they just do it a little less gracefully," Amichai said. "If it were gymnastics, they'd get an 8.5 instead of a 9.8."

The role of long whiskers in nectar-feeding bats' flight control provides new insight into the coevolution of the bats with the flowers they feed on, Amichai said. The majority of bats possess short whiskers not arranged in any particular pattern or direction. But the researchers found that whisker length in nectar-eating bats evolved at least twice to -- along with long tongues and faces -- potentially help them better navigate the deep chambers of the flowers they prefer. In turn, the long reach these flowers require results in more pollen sticking to their pollinators and thus the broader proliferation of their kind.

The researchers plan to continue their work using higher-resolution images, flowers that move, interactions with predators and other expansions on the experimental model, Amichai said.

In the meantime, the latest study offers a fascinating glimpse into how nectar-feeding bats combine various forms of sensory information to navigate the world around them, Amichai said. Their world is a combination of scent, echolocation, spatial memory, knowledge of the seasons and the physical sensation and equilibrium provided by their whiskers.

"I find thinking in these terms of switching back and forth between completely different ways to perceive the world -- and seamlessly integrating their input -- to be a mind-blowing concept," Amichai said. Understanding how animals perceive and interact with their surroundings helps scientists develop better conservation strategies, he said.

Read more at Science Daily