Dec 31, 2021

Happy New Year

It's that time of the year again: New Year's Eve, the end of one year and soon the beginning of a new one. One tradition here at A Magical Journey is to put on ABBA's Happy New Year, and what are traditions for if not to break them from time to time? Here's ABBA's Happy New Year on guitar, played by Gabriella Quevedo:



Dec 30, 2021

Himalayan glaciers melting at 'exceptional rate'

The accelerating melting of the Himalayan glaciers threatens the water supply of millions of people in Asia, new research warns.

The study, led by the University of Leeds, concludes that Himalayan glaciers have lost ice ten times more quickly over the last few decades than on average since the last major glacier expansion 400-700 years ago, a period known as the Little Ice Age.

The study also reveals that Himalayan glaciers are shrinking far more rapidly than glaciers in other parts of the world -- a rate of loss the researchers describe as "exceptional."

The study, published in Scientific Reports, reconstructed the size and ice surfaces of 14,798 Himalayan glaciers during the Little Ice Age. The researchers calculate that the glaciers have lost around 40 per cent of their area since then -- shrinking from a peak of 28,000 km2 to around 19,600 km2 today.

During that period they have also lost between 390 km3 and 586 km3 of ice -- the equivalent of all the ice contained today in the central European Alps, the Caucasus, and Scandinavia combined. The water released through that melting has raised sea levels across the world by between 0.92 mm and 1.38 mm, the team calculates.
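The reported sea-level figures can be sanity-checked with a back-of-the-envelope conversion from ice volume to sea-level equivalent. This is a rough sketch, not the study's method: the ice density and global ocean area below are standard approximate values, not parameters taken from the paper.

```python
# Rough sea-level-equivalent check for the reported ice loss.
# Assumed constants (approximate, not the study's exact parameters):
ICE_DENSITY = 917.0      # kg/m^3, typical glacier ice
WATER_DENSITY = 1000.0   # kg/m^3
OCEAN_AREA_M2 = 3.61e14  # global ocean surface area, m^2

def sea_level_rise_mm(ice_volume_km3: float) -> float:
    """Convert a volume of glacier ice (km^3) to global mean sea-level
    rise (mm), assuming the meltwater spreads evenly over the ocean."""
    ice_volume_m3 = ice_volume_km3 * 1e9
    water_volume_m3 = ice_volume_m3 * ICE_DENSITY / WATER_DENSITY
    return water_volume_m3 / OCEAN_AREA_M2 * 1000.0  # metres -> mm

# The study's reported range of 390-586 km^3 of ice lost:
low, high = sea_level_rise_mm(390), sea_level_rise_mm(586)
print(f"{low:.2f} mm to {high:.2f} mm")  # roughly 0.99 mm to 1.49 mm
```

This crude conversion lands close to (slightly above) the paper's 0.92-1.38 mm; the difference presumably reflects the study's more careful handling of ice density, ocean area, and where meltwater ends up.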

Dr Jonathan Carrivick, corresponding author and Deputy Head of the University of Leeds School of Geography, said: "Our findings clearly show that ice is now being lost from Himalayan glaciers at a rate that is at least ten times higher than the average rate over past centuries. This acceleration in the rate of loss has only emerged within the last few decades, and coincides with human-induced climate change."

The Himalayan mountain range is home to the world's third-largest amount of glacier ice, after Antarctica and the Arctic, and is often referred to as 'the Third Pole'.

The acceleration of melting of Himalayan glaciers has significant implications for hundreds of millions of people who depend on Asia's major river systems for food and energy. These rivers include the Brahmaputra, Ganges and Indus.

The team used satellite images and digital elevation models to produce outlines of the glaciers' extent 400-700 years ago and to 'reconstruct' the ice surface. The satellite images revealed ridges that mark the former glacier boundaries, and the researchers used the geometry of these ridges to estimate the former glacier extent and ice surface elevation. Comparing the reconstruction with the glaciers today determined the volume, and hence the mass, lost between the Little Ice Age and now.

The Himalayan glaciers are generally losing mass faster in the eastern regions -- taking in east Nepal and Bhutan north of the main divide. The study suggests this variation is probably due to differences in geographical features on the two sides of the mountain range and their interaction with the atmosphere -- resulting in different weather patterns.

Himalayan glaciers are also declining faster where they end in lakes -- which have several warming effects -- than where they end on land. The number and size of these lakes are increasing, so continued acceleration in mass loss can be expected.

Similarly, glaciers which have significant amounts of natural debris upon their surfaces are also losing mass more quickly: they contributed around 46.5% of total volume loss despite making up only around 7.5% of the total number of glaciers.

Dr Carrivick said: "While we must act urgently to reduce and mitigate the impact of human-made climate change on the glaciers and meltwater-fed rivers, the modelling of that impact on glaciers must also take account of the role of factors such as lakes and debris."

Read more at Science Daily

Understanding human-elephant conflict and vulnerability in the face of climate change

Human-wildlife conflict is a central issue in the conservation sciences. Whether it be reintroducing wolves into key ecosystems of the southwestern U.S. -- which is having an impact on livestock and cattle ranchers -- or the ongoing challenge of elephants living alongside communities on the African savannah, the effects of this conflict on people's livelihoods can be significant. In African landscapes where growing human and elephant populations compete over limited resources, for example, human-elephant conflict causes crop loss, and may even result in human injury and death and subsequent retaliatory killing of wildlife.

Despite all that is known about the challenges of human-wildlife conflict, however, measuring its impact on human livelihoods is complicated. An international team of researchers, including Northern Arizona University professor Duan Biggs, spent three years investigating the dynamics between wildlife, people and the environment across the Kavango Zambezi Transfrontier Conservation Area, the world's largest terrestrial transboundary conservation area, extending across five African countries.

The study, led by Jonathan Salerno of Colorado State University and funded by the National Science Foundation, involved a large team of collaborators, including researchers from the University of Colorado Boulder, the University of Louisville, the University of California Berkeley, the University of North Carolina Wilmington, the University of Botswana, the University of Namibia, Stellenbosch University and Griffith University as well as The Nature Conservancy South Africa and the Department of National Parks and Wildlife of Zambia.

As described in their paper recently published in Current Biology, "Wildlife impacts and changing climate pose compounding threats to human food security," the team used interdisciplinary approaches across a wide study area to better understand how climate change interacts with human-elephant conflict to affect household food insecurity.

The goals of the project were to identify the socio-ecological conditions and patterns that affect household and community vulnerability and to determine leverage points that may aid in mitigating how land-use decisions and land-cover change affect vulnerability in the Kavango Zambezi Transfrontier Conservation Area in southern Africa. The investigators combined household surveys and participatory mapping to characterize how indicators of vulnerability shape smallholders' land use decisions. They integrated data on the environment, market factors, government policy and subsidies, culture and ethnicity and the presence and intervention of non-governmental organizations with remotely sensed imagery to compare trajectories of land-use and land-cover change with underlying socioecological drivers. By advancing the understanding of vulnerability, this research identifies how vulnerability influences and is affected by socioeconomic and biophysical drivers at multiple scales.

"The project as a whole is focused on understanding human vulnerability and adaptive capacity in the context of environmental change," Salerno said. "Taking a systems-level view of this problem is important because we're studying human vulnerability, which can be defined and impacted by many different things."

An important finding of the study was that the people within these affected communities have the adaptive capacity to gather food resources and buffer the impacts of elephant conflict and short rain seasons. Although the individual communities may be resilient, larger institutions such as governments and aid organizations are not currently providing sufficient support for effective risk mitigation or risk reduction strategies at the household level. The team also argues that, in addition to habitat protection, appropriate resources and funding need to go toward human-wildlife conflict mitigation programs to support the conservation of African savannah elephants.

Biggs was born and raised in southern Africa and has worked extensively on community engagement for conservation and human-wildlife conflict. He contributed his experience from the region and social-ecological expertise to the study.

"Our findings highlighting the dependence of both humans and elephants on the same resources, especially during drought, shows that we need to tackle the challenge of human-elephant co-existence and local adaptations to climatic change simultaneously," Biggs said.

Read more at Science Daily

Blueprint reveals how plants build a sugar transport lane

A tiny region at the root tip has been found to be responsible for orchestrating the growth and development of the complex network of vascular tissues that transport sugars through plant roots.

In a paper published in Science today, an international team of scientists present a detailed blueprint of how plants construct phloem cells -- the tissue responsible for transporting and accumulating sugars and starch in the parts of the plant that we harvest (seeds, fruits and storage tubers) to feed much of the world.

This pivotal research reveals how global signals in root meristems coordinate distinct maturation phases of the phloem tissue.

Phloem is a highly specialised vascular tissue that forms an interconnected network of continuous strands throughout a plant's body. It transports sugars, nutrients and a range of signalling molecules between leaves, roots, flowers and fruits.

As a result, phloem is central to plant function. Understanding how the phloem network is initiated and develops is important for future applications in agriculture, forestry and biotechnology as it could reveal how to better transport this sugar energy to where it is needed.

How do plants build a sugar lane in a multi-lane highway?

Plant roots continue to grow throughout a plant's life. This phenomenon, known as indeterminate growth, means roots continually elongate as they add new tissues to the tip of the root -- like constructing a never-ending highway. A continuous file of specialised phloem cells running the length of roots (analogous to a lane on a highway) delivers the primary nutrient, sucrose, to the parts of the plant where it is needed for growth. To fulfil this vital role, phloem tissue must develop and mature rapidly so it can supply sugars to surrounding tissues -- akin to building a service lane that needs to be completed in the first stage of constructing a multi-lane highway.

The problem that has long puzzled plant scientists is how a single instructive gradient of proteins is able to stage the construction phases across all the different specialised cell files (highway lanes) present in roots. How one cell type reads the same gradient as its neighbours, yet interprets it differently to stage its own specialised development, is a question that plant scientists have been working to resolve.

Over the past 15 years, researchers in Yrjö Helariutta's teams at the University of Cambridge and the University of Helsinki have uncovered the central role of cell-to-cell communication and complex feedback mechanisms involved in vascular patterning. This new research, undertaken with collaborators at New York University and North Carolina State University, reveals how this single lane of phloem cells is constructed independently of surrounding cells.

The Sainsbury/Helsinki group dissected each step in the construction of the phloem cell file (the sugar transport lane) in the model plant Arabidopsis thaliana using single-cell RNA-seq and live imaging. Their work showed how the proteins that control the broad maturation gradient of the root interact with the genetic machinery that specifically controls phloem development.

This is one mechanism that appears to help the phloem cell file to fast-track maturation using its own machinery to interpret the maturation cues. Dr Pawel Roszak, co-first-author of the study and researcher at the Sainsbury Laboratory Cambridge University (SLCU), explains: "We have shown how global signals in the root meristem interact with the cell type specific factors to determine distinct phases of phloem development at the cellular resolution. Using cell sorting followed by deep, high-resolution single-cell sequencing of the underlying gene regulatory network revealed a "seesaw" mechanism of reciprocal genetic repression that triggers rapid developmental transitions."

The group also showed how phloem development is staged over time, with early genetic programs inhibiting late genetic programs and vice versa -- just as asphalt-laying work crews hand over construction to lane painters in the later stages of highway construction. In addition, they showed how early phloem regulators instructed specific genes to split the phloem cells into two different subtypes -- like the construction of a fork in the road leading to two separate destinations.

Co-leader of the work, Professor Yrjö Helariutta, said his teams' work reconstructed the steps from the birth to the terminal differentiation of protophloem in the Arabidopsis root. Helariutta said: "Broad maturation gradients interfacing with cell-type specific transcriptional regulators to stage cellular differentiation is required for phloem development."

"By combining single-cell transcriptomics with live imaging, here we have mapped the cellular events from the birth of the phloem cell to its terminal differentiation into phloem sieve element cells. This allowed us to uncover genetic mechanisms that coordinate cellular maturation and connect the timing of the genetic cascade to broadly expressed master regulators of meristem maturation. The precise timing of developmental mechanisms was critical for proper phloem development, with apparent "fail safe" mechanisms to ensure transitions."

Read more at Science Daily

Microglial methylation 'landscape' in human brain

In the central nervous system, microglial cells play critical roles in development, aging, brain homeostasis, and pathology. Recent studies have shown variation in the gene-expression profile and phenotype of microglia across brain regions and between different age and disease states. But the molecular mechanisms that contribute to these transcriptomic changes in the human brain are not well understood. Now, a new study targets the methylation profile of microglia from human brain.

The study appears in Biological Psychiatry, published by Elsevier.

Microglia, the brain's own immune cells, were once thought of as a homogenous population that was either "activated" or "inactivated," with either pro-inflammatory or neuroprotective effects. But the cells are now recognized to have a vast array of phenotypes depending on environmental conditions with myriad functional consequences. Microglia are increasingly appreciated as critical players in neurologic and psychiatric disorders.

Fatemeh Haghighi, PhD, senior author of the new work, said: "To address this gap in knowledge, we set out to characterize the DNA methylation landscape of human primary microglia cells and factors that contribute to variations in the microglia methylome."

DNA methylation is the main form of epigenetic regulation, which determines the pattern of which genes are being turned "on" or "off" in various circumstances over time.

The researchers studied isolated microglia cells from post-mortem human brain tissue from 22 donors of various ages, including 1 patient with schizophrenia, 13 with mood disorder, and 8 controls with no psychiatric disorder, taken from 4 brain regions. They analyzed the microglia using genome-scale methylation microarrays.

Unsurprisingly, microglia showed DNA methylation profiles that were distinct from other cells in the central nervous system. But less expected, said Haghighi, "we found that interindividual differences rather than brain region differences had a much larger effect on the DNA methylation variability." In addition, an exploratory analysis showed differences in the methylation profile of microglia from brains of subjects with psychiatric disorders compared to controls.

Read more at Science Daily

Dec 29, 2021

Templating approach stabilizes 'ideal' material for alternative solar cells

Researchers have developed a method to stabilise a promising material known as perovskite for cheap solar cells, without compromising its near-perfect performance.

The researchers, from the University of Cambridge, used an organic molecule as a 'template' to guide perovskite films into the desired phase as they form. Their results are reported in the journal Science.

Perovskite materials offer a cheaper alternative to silicon for producing optoelectronic devices such as solar cells and LEDs.

There are many different perovskites, resulting from different combinations of elements, but one of the most promising to emerge in recent years is the formamidinium (FA)-based FAPbI3 crystal.

The compound is thermally stable and its inherent 'bandgap' -- the property most closely linked to the energy output of the device -- is not far off ideal for photovoltaic applications.

For these reasons, it has been the focus of efforts to develop commercially available perovskite solar cells. However, the compound can exist in two slightly different phases, with one phase leading to excellent photovoltaic performance, and the other resulting in very little energy output.

"A big problem with FAPbI3 is that the phase that you want is only stable at temperatures above 150 degrees Celsius," said co-author Tiarnan Doherty from Cambridge's Cavendish Laboratory. "At room temperature, it transitions into another phase, which is really bad for photovoltaics."

Recent solutions to keep the material in its desired phase at lower temperatures have involved adding different positive and negative ions into the compound.

"That's been successful and has led to record photovoltaic devices but there are still local power losses that occur," said Doherty. "You end up with local regions in the film that aren't in the right phase."

Little was known about why the additions of these ions improved stability overall, or even what the resulting perovskite structure looked like.

"There was this common consensus that when people stabilise these materials, they're an ideal cubic structure," said Doherty. "But what we've shown is that by adding all these other things, they're not cubic at all, they're very slightly distorted. There's a very subtle structural distortion that gives some inherent stability at room temperature."

The distortion is so minor that it had previously gone undetected, until Doherty and colleagues used sensitive structural measurement techniques that have not been widely used on perovskite materials.

The team used scanning electron diffraction, nano-X-ray diffraction and nuclear magnetic resonance to see, for the first time, what this stable phase really looked like.

"Once we figured out that it was the slight structural distortion giving this stability, we looked for ways to achieve this in the film preparation without adding any other elements into the mix."

Co-author Satyawan Nagane used an organic molecule called ethylenediaminetetraacetic acid (EDTA) as an additive in the perovskite precursor solution, where it acts as a templating agent, guiding the perovskite into the desired phase as it forms. The EDTA binds to the FAPbI3 surface to give a structure-directing effect, but does not incorporate into the FAPbI3 structure itself.

"With this method, we can achieve that desired band gap because we're not adding anything extra into the material, it's just a template to guide the formation of a film with the distorted structure -- and the resulting film is extremely stable," said Nagane.

"In this way, you can create this slightly distorted structure in just the pristine FAPbI3 compound, without modifying the other electronic properties of what is essentially a near-perfect compound for perovskite photovoltaics," said co-author Dominik Kubicki from the Cavendish Laboratory, who is now based at the University of Warwick.

The researchers hope this fundamental study will help improve perovskite stability and performance. Their own future work will involve integrating this approach into prototype devices to explore how this technique may help them achieve the perfect perovskite photovoltaic cells.

"These findings change our optimisation strategy and manufacturing guidelines for these materials," said senior author Dr Sam Stranks from Cambridge's Department of Chemical Engineering & Biotechnology. "Even small pockets that aren't slightly distorted will lead to performance losses, and so manufacturing lines will need to have very precise control of how and where the different components and 'distorting' additives are deposited. This will ensure the small distortion is uniform everywhere -- with no exceptions."

Read more at Science Daily

Earth's first giant

The two-meter skull of a newly discovered species of giant ichthyosaur, the earliest known, is shedding new light on the marine reptiles' rapid growth into behemoths of the Dinosaurian oceans, and helping us better understand the journey of modern cetaceans (whales and dolphins) to becoming the largest animals to ever inhabit the Earth.

While dinosaurs ruled the land, ichthyosaurs and other aquatic reptiles (that were emphatically not dinosaurs) ruled the waves, reaching similarly gargantuan sizes and species diversity. Evolving fins and hydrodynamic body-shapes seen in both fish and whales, ichthyosaurs swam the ancient oceans for nearly the entirety of the Age of Dinosaurs.

"Ichthyosaurs derive from an as yet unknown group of land-living reptiles and were air-breathing themselves," says lead author Dr. Martin Sander, paleontologist at the University of Bonn and Research Associate with the Dinosaur Institute at the Natural History Museum of Los Angeles County (NHM). "From the first skeleton discoveries in southern England and Germany over 250 years ago, these 'fish-saurians' were among the first large fossil reptiles known to science, long before the dinosaurs, and they have captured the popular imagination ever since."

Excavated from a rock unit called the Fossil Hill Member in the Augusta Mountains of Nevada, the well-preserved skull, along with part of the backbone, shoulder, and forefin, dates back to the Middle Triassic (247.2-237 million years ago), representing the earliest case of an ichthyosaur reaching epic proportions. As big as a large sperm whale at more than 17 meters (55.78 feet) long, the newly named Cymbospondylus youngorum is the largest animal yet discovered from that time period, on land or in the sea. In fact, it was the first giant creature to ever inhabit the Earth that we know of.

"The importance of the find was not immediately apparent," notes Dr. Sander, "because only a few vertebrae were exposed on the side of the canyon. However, the anatomy of the vertebrae suggested that the front end of the animal might still be hidden in the rocks. Then, one cold September day in 2011, the crew needed a warm-up and tested this suggestion by excavation, finding the skull, forelimbs, and chest region."

The new name for the species, C. youngorum, honors a happy coincidence, the sponsoring of the fieldwork by Great Basin Brewery of Reno, owned and operated by Tom and Bonda Young, the inventors of the locally famous Icky beer which features an ichthyosaur on its label.

In other mountain ranges of Nevada, paleontologists have been recovering fossils from the Fossil Hill Member's limestone, shale, and siltstone since 1902, opening a window into the Triassic. The mountains connect our present to ancient oceans and have produced many species of ammonites, shelled ancestors of modern cephalopods like cuttlefish and octopuses, as well as marine reptiles. All these animal specimens are collectively known as the Fossil Hill Fauna, representing many of C. youngorum's prey and competitors.

C. youngorum stalked the oceans some 246 million years ago, or only about three million years after the first ichthyosaurs got their fins wet, an amazingly short time to get this big. The elongated snout and conical teeth suggest that C. youngorum preyed on squid and fish, but its size meant that it could have hunted smaller and juvenile marine reptiles as well.

The giant predator probably had some hefty competition. Through sophisticated computational modeling, the authors examined the likely energy running through the Fossil Hill Fauna's food web, recreating the ancient environment through data, finding that marine food webs were able to support a few more colossal meat-eating ichthyosaurs. Ichthyosaurs of different sizes and survival strategies proliferated, comparable to modern cetaceans' -- from relatively small dolphins to massive filter-feeding baleen whales, and giant squid-hunting sperm whales.

Co-author and ecological modeler Dr. Eva Maria Griebeler from the University of Mainz in Germany notes, "due to their large size and resulting energy demands, the densities of the largest ichthyosaurs from the Fossil Hill Fauna including C. youngorum must have been substantially lower than suggested by our field census. The ecological functioning of this food web from ecological modeling was very exciting as modern highly productive primary producers were absent in Mesozoic food webs and were an important driver in the size evolution of whales."

Whales and ichthyosaurs share more than a size range. They have similar body plans, and both initially arose after mass extinctions. These similarities make them scientifically valuable for comparative study. The authors combined computer modeling and traditional paleontology to study how these marine animals reached record-setting sizes independently.

"One rather unique aspect of this project is the integrative nature of our approach. We first had to describe the anatomy of the giant skull in detail and determine how this animal is related to other ichthyosaurs," says senior author Dr. Lars Schmitz, Associate Professor of Biology at Scripps College and Dinosaur Institute Research Associate. "We did not stop there, as we wanted to understand the significance of the new discovery in the context of the large-scale evolutionary pattern of ichthyosaur and whale body sizes, and how the fossil ecosystem of the Fossil Hill Fauna may have functioned. Both the evolutionary and ecological analyses required a substantial amount of computation, ultimately leading to a confluence of modeling with traditional paleontology."

They found that while both cetaceans and ichthyosaurs evolved very large body sizes, their respective evolutionary trajectories toward gigantism were different. Ichthyosaurs had an initial boom in size, becoming giants early on in their evolutionary history, while whales took much longer to reach the outer limits of huge. They found a connection between large size and raptorial hunting -- think of a sperm whale diving down to hunt giant squid -- and a connection between large size and a loss of teeth -- think of the giant filter-feeding whales that are the largest animals ever to live on Earth.

Ichthyosaurs' initial foray into gigantism was likely thanks to the boom in ammonites and jawless eel-like conodonts filling the ecological void following the end-Permian mass extinction. While their evolutionary routes were different, both whales and ichthyosaurs relied on exploiting niches in the food chain to make it really big.

Read more at Science Daily

Geneticists’ new research on ancient Britain contains insights on language, ancestry, kinship, milk

New research revealing a major migration to the island of Great Britain offers fresh insights into the languages spoken at the time, the ancestry of present-day England and Wales, and even ancient habits of dairy consumption.

The findings are described in Nature by a team of more than 200 international researchers led by Harvard geneticists David Reich and Nick Patterson. Michael Isakov, a Harvard undergraduate who discovered the existence of the 3,000-year-old migration, is one of the co-first authors.

The analysis is one of two Reich-led studies of DNA data from ancient Britain that Nature published on Tuesday. Both highlight technological advances in large-scale genomics and open new windows into the lives of ancient people.

"This shows the power of large-scale genetic data in concert with archaeological and other data to get rich information about our past from a time before writing," said Reich, a professor in the Department of Human Evolutionary Biology and a professor of genetics at Harvard Medical School. "The studies are not only important for Great Britain, where we now have far more ancient DNA data than in any other region, but also because of what they show about the promise of similar studies elsewhere in the world."

The researchers analyzed the DNA of 793 newly reported individuals in the largest genome-wide study involving ancient humans. Their findings reveal a large-scale migration likely from somewhere in France to the southern part of Great Britain, or modern-day England and Wales, that eventually replaced about 50 percent of the ancestry of the island during the Late Bronze Age (1200 to 800 B.C.).

The study supports a recent theory that early Celtic languages came to Great Britain from France during the Late Bronze Age. It challenges two prominent theories: that the languages arrived hundreds of years later, in the Iron Age, or 1,500 years earlier at the dawn of the Bronze Age.

Previous research has shown that large-scale movement often accompanied language changes in pre-state societies. The Reich team argues that this untold migration event makes more sense for the spread of early Celtic languages into Britain.

"By using genetic data to document times when there were large-scale movements of people into a region, we can identify plausible times for a language shift," Reich said. "Known Celtic languages are too similar in their vocabularies to plausibly descend from a common ancestor 4,500 years ago, which is the time of the earlier pulse of large-scale migration, and very little migration occurred in the Iron Age. If you're a serious scholar, the genetic data should make you adjust your beliefs: downweighting the scenario of early Celtic language coming in the Iron Age [and early Bronze Age] and upweighting the Late Bronze Age."

As part of the genetic analysis, the researchers found that the ability to digest cow's milk dramatically increased in Britain from 1200 to 200 B.C., which is about a millennium earlier than it did in central Europe. These findings illuminate a different role for dairy consumption in Britain during this period compared with the rest of mainland Europe. More study is needed to define that role, the researchers said. Increased milk tolerance would have provided a big advantage in the form of higher survival rates among the children of people carrying this genetic adaptation.

The newly discovered ancestry change happened around 3,000 years ago, more than a millennium and a half before the Saxon period. The team was aware of a migration into England at some point during this gap because of an observation they made in research published in 2016. That study showed that contemporary English people have more DNA from early European farmers than people who lived in England about 4,000 years ago. The team set out to collect DNA from later periods to detect the shift.

The discontinuity -- a specific point in time when the percentage of farmer ancestry in English genomes changed -- was first noticed in the summer of 2019 by Isakov, an applied mathematics concentrator. He had started working as a researcher in Reich's lab the summer after his first year and was able to increase the statistical power of the group's ancestry tests. When he noticed some outliers in the data from people living 3,000 years ago, he led a closer analysis and discovered the migration.

"It's an extraordinary outcome and I'm very happy that I was able to get through it," said Isakov, who will graduate in May.

The second paper looks at kinship practices of 35 individuals who lived about 5,700 years ago and were buried in a tomb at Hazleton North in Gloucestershire, England. The researchers found a 27-person family -- three times larger than the second-largest documented ancient family -- whose kin relationships could be precisely determined by analyzing their DNA. The team created a family tree that covered five generations and found examples of polygyny, polyandry, adoption, and a key role for both patrilineal and matrilineal descent.

The lab's research illustrates the interdisciplinary collaborations that are required to tell the richest stories of the ancient past, Isakov said.

"It's sort of incredible that we have geneticists, we have statisticians, we have archaeologists, linguists, and even chemical analysis coming together. I think that the fact that we're able to like merge all these fields and have an actual insight that's culturally important is a great example of interdisciplinary science."

Read more at Science Daily

Researchers develop structural blueprint of nanoparticles to target white blood cells responsible for acute lung inflammation

The COVID-19 pandemic highlighted the devastating impact of acute lung inflammation (ALI), which is part of the acute respiratory distress syndrome (ARDS) that is the dominant cause of death in COVID-19. A potential new route to the diagnosis and treatment of ARDS comes from studying how neutrophils -- the white blood cells responsible for detecting and eliminating harmful particles in the body -- use a material's surface structure to decide what to take up, favoring particles that exhibit "protein clumping," according to new research from the Perelman School of Medicine at the University of Pennsylvania. The findings are published in Nature Nanotechnology.

Researchers investigated how neutrophils are able to differentiate between bacteria to be destroyed and other compounds in the bloodstream, such as cholesterol particles. They tested a library of 23 different protein-based nanoparticles in mice with ALI, which revealed a set of "rules" that predict uptake by neutrophils. Neutrophils don't take up symmetrical, rigid particles, such as viruses, but they do take up particles that exhibit "protein clumping," which the researchers call nanoparticles with agglutinated protein (NAPs).

"We want to utilize the existing function of neutrophils that identifies and eliminates invaders to inform how to design a 'Trojan horse' nanoparticle that overactive neutrophils will intake and deliver treatment to alleviate ALI and ARDS," said study lead author Jacob Myerson, PhD, a postdoctoral research fellow in the Department of Systems Pharmacology and Translational Therapeutics. "In order to build this 'Trojan horse' delivery system, though, we had to determine how neutrophils identify which particles in the blood to take up."

ALI and ARDS are life-threatening forms of respiratory failure with high morbidity and mortality rates. Prior to COVID-19, there were 190,000 annual cases of ARDS in the U.S. and 75,000 deaths, with ARDS caused by pneumonia, sepsis, and trauma. However, COVID has increased ARDS cases into the millions. When ALI or ARDS occurs, the lung's air sacs recruit neutrophils to the lungs in order to eliminate circulating microbes. This process causes neutrophils to release compounds that further aggravate lung injury and damage the air sacs, so patients develop low blood oxygen levels. Unfortunately, despite the severity of ALI/ARDS, there is no effective drug to control it, and treatment currently focuses on supporting patients while the lungs naturally, but slowly, heal.

To address ARDS and other medical problems, researchers at Penn and elsewhere have been using nanoparticles to concentrate drugs in injured or diseased organs. Such nanoparticles are also being used for gene therapy and immunotherapy.

The researchers note that while the development of viable ALI/ARDS therapies using nanoparticles to deliver treatments via neutrophils is a long way off, this research represents a significant step in understanding the condition and the function of the immune system.

"Now that we have determined that neutrophils patrol for nanoparticles with agglutinated protein, our next step is to understand how and why other microbes, like viruses, which are rigid and symmetrical, evolved to evade neutrophils," said senior author Jacob Brenner, MD, PhD, an associate professor of Pulmonary Medicine in the Division of Pulmonary, Allergy, and Critical Care. "With this knowledge, we can continue to utilize this unique combination of material science and engineering, to create disease-specific therapies that target more advanced and complicated pathologies."

Read more at Science Daily

Dec 28, 2021

Earth and Mars were formed from inner Solar System material

Earth and Mars were formed from material that largely originated in the inner Solar System; only a few percent of the building blocks of these two planets originated beyond Jupiter's orbit. A group of researchers led by the University of Münster (Germany) report these findings today in the journal Science Advances. They present the most comprehensive comparison to date of the isotopic composition of Earth, Mars and pristine building material from the inner and outer Solar System. Some of this material is today still found largely unaltered in meteorites. The results of the study have far-reaching consequences for our understanding of the process that formed the planets Mercury, Venus, Earth, and Mars. The theory postulating that the four rocky planets grew to their present size by accumulating millimeter-sized dust pebbles from the outer Solar System is not tenable.

Approximately 4.6 billion years ago in the early days of our Solar System, a disk of dust and gases orbited the young Sun. Two theories describe how, in the course of millions of years, the inner rocky planets formed from this original building material. According to the older theory, the dust in the inner Solar System agglomerated into ever larger chunks, gradually reaching approximately the size of our Moon. Collisions of these planetary embryos finally produced the inner planets Mercury, Venus, Earth, and Mars. A newer theory, however, prefers a different growth process: millimeter-sized dust "pebbles" migrated from the outer Solar System towards the Sun. On their way, they were accreted onto the planetary embryos of the inner Solar System, and step by step enlarged them to their present size.

Both theories are based on theoretical models and computer simulations aimed at reconstructing the conditions and dynamics in the early Solar System; both describe a possible path of planet formation. But which one is right? Which process actually took place? To answer these questions, in their current study researchers from the University of Münster (Germany), the Observatoire de la Cote d'Azur (France), the California Institute of Technology (USA), the Natural History Museum Berlin (Germany), and the Free University of Berlin (Germany) determined the exact composition of the rocky planets Earth and Mars. "We wanted to find out whether the building blocks of Earth and Mars originated in the outer or inner Solar System," says Dr. Christoph Burkhardt of the University of Münster, the study's first author. To this end, the isotopes of the rare metals titanium, zirconium and molybdenum found in minute traces in the outer, silicate-rich layers of both planets provide crucial clues. Isotopes are different varieties of the same element, which differ only in the weight of their atomic nucleus.

Meteorites as a reference

Scientists assume that in the early Solar System these and other metal isotopes were not evenly distributed. Rather, their abundance depended on the distance from the Sun. They therefore hold valuable information about where in the early Solar System a certain body's building blocks originated.

As a reference for the original isotopic inventory of the outer and inner Solar System, the researchers used two types of meteorites. These chunks of rock generally found their way to Earth from the asteroid belt, the region between the orbits of Mars and Jupiter. They are considered to be largely pristine material from the beginnings of the Solar System. While so-called carbonaceous chondrites, which can contain up to a few percent carbon, originated beyond Jupiter's orbit and only later relocated to the asteroid belt due to the influence of the growing gas giants, their more carbon-depleted cousins, the non-carbonaceous chondrites, are true children of the inner Solar System.

The precise isotopic composition of Earth's accessible outer rock layers and that of both types of meteorites have been studied for some time; however, there have been no comparably comprehensive analyses of Martian rocks. In their current study, the researchers now examined samples from a total of 17 Martian meteorites, which can be assigned to six typical types of Martian rock. In addition, the scientists for the first time investigated the abundances of three different metal isotopes.

The samples of Martian meteorites were first powdered and subjected to complex chemical pretreatment. Using a multicollector plasma mass spectrometer at the Institute of Planetology at the University of Münster, the researchers were then able to detect tiny amounts of titanium, zirconium, and molybdenum isotopes. They then performed computer simulations to calculate the ratio in which building material found today in carbonaceous and non-carbonaceous chondrites must have been incorporated into Earth and Mars in order to reproduce their measured compositions. In doing so, they considered two different phases of accretion to account for the different history of the titanium and zirconium isotopes as well as of the molybdenum isotopes, respectively. Unlike titanium and zirconium, molybdenum accumulates mainly in the metallic planetary core. The tiny amounts still found today in the silicate-rich outer layers can therefore only have been added during the very last phase of the planet's growth.
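The mixing calculation described above can be sketched as a small least-squares problem: given the isotope anomalies of the two chondrite endmembers and a planet's measured composition, solve for the fraction of carbonaceous material that best reproduces the measurements. The numbers below are illustrative placeholders, not the study's actual data, and the three-component case with the "lost" inner-Solar-System material would add a further endmember.

```python
import numpy as np

# Hypothetical endmember isotope anomalies (e.g. for Ti, Zr, and Mo isotopes)
# for carbonaceous (CC) and non-carbonaceous (NC) chondrites.
# These values are placeholders for illustration only.
cc = np.array([2.0, 1.0, 1.1])      # carbonaceous chondrite endmember
nc = np.array([-0.3, 0.1, -0.2])    # non-carbonaceous chondrite endmember

# Hypothetical "measured" planetary composition to be reproduced.
planet = np.array([-0.2, 0.14, -0.15])

# A two-component mix must satisfy: f_cc * cc + (1 - f_cc) * nc ~ planet,
# i.e. f_cc * (cc - nc) ~ (planet - nc). Solve by least squares.
A = (cc - nc).reshape(-1, 1)
b = planet - nc
f_cc, *_ = np.linalg.lstsq(A, b, rcond=None)
print(f"Best-fit carbonaceous fraction: {float(f_cc[0]):.3f}")
```

With these placeholder numbers the best-fit carbonaceous fraction comes out at a few percent, which is the same order as the roughly four percent the study reports for Earth and Mars.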

The researchers' results show that the outer rock layers of Earth and Mars have little in common with the carbonaceous chondrites of the outer Solar System. They account for only about four percent of both planets' original building blocks. "If early Earth and Mars had mainly accreted dust grains from the outer Solar System, this value should be almost ten times higher," says Prof. Dr. Thorsten Kleine of the University of Münster, who is also director at the Max Planck Institute for Solar System Research in Göttingen. "We thus cannot confirm this theory of the formation of the inner planets," he adds.

Lost building material

But the composition of Earth and Mars does not exactly match the material of the non-carbonaceous chondrites either. The computer simulations suggest that another, different kind of building material must also have been in play. "The isotopic composition of this third type of building material as inferred by our computer simulations implies it must have originated in the innermost region of the Solar System," explains Christoph Burkhardt. Since bodies from such close proximity to the Sun were almost never scattered into the asteroid belt, this material was almost completely absorbed into the inner planets and thus does not occur in meteorites. "It is, so to speak, 'lost building material' to which we no longer have direct access today," says Thorsten Kleine.

Read more at Science Daily

Radioactive radiation could also damage biological tissue via a previously unnoticed mechanism

When cells are exposed to ionizing radiation, more destructive chain reactions may occur than previously thought. An international team led by researchers from the Max Planck Institute for Nuclear Physics in Heidelberg has for the first time observed intermolecular Coulombic decay in organic molecules. This is triggered by ionizing radiation such as from radioactivity or from space. The effect damages two neighbouring molecules and ultimately leads to the breaking of bonds -- like the ones in DNA and proteins. The finding not only improves the understanding of radiation damage but could also help in the search for more effective substances to support radiation therapy.

Sometimes radiation damage cannot be great enough -- especially when it comes to destroying tumour tissue with ionizing radiation. In radiation therapy, substances are used that specifically enhance the damage the radiation causes in the tumour tissue. "The intermolecular Coulombic decay we found could help make such sensitizers more effective," says Alexander Dorn, who heads a research group at the Max Planck Institute for Nuclear Physics and was instrumental in the current study. His team's observations could also improve our understanding of how artificial or natural ionizing radiation damages the genetic material of healthy tissue.

Excess energy leads to a Coulomb explosion

The DNA double helix of the genome resembles a rope ladder with rungs of nucleic base pairs. "Because experiments with the free nucleic bases are difficult, we initially studied pairs of benzene molecules as a model system," explains Dorn. These hydrocarbon rings are connected in a similar way to the nucleic bases stacked on top of each other in a strand of DNA. The researchers bombarded the benzene pairs with electrons, thereby imitating radioactive radiation to a certain extent. When an electron hit a benzene molecule, it was ionized and charged with a lot of energy. The team has now observed that the molecule transferred some of this energy to its partner molecule. This energy boost was enough to ionize the second molecule as well. Both molecules were thus positively charged. Of course, that didn't last long. The two molecular ions repelled each other and flew apart in a Coulomb explosion.

Until now, scientists had assumed that ionizing radiation damages biomolecules mainly indirectly. The high-energy radiation also ionizes the water of which a cell is largely composed and which surrounds biomolecules such as DNA. The ionized water molecules -- especially the hydroxyl radicals they produce -- then attack the DNA. And if an electron of the beta radiation or a gamma quantum does hit a DNA molecule directly, the excess energy is normally dissipated by processes in the molecule itself. It thus remains intact. Or at least that was the assumption up to now. In any case, the weak bonds between different molecules or different parts of the molecule -- as they exist in DNA and proteins -- should not be affected by this either. However, in their reaction microscope, the researchers observed that radioactive radiation can indeed break such bonds. This instrument allows them not only to detect the two separating benzene molecules and measure their energy but also to characterize the electrons emitted.

Fatal consequences of multiple DNA breaks

"It is not yet clear how the intermolecular Coulombic decay affects the DNA strand," says Dorn. If a single strand in the DNA ladder breaks, the consequences should not be too serious. However, the mechanism observed also releases several electrons that can "blow up" further pairs of molecules. And if both strands of DNA are broken in the immediate vicinity, this could have fatal consequences.

Read more at Science Daily

‘Battle of the sexes’ begins in womb as father and mother’s genes tussle over nutrition

Cambridge scientists have identified a key signal that the fetus uses to control its supply of nutrients from the placenta, revealing a tug-of-war between genes inherited from the father and from the mother. The study, carried out in mice, could help explain why some babies grow poorly in the womb.

As the fetus grows, it needs to communicate its increasing needs for food to the mother. It receives its nourishment via blood vessels in the placenta, a specialised organ that contains cells from both baby and mother.

Between 10% and 15% of babies grow poorly in the womb, often showing reduced growth of blood vessels in the placenta. In humans, these blood vessels expand dramatically between mid and late gestation, reaching a total length of approximately 320 kilometres at term.

In a study published today in Developmental Cell, a team led by scientists at the University of Cambridge used genetically engineered mice to show how the fetus produces a signal to encourage growth of blood vessels within the placenta. This signal also causes modifications to other cells of the placenta to allow for more nutrients from the mother to go through to the fetus.

Dr Ionel Sandovici, the paper's first author, said: "As it grows in the womb, the fetus needs food from its mum, and healthy blood vessels in the placenta are essential to help it get the correct amount of nutrients it needs.

"We've identified one way that the fetus uses to communicate with the placenta to prompt the correct expansion of these blood vessels. When this communication breaks down, the blood vessels don't develop properly and the baby will struggle to get all the food it needs."

The team found that the fetus sends a signal known as IGF2 that reaches the placenta through the umbilical cord. In humans, levels of IGF2 in the umbilical cord progressively increase between 29 weeks of gestation and term: too much IGF2 is associated with too much growth, while not enough IGF2 is associated with too little growth. Babies that are too large or too small are more likely to suffer or even die at birth, and have a higher risk of developing diabetes and heart problems as adults.

Dr Sandovici added: "We've known for some time that IGF2 promotes the growth of the organs where it is produced. In this study, we've shown that IGF2 also acts like a classical hormone -- it's produced by the fetus, goes into the fetal blood, through the umbilical cord and to the placenta, where it acts."

Particularly interesting is what their findings reveal about the tussle taking place in the womb.

In mice, the response to IGF2 in the blood vessels of the placenta is mediated by another protein, called IGF2R. The two genes that produce IGF2 and IGF2R are 'imprinted' -- a process by which molecular switches on the genes identify their parental origin and can turn the genes on or off. In this case, only the copy of the igf2 gene inherited from the father is active, while only the copy of igf2r inherited from the mother is active.

Lead author Dr Miguel Constância, said: "One theory about imprinted genes is that paternally-expressed genes are greedy and selfish. They want to extract as many resources as possible from the mother. But maternally-expressed genes act as countermeasures to balance these demands."

"In our study, the father's gene drives the fetus's demands for larger blood vessels and more nutrients, while the mother's gene in the placenta tries to control how much nourishment she provides. There's a tug-of-war taking place, a battle of the sexes at the level of the genome."

The team say their findings will allow a better understanding of how the fetus, placenta and mother communicate with each other during pregnancy. This in turn could lead to ways of measuring levels of IGF2 in the fetus and finding ways to use medication to normalise these levels or promote normal development of placental vasculature.

The researchers used mice, as it is possible to manipulate their genes to mimic different developmental conditions. This enables them to study in detail the different mechanisms taking place. The physiology and biology of mice have many similarities with those of humans, allowing researchers to model human pregnancy, in order to understand it better.

Read more at Science Daily

Contorted oceanic plate caused complex quake off New Zealand’s East Cape

Subduction zones, where a slab of oceanic plate is pushed beneath another tectonic plate down into the mantle, cause the world's largest and most destructive earthquakes. Reconstructing the geometry and stress conditions of the subducted slabs at subduction zones is crucial to understanding and preparing for major earthquakes. However, the tremendous depths of these slabs make this challenging -- seismologists rely mainly on the rare windows into these deeply buried slabs provided by the infrequent but strong earthquakes, termed intraslab earthquakes, that occur within them.

In a new study published in Geophysical Research Letters, a research team led by the University of Tsukuba used seismic data generated by a magnitude 7.3 earthquake that occurred off the northeasternmost tip of New Zealand's North Island on March 4, 2021, detected by seismometers around the world, to investigate the particularly unusual geometry and stress states of the subducted slab deep below the surface in this region.

"The 2021 East Cape earthquake showed a complex rupture process, likely because of its location at the boundary between the Kermadec Trench to the north and the Hikurangi Margin to the south," lead author of the study Assistant Professor Ryo Okuwaki explains. "To investigate the geometry of the stress field and earthquake rupture process, we used a novel finite-fault inversion technique that required no pre-existing knowledge of the area's faults."

This investigation revealed multiple episodes of rupture, generated by both compression and extension in the subsurface at different depths. These episodes included shallow (~30 km) rupture due to extension perpendicular to the trench, as would typically be expected in a subduction zone. Unexpectedly, however, the deep (~70 km) rupture occurred with compression parallel to the subduction trench.

"Two alternative or inter-related factors may explain the unique rupture geometry of the 2021 East Cape earthquake," senior author Professor Yuji Yagi explains. "First, subduction of a seamount or multiple seamounts along with the subducted slab could contort the slab and create local changes in the stress field. Second, the transition from the Kermadec Trench to the Hikurangi Margin, where the subducted oceanic crust is considerably thicker, could create the local conditions responsible for the unusual faulting pattern."

Because of the rarity of deep intraslab earthquakes in this region, distinguishing between these two possibilities is currently challenging, and indeed both factors might play significant roles in creating the complex stress field revealed by the East Cape earthquake. Additional earthquakes off the northeast coast of New Zealand in the future may shed further light on this deep tectonic mystery.

Read more at Science Daily

Dec 27, 2021

NASA's Webb telescope launches to see first galaxies, distant worlds

NASA's James Webb Space Telescope launched at 7:20 a.m. EST Saturday on an Ariane 5 rocket from Europe's Spaceport in French Guiana, South America.

A joint effort with ESA (European Space Agency) and the Canadian Space Agency, the Webb observatory is NASA's revolutionary flagship mission to seek the light from the first galaxies in the early universe and to explore our own solar system, as well as planets orbiting other stars, called exoplanets.

"The James Webb Space Telescope represents the ambition that NASA and our partners maintain to propel us forward into the future," said NASA Administrator Bill Nelson. "The promise of Webb is not what we know we will discover; it's what we don't yet understand or can't yet fathom about our universe. I can't wait to see what it uncovers!"

Ground teams began receiving telemetry data from Webb about five minutes after launch. The Arianespace Ariane 5 rocket performed as expected, separating from the observatory 27 minutes into the flight. The observatory was released at an altitude of approximately 75 miles (120 kilometers). Approximately 30 minutes after launch, Webb unfolded its solar array, and mission managers confirmed that the solar array was providing power to the observatory. After solar array deployment, mission operators will establish a communications link with the observatory via the Malindi ground station in Kenya, and ground control at the Space Telescope Science Institute in Baltimore will send the first commands to the spacecraft.

Engineers and ground controllers will conduct the first of three mid-course correction burns about 12 hours and 30 minutes after launch, firing Webb's thrusters to maneuver the spacecraft on an optimal trajectory toward its destination in orbit about 1 million miles from Earth.

"I want to congratulate the team on this incredible achievement -- Webb's launch marks a significant moment not only for NASA, but for thousands of people worldwide who dedicated their time and talent to this mission over the years," said Thomas Zurbuchen, associate administrator for the Science Mission Directorate at NASA Headquarters in Washington. "Webb's scientific promise is now closer than it ever has been. We are poised on the edge of a truly exciting time of discovery, of things we've never before seen or imagined."

The world's largest and most complex space science observatory will now begin six months of commissioning in space. At the end of commissioning, Webb will deliver its first images. Webb carries four state-of-the-art science instruments with highly sensitive infrared detectors of unprecedented resolution. Webb will study infrared light from celestial objects with much greater clarity than ever before. The premier mission is the scientific successor to NASA's iconic Hubble and Spitzer space telescopes, built to complement and further the scientific discoveries of these and other missions.

"The launch of the Webb Space Telescope is a pivotal moment -- this is just the beginning for the Webb mission," said Gregory L. Robinson, Webb's program director at NASA Headquarters. "Now we will watch Webb's highly anticipated and critical 29 days on the edge. When the spacecraft unfurls in space, Webb will undergo the most difficult and complex deployment sequence ever attempted in space. Once commissioning is complete, we will see awe-inspiring images that will capture our imagination."

The telescope's revolutionary technology will explore every phase of cosmic history -- from within our solar system to the most distant observable galaxies in the early universe, to everything in between. Webb will reveal new and unexpected discoveries and help humanity understand the origins of the universe and our place in it.

Read more at Science Daily

Computer simulation models potential asteroid collisions

An asteroid impact can be enough to ruin anyone's day, but several small factors can make the difference between an out-of-this-world story and total annihilation. In AIP Advances, by AIP Publishing, a researcher from the National Institute of Natural Hazards in China developed a computer simulation of asteroid collisions to better understand these factors.

The computer simulation initially sought to replicate model asteroid strikes performed in a laboratory. Having verified the simulation's accuracy, Duoxing Yang believes it could be used to predict the result of future asteroid impacts or to learn more about past impacts by studying their craters.

"From these models, we learn generally a destructive impact process, and its crater formation," said Yang. "And from crater morphologies, we could learn impact environment temperatures and its velocity."

Yang's simulation was built using the space-time conservation element and solution element method, designed by NASA and used by many universities and government agencies, to model shock waves and other acoustic problems.

The goal was to simulate a small rocky asteroid striking a larger metal asteroid at several thousand meters per second. Using his simulation, Yang was able to calculate the effects this would have on the metal asteroid, such as the size and shape of the crater.

The simulation results were compared against mock asteroid impacts created experimentally in a laboratory. The simulation held up against these experimental tests, which means the next step in the research is to use the simulation to generate more data that can't be produced in the laboratory.

This data is being created in preparation for NASA's Psyche mission, which aims to be the first spacecraft to explore an asteroid made entirely of metal. Unlike more familiar rocky asteroids, which are made of roughly the same materials as the Earth's crust, metal asteroids are made of materials found in the Earth's inner core. NASA believes studying such an asteroid can reveal more about the conditions found in the center of our own planet.

Yang believes the computer simulation models can generalize his results to all metal asteroid impacts and, in the process, answer several open questions about asteroid interactions.

Read more at Science Daily

Astronomers capture black hole eruption spanning 16 times the full Moon in the sky

Astronomers have produced the most comprehensive image of radio emission from the nearest actively feeding supermassive black hole to Earth.

The emission is powered by a central black hole in the galaxy Centaurus A, about 12 million light years away.

As the black hole feeds on in-falling gas, it ejects material at near light-speed, causing 'radio bubbles' to grow over hundreds of millions of years.

When viewed from Earth, the eruption from Centaurus A now extends eight degrees across the sky -- the length of 16 full Moons laid side by side.
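The "16 full Moons" comparison is easy to check as a back-of-envelope calculation: the full Moon subtends roughly half a degree, and at Centaurus A's distance of about 12 million light years an 8-degree extent corresponds, via the small-angle approximation, to a physical size of well over a million light years. The half-degree figure and the small-angle shortcut are the only assumptions here.

```python
import math

# The full Moon subtends roughly 0.5 degrees, so 16 Moons laid side by side
# span about 8 degrees -- the quoted extent of Centaurus A's radio lobes.
moon_deg = 0.5                  # approximate angular diameter of the full Moon
extent_deg = 16 * moon_deg
print(f"Angular extent: {extent_deg} degrees")

# At ~12 million light years, convert the angle to a physical size
# using the small-angle approximation: size = distance * angle_in_radians.
distance_ly = 12e6
size_ly = distance_ly * math.radians(extent_deg)
print(f"Physical extent: about {size_ly / 1e6:.2f} million light years")
```

The result, roughly 1.7 million light years, is consistent with the article's later statement that material is ejected "to distances of probably more than a million light years."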

It was captured using the Murchison Widefield Array (MWA) telescope in outback Western Australia.

The research was published today in the journal Nature Astronomy.

Lead author Dr Benjamin McKinley, from the Curtin University node of the International Centre for Radio Astronomy Research (ICRAR), said the image reveals spectacular new details of the radio emission from the galaxy.

"These radio waves come from material being sucked into the supermassive black hole in the middle of the galaxy," he said.

"It forms a disc around the black hole, and as the matter gets ripped apart going close to the black hole, powerful jets form on either side of the disc, ejecting most of the material back out into space, to distances of probably more than a million light years.

"Previous radio observations could not handle the extreme brightness of the jets and details of the larger area surrounding the galaxy were distorted, but our new image overcomes these limitations."

Centaurus A is the closest radio galaxy to our own Milky Way.

"We can learn a lot from Centaurus A in particular, just because it is so close and we can see it in such detail," Dr McKinley said.

"Not just at radio wavelengths, but at all other wavelengths of light as well.

"In this research we've been able to combine the radio observations with optical and x-ray data, to help us better understand the physics of these supermassive black holes."

Astrophysicist Dr Massimo Gaspari, from Italy's National Institute for Astrophysics, said the study corroborated a novel theory known as 'Chaotic Cold Accretion' (CCA), which is emerging in different fields.

"In this model, clouds of cold gas condense in the galactic halo and rain down onto the central regions, feeding the supermassive black hole," he said.

"Triggered by this rain, the black hole vigorously reacts by launching energy back via radio jets that inflate the spectacular lobes we see in the MWA image. This study is one of the first to probe in such detail the multiphase CCA 'weather' over the full range of scales," Dr Gaspari concluded.

Dr McKinley said the galaxy appears brighter in the centre where it is more active and there is a lot of energy.

"Then it's fainter as you go out because the energy's been lost and things have settled down," he said.

"But there are interesting features where charged particles have re-accelerated and are interacting with strong magnetic fields."

MWA director Professor Steven Tingay said the research was possible because of the telescope's extremely wide field-of-view, superb radio-quiet location, and excellent sensitivity.

"The MWA is a precursor for the Square Kilometre Array (SKA) -- a global initiative to build the world's largest radio telescopes in Western Australia and South Africa," he said.

Read more at Science Daily

1,500 endangered languages at high risk of being lost this century

A world-first study warns that 1,500 endangered languages may no longer be spoken by the end of this century.

The study, led by The Australian National University (ANU), identified predictors that put endangered languages at high risk.

Co-author Professor Lindell Bromham said that of the world's 7,000 recognised languages, around half were currently endangered.

"We found that without immediate intervention, language loss could triple in the next 40 years. And by the end of this century, 1,500 languages could cease to be spoken."

Published in Nature Ecology and Evolution, the study charts the widest range of factors ever examined that put endangered languages under pressure.

One finding was that more years of schooling increased the level of language endangerment. The researchers say it shows we need to build curricula that support bilingual education, fostering both indigenous language proficiency as well as use of regionally-dominant languages.

"Across the 51 factors or predictors we investigated, we also found some really unexpected and surprising pressure points. This included road density," Professor Bromham said.

"Contact with other local languages is not the problem -- in fact languages in contact with many other Indigenous languages tend to be less endangered.

"But we found that the more roads there are, connecting country to city, and villages to towns, the higher the risk of languages being endangered. It's as if roads are helping dominant languages 'steam roll' over other smaller languages."

The researchers say the findings also have important lessons for preserving many of the endangered languages spoken by Australia's First Nations peoples.

"Australia has the dubious distinction of having one of the highest rates of language loss worldwide," Professor Felicity Meakins, from the University of Queensland and one of the study's co-authors, said.

"Prior to colonisation, more than 250 First Nations languages were spoken, and multilingualism was the norm. Now, only 40 languages are still spoken and just 12 are being learnt by children.

"First Nations languages need funding and support. Australia only spends $20.89 annually per capita of the Indigenous population on languages, which is abysmal compared with Canada's $69.30 and New Zealand's $296.44."

Professor Bromham said that as the world enters the UNESCO Decade of Indigenous Languages in 2022, the study's findings were a vital reminder that more action was urgently needed to preserve at-risk languages.

"When a language is lost, or is 'Sleeping' as we say for languages that are no longer spoken, we lose so much of our human cultural diversity. Every language is brilliant in its own way."

Read more at Science Daily

Solar flare throws light on ancient trade between the Islamic Middle East and the Viking Age

Mobility shaped the human world profoundly long before the modern age. But archaeologists often struggle to create a timeline for the speed and impact of this mobility. An interdisciplinary team of researchers at the Danish National Research Foundation's Centre for Urban Network Evolutions at Aarhus University (UrbNet) has now made a breakthrough by applying new astronomical knowledge about the past activity of the sun to establish an exact time anchor for global links in the year 775 CE.

In collaboration with the Museum of Southwest Jutland in the Northern Emporium Project, the team has conducted a major excavation at Ribe, one of Viking-age Scandinavia's principal trading towns. Funded by the Carlsberg Foundation, the dig and the subsequent research project were able to establish the exact sequence of the arrival of objects from various corners of the world at the market in Ribe. In this way, they were able to trace the emergence of the vast network of Viking-age trade connections with regions such as North Atlantic Norway, Frankish Western Europe and the Middle East. To obtain a chronology for these events, the team has pioneered a new use of radiocarbon dating.

New use of radiocarbon dating

"The applicability of radiocarbon dating has hitherto been limited due to the broad age ranges of this method. Recently, however, it has been discovered that solar particle events, also known as Miyake events, cause sharp spikes in atmospheric radiocarbon for a single year. They are named after the female Japanese researcher Fusa Miyake, who first identified these events in 2012. When these spikes are identified in detailed records such as tree rings or in an archaeological sequence, it reduces the uncertainty margins considerably," says lead author Bente Philippsen.

The team applied a new, improved calibration curve, based on annual samples, to identify a 775 CE Miyake event in one floor layer in Ribe. This enabled the team to anchor the entire sequence of layers and 140 radiocarbon dates around this single year.
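The anchoring logic can be sketched in a few lines. This is an illustrative toy, not the study's actual analysis: the Δ14C values, spike size, layer numbers, and years-per-layer figure below are all invented; only the idea of pinning a relative layer sequence to a single-year radiocarbon spike comes from the article.

```python
# Synthetic annual Delta-14C series with a one-year "Miyake" spike at 775 CE.
years = list(range(760, 791))                 # calendar years CE
d14c = [0.0] * len(years)                     # flat synthetic baseline (permil)
d14c[years.index(775)] = 12.0                 # hypothetical ~12 permil spike

# Detect the spike: the year with the largest year-on-year jump.
jumps = [(d14c[i] - d14c[i - 1], years[i]) for i in range(1, len(years))]
spike_year = max(jumps)[1]

# Suppose (purely for illustration) layer 7 of a stratigraphic sequence
# carries the spike signal and each layer spans about five years; anchoring
# that one layer to the spike year dates every other layer.
spike_layer, years_per_layer = 7, 5
layer_dates = {n: spike_year + (n - spike_layer) * years_per_layer
               for n in range(1, 11)}

print(spike_year)       # 775
print(layer_dates[10])  # 790
```

Once one layer is fixed to a calendar year, the broad uncertainty of ordinary radiocarbon ranges collapses for the whole sequence, which is what let the team place 140 dates around a single year.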

"This result shows that the expansion of Afro-Eurasian trade networks, characterised by the arrival of large numbers of Middle Eastern beads, can be dated in Ribe with precision to 790±10 CE -- coinciding with the beginning of the Viking Age. However, imports brought by ship from Norway were arriving as early as 750 CE," says Professor Søren Sindbæk, who is also a member of the team.

This groundbreaking result challenges one of the most widely accepted explanations for maritime expansions in the Viking Age -- that Scandinavian seafaring took off in response to growing trade with the Middle East through Russia. Maritime networks and long-distance trade were already established decades before impulses from the Middle East caused a further expansion of these networks.

The construction of the new, annual calibration curve is a global effort to which the researchers from UrbNet and the Aarhus AMS Centre at the Department of Physics and Astronomy at Aarhus University have contributed.

"The construction of a calibration curve is a huge international effort with contributions from many laboratories around the world. Fusa Miyake's discovery in 2012 has revolutionized our work, so that we now work with annual time resolution. New calibration curves are recurrently released, most recently in 2020, and the Aarhus AMS Centre has contributed significantly. The new high-resolution data from the present study will enter into a future update of the calibration curve and thus help to improve the precision of archaeological dates worldwide. This will provide better opportunities to understand rapid developments such as trade flows or environmental change in the past," says Jesper Olsen, Associate Professor at Aarhus AMS Centre.

The global trends revealed by the study are essential for the archaeology of trading towns like Ribe. "The new results enable us to date the influx of new artefacts and far-reaching contacts on a much better background. This will help us to visualise and describe Viking Age Ribe in a way that will have great value for scientists, as well as helping us to present the new insight to the general public," says Claus Feveile, curator of the Museum of Southwest Jutland.

Background facts


One of the most spectacular episodes of pre-modern global connectivity happened in the period c. 750-1000 CE, when trade with the burgeoning Islamic empire in the Middle East connected virtually all corners of Afro-Eurasia.

The spread of coins, trade beads and other exotic artefacts provides archaeological evidence of the trade links stretching from Southeast Asia and Africa to Siberia and the northernmost corners of Scandinavia. In the north, these long-distance connections mark the beginning of the maritime adventures that define the Viking Age. Researchers have even suggested that it was the arrival of silver and other valuable objects via Eastern Europe which sparked the first Scandinavian Viking expeditions.

Read more at Science Daily

Dec 23, 2021

70 new rogue planets discovered in our galaxy

Rogue planets are elusive cosmic objects that have masses comparable to those of the planets in our Solar System but do not orbit a star, instead roaming freely on their own. Not many were known until now, but a team of astronomers, using data from several European Southern Observatory (ESO) telescopes and other facilities, have just discovered at least 70 new rogue planets in our galaxy. This is the largest group of rogue planets ever discovered, an important step towards understanding the origins and features of these mysterious galactic nomads.

"We did not know how many to expect and are excited to have found so many," says Núria Miret-Roig, an astronomer at the Laboratoire d'Astrophysique de Bordeaux, France and the University of Vienna, Austria, and the first author of the new study published today in Nature Astronomy.

Rogue planets, lurking far away from any star illuminating them, would normally be impossible to image. However, Miret-Roig and her team took advantage of the fact that, in the few million years after their formation, these planets are still hot enough to glow, making them directly detectable by sensitive cameras on large telescopes. They found at least 70 new rogue planets with masses comparable to Jupiter's in a star-forming region close to our Sun, in the Upper Scorpius and Ophiuchus constellations.

To spot so many rogue planets, the team used data spanning about 20 years from a number of telescopes on the ground and in space. "We measured the tiny motions, the colours and luminosities of tens of millions of sources in a large area of the sky," explains Miret-Roig. "These measurements allowed us to securely identify the faintest objects in this region, the rogue planets."
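The membership idea behind those "tiny motions" can be illustrated with a toy example. This is not the survey's pipeline, and every number below is synthetic: members of a young association share a common proper motion, so even very faint objects can be picked out by how closely their motion matches the group's.

```python
# Assumed mean proper motion of a hypothetical young association, mas/yr.
group_pm = (-11.0, -23.0)

# Synthetic candidate objects with measured proper motions (mas/yr).
candidates = {
    "obj1": (-11.2, -22.8),
    "obj2": (3.5, 7.1),     # field interloper moving with the general field
    "obj3": (-10.7, -23.4),
}

# Keep objects whose motion lies within a tolerance of the group mean.
tolerance = 1.5  # mas/yr, chosen arbitrarily for this sketch
members = [name for name, (x, y) in candidates.items()
           if ((x - group_pm[0]) ** 2 + (y - group_pm[1]) ** 2) ** 0.5 < tolerance]

print(members)  # ['obj1', 'obj3']
```

In the real study this matching was done statistically over tens of millions of sources, combining motions with colours and luminosities rather than a single hard cut.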

The team used observations from ESO's Very Large Telescope (VLT), the Visible and Infrared Survey Telescope for Astronomy (VISTA), the VLT Survey Telescope (VST) and the MPG/ESO 2.2-metre telescope located in Chile, along with other facilities. "The vast majority of our data come from ESO observatories, which were absolutely critical for this study. Their wide field of view and unique sensitivity were keys to our success," explains Hervé Bouy, an astronomer at the Laboratoire d'Astrophysique de Bordeaux, France, and project leader of the new research. "We used tens of thousands of wide-field images from ESO facilities, corresponding to hundreds of hours of observations, and literally tens of terabytes of data."

The team also used data from the European Space Agency's Gaia satellite, marking a huge success for the collaboration of ground- and space-based telescopes in the exploration and understanding of our Universe.

The study suggests there could be many more of these elusive, starless planets that we have yet to discover. "There could be several billions of these free-floating giant planets roaming freely in the Milky Way without a host star," Bouy explains.

By studying the newly found rogue planets, astronomers may find clues to how these mysterious objects form. Some scientists believe rogue planets can form from the collapse of a gas cloud that is too small to lead to the formation of a star, or that they could have been kicked out from their parent system. But which mechanism is more likely remains unknown.

Further advances in technology will be key to unlocking the mystery of these nomadic planets. The team hopes to continue to study them in greater detail with ESO's forthcoming Extremely Large Telescope (ELT), currently under construction in the Chilean Atacama Desert and due to start observations later this decade. "These objects are extremely faint and little can be done to study them with current facilities," says Bouy. "The ELT will be absolutely crucial to gathering more information about most of the rogue planets we have found."

Read more at Science Daily

Tracking down the forces that shaped our Solar System’s evolution

Meteorites are remnants of the building blocks that formed Earth and the other planets orbiting our Sun. Recent analysis of their isotopic makeup led by Carnegie's Nicole Nie and published in Science Advances settles a longstanding debate about the geochemical evolution of our Solar System and our home planet.

In their youth, stars are surrounded by a rotating disk of gas and dust. Over time, these materials aggregate to form larger bodies, including planets. Some of these objects are broken up due to collisions in space, the remnants of which sometimes hurtle through Earth's atmosphere as meteorites.

By studying a meteorite's chemistry and mineralogy, researchers like Nie and Carnegie's Anat Shahar can reveal details about the conditions these materials were exposed to during the Solar System's tumultuous early years. Of particular interest is why so-called moderately volatile elements -- so named because their relatively low boiling points mean they evaporate easily -- are more depleted on Earth and in meteoritic samples than in the average Solar System, represented by the Sun's composition.

It's long been theorized that periods of heating and cooling resulted in the evaporation of volatiles from meteorites. Nie and her team showed that an entirely different phenomenon is the culprit in the case of the missing volatiles.

Solving the mystery involved studying a particularly primitive class of meteorites called carbonaceous chondrites that contain crystalline droplets, called chondrules, which were part of the original disk of materials surrounding the young Sun. Because of their ancient origins, these beads are an excellent laboratory for uncovering the Solar System's geochemical history.

"Understanding the conditions under which these volatile elements are stripped from the chondrules can help us work backward to learn the conditions they were exposed to in the Solar System's youth and all the years since," Nie explained.

She and her co-authors set out to probe the isotopic variability of potassium and rubidium, two moderately volatile elements. The research team included Shahar and colleagues from The University of Chicago, where Nie was a graduate student prior to joining Carnegie -- Timo Hopp, Justin Y. Hu, Zhe J. Zhang, and Nicolas Dauphas -- as well as Xin-Yang Chen and Fang-Zhen Teng from University of Washington Seattle.

Each element contains a unique number of protons, but its isotopes have varying numbers of neutrons. This means that each isotope has a slightly different mass than the others. As a result, chemical reactions discriminate between the isotopes, which, in turn, affects the proportion of that isotope in the reaction's end products.

"This means that the different kinds of chemical processing that the chondrules experienced will be evident in their isotopic composition, which is something we can probe using precision instruments," Nie added.
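One standard way such evaporation-driven isotopic shifts are modelled is Rayleigh fractionation. The sketch below is a generic illustration of that principle, not code from the paper, and the fractionation factor is an assumed round number: as a melt loses a volatile element, the isotope ratio R of the residue drifts from its starting value R0 according to R/R0 = f**(alpha - 1), where f is the fraction of the element remaining.

```python
# Assumed kinetic fractionation factor (heavy/light); values this close to 1
# are typical, but this particular number is illustrative only.
alpha = 0.9998

# As more of the element evaporates (smaller f), the residue becomes
# progressively enriched in the heavy isotope.
for f in (0.9, 0.5, 0.1):
    shift_permil = (f ** (alpha - 1) - 1) * 1000
    print(f"f={f}: residue heavier by {shift_permil:.3f} permil")
```

Measured shifts that are much smaller than a simple Rayleigh model predicts are one way researchers can rule out slow evaporation and point instead to rapid events like shock heating.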

Their work enabled the researchers to settle the debate about how and when in their lifespans the chondrules lost their volatiles. The isotopic record unveiled by Nie and her team indicates that the volatiles were stripped as a result of massive shockwaves passing through the material circling the young Sun that likely drove melting of the dust to form the chondrules. These types of events can be generated by gravitational instability or by larger baby planets moving through the nebular gas.

"Our findings offer new information about our Solar System's youth and the events that shaped the geochemistry of the planets, including our own," Nie concluded.

Read more at Science Daily

Ancient DNA reveals the world’s oldest family tree

Analysis of ancient DNA from one of the best-preserved Neolithic tombs in Britain has revealed that most of the people buried there were from five continuous generations of a single extended family.

By analysing DNA extracted from the bones and teeth of 35 individuals entombed at Hazleton North long cairn in the Cotswolds-Severn region, the research team was able to detect that 27 of them were close biological relatives. The group lived approximately 5,700 years ago -- around 3700-3600 BC -- about 100 years after farming had been introduced to Britain.

Published in Nature, it is the first study to reveal in such detail how prehistoric families were structured, and the international team of archaeologists and geneticists say that the results provide new insights into kinship and burial practices in Neolithic times.

The research team -- which included archaeologists from Newcastle University, UK, and geneticists from the University of the Basque Country, University of Vienna and Harvard University -- show that most of those buried in the tomb were descended from four women who had all had children with the same man.

The cairn at Hazleton North included two L-shaped chambered areas which were located north and south of the main 'spine' of the linear structure. After they had died, individuals were buried inside these two chambered areas and the research findings indicate that men were generally buried with their father and brothers, suggesting that descent was patrilineal with later generations buried at the tomb connected to the first generation entirely through male relatives.

While two of the daughters of the lineage who died in childhood were buried in the tomb, the complete absence of adult daughters suggests that their remains were placed either in the tombs of male partners with whom they had children, or elsewhere.

Although the right to use the tomb ran through patrilineal ties, the choice of whether individuals were buried in the north or south chambered area initially depended on the first-generation woman from whom they were descended, suggesting that these first-generation women were socially significant in the memories of this community.

There are also indications that 'stepsons' were adopted into the lineage, the researchers say -- males whose mother was buried in the tomb but not their biological father, and whose mother had also had children with a male from the patriline. Additionally, the team found no evidence that another eight individuals were biological relatives of those in the family tree, which might further suggest that biological relatedness was not the only criterion for inclusion. However, three of these were women and it is possible that they could have had a partner in the tomb but either did not have any children or had daughters who reached adulthood and left the community so are absent from the tomb.

Dr Chris Fowler of Newcastle University, the first author and lead archaeologist of the study, said: "This study gives us an unprecedented insight into kinship in a Neolithic community. The tomb at Hazleton North has two separate chambered areas, one accessed via a northern entrance and the other from a southern entrance, and just one extraordinary finding is that initially each of the two halves of the tomb were used to place the remains of the dead from one of two branches of the same family. This is of wider importance because it suggests that the architectural layout of other Neolithic tombs might tell us about how kinship operated at those tombs."

Iñigo Olalde of the University of the Basque Country and Ikerbasque, the lead geneticist for the study and co-first author, said: "The excellent DNA preservation at the tomb and the use of the latest technologies in ancient DNA recovery and analysis allowed us to uncover the oldest family tree ever reconstructed and analyse it to understand something profound about the social structure of these ancient groups."

David Reich at Harvard University, whose laboratory led the ancient DNA generation, added: "This study reflects what I think is the future of ancient DNA: one in which archaeologists are able to apply ancient DNA analysis at sufficiently high resolution to address the questions that truly matter to archaeologists."

Ron Pinhasi, of the University of Vienna, said: "It was difficult to imagine just a few years ago that we would ever know about Neolithic kinship structures. But this is just the beginning and no doubt there is a lot more to be discovered from other sites in Britain, Atlantic France, and other regions."

Read more at Science Daily

Researchers lay groundwork for potential dog-allergy vaccine

Many research efforts have described the nature and progression of dog allergies, but very few applied studies have used this information to try to cure people of dog allergies entirely by artificially inducing immune tolerance. Now, however, researchers have for the first time identified candidates for those parts of the molecules that make up dog allergens that could give us precisely that: a "dog allergy vaccine."

Their findings were published in the Federation of European Biochemical Societies journal on October 26.

Being allergic to dogs is a common malady and one that is growing worldwide. Over the years, scientists have identified seven different dog allergens -- molecules or molecular structures that bind to an antibody and trigger an unusually strong immune response to something that would normally be harmless.

These seven are named Canis familiaris allergens 1 to 7 (Can f 1-7). But while there are seven, just one, Can f 1, is responsible for the majority (50-75 percent) of reactions in people allergic to dogs. It is found in dogs' tongue tissue, salivary glands, and skin.

Researchers have yet to identify Can f 1's IgE epitopes -- those specific parts of the antigens that are recognized by the immune system and stimulate or 'determine' an immune response (which is why epitopes are also called antigen determinants). More specifically, epitopes are short amino acid sequences making up part of a protein that induces the immune response.

Epitopes bind to a specific antigen receptor on the surface of immune system antibodies, B cells, or T cells, much like how the shape of a jigsaw puzzle piece fits the specific shape of another puzzle piece. (The part of the receptor that binds to the epitope is in turn called a paratope.) Antibodies, also known as immunoglobulins, come in five different classes or isotypes: IgA (for immunoglobulin A), IgD, IgE, IgG, and IgM. The IgE isotype (found only in mammals) plays a key role in allergies and allergic diseases; an IgE epitope is the puzzle piece that fits an IgE antibody's paratope.

In recent years, there has been extensive effort to develop epitope-focused vaccines -- in this case, a vaccine against dog allergies.

"We want to be able to present small doses of these epitopes to the immune system to train it to deal with them, similar to the principle behind any vaccine," said Takashi Inui, a specialist in allergy research, professor at Osaka Prefecture University and a lead author of the study. "But we can't do this without first identifying the Can f 1's IgE epitope."

So the researchers used X-ray crystallography (in which the diffraction of x-rays through a material is analyzed to identify its 'crystal' structure) to determine the structure of the Can f 1 protein as a whole -- the first time this had ever been done.

They found that the protein's folding pattern is, at first glance, extremely similar to that of three other Can f allergens. However, the locations of surface electrical charges were quite different, which in turn suggests a set of residues that are good candidates for the IgE epitope.

Using this basic data, further experimental work needs to be performed to narrow the candidates down, but the findings suggest the development of a hypoallergenic vaccine against Can f 1 -- a dog-allergy vaccine -- is within our grasp.

Read more at Science Daily

COVID-19 infection detected in deer in six Ohio locations

Scientists have detected infection by at least three variants of the virus that causes COVID-19 in free-ranging white-tailed deer in six northeast Ohio locations, the research team has reported.

Previous research led by the U.S. Department of Agriculture had shown evidence of antibodies in wild deer. This study, published today (Dec. 23, 2021) in Nature, details the first report of active COVID-19 infection in white-tailed deer supported by the growth of viral isolates in the lab, indicating researchers had recovered viable samples of the SARS-CoV-2 virus and not only its genetic traces.

Based on genomic sequencing of the samples collected between January and March 2021, researchers determined that variants infecting wild deer matched strains of the SARS-CoV-2 virus that had been prevalent in Ohio COVID-19 patients at the time. Sample collection occurred before the Delta variant was widespread, and that variant was not detected in these deer. The team is testing more samples to check for new variants as well as older variants, whose continued presence would suggest the virus can set up shop and survive in this species.

The fact that wild deer can become infected "leads toward the idea that we might actually have established a new maintenance host outside humans," said Andrew Bowman, associate professor of veterinary preventive medicine at The Ohio State University and senior author of the paper.

"Based on evidence from other studies, we knew they were being exposed in the wild and that in the lab we could infect them and the virus could transmit from deer to deer. Here, we're saying that in the wild, they are infected," Bowman said. "And if they can maintain it, we have a new potential source of SARS-CoV-2 coming in to humans. That would mean that beyond tracking what's in people, we'll need to know what's in the deer, too.

"It could complicate future mitigation and control plans for COVID-19."

A lot of unknowns remain: how the deer got infected, whether they can infect humans and other species, how the virus behaves in the animals' body, and whether it's a transient or long-term infection.

The research team took nasal swabs from 360 white-tailed deer in nine northeast Ohio locations. Using PCR testing methods, the scientists detected genetic material from at least three different strains of the virus in 129 (35.8%) of the deer sampled.

The analysis showed that B.1.2 viruses dominant in Ohio in the early months of 2021 spilled over multiple times into deer populations in different locations.

"The working theory based on our sequences is that humans are giving it to deer, and apparently we gave it to them several times," Bowman said. "We have evidence of six different viral introductions into those deer populations. It's not that a single population got it once and it spread."

Each site was sampled between one and three times, adding up to a total of 18 sample collection dates. Based on the findings, researchers estimated the prevalence of infection varied from 13.5% to 70% across the nine sites, with the highest prevalence observed in four sites that were surrounded by more densely populated neighborhoods.
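The prevalence arithmetic is straightforward to reproduce. Only the overall figures (129 positives among 360 deer) come from the article; the per-site counts below are invented, chosen so that the two hypothetical sites land on the reported range endpoints of 13.5% and 70%.

```python
# Overall prevalence reported in the study: 129 of 360 sampled deer.
positives, sampled = 129, 360
overall = 100 * positives / sampled
print(round(overall, 1))  # 35.8

# Hypothetical per-site counts (not from the paper) to show the same
# calculation applied site by site.
sites = {"site_A": (27, 200), "site_B": (14, 20)}
for name, (pos, n) in sites.items():
    print(name, round(100 * pos / n, 1))
```

The wide per-site spread is why the overall 35.8% figure alone says little; site-level sampling dates and surrounding population density matter.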

White-tailed deer functioning as a viral reservoir of SARS-CoV-2 would likely result in one of two outcomes, Bowman said. The virus could mutate in deer, potentially facilitating transmission of new strains to other species, including humans, or the virus could survive in deer unmutated while it simultaneously continues to evolve in humans, and at some point when humans don't have immunity to the strains infecting deer, those variants could come spilling back to humans.

How transmission happened initially in these deer, and how it could happen across species, are among the pending questions related to these findings. The research team speculated that white-tailed deer were infected through an environmental pathway -- possibly by drinking contaminated water. Research has shown that the virus is shed in human stool and detectable in wastewater.

The white-tailed deer tested for this study were part of a population control initiative, so they are not a transmission threat.

Though there are an estimated 600,000 white-tailed deer in Ohio and 30 million in the United States, Bowman said this sampling focused on locations close to dense human populations and is not representative of all free-ranging deer.

Read more at Science Daily

Dec 22, 2021

Engineers test an idea for a new hovering rover

Aerospace engineers at MIT are testing a new concept for a hovering rover that levitates by harnessing the moon's natural charge.

Because they lack an atmosphere, the moon and other airless bodies such as asteroids can build up an electric field through direct exposure to the sun and surrounding plasma. On the moon, this surface charge is strong enough to levitate dust more than 1 meter above the ground, much the way static electricity can cause a person's hair to stand on end.

Engineers at NASA and elsewhere have recently proposed harnessing this natural surface charge to levitate a glider with wings made of Mylar, a material that naturally holds the same charge as surfaces on airless bodies. They reasoned that the similarly charged surfaces should repel each other, with a force that lofts the glider off the ground. But such a design would likely be limited to small asteroids, as larger planetary bodies would have a stronger, counteracting gravitational pull.

The MIT team's levitating rover could potentially get around this size limitation. The concept, which resembles a retro-style, disc-shaped flying saucer, uses tiny ion beams to both charge up the vehicle and boost the surface's natural charge. The overall effect is designed to generate a relatively large repulsive force between the vehicle and the ground, in a way that requires very little power. In an initial feasibility study, the researchers show that such an ion boost should be strong enough to levitate a small, 2-pound vehicle on the moon and large asteroids like Psyche.

"We think of using this like the Hayabusa missions that were launched by the Japanese space agency," says lead author Oliver Jia-Richards, a graduate student in MIT's Department of Aeronautics and Astronautics. "That spacecraft operated around a small asteroid and deployed small rovers to its surface. Similarly, we think a future mission could send out small hovering rovers to explore the surface of the moon and other asteroids."

The team's results appear in the current issue of the Journal of Spacecraft and Rockets. Jia-Richards' co-authors are Paulo Lozano, the M. Alemán-Velasco Professor of Aeronautics and Astronautics and director of MIT's Space Propulsion Lab; and former visiting student Sebastian Hampl, now at McGill University.

Ionic force

The team's levitating design relies on the use of miniature ion thrusters, called ionic-liquid ion sources. These small, microfabricated nozzles are connected to a reservoir containing ionic liquid in the form of room-temperature molten salt. When a voltage is applied, the liquid's ions are charged and emitted as a beam through the nozzles with a certain force.

Lozano's team has pioneered the development of ionic thrusters and has used them mainly to propel and physically maneuver small satellites in space. Recently, Lozano had seen research showing the levitating effect of the moon's charged surface on lunar dust. He also considered the electrostatic glider design by NASA and wondered: Could a rover fitted with ion thrusters produce enough repulsive, electrostatic force to hover on the moon and larger asteroids?

To test the idea, the team initially modeled a small, disk-shaped rover with ion thrusters that charged up the vehicle alone. They modeled the thrusters to beam negatively charged ions out from the vehicle, which effectively gave the vehicle a positive charge, similar to the moon's positively charged surface. But they found this was not enough to get the vehicle off the ground.

"Then we thought, what if we transfer our own charge to the surface to supplement its natural charge?" Jia-Richards says.

By pointing additional thrusters at the ground and beaming out positive ions to amplify the surface's charge, the team reasoned that the boost could produce a bigger force against the rover, enough to levitate it off the ground. They drew up a simple mathematical model for the scenario and found that, in principle, it could work.

Based on this simple model, the team predicted that a small rover, weighing about two pounds, could achieve levitation of about one centimeter off the ground, on a large asteroid such as Psyche, using a 10-kilovolt ion source. To get a similar liftoff on the moon, the same rover would need a 50-kilovolt source.
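As a rough sanity check on those numbers, the charged rover and the charged surface can be treated like the plates of a parallel-plate capacitor, where the repulsive force scales as F = ε0·A·V²/(2d²). This is only a back-of-envelope sketch, not the team's actual model; the disc radius (10 cm), rover mass (0.9 kg, about two pounds), and the surface gravity values for the moon and Psyche below are illustrative assumptions, not figures from the paper:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m


def plate_force(voltage, gap, area):
    """Parallel-plate estimate of the electrostatic force magnitude:
    F = eps0 * A * V^2 / (2 * d^2), with gap d in meters."""
    return EPS0 * area * voltage**2 / (2 * gap**2)


mass = 0.9                  # ~2-pound rover, kg (assumed)
gap = 0.01                  # 1 cm hover height, m
area = math.pi * 0.10**2    # assumed 10 cm disc radius

g_moon = 1.62               # lunar surface gravity, m/s^2
g_psyche = 0.14             # rough estimate for Psyche, m/s^2

f_moon = plate_force(50e3, gap, area)    # 50 kV source on the moon
f_psyche = plate_force(10e3, gap, area)  # 10 kV source on Psyche

print(f"moon:   force {f_moon:.2f} N vs weight {mass * g_moon:.2f} N")
print(f"psyche: force {f_psyche:.3f} N vs weight {mass * g_psyche:.3f} N")
```

Even this crude estimate lands in the right ballpark: at the quoted voltages, the electrostatic force comes out comparable to the rover's weight on each body, which is consistent with the 10 kV versus 50 kV figures above.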

"This kind of ionic design uses very little power to generate a lot of voltage," Lozano explains. "The power needed is so small, you could do this almost for free."

In suspension

To be sure the model represented what could happen in a real environment in space, they ran a simple scenario in Lozano's lab. The researchers manufactured a small hexagonal test vehicle weighing about 60 grams and measuring about the size of a person's palm. They installed one ion thruster pointing up, and four pointing down, and then suspended the vehicle over an aluminum surface from two springs calibrated to counteract Earth's gravitational force. The entire setup was placed within a vacuum chamber to simulate the airless environment of the moon and asteroids.

The researchers also suspended a tungsten rod from the experiment's springs, and used its displacement to measure how much force the thrusters produced each time they were fired. They applied various voltages to the thrusters and measured the resulting forces, which they then used to calculate the height the vehicle alone could have levitated. These experimental results matched the model's predictions for the same scenario, giving the team confidence that its predictions for hovering a rover on Psyche and the moon were realistic.

The current model is designed to predict the conditions required to simply achieve levitation, which happened to be about 1 centimeter off the ground for a 2-pound vehicle. The ion thrusters could generate more force with larger voltage to lift a vehicle higher off the ground. But Jia-Richards says the model would need revising, as it doesn't account for how the emitted ions would behave at higher altitudes.
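The voltage-to-height relationship can be illustrated with a simple parallel-plate electrostatic estimate (again a sketch, not the team's model): setting the force ε0·A·V²/(2d²) equal to the weight m·g and solving for the gap gives d = V·√(ε0·A/(2mg)), so the equilibrium hover height grows linearly with voltage. The disc radius and mass below are assumed values for illustration:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m


def hover_height(voltage, mass, g, area):
    """Equilibrium gap where the parallel-plate electrostatic force
    eps0 * A * V^2 / (2 * d^2) balances the rover's weight m * g."""
    return voltage * math.sqrt(EPS0 * area / (2 * mass * g))


area = math.pi * 0.10**2  # assumed 10 cm disc radius
m = 0.9                   # ~2-pound rover, kg (assumed)

print(hover_height(50e3, m, 1.62, area))   # centimeter-scale gap on the moon at 50 kV
print(hover_height(100e3, m, 1.62, area))  # doubling the voltage doubles the gap
```

The linear scaling suggests why higher voltages would lift the vehicle higher, but as the article notes, the real model breaks down at larger gaps, where the behavior of the emitted ions is no longer captured by a simple force balance like this one.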

"In principle, with better modeling, we could levitate to much higher heights," he says.

In that case, Lozano says future missions to the moon and asteroids could deploy rovers that use ion thrusters to safely hover and maneuver over unknown, uneven terrain.

Read more at Science Daily