Dec 23, 2021

70 new rogue planets discovered in our galaxy

Rogue planets are elusive cosmic objects that have masses comparable to those of the planets in our Solar System but do not orbit a star, instead roaming freely on their own. Not many were known until now, but a team of astronomers, using data from several European Southern Observatory (ESO) telescopes and other facilities, have just discovered at least 70 new rogue planets in our galaxy. This is the largest group of rogue planets ever discovered, an important step towards understanding the origins and features of these mysterious galactic nomads.

"We did not know how many to expect and are excited to have found so many," says Núria Miret-Roig, an astronomer at the Laboratoire d'Astrophysique de Bordeaux, France and the University of Vienna, Austria, and the first author of the new study published today in Nature Astronomy.

Rogue planets, lurking far away from any star illuminating them, would normally be impossible to image. However, Miret-Roig and her team took advantage of the fact that, in the few million years after their formation, these planets are still hot enough to glow, making them directly detectable by sensitive cameras on large telescopes. They found at least 70 new rogue planets with masses comparable to Jupiter's in a star-forming region close to our Sun, in the Upper Scorpius and Ophiuchus constellations.

To spot so many rogue planets, the team used data spanning about 20 years from a number of telescopes on the ground and in space. "We measured the tiny motions, the colours and luminosities of tens of millions of sources in a large area of the sky," explains Miret-Roig. "These measurements allowed us to securely identify the faintest objects in this region, the rogue planets."

The team used observations from ESO's Very Large Telescope (VLT), the Visible and Infrared Survey Telescope for Astronomy (VISTA), the VLT Survey Telescope (VST) and the MPG/ESO 2.2-metre telescope located in Chile, along with other facilities. "The vast majority of our data come from ESO observatories, which were absolutely critical for this study. Their wide field of view and unique sensitivity were keys to our success," explains Hervé Bouy, an astronomer at the Laboratoire d'Astrophysique de Bordeaux, France, and project leader of the new research. "We used tens of thousands of wide-field images from ESO facilities, corresponding to hundreds of hours of observations, and literally tens of terabytes of data."

The team also used data from the European Space Agency's Gaia satellite, marking a huge success for the collaboration of ground- and space-based telescopes in the exploration and understanding of our Universe.

The study suggests there could be many more of these elusive, starless planets that we have yet to discover. "There could be several billions of these free-floating giant planets roaming freely in the Milky Way without a host star," Bouy explains.

By studying the newly found rogue planets, astronomers may find clues to how these mysterious objects form. Some scientists believe rogue planets can form from the collapse of a gas cloud that is too small to lead to the formation of a star, or that they could have been kicked out from their parent system. But which mechanism is more likely remains unknown.

Further advances in technology will be key to unlocking the mystery of these nomadic planets. The team hopes to continue to study them in greater detail with ESO's forthcoming Extremely Large Telescope (ELT), currently under construction in the Chilean Atacama Desert and due to start observations later this decade. "These objects are extremely faint and little can be done to study them with current facilities," says Bouy. "The ELT will be absolutely crucial to gathering more information about most of the rogue planets we have found."

Read more at Science Daily

Tracking down the forces that shaped our Solar System’s evolution

Meteorites are remnants of the building blocks that formed Earth and the other planets orbiting our Sun. Recent analysis of their isotopic makeup led by Carnegie's Nicole Nie and published in Science Advances settles a longstanding debate about the geochemical evolution of our Solar System and our home planet.

In their youth, stars are surrounded by a rotating disk of gas and dust. Over time, these materials aggregate to form larger bodies, including planets. Some of these objects are broken up due to collisions in space, the remnants of which sometimes hurtle through Earth's atmosphere as meteorites.

By studying a meteorite's chemistry and mineralogy, researchers like Nie and Carnegie's Anat Shahar can reveal details about the conditions these materials were exposed to during the Solar System's tumultuous early years. Of particular interest is why so-called moderately volatile elements are more depleted on Earth and in meteoritic samples than in the Solar System's average composition, represented by the Sun. These elements are so named because their relatively low boiling points mean they evaporate easily.

It's long been theorized that periods of heating and cooling resulted in the evaporation of volatiles from meteorites. Nie and her team showed that an entirely different phenomenon is the culprit in the case of the missing volatiles.

Solving the mystery involved studying a particularly primitive class of meteorites called carbonaceous chondrites that contain crystalline droplets, called chondrules, which were part of the original disk of materials surrounding the young Sun. Because of their ancient origins, these beads are an excellent laboratory for uncovering the Solar System's geochemical history.

"Understanding the conditions under which these volatile elements are stripped from the chondrules can help us work backward to learn the conditions they were exposed to in the Solar System's youth and all the years since," Nie explained.

She and her co-authors set out to probe the isotopic variability of potassium and rubidium, two moderately volatile elements. The research team included Shahar and colleagues from The University of Chicago, where Nie was a graduate student prior to joining Carnegie -- Timo Hopp, Justin Y. Hu, Zhe J. Zhang, and Nicolas Dauphas -- as well as Xin-Yang Chen and Fang-Zhen Teng from the University of Washington, Seattle.

Each element contains a unique number of protons, but its isotopes have varying numbers of neutrons. This means that each isotope has a slightly different mass than the others. As a result, chemical reactions discriminate between the isotopes, which, in turn, affects the proportion of that isotope in the reaction's end products.

"This means that the different kinds of chemical processing that the chondrules experienced will be evident in their isotopic composition, which is something we can probe using precision instruments," Nie added.

Their work enabled the researchers to settle the debate about how and when in their lifespans the chondrules lost their volatiles. The isotopic record unveiled by Nie and her team indicates that the volatiles were stripped as a result of massive shockwaves passing through the material circling the young Sun that likely drove melting of the dust to form the chondrules. These types of events can be generated by gravitational instability or by larger baby planets moving through the nebular gas.

"Our findings offer new information about our Solar System's youth and the events that shaped the geochemistry of the planets, including our own," Nie concluded.

Read more at Science Daily

Ancient DNA reveals the world’s oldest family tree

Analysis of ancient DNA from one of the best-preserved Neolithic tombs in Britain has revealed that most of the people buried there were from five continuous generations of a single extended family.

By analysing DNA extracted from the bones and teeth of 35 individuals entombed at Hazleton North long cairn in the Cotswolds-Severn region, the research team was able to detect that 27 of them were close biological relatives. The group lived approximately 5700 years ago -- around 3700-3600 BC -- roughly 100 years after farming had been introduced to Britain.

Published in Nature, it is the first study to reveal in such detail how prehistoric families were structured, and the international team of archaeologists and geneticists say that the results provide new insights into kinship and burial practices in Neolithic times.

The research team -- which included archaeologists from Newcastle University, UK, and geneticists from the University of the Basque Country, University of Vienna and Harvard University -- show that most of those buried in the tomb were descended from four women who had all had children with the same man.

The cairn at Hazleton North included two L-shaped chambered areas, located north and south of the main 'spine' of the linear structure. Individuals were buried inside these two chambered areas after death, and the findings indicate that men were generally buried with their father and brothers, suggesting that descent was patrilineal: later generations buried at the tomb were connected to the first generation entirely through male relatives.

While two of the daughters of the lineage who died in childhood were buried in the tomb, the complete absence of adult daughters suggests that their remains were placed either in the tombs of male partners with whom they had children, or elsewhere.

Although the right to use the tomb ran through patrilineal ties, the choice of whether individuals were buried in the north or south chambered area initially depended on the first-generation woman from whom they were descended, suggesting that these first-generation women were socially significant in the memories of this community.

There are also indications that 'stepsons' were adopted into the lineage, the researchers say -- males whose mother was buried in the tomb but not their biological father, and whose mother had also had children with a male from the patriline. Additionally, the team found no evidence that another eight individuals were biological relatives of those in the family tree, which might further suggest that biological relatedness was not the only criterion for inclusion. However, three of these were women, and it is possible that they had a partner in the tomb but either did not have any children or had daughters who reached adulthood and left the community, and so are absent from the tomb.

Dr Chris Fowler of Newcastle University, the first author and lead archaeologist of the study, said: "This study gives us an unprecedented insight into kinship in a Neolithic community. The tomb at Hazleton North has two separate chambered areas, one accessed via a northern entrance and the other from a southern entrance, and just one extraordinary finding is that initially each of the two halves of the tomb were used to place the remains of the dead from one of two branches of the same family. This is of wider importance because it suggests that the architectural layout of other Neolithic tombs might tell us about how kinship operated at those tombs."

Iñigo Olalde of the University of the Basque Country and Ikerbasque, the lead geneticist for the study and co-first author, said: "The excellent DNA preservation at the tomb and the use of the latest technologies in ancient DNA recovery and analysis allowed us to uncover the oldest family tree ever reconstructed and analyse it to understand something profound about the social structure of these ancient groups."

David Reich at Harvard University, whose laboratory led the ancient DNA generation, added: "This study reflects what I think is the future of ancient DNA: one in which archaeologists are able to apply ancient DNA analysis at sufficiently high resolution to address the questions that truly matter to archaeologists."

Ron Pinhasi, of the University of Vienna, said: "It was difficult to imagine just a few years ago that we would ever know about Neolithic kinship structures. But this is just the beginning and no doubt there is a lot more to be discovered from other sites in Britain, Atlantic France, and other regions."

Read more at Science Daily

Researchers lay groundwork for potential dog-allergy vaccine

Many research efforts have described the nature and progression of dog allergies, but very few applied studies have used this information to try to cure people of dog allergies entirely by artificially inducing immune tolerance. Now, researchers have for the first time identified candidate parts of the molecules that make up dog allergens that could give us precisely that: a "dog allergy vaccine."

Their findings were published in the FEBS Journal, the journal of the Federation of European Biochemical Societies, on October 26.

Being allergic to dogs is a common malady and one that is growing worldwide. Over the years, scientists have been able to identify seven different dog allergens -- molecules or molecular structures that bind to an antibody and trigger an unusually strong immune response to something that would normally be harmless.

These seven are named Canis familiaris allergens 1 to 7 (Can f 1-7). But while there are seven, just one, Can f 1, is responsible for the majority (50-75 percent) of reactions in people allergic to dogs. It is found in dogs' tongue tissue, salivary glands, and skin.

Researchers have yet to identify Can f 1's IgE epitopes -- those specific parts of the antigens that are recognized by the immune system and stimulate or 'determine' an immune response (which is why epitopes are also called antigen determinants). More specifically, epitopes are short amino acid sequences making up part of a protein that induces the immune response.

Epitopes bind to a specific antigen receptor on the surface of immune system antibodies, B cells, or T cells, much like how the shape of a jigsaw puzzle piece fits the specific shape of another puzzle piece. (The part of the receptor that binds to the epitope is in turn called a paratope.) Antibodies, also known as immunoglobulins, come in five different classes or isotypes: IgA (for immunoglobulin A), IgD, IgE, IgG, and IgM. The IgE isotype (found only in mammals) plays a key role in allergies and allergic diseases. The IgE epitope is thus the puzzle piece that fits the paratope of an IgE antibody.

In recent years, there have been extensive efforts to develop epitope-focused vaccines -- in this case, a vaccine against dog allergies.

"We want to be able to present small doses of these epitopes to the immune system to train it to deal with them, similar to the principle behind any vaccine," said Takashi Inui, a specialist in allergy research, professor at Osaka Prefecture University and a lead author of the study. "But we can't do this without first identifying the Can f 1's IgE epitope."

So the researchers used X-ray crystallography (in which the diffraction of X-rays through a material is analyzed to identify its crystal structure) to determine the structure of the Can f 1 protein as a whole -- the first time this had ever been done.
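As background on the method (standard physics, not specific to this study): diffracted X-rays interfere constructively at angles set by Bragg's law, n * lambda = 2 * d * sin(theta), so measuring those angles reveals the spacing of repeating atomic planes in the crystal. A minimal sketch:

```python
# Bragg's law, n * lambda = 2 * d * sin(theta): a textbook relation shown
# for illustration. The numbers are hypothetical, not from the study.
import math

def bragg_spacing(wavelength_nm: float, theta_deg: float, n: int = 1) -> float:
    """Lattice-plane spacing d (in nm) for an n-th order diffraction peak."""
    return n * wavelength_nm / (2.0 * math.sin(math.radians(theta_deg)))

# Cu K-alpha X-rays (0.154 nm) diffracting at a 22.5-degree Bragg angle:
print(f"d = {bragg_spacing(0.154, 22.5):.3f} nm")  # ~0.201 nm plane spacing
```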

They found that the protein's folding pattern is at first glance extremely similar to that of three other Can f allergens. However, the locations of surface electrical charges were quite different, which in turn suggests a series of 'residues' that are good candidates for the IgE epitope.

Further experimental work using this basic data is needed to narrow down the candidates, but the findings suggest that the development of a hypoallergenic vaccine against Can f 1 -- a dog-allergy vaccine -- is within our grasp.

Read more at Science Daily

COVID-19 infection detected in deer in six Ohio locations

Scientists have detected infection by at least three variants of the virus that causes COVID-19 in free-ranging white-tailed deer in six northeast Ohio locations, the research team has reported.

Previous research led by the U.S. Department of Agriculture had shown evidence of antibodies in wild deer. This study, published today (Dec. 23, 2021) in Nature, details the first report of active COVID-19 infection in white-tailed deer supported by the growth of viral isolates in the lab, indicating researchers had recovered viable samples of the SARS-CoV-2 virus and not only its genetic traces.

Based on genomic sequencing of the samples collected between January and March 2021, researchers determined that variants infecting wild deer matched strains of the SARS-CoV-2 virus that had been prevalent in Ohio COVID-19 patients at the time. Sample collection occurred before the Delta variant was widespread, and that variant was not detected in these deer. The team is testing more samples to check for new variants as well as older variants, whose continued presence would suggest the virus can set up shop and survive in this species.

The fact that wild deer can become infected "leads toward the idea that we might actually have established a new maintenance host outside humans," said Andrew Bowman, associate professor of veterinary preventive medicine at The Ohio State University and senior author of the paper.

"Based on evidence from other studies, we knew they were being exposed in the wild and that in the lab we could infect them and the virus could transmit from deer to deer. Here, we're saying that in the wild, they are infected," Bowman said. "And if they can maintain it, we have a new potential source of SARS-CoV-2 coming in to humans. That would mean that beyond tracking what's in people, we'll need to know what's in the deer, too.

"It could complicate future mitigation and control plans for COVID-19."

A lot of unknowns remain: how the deer got infected, whether they can infect humans and other species, how the virus behaves in the animals' body, and whether it's a transient or long-term infection.

The research team took nasal swabs from 360 white-tailed deer in nine northeast Ohio locations. Using PCR testing methods, the scientists detected genetic material from at least three different strains of the virus in 129 (35.8%) of the deer sampled.

The analysis showed that B.1.2 viruses dominant in Ohio in the early months of 2021 spilled over multiple times into deer populations in different locations.

"The working theory based on our sequences is that humans are giving it to deer, and apparently we gave it to them several times," Bowman said. "We have evidence of six different viral introductions into those deer populations. It's not that a single population got it once and it spread."

Each site was sampled between one and three times, adding up to a total of 18 sample collection dates. Based on the findings, researchers estimated the prevalence of infection varied from 13.5% to 70% across the nine sites, with the highest prevalence observed in four sites that were surrounded by more densely populated neighborhoods.
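The headline figures reduce to simple proportions, as the sketch below shows. Only the 129-of-360 total is taken from the article; the per-site count and the interval method are illustrative assumptions.

```python
# Back-of-envelope check of the reported figures. Only the 129-of-360
# total comes from the article; the per-site numbers are hypothetical.
from math import sqrt

positives, sampled = 129, 360
print(f"overall: {positives / sampled:.1%}")  # 35.8%, as reported

def wald_ci(k: int, n: int, z: float = 1.96) -> tuple:
    """Crude 95% confidence interval for a sample proportion."""
    p = k / n
    half = z * sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

# A hypothetical site where 14 of 20 swabs tested positive:
lo, hi = wald_ci(14, 20)
print(f"site estimate: {14/20:.0%} (95% CI {lo:.0%}-{hi:.0%})")
```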

White-tailed deer functioning as a viral reservoir of SARS-CoV-2 would likely result in one of two outcomes, Bowman said. The virus could mutate in deer, potentially facilitating transmission of new strains to other species, including humans. Or the virus could survive in deer unmutated while continuing to evolve in humans; if humans eventually lack immunity to the strains circulating in deer, those variants could spill back into humans.

How transmission happened initially in these deer, and how it could happen across species, are among the pending questions related to these findings. The research team speculated that white-tailed deer were infected through an environmental pathway -- possibly by drinking contaminated water. Research has shown that the virus is shed in human stool and detectable in wastewater.

The white-tailed deer tested for this study were part of a population control initiative, so they are not a transmission threat.

Though there are an estimated 600,000 white-tailed deer in Ohio and 30 million in the United States, Bowman said this sampling focused on locations close to dense human populations and is not representative of all free-ranging deer.

Read more at Science Daily

Dec 22, 2021

Engineers test an idea for a new hovering rover

Aerospace engineers at MIT are testing a new concept for a hovering rover that levitates by harnessing the moon's natural charge.

Because they lack an atmosphere, the moon and other airless bodies such as asteroids can build up an electric field through direct exposure to the sun and surrounding plasma. On the moon, this surface charge is strong enough to levitate dust more than 1 meter above the ground, much the way static electricity can cause a person's hair to stand on end.

Engineers at NASA and elsewhere have recently proposed harnessing this natural surface charge to levitate a glider with wings made of Mylar, a material that naturally holds the same charge as surfaces on airless bodies. They reasoned that the similarly charged surfaces should repel each other, with a force that lofts the glider off the ground. But such a design would likely be limited to small asteroids, as larger planetary bodies would have a stronger, counteracting gravitational pull.

The MIT team's levitating rover could potentially get around this size limitation. The concept, which resembles a retro-style, disc-shaped flying saucer, uses tiny ion beams to both charge up the vehicle and boost the surface's natural charge. The overall effect is designed to generate a relatively large repulsive force between the vehicle and the ground, in a way that requires very little power. In an initial feasibility study, the researchers show that such an ion boost should be strong enough to levitate a small, 2-pound vehicle on the moon and large asteroids like Psyche.

"We think of using this like the Hayabusa missions that were launched by the Japanese space agency," says lead author Oliver Jia-Richards, a graduate student in MIT's Department of Aeronautics and Astronautics. "That spacecraft operated around a small asteroid and deployed small rovers to its surface. Similarly, we think a future mission could send out small hovering rovers to explore the surface of the moon and other asteroids."

The team's results appear in the current issue of the Journal of Spacecraft and Rockets. Jia-Richards' co-authors are Paulo Lozano, the M. Alemán-Velasco Professor of Aeronautics and Astronautics and director of MIT's Space Propulsion Lab; and former visiting student Sebastian Hampl, now at McGill University.

Ionic force

The team's levitating design relies on the use of miniature ion thrusters, called ionic-liquid ion sources. These small, microfabricated nozzles are connected to a reservoir containing ionic liquid in the form of room-temperature molten salt. When a voltage is applied, the liquid's ions are charged and emitted as a beam through the nozzles with a certain force.

Lozano's team has pioneered the development of ionic thrusters and has used them mainly to propel and physically maneuver small satellites in space. Recently, Lozano had seen research showing the levitating effect of the moon's charged surface on lunar dust. He also considered the electrostatic glider design by NASA and wondered: Could a rover fitted with ion thrusters produce enough repulsive, electrostatic force to hover on the moon and larger asteroids?

To test the idea, the team initially modeled a small, disk-shaped rover with ion thrusters that charged up the vehicle alone. They modeled the thrusters to beam negatively charged ions out from the vehicle, which effectively gave the vehicle a positive charge, similar to the moon's positively charged surface. But they found this was not enough to get the vehicle off the ground.

"Then we thought, what if we transfer our own charge to the surface to supplement its natural charge?" Jia-Richards says.

By pointing additional thrusters at the ground and beaming out positive ions to amplify the surface's charge, the team reasoned that the boost could produce a bigger force against the rover, enough to levitate it off the ground. They drew up a simple mathematical model for the scenario and found that, in principle, it could work.

Based on this simple model, the team predicted that a small rover, weighing about two pounds, could achieve levitation of about one centimeter off the ground, on a large asteroid such as Psyche, using a 10-kilovolt ion source. To get a similar liftoff on the moon, the same rover would need a 50-kilovolt source.
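To see why stronger gravity demands a higher voltage, it helps to write down a force balance. The sketch below is a deliberately crude estimate, not the team's model: it assumes a uniform field E = V/d in the gap between rover and ground and asks what area the resulting electrostatic pressure, eps0 * E^2 / 2, must act on to support the rover's weight. The surface-gravity values and the 1 cm gap are assumptions for illustration.

```python
# Toy force balance -- NOT the MIT team's model. Assumes a uniform field
# E = V/d between rover and ground; electrostatic pressure eps0*E^2/2
# acting on area A must balance the rover's weight m*g.
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def area_for_levitation(mass_kg: float, g: float,
                        volts: float, gap_m: float) -> float:
    """Area (m^2) at which electrostatic pressure supports the weight."""
    pressure = EPS0 * (volts / gap_m) ** 2 / 2.0
    return mass_kg * g / pressure

mass, gap = 0.9, 0.01  # ~2-pound rover hovering 1 cm above the surface
for body, g, volts in [("Psyche", 0.14, 10e3), ("Moon", 1.62, 50e3)]:
    area = area_for_levitation(mass, g, volts, gap)
    radius_cm = 100 * math.sqrt(area / math.pi)
    print(f"{body}: ~{area * 1e4:.0f} cm^2 (disc radius ~{radius_cm:.0f} cm)")
```

A real analysis must track the emitted ion beams and the surrounding plasma, so these numbers only loosely echo the published scaling; they do show, though, why a small, saucer-shaped, 2-pound vehicle is the natural regime.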

"This kind of ionic design uses very little power to generate a lot of voltage," Lozano explains. "The power needed is so small, you could do this almost for free."

In suspension

To be sure the model represented what could happen in a real environment in space, they ran a simple scenario in Lozano's lab. The researchers manufactured a small hexagonal test vehicle weighing about 60 grams and measuring about the size of a person's palm. They installed one ion thruster pointing up, and four pointing down, and then suspended the vehicle over an aluminum surface from two springs calibrated to counteract Earth's gravitational force. The entire setup was placed within a vacuum chamber to simulate the airless environment of the moon and asteroids.

The researchers also suspended a tungsten rod from the experiment's springs, and used its displacement to measure how much force the thrusters produced each time they were fired. They applied various voltages to the thrusters and measured the resulting forces, which they then used to calculate the height the vehicle alone could have levitated. They found these experimental results matched with predictions of the same scenario from their model, giving them confidence that its predictions for hovering a rover on Psyche and the moon were realistic.
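The displacement-to-force conversion in that setup is just Hooke's law, F = k * dx. A minimal sketch with hypothetical numbers (the article does not give the spring constants or displacements):

```python
# Minimal sketch of the measurement idea; the spring constant and the
# displacement below are hypothetical, not values from the experiment.
k_spring = 2.5            # effective spring constant, N/m (assumed)
dx = 0.0008               # extra displacement when thrusters fire, m (assumed)
thrust = k_spring * dx    # Hooke's law: inferred thruster force, N
print(f"inferred thrust: {thrust * 1e6:.0f} micronewtons")  # 2000 uN here
```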

The current model is designed to predict the conditions required to simply achieve levitation, which happened to be about 1 centimeter off the ground for a 2-pound vehicle. The ion thrusters could generate more force with larger voltage to lift a vehicle higher off the ground. But Jia-Richards says the model would need revising, as it doesn't account for how the emitted ions would behave at higher altitudes.

"In principle, with better modeling, we could levitate to much higher heights," he says.

In that case, Lozano says future missions to the moon and asteroids could deploy rovers that use ion thrusters to safely hover and maneuver over unknown, uneven terrain.

Read more at Science Daily

Exquisitely preserved embryo found inside fossilized dinosaur egg

A 72- to 66-million-year-old embryo found inside a fossilised dinosaur egg sheds new light on the link between the behaviour of modern birds and dinosaurs, according to a new study.

The embryo, dubbed 'Baby Yingliang', was discovered in the Late Cretaceous rocks of Ganzhou, southern China, and belongs to a toothless theropod dinosaur, or oviraptorosaur. Among the most complete dinosaur embryos ever found, the fossil suggests that these dinosaurs developed bird-like postures close to hatching.

Scientists found the posture of 'Baby Yingliang' unique among known dinosaur embryos -- its head lies below the body, with the feet on either side and the back curled along the blunt end of the egg. Previously unrecognised in dinosaurs, this posture is similar to that of modern bird embryos.

In modern birds, such postures are related to 'tucking' -- a behaviour controlled by the central nervous system and critical for hatching success. After studying the egg and embryo, the researchers believe that such pre-hatching behaviour, previously considered unique to birds, may have originated among non-avian theropods.

Led by scientists from the University of Birmingham and China University of Geosciences (Beijing), the research team, from institutions in China, the UK and Canada, today published its findings in iScience.

The embryo is articulated in its life position without much disruption from fossilisation. Estimated to be 27 cm long from head to tail, the creature lies inside a 17-cm-long elongatoolithid egg. The specimen is housed in the Yingliang Stone Natural History Museum.

Fion Waisum Ma, joint first author and PhD researcher at the University of Birmingham, said: "Dinosaur embryos are some of the rarest fossils and most of them are incomplete with the bones dislocated. We are very excited about the discovery of 'Baby Yingliang' -- it is preserved in a great condition and helps us answer a lot of questions about dinosaur growth and reproduction with it.

"It is interesting to see this dinosaur embryo and a chicken embryo pose in a similar way inside the egg, which possibly indicates similar prehatching behaviours."

'Baby Yingliang' was identified as an oviraptorosaur based on its deep, toothless skull. Oviraptorosaurs are a group of feathered theropod dinosaurs, closely related to modern-day birds, known from the Cretaceous of Asia and North America. Their variable beak shapes and body sizes are likely to have allowed them to adopt a wide range of diets, including herbivory, omnivory and carnivory.

Birds are known to develop a series of tucking postures, in which they bend their body and bring their head under their wing, soon before hatching. Embryos that fail to attain such postures have a higher chance of death due to unsuccessful hatching.

By comparing 'Baby Yingliang' with the embryos of other theropods, long-necked sauropod dinosaurs and birds, the team proposed that tucking behaviour, which was considered unique to birds, first evolved in theropod dinosaurs many tens or hundreds of millions of years ago. Additional discoveries of embryo fossils would be invaluable to further test this hypothesis.

Professor Lida Xing from China University of Geosciences (Beijing), joint first author of the study, said: "This dinosaur embryo was acquired by the director of Yingliang Group, Mr Liang Liu, as suspected egg fossils around the year 2000. During the construction of Yingliang Stone Natural History Museum in the 2010s, museum staff sorted through the storage and discovered the specimens.

"These specimens were identified as dinosaur egg fossils. Fossil preparation was conducted and eventually unveiled the embryo hidden inside the egg. This is how 'Baby Yingliang' was brought to light."

Read more at Science Daily

The Hitchhiker’s guide to the soil

The interaction of fungi and bacteria in the transport of viruses in the soil ecosystem has been examined by a UFZ research team in a study recently published in the journal of the International Society for Microbial Ecology (ISME Journal). The scientists demonstrated a novel mechanism of viral transport: bacterial shuttles travelling along fungal hyphae. The bacteria, in turn, benefit from taking viruses along as they colonise new habitats.

There are up to one billion viruses in just one gram of soil. However, little is known about their influence on the nutrient and carbon cycle in the soil ecosystem. Soils can sometimes be inhospitable places. Dry zones and air-filled soil pores are almost impossible obstacles for bacteria and viruses. In order for them to move around -- for example, to get to a place with better conditions -- they need water. But the situation is not completely hopeless, because the soil contains an excellently developed infrastructure: the fungal network. Fungi are always in search of water and nutrients. To do this, they form hyphae -- long, thin threads that run through the soil as a widely branched network. Fungi are thus able to bridge dry and nutrient-poor zones.

In an earlier study, UFZ researchers showed that soil bacteria use the mucus-covered fungal hyphae in order to move around on them and thus reach new food sources. In their current study, the research team led by environmental microbiologist Dr. Lukas Y. Wick has now been able to identify another beneficiary of the underground fungal network. "Phages, i.e. viruses that have bacteria as their sole target, also travel this fungal highway," says Wick. "Not independently but rather by hitching a ride with bacteria. Physical forces cause the viruses to adhere to the surface of bacteria -- much like mussels adhere to the hull of a ship." In this way, viruses hitch a ride through the soil -- until they arrive at a place that is better suited for them. But what exactly is a good place for soil-dwelling viruses?

"Wherever the host bacteria of the viruses are found," says Wick. "Not every phage can infect every bacterium," says Wick. "Because of a kind of lock-and-key principle, phages can smuggle their genetic material only into their respective host bacteria." If this succeeds, the bacterium is reprogrammed to produce new phages. The bacterial cell then bursts, thereby releasing the phages of the next generation. These can then once again infect new host bacteria. "The phages are highly efficient at this. This obviously also gives the shuttle bacteria a real advantage," says Wick. "We were able to show that soil bacteria with phages attached to them were able to spread far better in their new location than bacteria without this viral baggage."

It is well known from macro-ecology that migratory species can cause problems for the established residents of a habitat, and that invasive species can bring pathogens that increasingly contribute to the displacement of native species. The UFZ research group therefore interpreted their data using MAFIA (the MAcroecological Framework for Invasive Aliens), a well-known model of invasion ecology. "With our fungus-bacteria-phage system, we were able to detect the same invasion patterns on a micro-scale as we did in the macro-ecological system," says Wick. "And because our microbial laboratory model can be quickly and easily sampled and modified, it could be used as a model system to answer various questions and hypotheses in invasion ecology -- such as the transport of pests or pathogens."

For their studies, the research team recreated a micro-attack of bacteria and phages in the laboratory. For this purpose, two zones with culture medium were used. These were connected to each other only via fungal hyphae. "In Zone A, we used typical soil bacteria as shuttles as well as phages that cannot harm this bacterial species," explains Xin You, first author of the study and PhD student at the UFZ Department of Environmental Microbiology. "Zone B was colonised with a phage-specific host bacterium." In different experimental approaches, the research team had the shuttle bacteria travel along the fungal hyphae highway with and without viral baggage. "The result was clear: the bacteria-phage duo had a clear advantage in the invasion of Zone B," says You. "The shuttle bacteria benefited from the power of the phages, which effectively disabled their host bacteria and thus also eliminated food competition for the invading bacteria."

Read more at Science Daily

Where does the special scent of thyme and oregano come from?

Thyme and oregano are not only popular herbs for cooking, but also valuable medicinal plants. Their essential oils contain thymol and carvacrol which impart the typical flavors and are medically important. A team from Martin Luther University Halle-Wittenberg (MLU) and Purdue University in the USA has now fully identified how the plants produce these two substances. The results could simplify the breeding process and improve the pharmaceutical value of thyme and oregano. The study appears in the journal Proceedings of the National Academy of Sciences.

Thymol, which is mainly extracted from thyme, has secretolytic, antibacterial and antispasmodic properties. The plant is therefore often used in tea for colds, cough syrups and as an herbal remedy for bronchitis. In contrast, oregano contains particularly high levels of carvacrol, which has similar properties. Its smell is often associated with pizza sauce and other Mediterranean dishes. Both substances are chemically closely related and are produced by thyme and oregano in multi-stage processes. "It's like a production line in a factory: Every step needs to be coordinated and the desired product only emerges when the steps are carried out in the right order," explains Professor Jörg Degenhardt from the Institute of Pharmacy at MLU. Instead of machines, specific biomolecules -- enzymes -- carry out this work in special glands on the surface of the leaves.

Together with researchers from Purdue University in the USA, the team in Halle decoded the individual production steps, thereby solving a decades-old mystery. "For a long time it was assumed that p-Cymene was an intermediate product of thymol and carvacrol synthesis. However, it was chemically not feasible for thymol or carvacrol to ultimately be produced from this substance," says Degenhardt. In fact, normal production of the two substances does not produce any p-Cymene at all, but rather an extremely unstable intermediate product. "This is only present for a few moments in the plant cells, which is why observing it is so difficult. However, it represents the hitherto missing step in the synthesis of the two substances," says Degenhardt. The processes start out the same for both thymol and carvacrol; only in step four do different enzymes that produce the respective substances come into play. In a fifth step, thymol and carvacrol can be further converted to thymohydroquinone and thymoquinone, which have anti-inflammatory and anti-tumour effects.

The researchers were also able to use these new findings to genetically reprogramme a species of tobacco, the model plant N. benthamiana, to produce thymol. "Even though this only happened in small quantities, it meant that we were able to fully understand the synthesis pathways and the associated enzymes," summarises Degenhardt.

Read more at Science Daily

First genetic risk factors identified for sudden unexplained death in children after age one

A new study found that changes in specific genes may contribute to the roughly 400 sudden unexplained deaths in children (SUDC) that occur each year among children aged one year and older -- a category separate from sudden infant death syndrome (SIDS).

Children younger than 1 year old who die suddenly are diagnosed with SIDS, and older children with SUDC. But the conditions likely have many factors in common, say the study authors. Although SIDS causes 3 times as many deaths as SUDC each year, it receives more than 20 times the research funding. Parents who have lost a child older than age 1 have had few options to support their search for answers, and no research organization to join.

For this reason, study author Laura Gould, after losing her daughter, Maria, to SUDC at the age of 15 months in 1997, asked NYU Langone Health neurologist Orrin Devinsky, MD, to co-found the SUDC Registry and Research Collaborative (SUDCRRC). Since 2014, registry staff have worked with bereaved parents to enroll their families in the registry, which collects and analyzes genetic specimens from parents and their deceased child. Such molecular autopsies are not currently part of the standard cause-of-death investigations conducted by most medical examiners' and coroners' offices.

Published online December 20 in the Proceedings of the National Academy of Sciences, the new study is the first to identify genetic differences present in a large group of SUDC cases, most of which involved children who died between the ages of 1 and 4.

Led by researchers from the NYU Grossman School of Medicine, the study analyzed the DNA codes of 124 sets of parents, and of the child that each couple lost to SUDC. They found that nearly 9 percent -- or 11 of the 124 children -- had DNA code changes in genes that regulate calcium function. Calcium-based signals are important for brain cell and heart muscle function. When such signals are abnormal, they may cause arrhythmias (abnormal heart rhythms) or seizures, both of which increase the risk of sudden death.

The researchers discovered that most of these DNA changes were new. The mutations were not inherited, instead arising randomly in the children of parents who did not have that genetic change, says Gould. Thus, if SUDC occurs in one child, it is unlikely to occur again if the same couple has another child. This provides some reassurance to families who want to have another child.

"Our study is the largest of its kind to date, the first to prove that there are definite genetic causes of SUDC, and the first to fill in any portion of the risk picture," says senior study author Richard Tsien, DPhil, chair of the Department of Neuroscience and Physiology and director of the Neuroscience Institute at NYU Langone. "Along with providing comfort to parents, new findings about genetic changes involved will accumulate with time, reveal the mechanisms responsible, and serve as the basis for new treatment approaches."

First Hints

"We focused on 137 genes linked by past studies to cardiac arrhythmias, epilepsy , and related conditions, because seizures and sudden cardiac death are known to be more prevalent in SUDC," says study author Dr. Devinsky, director of NYU Langone's Comprehensive Epilepsy Center. "Among the children that died, we found a tenfold greater frequency of genetic changes in these genes than in the general population."

In a partial explanation for these trends, the study's statistical analysis found that the genetic changes present in the children with SUDC occurred in clusters with similar functions, most controlling calcium channels in brain and heart muscle cells. After receiving the right signal, a cell opens the channels, enabling calcium ions to rush across membranes to create an electric current. In neurons this current triggers signals along nerve pathways, and in heart muscle cells, contractions as the heart beats.

Mutations found in the current study are known to slow calcium channel inactivation, prolong the current running through them, and potentially lead to abnormal heart rhythms that can cause the heart to stop, say the study authors. The two genes with de novo mutations in calcium processing found in more than one child in the study were RYR2 and CACNA1C, both of which are known to be linked to a cardiac arrhythmia. Other genes mutated in the SUDC group have been linked to seizures.

In addition, more than 91 percent of the children died while asleep or resting, including 50 percent of those with de novo mutations affecting genes involved with calcium physiology in the heart and brain -- CACNA1C, RYR2, CALM1, and TNNI3. Moving forward, the team plans larger studies to look at the role of neurohumoral status (sleep vs. waking, rest vs. exercise), identify more mutations that may be harmful in SUDC, and determine if the calcium channel flaws cause more dire problems in brain cells or heart muscle.

Read more at Science Daily

Dec 21, 2021

Are black holes and dark matter the same?

Proposing an alternative model for how the universe came to be, a team of astrophysicists suggests that all black holes -- from those as tiny as a pinhead to those covering billions of miles -- were created instantly after the Big Bang and account for all dark matter.

That's the implication of a study by astrophysicists at the University of Miami, Yale University, and the European Space Agency that suggests that black holes have existed since the beginning of the universe and that these primordial black holes could be the as-yet-unexplained dark matter. If proven true with data from the James Webb Space Telescope, launched this month, the discovery may transform scientific understanding of the origins and nature of two cosmic mysteries: dark matter and black holes.

"Our study predicts how the early universe would look if, instead of unknown particles, dark matter was made by black holes formed during the Big Bang -- as Stephen Hawking suggested in the 1970s," said Nico Cappelluti, an assistant professor of physics at the University of Miami and first author of the study slated for publication in The Astrophysical Journal.

"This would have several important implications," continued Cappelluti, who this year expanded the research he began at Yale as the Yale Center for Astronomy and Astrophysics Prize Postdoctoral Fellow. "First, we would not need 'new physics' to explain dark matter. Moreover, this would help us to answer one of the most compelling questions of modern astrophysics: How could supermassive black holes in the early universe have grown so big so fast? Given the mechanisms we observe today in the modern universe, they would not have had enough time to form. This would also solve the long-standing mystery of why the mass of a galaxy is always proportional to the mass of the super massive black hole in its center."

Dark matter, which has never been directly observed, is thought to make up most of the matter in the universe and to act as the scaffolding upon which galaxies form and develop. Black holes, on the other hand, which can be found at the centers of most galaxies, have been observed. A black hole is a point in space where matter is so tightly compacted that it creates intense gravity.

Co-authored by Priyamvada Natarajan, professor of astronomy and physics at Yale, and Günther Hasinger, director of science at the European Space Agency (ESA), the new study suggests that so-called primordial black holes of all sizes account for all dark matter in the universe.

"Black holes of different sizes are still a mystery," Hasinger explained. "We don't understand how supermassive black holes could have grown so huge in the relatively short time available since the universe existed."

Their model tweaks the theory first proposed by Hawking and fellow physicist Bernard Carr, who argued that in the first fraction of a second after the Big Bang, tiny fluctuations in the density of the universe may have created an undulating landscape with "lumpy" regions that had extra mass. These lumpy areas would collapse into black holes.

That theory did not gain scientific traction, but Cappelluti, Natarajan, and Hasinger suggest it could be valid with some slight modifications. Their model shows that the first stars and galaxies would have formed around black holes in the early universe. They also propose that primordial black holes would have had the ability to grow into supermassive black holes by feasting on gas and stars in their vicinity, or by merging with other black holes.
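A standard back-of-envelope estimate (general cosmology, not a calculation from this paper) shows why formation time sets the mass in the Hawking-Carr picture: a collapsing region can enclose at most roughly the mass within the cosmological horizon at that moment, M ~ c^3 * t / G.

```python
# Horizon-mass estimate, M ~ c^3 * t / G: a textbook approximation shown
# for illustration, not a result from the study.
C = 2.998e8        # speed of light, m/s
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg

for t in (1e-23, 1e-5, 1.0):  # formation time, seconds after the Big Bang
    m = C**3 * t / G
    print(f"t = {t:.0e} s -> M ~ {m:.1e} kg ({m / M_SUN:.1e} solar masses)")
# Earlier formation gives asteroid-mass holes; by t ~ 1 s the horizon
# already encloses ~2e5 solar masses -- seeds for supermassive black holes.
```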

"Primordial black holes, if they do exist, could well be the seeds from which all the supermassive black holes form, including the one at the center of the Milky Way," Natarajan said. "What I find personally super exciting about this idea is how it elegantly unifies the two really challenging problems that I work on -- that of probing the nature of dark matter and the formation and growth of black holes -- and resolves them in one fell swoop."

Primordial black holes also may resolve another cosmological puzzle: the excess of infrared radiation, synced with X-ray radiation, that has been detected from distant, dim sources scattered around the universe. The study authors said growing primordial black holes would present "exactly" the same radiation signature.

And, best of all, the existence of primordial black holes may be proven -- or disproven -- in the near future, courtesy of the Webb telescope scheduled to launch from French Guiana before the end of the year and the ESA-led Laser Interferometer Space Antenna (LISA) mission planned for the 2030s.

Developed by NASA, ESA, and the Canadian Space Agency to succeed the Hubble Space Telescope, the Webb can look back more than 13 billion years. If dark matter is comprised of primordial black holes, more stars and galaxies would have formed around them in the early universe, which is precisely what the cosmic time machine will be able to see.

Read more at Science Daily

Could acid-neutralizing life-forms make habitable pockets in Venus’ clouds?

It's hard to imagine a more inhospitable world than our closest planetary neighbor. With an atmosphere thick with carbon dioxide, and a surface hot enough to melt lead, Venus is a scorched and suffocating wasteland where life as we know it could not survive. The planet's clouds are similarly hostile, blanketing the planet in droplets of sulfuric acid caustic enough to burn a hole through human skin.

And yet, a new study supports the longstanding idea that if life exists, it might make a home in Venus' clouds. The study's authors, from MIT, Cardiff University, and Cambridge University, have identified a chemical pathway by which life could neutralize Venus' acidic environment, creating a self-sustaining, habitable pocket in the clouds.

Within Venus' atmosphere, scientists have long observed puzzling anomalies -- chemical signatures that are hard to explain, such as small concentrations of oxygen and nonspherical particles unlike sulfuric acid's round droplets. Perhaps most puzzling is the presence of ammonia, a gas that was tentatively detected in the 1970s, and that by all accounts should not be produced through any chemical process known on Venus.

In their new study, the researchers modeled a set of chemical processes to show that if ammonia is indeed present, the gas would set off a cascade of chemical reactions that would neutralize surrounding droplets of sulfuric acid and could also explain most of the anomalies observed in Venus' clouds. As for the source of ammonia itself, the authors propose that the most plausible explanation is of biological origin, rather than a nonbiological source such as lightning or volcanic eruptions.

As they write in their study, the chemistry suggests that "life could be making its own environment on Venus."

This tantalizing new hypothesis is testable, and the researchers provide a list of chemical signatures for future missions to measure in Venus' clouds, to either confirm or contradict their idea.

"No life that we know of could survive in the Venus droplets," says study co-author Sara Seager, the Class of 1941 Professor of Planetary Sciences in MIT's Department of Earth, Atmospheric and Planetary Sciences (EAPS). "But the point is, maybe some life is there, and is modifying its environment so that it is livable."

The study's co-authors include Janusz Petkowski, William Bains, and Paul Rimmer, who are affiliated with MIT, Cardiff University, and Cambridge University.

Life suspect

"Life on Venus" was a trending phrase last year, when scientists including Seager and her co-authors reported the detection of phosphine in the planet's clouds. On Earth, phosphine is a gas that is produced mainly through biological interactions. The discovery of phosphine on Venus leaves room for the possibility of life. Since then, however, the discovery has been widely contested.

"The phosphine detection ended up becoming incredibly controversial," Seager says. "But phosphine was like a gateway, and there's been this resurgence in people studying Venus."

Inspired to look more closely, Rimmer began combing through data from past missions to Venus. In these data, he identified anomalies, or chemical signatures, in the clouds that had gone unexplained for decades. In addition to the presence of oxygen and nonspherical particles, anomalies included unexpected levels of water vapor and sulfur dioxide.

Rimmer proposed the anomalies might be explained by dust. He argued that minerals, swept up from Venus' surface and into the clouds, could interact with sulfuric acid to produce some, though not all, of the observed anomalies. He showed the chemistry checked out, but the physical requirements were unfeasible: A massive amount of dust would have to loft into the clouds to produce the observed anomalies.

Seager and her colleagues wondered if the anomalies could be explained by ammonia. In the 1970s, the gas was tentatively detected in the planet's clouds by the Venera 8 and Pioneer Venus probes. The presence of ammonia, or NH3, was an unsolved mystery.

"Ammonia shouldn't be on Venus," Seager says. "It has hydrogen attached to it, and there's very little hydrogen around. Any gas that doesn't belong in the context of its environment is automatically suspicious for being made by life."

Livable clouds

If the team were to assume that life was the source of ammonia, could this explain the other anomalies in Venus' clouds? The researchers modeled a series of chemical processes in search of an answer.

They found that if life were producing ammonia in the most efficient way possible, the associated chemical reactions would naturally yield oxygen. Once present in the clouds, ammonia would dissolve in droplets of sulfuric acid, effectively neutralizing the acid to make the droplets relatively habitable. The introduction of ammonia into the droplets would transform their formerly round, liquid shape into more of a nonspherical, salt-like slurry. Once ammonia dissolved in sulfuric acid, the reaction would trigger any surrounding sulfur dioxide to dissolve as well.
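The neutralization step itself is textbook acid-base chemistry. As a simple illustration -- the study models a fuller cascade of cloud reactions, not just these net equations -- ammonia converts sulfuric acid into ammonium salts:

```latex
% Textbook net neutralization reactions, shown for illustration only.
\begin{align*}
\mathrm{NH_3} + \mathrm{H_2SO_4} &\longrightarrow \mathrm{NH_4HSO_4}
  && \text{(ammonium bisulfate)} \\
2\,\mathrm{NH_3} + \mathrm{H_2SO_4} &\longrightarrow \mathrm{(NH_4)_2SO_4}
  && \text{(ammonium sulfate)}
\end{align*}
```

Salts like these, rather than pure liquid acid, would be consistent with the nonspherical, slurry-like particles described above.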

The presence of ammonia then could indeed explain most of the major anomalies seen in Venus' clouds. The researchers also show that sources such as lightning, volcanic eruptions, and even a meteorite strike could not chemically produce the amount of ammonia required to explain the anomalies. Life, however, might.

In fact, the team notes that there are life-forms on Earth -- particularly in our own stomachs -- that produce ammonia to neutralize and make livable an otherwise highly acidic environment.

"There are very acidic environments on Earth where life does live, but it's nothing like the environment on Venus -- unless life is neutralizing some of those droplets," Seager says.

Scientists may have a chance to check for the presence of ammonia, and signs of life, in the next several years with the Venus Life Finder Missions -- a set of proposed, privately funded missions, with Seager as principal investigator, that plan to send spacecraft to Venus to measure its clouds for ammonia and other signatures of life.

Read more at Science Daily

Plants as cold specialists from the ice age

As cold relics in an increasingly warming world, plants of the spoonweed group repeatedly and rapidly adapted to a changing climate during the Ice Ages of the last two million years. An international team of evolutionary biologists and botanists led by Prof. Dr Marcus Koch of Heidelberg University used genomic analyses to study what factors favour adaptation to extreme climatic conditions. The evolutionary history of the Brassicaceae family provides insights into how plants may be able to cope with climate change in the future.

"With the challenges of increasing global warming, developing a basic understanding of how plants adapted to severe environmental change is increasingly urgent," stresses Prof. Koch, whose "Biodiversity and Plant Systematics" working group conducts research at the Centre for Organismal Studies (COS). In many cases, their evolutionary past also strongly determines the future adaptability of plants as well as their ability to develop into new forms and types, he continues. The spoonweed genus, or Latin Cochlearia, from the Brassicaceae family separated from its Mediterranean relatives more than ten million years ago. While their direct descendants specialised in response to drought stress, the spoonweeds conquered the cold and arctic habitats at the beginning of the Ice Age 2.5 million years ago.

In controlled lab experiments, the researchers studied cultivated species from both groups to determine how they repeatedly adapted during the relatively rapidly alternating cold and warm periods of the last two million years. A 'cold training' experiment indicated that physiological adaptations to drought and salt stress early in their evolution later helped the plants develop a high tolerance to cold: although the researchers expected the two groups to respond quite differently, there appeared to be no significant difference in the response to cold stress between the cold specialists of the Arctic and Alpine regions and the drought specialists or salt-adapted species from the Mediterranean.

Furthermore, the newly emerged cold-adapted plants developed separate gene pools that frequently came into contact with one another in the cold regions. Because spoonweeds have hardly any genetic barriers to contact between species, populations with multiple sets of chromosomes developed, which were subsequently and continually reduced in size. "Time after time, these species were then able to occupy cold ecological niches," explains Marcus Koch.

While the gene pool of the cold specialists from the Arctic expanded, the European spoonweed population has shrunk since the last Ice Age. Cold habitats in Europe are disappearing in the face of significant global warming, thus seriously endangering all spoonweed species. Only the Danish spoonweed, with its abundant sets of chromosomes, remains unscathed and in some cases is even spreading. "It is the only species of spoonweed that changed its life cycle and flourishes in salt and sand locations. In some of its ecological features, it resembles its faraway Mediterranean cousins," adds Prof. Koch. For the researchers, the physiological adaptability of the spoonweeds makes them a promising model system to simultaneously study adaptations to drought, cold, and salt stress.

Read more at Science Daily

Extinct reptile discovery reveals earliest origins of human teeth, study finds

A new extinct reptile species has shed light on how our earliest ancestors became top predators by modifying their teeth in response to environmental instability around 300 million years ago.

In findings published in Royal Society Open Science, researchers at the University of Bristol have discovered that this evolutionary adaptation laid the foundations for the incisor, canine and molar teeth that all mammals -- including humans -- possess today.

Shashajaia is one of the most primitive members of a group called the Sphenacodontoidea, which includes the famous sail-backed Dimetrodon and the mammal-like therapsids that eventually evolved into mammals. It is remarkable for its age and anatomy, possessing a unique set of teeth that set it apart from other synapsids -- the animal lineage to which mammals belong -- of its time.

Dr Suresh Singh of the School of Earth Sciences explained: "The teeth show clear differentiation in shape between the front and back of the jaw, organised into distinct regions. This is the basic precursor of what mammals have today -- incisors and canines up front, with molars in the back. This is the oldest record of such teeth in our evolutionary tree."

The novel dentition of Shashajaia demonstrates that large, canine-like differentiated teeth were present in synapsids by the Late Carboniferous period -- a time famous for giant insects and the global swampy rainforests that produced much of our coal deposits.

By analytically comparing the tooth variation observed in Shashajaia with that of other synapsids, the study suggests that distinctive, specialised teeth likely emerged in our synapsid ancestors as a predatory adaptation to help them catch prey. Around 300 million years ago, global climate change saw the once-prevalent Carboniferous wetlands replaced by more arid, seasonal environments, and these new, more changeable conditions brought a change in the availability and diversity of prey.
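
As a toy illustration of that kind of comparison, the sketch below (Python; the measurements and the simple index are invented for illustration and are not the study's actual method) quantifies tooth differentiation along a jaw as the coefficient of variation of crown heights, a crude "heterodonty index":

    # Toy "heterodonty index": coefficient of variation of tooth crown heights
    # along a jaw. All measurements are invented; the study's real analysis
    # compares many shape variables across synapsid taxa.
    import statistics

    def heterodonty_index(crown_heights_mm):
        """Higher values indicate a more differentiated (heterodont) tooth row."""
        return statistics.stdev(crown_heights_mm) / statistics.mean(crown_heights_mm)

    uniform_row = [4.0, 4.1, 3.9, 4.0, 4.2, 4.0]         # near-identical teeth
    differentiated_row = [3.0, 6.5, 9.0, 4.0, 3.5, 3.0]  # enlarged canine-like tooth

    print(f"uniform row:        {heterodonty_index(uniform_row):.2f}")
    print(f"differentiated row: {heterodonty_index(differentiated_row):.2f}")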

Lead author Dr Adam Huttenlocker of the University of Southern California said: "Canine-like teeth in small sphenacodonts like Shashajaia might have facilitated a fast, raptorial bite in riparian habitats where a mix of terrestrial and semi-aquatic prey could be found in abundance."

The new reptile is one of the oldest known synapsids. It was named "Shashajaia bermani," which translates as Berman's bear heart, to honour both the 51-year career of veteran palaeontologist Dr David Berman of the Carnegie Museum of Natural History and the local Navajo people of the discovery site within the Bears Ears National Monument, Utah.

Dr Singh said: "The study is a testament to Dr Berman, who originally discovered the fossil site in 1989, and to his decades of work on synapsids and other early tetrapods from the Bears Ears region of Utah, which helped justify the designation of Bears Ears National Monument in 2016."

The site is located within an area known as the Valley of the Gods and is of huge importance to palaeontologists.

Read more at Science Daily

New muscle layer discovered on the jaw

Human anatomy still has a few surprises in store for us: researchers at the University of Basel have discovered a previously overlooked section of our jaw muscles and described this layer in detail for the first time.

The masseter muscle is the most prominent of the jaw muscles. If you place your fingers on the back of your cheeks and press your teeth together, you'll feel the muscle tighten. Anatomy textbooks generally describe the masseter as consisting of one superficial and one deep part.

Now, researchers led by Dr. Szilvia Mezey from the Department of Biomedicine at the University of Basel and Professor Jens Christoph Türp from the University Center for Dental Medicine Basel (UZB) have described the structure of the masseter muscle as consisting of an additional third, even deeper layer. In the scientific journal Annals of Anatomy, they propose that this layer be given the name Musculus masseter pars coronidea -- in other words, the coronoid section of the masseter -- because the newly described layer of muscle is attached to the muscular (or "coronoid") process of the lower jaw.

The anatomical study was based on detailed examination of formalin-fixed jaw musculature, computed tomography (CT) scans and the analysis of stained tissue sections from deceased individuals who had donated their bodies to science, supplemented by MRI data from a living person.

As if a new animal species had been discovered

"This deep section of the masseter muscle is clearly distinguishable from the two other layers in terms of its course and function," explains Mezey. The arrangement of the muscle fibers, she says, suggests that this layer is involved in the stabilization of the lower jaw. It also appears to be the only part of the masseter that can pull the lower jaw backwards -- that is, toward the ear.

A look at historical anatomy studies and textbooks reveals that the structure of the masseter muscle has already raised questions in the past. In a previous edition of Gray's Anatomy, from the year 1995, the editors also describe the masseter muscle as having three layers, although the cited studies were based on the jaw musculature of other species and partly contradicted one another.

Other individual studies from the early 2000s also reported three layers, but they divided the superficial section of the masseter into two layers and agreed with standard works in their description of the deeper section.

Read more at Science Daily

Dec 20, 2021

Sauropod dinosaurs were restricted to warmer regions of Earth

Giant, long-necked sauropods, thought to include the largest land animals ever to have existed, preferred to live in warmer, more tropical regions on Earth, suggesting they may have had a different physiology from other dinosaurs, according to a new study led by researchers at UCL and the University of Vigo.

The study, published in the journal Current Biology, investigated the enigma of why sauropod fossils are only found at lower latitudes, while fossils of other main dinosaur types seem ubiquitously present, with many located in the polar regions.

The researchers analysed the fossil record across the Mesozoic era (the time of the dinosaurs), lasting from around 230 to 66 million years ago, looking at occurrences of fossils of the three main dinosaur types: sauropods, which include Brontosaurus and Diplodocus; theropods, which include Velociraptor and Tyrannosaurus rex (both groups belong to the "lizard-hipped" saurischians); and the "bird-hipped" ornithischians, such as Triceratops.

Combining this fossil data with data about climate throughout the period, along with information about how continents have moved across the globe, the researchers concluded that sauropods were restricted to warmer, drier habitats than other dinosaurs. These habitats were likely to be open, semi-arid landscapes, similar to today's savannahs.

Co-author Dr Philip Mannion (UCL Earth Sciences) said: "Our research shows that some parts of the planet always seemed to be too cold for sauropods. They seem to have avoided any temperatures approaching freezing. Other dinosaur types, in contrast, could thrive in Earth's polar regions, from innermost Antarctica to polar Alaska -- which, due to the warmer climate, were ice-free, with lush vegetation.

"This suggests sauropods had different thermal requirements from other dinosaurs, relying more on their external environment to heat their bodies -- slightly closer to being 'cold-blooded', like modern-day reptiles. Their grand size hints that this physiology may have been unique."

First author Dr Alfio Alessandro Chiarenza, formerly of UCL who is now based at the University of Vigo, Spain, said: "It may be that sauropods were physiologically incapable of thriving in colder regions, or that they thrived less well in these areas than their dinosaurian cousins and were outcompeted.

"A mix of features may have helped sauropods shed heat more easily than mammals do today. Their long necks and tails would have given them a larger surface area, and they may have had a respiratory system more akin to birds, which is much more efficient.

"Some species of theropods and ornithischians are known to have had feathers or downy fur helping them retain body warmth. This suggests they may have generated their own body heat. For sauropods, however, there is no evidence of this kind of insulation.

"Sauropods' strategies for keeping their eggs warm may also have differed from the other dinosaurs. Theropods probably warmed eggs by sitting on them, whereas ornithischians seem to have used heat generated by decaying plants. Sauropods, meanwhile, may have buried their eggs, relying on heat from the sun and the ground."

In their paper, the researchers noted that the fossil record showed zero occurrences of sauropods above a latitude of 50 degrees north -- an area encompassing most of Canada, Russia, northern Europe and the UK -- or below 65 degrees south, encompassing Antarctica. In contrast, there are rich records for theropods and ornithischians living above 50 degrees north in later periods (from 145 million years ago).

To test if this was a true reflection of where sauropods lived, researchers used a statistical technique to adjust for gaps in the fossil record, and also analysed where the highest diversities of dinosaur types were in different periods throughout the Mesozoic era.

They combined fossil data with climate data, allowing an estimate of the temperature ranges of the dinosaur types' habitats, finding that sauropods' range across the latitudes was more restricted during colder periods.

They then used habitat modelling to infer which regions of the globe would likely be suitable for sauropods and the other dinosaur types to live.
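
A rough sense of the occurrence screening behind these steps can be given in a few lines of Python. In the sketch below, every record and temperature estimate is an invented placeholder, and the study's real methods (sampling corrections, habitat models) are far more involved; it simply tabulates latitudinal extremes and the coldest estimated habitat per group:

    # Toy screening of fossil occurrences: latitudinal extremes and coldest
    # estimated habitat temperature per dinosaur group. All values invented.
    from collections import defaultdict

    # (group, palaeolatitude in degrees, estimated mean annual temperature, C)
    occurrences = [
        ("sauropod", 35.0, 24.0), ("sauropod", -20.0, 26.5), ("sauropod", 48.0, 18.0),
        ("theropod", 62.0, 6.0), ("theropod", 10.0, 25.0), ("theropod", -70.0, 4.5),
        ("ornithischian", 71.0, 5.0), ("ornithischian", -68.0, 6.0),
    ]

    by_group = defaultdict(list)
    for group, lat, temp in occurrences:
        by_group[group].append((lat, temp))

    for group, records in by_group.items():
        lats = [lat for lat, _ in records]
        temps = [t for _, t in records]
        print(f"{group:14s} latitudes {min(lats):+5.1f} to {max(lats):+5.1f}, "
              f"coldest habitat {min(temps):.1f} C")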

While in the past it was believed that dinosaurs were ectothermic ("cold-blooded"), like reptiles today, relying on the external environment to heat their bodies, it is now thought they were closer to "warm-blooded" mammals, generating some of their own body heat (endothermic).

Read more at Science Daily

Discovering sources of Roman silver coinage from the Iberian Peninsula

Despite its prior status as a luxury commodity, silver became widely used for coinage in the Mediterranean world from the 7th century BCE onward, providing a standardized monetary system for ancient civilizations, including Rome. However, the deposits that supplied the silver for Roman coinage have largely been exhausted, making it difficult to determine which ones Roman miners exploited.

A new study published in the journal Geology yesterday evaluated silver sources from different mining provinces in the Iberian Peninsula to determine which locations may have been mined for silver to produce Roman coinage.

"The control of silver sources was a major geopolitical issue, and the identification of Roman silver sources may help archaeologists to reconstruct ancient fluxes of precious metals and to answer important historical questions," said Jean Milot, the lead author of this study.

The Iberian Peninsula, which includes modern Spain and Portugal, hosts world-class silver deposits, especially in its southern region. These deposits contain galena, the main ore of lead and an important source of silver. To extract silver, the galena ore is smelted and purified; the refined silver used for coin minting could reach a purity of over 95%.

To track the source of Roman silver, the team of researchers analyzed the silver and lead compositions of galena samples from ore deposits across the Iberian Peninsula and compared the results to the chemical signatures of silver Roman coins.

They identified two different types of galena deposits based on the silver elemental composition of the samples: silver-rich galena that would have been a likely source for Roman coinage, and silver-poor galena that would have been exploited for lead only and would have been of lower economic importance.

However, few of the ore samples had a composition that matched the silver elemental composition of the Roman silver coins: the silver-bearing ores spanned a wide compositional range, whereas the coins have a notably narrow one.

Based on the lead elemental signatures of the galena samples, the ore deposits from southeastern Spain best fit the composition of Roman coins, suggesting that these deposits were a major source of Roman silver. Both silver-rich and silver-poor galena deposits were likely exploited here, with the extracted lead from silver-poor galena able to be mixed with other ores to extract silver.
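
In outline, that matching step asks whether an ore sample's signature falls inside the narrow compositional field defined by the coins. A minimal sketch (Python; the ratio names, bounds and sample values are all invented stand-ins for the study's measured silver and lead signatures):

    # Toy provenance check: does an ore sample fall within the compositional
    # field defined by the coins? All names and numbers are invented.
    coin_field = {"ratio_a": (2.08, 2.11), "ratio_b": (0.845, 0.855)}

    ore_samples = {
        "SE Spain galena":  {"ratio_a": 2.095, "ratio_b": 0.851},
        "NW Iberia galena": {"ratio_a": 2.160, "ratio_b": 0.880},
    }

    def matches_coins(sample, field):
        return all(lo <= sample[key] <= hi for key, (lo, hi) in field.items())

    for name, signature in ore_samples.items():
        verdict = "consistent with coins" if matches_coins(signature, coin_field) else "poor fit"
        print(f"{name}: {verdict}")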

These results based on chemical analyses are also consistent with archaeological evidence for ancient mining exploitation in the region.

This combined analytical toolkit provides a way to distinguish between silver-rich deposits and deposits barren of silver ore, which is critical in understanding the dynamics of silver supply in Roman times.

Read more at Science Daily

After thousands of years, an iconic whale confronts a new enemy

For millennia, vast expanses of the Arctic Ocean have been untouched by humans, waters where narwhals and other marine mammals lived undisturbed. Now that climate change is causing sea ice to melt, there has been an uptick in human activity in the Arctic. This has brought significantly more noise from an array of human sources, including seismic surveys, mine blasts, port projects and cruise ships.

Although the noise is not violently loud when it comes from a fair distance, for narwhals it is disturbing and triggers stress, even many kilometers away. These are the results of unique experiments conducted with the iconic whale. The University of Copenhagen helped the Greenland Institute of Natural Resources (Pinngortitaleriffik) analyse the data collected during the research.

Narwhals are notoriously difficult to study because they only live in the hard-to-reach High Arctic, which is often covered by ice. But the research team managed to tag a herd of narwhals in the Scoresby Sound fjord system of East Greenland using a variety of measurement equipment. They then positioned a ship in the fjord, which exposed the animals to noise -- both from the ship's engine and from a seismic airgun used for oil exploration.

"The narwhals' reactions indicate that they are frightened and stressed. They stop emitting the click sounds that they need to feed, they stop diving deep and they swim close to shore, a behaviour that they usually only display when feeling threatened by killer whales. This behavior means that they have no chance of finding food for as long as the noise persists," explains marine biologist Outi Tervo of the Greenland Institute of Natural Resources, who is one of the researchers behind the study.

The researchers can also see that the whales make an unusually high number of tail strokes when fleeing from a vessel. This may endanger them because it rapidly depletes their energy reserves. Constant energy conservation is important for narwhals, as they need a great deal of oxygen to dive several hundred meters below the surface for food and return to the surface for air.

Everything in a narwhal's life is sound

Narwhals spend much of their time in the dark -- partly because the Arctic is dark for half of the year, and partly because these unicorns of the sea hunt at depths of up to 1800 meters, where there is no light. Thus, everything in a narwhal's life is based on sound. And like bats, they orient themselves by echolocation -- which includes emitting click sounds as they hunt.

"Our data shows that narwhals react to noise 20-30 kilometers away from a noise source by completely stopping their clicking sounds. And in one case, we could measure this from a source 40 kilometers away. It is quite surprising that we can measure how something so far away can influence whale behaviour," says Professor Susanne Ditlevsen of the University of Copenhagen's Department of Mathematical Sciences.

Professor Ditlevsen was responsible for the statistical analyses of the enormous and extremely complicated data sets that emerged from the experiments, where data was collected via underwater microphone, GPS, accelerometer (an apparatus that measures movement in three directions) and heart rate monitors. She continues:

"Even when a ship's noise is lower than the background noise in the ocean and we can no longer hear it with our advanced equipment, the whales can hear and distinguish it from other sounds in their midst. And so, to a degree, their behavior is clearly affected. This demonstrates how incredibly sensitive narwhals are."

Following a week of sonic tests, the researchers observed the whales' behavior return to normal.

"But if they are exposed to noise for a long period of time -- for example, if a port is built nearby that leads to regular shipping traffic, the whales' success in hunting could be affected for a longer period of time, which could become quite serious for them. In this case, we fear that it could have physiological consequences for them and impair their fitness," says Outi Tervo.

Calling on the authorities

The researchers hope that the authorities and other decision-makers will ensure better management of the activities that create noise pollution in narwhal habitats.

"For the most part, narwhals live around Greenland, Canada and Svalbard in Norway. As such, these countries have the main responsibility for looking after them. Because narwhals are so well-adapted to the Arctic environment, they can't just choose to go to the Caribbean instead. It is being pressured both by warmer water temperatures and in some places, by fish catch. Now, noise enters the equation," says Susanne Ditlevsen.

Read more at Science Daily

California spotted owls benefit from forest restoration

This finding is showcased in "Forest restoration limits megafires and supports species conservation under climate change," a new research publication released this week in Frontiers in Ecology and the Environment. Lead author Gavin Jones, Ph.D., a research ecologist with the USDA Forest Service (USFS) Rocky Mountain Research Station, said the research shows that forest restoration and the preservation of the spotted owl are not mutually exclusive, as had previously been feared.

"We've shown that restoration provides co-benefits to owls by reducing their exposure to stand-replacing wildfire, which leads to loss of nesting habitat," Jones said.

The research team also included collaborators from the USFS Pacific Southwest Research Station, USFS Region 5, University of Wisconsin-Madison, and University of California-Merced.

The scientists developed a fire simulation model that predicted future severe fire across the Sierra Nevada through mid-century. The predicted amount of severe fire then changed as a function of simulated fuels reduction and forest restoration treatments. The fire model was then linked to a Sierra Nevada-wide population model of California spotted owls, which also responded to potential direct effects of treatments on owl habitat.

The analysis found that placing treatments inside owl territories cut the amount of predicted severe fire nearly in half compared to treating the same total area outside of such territories. Thus, treating inside owl territories may have an outsized effect on reducing future severe fire, while providing a net benefit to the California spotted owl population.
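
A toy version of that placement comparison is sketched below (Python; the landscape, fire probabilities and treatment effect are invented, and the actual study couples a calibrated fire simulator to an owl population model). Note that the toy ignores fire spread between cells, which is why, unlike the real model, outside treatments show no benefit at all here:

    # Toy comparison: expected severe fire inside owl territories when fuel
    # treatments are placed inside vs outside those territories. All numbers
    # are invented, and fire spread between cells is deliberately ignored.
    OWL_CELLS = set(range(200))   # landscape cells inside owl territories
    P_SEVERE = 0.10               # untreated probability of severe fire per cell
    TREAT_EFFECT = 0.4            # treated cells burn severely at 40% of base rate
    N_TREATED = 150               # number of cells that can be treated

    def expected_severe_owl_cells(treated_cells):
        return sum(
            P_SEVERE * (TREAT_EFFECT if cell in treated_cells else 1.0)
            for cell in OWL_CELLS
        )

    treat_inside = set(range(N_TREATED))              # inside owl territories
    treat_outside = set(range(200, 200 + N_TREATED))  # outside owl territories

    print(f"treat inside:  {expected_severe_owl_cells(treat_inside):.1f} owl cells")
    print(f"treat outside: {expected_severe_owl_cells(treat_outside):.1f} owl cells")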

According to Jones, "Even under climate change, forest management can move the needle on forest ecosystem conservation by reducing future stand-replacing fire and do so in a way that safeguards habitat for sensitive species like the California spotted owl."

Read more at Science Daily

Dec 19, 2021

Secret embraces of stars revealed by Alma

Unlike our Sun, most stars live with a companion. Sometimes, two come so close that one engulfs the other -- with far-reaching consequences. When a team of astronomers led by Chalmers University of Technology, Sweden, used the telescope Alma to study 15 unusual stars, they were surprised to find that they all recently underwent this phase. The discovery promises new insight on the sky's most dramatic phenomena -- and on life, death and rebirth among the stars.

Using the gigantic telescope Alma in Chile, a team of scientists led by Chalmers University of Technology studied 15 unusual stars in our galaxy, the Milky Way, the closest of them 5000 light-years from Earth. Their measurements show that all the stars are double, and that all have recently experienced a rare phase that is poorly understood but is believed to lead to many other astronomical phenomena. Their results are published this week in the scientific journal Nature Astronomy.

By directing the antennas of Alma towards each star and measuring light from different molecules close to each star, the researchers hoped to find clues to their backstories. Nicknamed "water fountains," these stars were known to astronomers because of intense light from water molecules -- produced by unusually dense and fast-moving gas.

Located 5000 m above sea level in Chile, the Alma telescope is sensitive to light with wavelengths around one millimetre, invisible to human eyes, but ideal for looking through the Milky Way's layers of dusty interstellar clouds towards dust-enshrouded stars.

"We were extra curious about these stars because they seem to be blowing out quantities of dust and gas into space, some in the form of jets with speeds up to 1.8 million kilometres per hour. We thought we might find out clues to how the jets were being created, but instead we found much more than that," says Theo Khouri, first author of the new study.

Stars losing up to half their total mass

The scientists used the telescope to measure signatures of carbon monoxide molecules, CO, in the light from the stars, and compared signals from different atoms (isotopes) of carbon and oxygen. Unlike its sister molecule carbon dioxide, CO2, carbon monoxide is relatively easy to discover in space, and is a favourite tool for astronomers.

"Thanks to Alma's exquisite sensitivity, we were able to detect the very faint signals from several different molecules in the gas ejected by these stars. When we looked closely at the data, we saw details that we really weren't expecting to see," says Theo Khouri.

The observations confirmed that the stars were all blowing off their outer layers. But the proportions of the different oxygen atoms in the molecules indicated that the stars were in another respect not as extreme as they had seemed, explains team member Wouter Vlemmings, astronomer at Chalmers University of Technology.

"We realised that these stars started their lives with the same mass as the Sun, or only a few times more. Now our measurements showed that they have ejected up to 50% of their total mass, just in the last few hundred years. Something really dramatic must have happened to them," he says.

A short but intimate phase

How did such small stars come to lose so much mass so quickly? The evidence all pointed to one explanation, the scientists concluded. These were all double stars, and they had all just been through a phase in which the two stars shared the same atmosphere -- one star entirely embraced by the other.

"In this phase, the two stars orbit together in a sort of cocoon. This phase, which we call a "common envelope" phase, is really brief, and only lasts a few hundred years. In astronomical terms, it's over in the blink of an eye," says team member Daniel Tafoya of Chalmers University of Technology.

Most stars in binary systems simply orbit around a common centre of mass. These stars, however, share the same atmosphere. It can be a life-changing experience for a star, and may even lead to the stars merging completely.

Scientists believe that this sort of intimate episode can lead to some of the sky's most spectacular phenomena. Understanding how it happens could help answer some of astronomers' biggest questions about how stars live and die, Theo Khouri explains.

"What happens to cause a supernova explosion? How do black holes get close enough to collide? What's makes the beautiful and symmetric objects we call planetary nebulae? Astronomers have suspected for many years that common envelopes are part of the answers to questions like these. Now we have a new way of studying this momentous but mysterious phase," he says.

Understanding the common envelope phase will also help scientists study what will happen in the very distant future, when the Sun too will become a bigger, cooler star -- a red giant -- and engulf the innermost planets.

"Our research will help us understand how that might happen, but it gives me another, more hopeful perspective. When these stars embrace, they send dust and gas out into space that can become the ingredients for coming generations of stars and planets, and with them the potential for new life," says Daniel Tafoya.

Since the 15 stars seem to be evolving on a human timescale, the team plan to keep monitoring them with Alma and with other radio telescopes. With the future telescopes of the SKA Observatory, they hope to study how the stars form their jets and change their surroundings. They also hope to find more -- if there are any.

Read more at Science Daily

Darwin’s finches forced to 'evolve'

Spending time with offspring is beneficial to development, but it is proving lifesaving to the Darwin's finches of the Galápagos Islands studied by Flinders University experts.

A new study, published in Proceedings of the Royal Society B, has found evidence Darwin's finch females that spend longer inside the nest can ward off deadly larvae of the introduced avian vampire fly, which otherwise enter and consume the growing chicks.

The maternal buffer is a life-saver, according to the research, especially during the first days after hatching, when chicks are blind, helpless and cannot preen. Although older offspring still have to contend with the larvae, they are better able to preen themselves, and may dislodge and occasionally eat some of them.

"The pair male is also essential for success of the chicks. If he feeds the offspring a lot, the mother can remain inside the nest for longer," says Flinders University Professor Sonia Kleindorfer, who is also affiliated with the University of Vienna.

"Timing is everything. The female must forgo foraging herself, and her persistence is strongly influenced by good food provisioning of her offspring by the male."

The unintentionally introduced avian vampire fly, an invasive species on the Galápagos Islands, enters Darwin's finch nests when attending parents are absent.

The 17 Darwin's finch species on the Galápagos Islands are a textbook example of a rapid adaptive radiation: each species has a unique beak shape suited to extract resources from a different ecological niche. However, since being first observed in Darwin's finch nests in 1997, the avian vampire fly has been parasitising nestlings and changing the beak and behaviour of its Darwin's finch hosts.

The fly lays eggs that hatch into larvae that feed on the developing chicks, killing most chicks and causing beak deformation in the survivors.

"What we show in this publication is that longer female in-nest attendance of chicks predicts the number of parasites in the nest," says Professor Kleindorfer.

"The new research findings are significant because they show that 'just being there' can be a form of front-line defence against threats to offspring survival."

British naturalist Charles Darwin's theory of evolution by natural selection was developed while observing plants and animals in various environments, including the Galápagos Islands, where in 1835 he noted the rich diversity of endemic plants, birds and reptiles.

Female Darwin's finches provide much longer in-nest care to young offspring than males, and presence inside the nest is needed to fend off parasites. For this reason, females themselves may incur higher survival costs as they attempt to save their offspring.

Monitoring survivorship of female birds is often more difficult than for males, since male Darwin's finches produce a loud advertisement song but females do not.

"Usually we think of actively defending males as contributing more to offspring survival than females that incubate eggs or brood young chicks," adds Flinders University researcher Dr Andrew Katsis.

"We know from long-term monitoring of Darwin's finches, from recapture and resighting data since 2000, that annual survivorship in females is much lower than in males, and over 50% of male Darwin's finches sing at the nest but don't attract a female," he says.

"Combined, these factors suggest that higher female mortality, and higher parental care costs carried by females, may be a contributing factor."

High-quality females that can sustain longer in-nest parental care with less feeding opportunity for themselves, paired with males that increase feeding to the offspring, have better chances to produce offspring in a vampire fly-dominated environment, the research concludes.

"Control measures are urgently needed to save Darwin's finches from extinction," the scientists say in another new publication in Birds.

Read more at Science Daily