Sep 10, 2021

ESO captures best images yet of peculiar 'dog-bone' asteroid

Using the European Southern Observatory's Very Large Telescope (ESO's VLT), a team of astronomers have obtained the sharpest and most detailed images yet of the asteroid Kleopatra. The observations have allowed the team to constrain the 3D shape and mass of this peculiar asteroid, which resembles a dog bone, to a higher accuracy than ever before. Their research provides clues as to how this asteroid and the two moons that orbit it formed.

"Kleopatra is truly a unique body in our Solar System," says Franck Marchis, an astronomer at the SETI Institute in Mountain View, USA and at the Laboratoire d'Astrophysique de Marseille, France, who led a study on the asteroid -- which has moons and an unusual shape -- published today in Astronomy & Astrophysics. "Science makes a lot of progress thanks to the study of weird outliers. I think Kleopatra is one of those and understanding this complex, multiple asteroid system can help us learn more about our Solar System."

Kleopatra orbits the Sun in the Asteroid Belt between Mars and Jupiter. Astronomers have called it a "dog-bone asteroid" ever since radar observations around 20 years ago revealed it has two lobes connected by a thick "neck." In 2008, Marchis and his colleagues discovered that Kleopatra is orbited by two moons, named AlexHelios and CleoSelene, after the Egyptian queen's children.

To find out more about Kleopatra, Marchis and his team used snapshots of the asteroid taken at different times between 2017 and 2019 with the Spectro-Polarimetric High-contrast Exoplanet REsearch (SPHERE) instrument on ESO's VLT. As the asteroid was rotating, they were able to view it from different angles and to create the most accurate 3D models of its shape to date. They constrained the asteroid's dog-bone shape and its volume, finding one of the lobes to be larger than the other, and determined the length of the asteroid to be about 270 kilometres or about half the length of the English Channel.

In a second study, also published in Astronomy & Astrophysics and led by Miroslav Brož of Charles University in Prague, Czech Republic, the team reported how they used the SPHERE observations to find the correct orbits of Kleopatra's two moons. Previous studies had estimated the orbits, but the new observations with ESO's VLT showed that the moons were not where the older data predicted them to be.

"This had to be resolved," says Brož. "Because if the moons' orbits were wrong, everything was wrong, including the mass of Kleopatra." Thanks to the new observations and sophisticated modelling, the team managed to precisely describe how Kleopatra's gravity influences the moons' movements and to determine the complex orbits of AlexHelios and CleoSelene. This allowed them to calculate the asteroid's mass, finding it to be 35% lower than previous estimates.

Combining the new estimates for volume and mass, astronomers were able to calculate a new value for the density of the asteroid, which, at less than half the density of iron, turned out to be lower than previously thought. The low density of Kleopatra, which is believed to have a metallic composition, suggests that it has a porous structure and could be little more than a "pile of rubble." This means it likely formed when material reaccumulated following a giant impact.
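
To make the arithmetic behind that conclusion concrete, here is a minimal back-of-the-envelope sketch. The mass and volume values are illustrative placeholders of roughly the right order of magnitude, not the figures from the papers; only the comparison with iron follows the text.

```python
# Rough density check for Kleopatra: density = mass / volume.
# mass_kg and volume_km3 are illustrative assumptions, NOT the published values.

IRON_DENSITY_G_CM3 = 7.87                      # density of solid iron

mass_kg = 3.0e18                               # assumed asteroid mass
volume_km3 = 8.8e5                             # assumed volume from a shape model

volume_cm3 = volume_km3 * 1e15                 # 1 km^3 = 1e15 cm^3
density_g_cm3 = (mass_kg * 1e3) / volume_cm3   # convert kg to g, then divide

print(f"density ~ {density_g_cm3:.1f} g/cm^3")
print(f"fraction of iron's density: {density_g_cm3 / IRON_DENSITY_G_CM3:.0%}")
```

With these assumed inputs the density comes out near 3.4 g/cm^3, comfortably below half of iron's, which is the kind of comparison behind the "pile of rubble" reasoning.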

Kleopatra's rubble-pile structure and the way it rotates also give indications as to how its two moons could have formed. The asteroid rotates almost at a critical speed, the speed above which it would start to fall apart, and even small impacts may lift pebbles off its surface. Marchis and his team believe that those pebbles could subsequently have formed AlexHelios and CleoSelene, meaning that Kleopatra has truly birthed its own moons.

The new images of Kleopatra and the insights they provide are only possible thanks to one of the advanced adaptive optics systems in use on ESO's VLT, which is located in the Atacama Desert in Chile. Adaptive optics help to correct for distortions caused by the Earth's atmosphere, which make objects appear blurred -- the same effect that causes stars viewed from Earth to twinkle. Thanks to such corrections, SPHERE was able to image Kleopatra -- located 200 million kilometres from Earth at its closest -- even though its apparent size on the sky is equivalent to that of a golf ball about 40 kilometres away.
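
The golf-ball comparison is easy to check with the small-angle approximation. In this sketch the 4.3 cm golf-ball diameter is my assumption; the other numbers come from the article.

```python
import math

def angular_size_arcsec(diameter_m: float, distance_m: float) -> float:
    """Apparent angular diameter in arcseconds (small-angle approximation)."""
    return math.degrees(diameter_m / distance_m) * 3600

kleopatra_arcsec = angular_size_arcsec(270e3, 200e9)  # 270 km from 200 million km
golf_ball_arcsec = angular_size_arcsec(0.043, 40e3)   # ~4.3 cm golf ball from 40 km

print(f"Kleopatra: {kleopatra_arcsec:.2f} arcsec")    # ~0.28 arcsec
print(f"golf ball: {golf_ball_arcsec:.2f} arcsec")    # ~0.22 arcsec
```

Both come out at roughly a quarter of an arcsecond, which is why an adaptive-optics instrument such as SPHERE is needed to resolve the asteroid's shape at all.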

Read more at Science Daily

Surprise: The Milky Way is not homogeneous

In order to better understand the history and evolution of the Milky Way, astronomers are studying the composition of the gases and metals that make up an important part of our galaxy. Three main elements stand out: the initial gas coming from outside our galaxy, the gas between the stars inside our galaxy -- enriched with chemical elements -- and the dust created by the condensation of the metals present in this gas. Until now, theoretical models assumed that these three elements were homogeneously mixed throughout the Milky Way and reached a level of chemical enrichment similar to the Sun's atmosphere, called the Solar metallicity. Today, a team of astronomers from the University of Geneva (UNIGE) demonstrates that these gases are not mixed as much as previously thought, which has a strong impact on the current understanding of the evolution of galaxies. As a result, simulations of the Milky Way's evolution will have to be modified. These results can be read in the journal Nature.

Galaxies are made up of a collection of stars and are formed by the condensation of the gas of the intergalactic medium composed of mostly hydrogen and a bit of helium. This gas does not contain metals, unlike the gas in galaxies -- in astronomy, all chemical elements heavier than helium are collectively called "metals," although they are atoms in gaseous form. "Galaxies are fuelled by 'virgin' gas that falls in from the outside, which rejuvenates them and allows new stars to form," explains Annalisa De Cia, a professor in the Department of Astronomy at the UNIGE Faculty of Science and first author of the study. At the same time, stars burn the hydrogen that constitutes them throughout their life and form other elements through nucleosynthesis. When a star that has reached the end of its life explodes, it expels the metals it has produced, such as iron, zinc, carbon and silicon, feeding these elements into the gas of the galaxy. These atoms can then condense into dust, especially in the colder, denser parts of the galaxy. "Initially, when the Milky Way was formed, more than 10 billion years ago, it had no metals. Then the stars gradually enriched the environment with the metals they produced," continues the researcher. When the amount of metals in this gas reaches the level that is present in the Sun, astronomers speak of Solar metallicity.

A not so homogeneous environment

The environment that makes up the Milky Way thus brings together the metals produced by the stars, the dust particles that have formed from these metals, but also gases from outside the galaxy that regularly enter it. "Until now, theoretical models considered that these three elements were homogeneously mixed and reached the Solar composition everywhere in our galaxy, with a slight increase in metallicity in the centre, where the stars are more numerous," explains Patrick Petitjean, a researcher at the Institut d'Astrophysique de Paris, Sorbonne University. "We wanted to observe this in detail using an Ultraviolet spectrograph on the Hubble Space Telescope."

Spectroscopy allows the light from stars to be separated into its individual colors or frequencies, a bit like with a prism or in a rainbow. In this decomposed light, astronomers are particularly interested in absorption lines: "When we observe a star, the metals that make up the gas between the star and ourselves absorb a very small part of the light in a characteristic way, at a specific frequency, which allows us not only to identify their presence, but also to say which metal it is, and how abundant it is," he continues.

A new method developed to observe the total metallicity

For 25 hours, the team of scientists observed the atmosphere of 25 stars using Hubble and the Very Large Telescope (VLT) in Chile. The problem? The dust cannot be counted with these spectrographs, even though it contains metals. Annalisa De Cia's team has therefore developed a new observational technique. "It involves taking into account the total composition of the gas and dust by simultaneously observing several elements such as iron, zinc, titanium, silicon and oxygen," explains the Geneva researcher. "Then we can trace the quantity of metals present in the dust and add it to that already quantified by the previous observations to get the total."
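
The bookkeeping behind that technique can be pictured with a toy example. The element names follow the article, but every abundance below (expressed as a fraction of the Solar value) is invented purely for illustration.

```python
# Toy sketch of the gas-plus-dust accounting: the total metal abundance is what
# is measured in the gas phase plus what is inferred to be locked up in dust.
# All numbers are made up for illustration; they are not measurements.

gas_phase  = {"Fe": 0.05, "Zn": 0.60, "Si": 0.30}  # seen via absorption lines
dust_phase = {"Fe": 0.45, "Zn": 0.15, "Si": 0.30}  # inferred from element ratios

total = {element: gas_phase[element] + dust_phase[element] for element in gas_phase}
print(total)
```

Without the dust term, a gas-phase iron measurement alone would badly underestimate the total, since iron condenses into grains far more readily than, say, zinc.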

Thanks to this dual observation technique, the astronomers have found that not only is the Milky Way's environment not homogeneous, but that some of the areas studied reach only 10% of the Solar metallicity. "This discovery plays a key role in the design of theoretical models on the formation and evolution of galaxies," says Jens-Kristian Krogager, researcher at the UNIGE's Department of Astronomy. "From now on, we will have to refine the simulations by increasing the resolution, so that we can include these changes in metallicity at different locations in the Milky Way."

Read more at Science Daily

Too much free time may be almost as bad as too little

As an individual's free time increases, so does that person's sense of well-being -- but only up to a point. Too much free time can also be a bad thing, according to research published by the American Psychological Association.

"People often complain about being too busy and express wanting more time. But is more time actually linked to greater happiness? We found that having a dearth of discretionary hours in one's day results in greater stress and lower subjective well-being," said Marissa Sharif, PhD, an assistant professor of marketing at The Wharton School and lead author of the paper. "However, while too little time is bad, having more time is not always better."

The research was published in the Journal of Personality and Social Psychology.

Researchers analyzed the data from 21,736 Americans who participated in the American Time Use Survey between 2012 and 2013. Participants provided a detailed account of what they did during the prior 24 hours -- indicating the time of day and duration of each activity -- and reported their sense of well-being. The researchers found that as free time increased, so did well-being, but it leveled off at about two hours and began to decline after five. Correlations in both directions were statistically significant.

The researchers also analyzed data from 13,639 working Americans who participated in the National Study of the Changing Workforce between 1992 and 2008. Among the survey's many questions, participants were asked about their amount of discretionary time (e.g., "On average, on days when you're working, about how many hours [minutes] do you spend on your own free-time activities?") and their subjective well-being, which was measured as life satisfaction (e.g., "All things considered, how do you feel about your life these days? Would you say you feel 1=very satisfied, 2=somewhat satisfied, 3=somewhat dissatisfied, or 4=very dissatisfied?").

Once again, the researchers found that higher levels of free time were significantly associated with higher levels of well-being, but only up to a point. After that, excess free time was not associated with greater well-being.

To further investigate the phenomenon, the researchers conducted two online experiments involving more than 6,000 participants. In the first experiment, participants were asked to imagine having a given amount of discretionary time every day for at least six months. Participants were randomly assigned to have a low (15 minutes per day), moderate (3.5 hours per day), or high (7 hours per day) amount of discretionary time. Participants were asked to report the extent to which they would experience enjoyment, happiness and satisfaction.

Participants in both the low and high discretionary time groups reported lower well-being than the moderate discretionary time group. The researchers found that those with low discretionary time felt more stressed than those with a moderate amount, contributing to lower well-being, but those with high levels of free time felt less productive than those in the moderate group, leading them to also have lower well-being.

In the second experiment, researchers looked at the potential role of productivity. Participants were asked to imagine having either a moderate (3.5 hours) or high (7 hours) amount of free time per day, but were also asked to imagine spending that time in either productive (e.g., working out, hobbies or running) or unproductive activities (e.g., watching television or using the computer). The researchers found participants with more free time reported lower levels of well-being when engaging in unproductive activities. However, when engaging in productive activities, those with more free time felt similar to those with a moderate amount of free time.

Read more at Science Daily

Steps per day matter in middle age, but not as many as you may think

Walking at least 7,000 steps a day reduced middle-aged people's risk of premature death from all causes by 50% to 70%, compared to that of other middle-aged people who took fewer daily steps.

But walking more than 10,000 steps per day -- or walking faster -- did not further reduce the risk, notes lead author Amanda Paluch, a physical activity epidemiologist at the University of Massachusetts Amherst.

The findings, published in JAMA Network Open, highlight the evolving efforts to establish evidence-based guidelines for simple, accessible physical activity that benefits health and longevity, such as walking. The oft-advised 10,000 steps a day is not a scientifically established guideline but emerged as part of a decades-old marketing campaign for a Japanese pedometer, says Paluch, assistant professor of kinesiology in the School of Public Health and Health Sciences.

One question Paluch and colleagues wanted to begin to answer: How many steps per day do we need for health benefits? "That would be great to know for a public health message or for clinician-patient communication," she says.

The researchers mined data from the Coronary Artery Risk Development in Young Adults (CARDIA) study, which began in 1985 and is still ongoing. Some 2,100 participants between ages 38 and 50 wore an accelerometer in 2005 or 2006. They were followed for nearly 11 years after that, and the resulting data were analyzed in 2020 and 2021.

The participants were separated into three comparison groups: low step volume (fewer than 7,000 steps per day), moderate (7,000 to 9,999) and high (10,000 or more).
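
That grouping can be written as a tiny helper function. The thresholds follow the article; the function name is mine, and placing exactly 10,000 steps in the high group is an assumption.

```python
def step_group(steps_per_day: int) -> str:
    """Classify daily step volume into the study's three comparison groups."""
    if steps_per_day < 7_000:
        return "low"
    elif steps_per_day < 10_000:
        return "moderate"
    else:
        return "high"

print(step_group(4_000), step_group(8_500), step_group(12_000))  # low moderate high
```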

"You see this gradual risk reduction in mortality as you get more steps," Paluch says. "There were substantial health benefits between 7,000 and 10,000 steps but we didn't see an additional benefit from going beyond 10,000 steps.

"For people at 4,000 steps, getting to 5,000 is meaningful," she adds. "And from 5,000 to 6,000 steps, there is an incremental risk reduction in all-cause mortality up to about 10,000 steps."

Several features make this study particularly interesting and informative. For one, it involved people in middle age, while most step studies have focused on older adults. So the findings can begin to suggest ways to keep people healthier longer and to avoid the premature deaths that some of the participants experienced.

"Preventing those deaths before average life expectancy -- that is a big deal," Paluch says. "Showing that steps per day could be associated with premature mortality is a new contribution to the field."

The study also featured an equal number of men and women and of Black and white participants. Death rates for people walking at least 7,000 steps per day were lowest among women and Black participants, compared to their more sedentary peers. But there was a limited sample of people who died, and Paluch cautions that researchers need to study larger, more diverse populations to gauge statistically significant sex and race differences.

Paluch is eager to continue researching the impact of steps per day on health and how walking may be beneficial in a variety of ways at different life stages.

Read more at Science Daily

Preventing the long-term effects of traumatic brain injury

You've been in a car accident and sustained a head injury. You recovered, but years later you begin having difficulty sleeping. You also become very sensitive to noise and bright lights, and find it hard to carry out your daily activities, or perform well at your job.

This is a common situation after a traumatic brain injury -- many people experience adverse effects months or years later. These long-term effects can last a few days or the rest of a person's life.

"No therapies currently exist to prevent the disabilities that can develop after a brain trauma," says Jeanne Paz, PhD, associate investigator at Gladstone Institutes. "So, understanding how the traumatic brain injury affects the brain, especially in the long term, is a really important gap in research that could help develop new and better treatment options."

In a new study published in the journal Science, Paz and her team helped close that gap. They identified a specific molecule in a part of the brain called the thalamus that plays a key role in secondary effects of brain injury, such as sleep disruption, epileptic activity, and inflammation. In collaboration with scientists at Annexon Biosciences, a clinical-stage biopharmaceutical company, they also showed that an antibody treatment could prevent the development of these negative outcomes.

A Vulnerable Brain Region

Traumatic brain injuries, which range from a mild concussion to a severe injury, can be the result of a fall, sports injury, gunshot injury, blow to the head, explosion, or domestic violence. Often, soldiers returning from war also suffer head injuries, which commonly lead to the development of epilepsy. Traumatic brain injury affects 69 million people around the world annually, and is the leading cause of death in children and a major source of disability in adults.

"These injuries are frequent and can happen to anyone," says Paz, who is also an associate professor of neurology at UC San Francisco (UCSF) and a member of the Kavli Institute for Fundamental Neuroscience. "The goal of our study was to understand how the brain changes after traumatic brain injuries and how those changes can lead to chronic problems, such as the development of epilepsy, sleep disruption, and difficulty with sensory processing."

To do so, Paz and her team recorded the activity of different cells and circuits in the brain of mice after brain injury. The researchers monitored the mice continually and wirelessly, meaning the mice could go about their normal activities without being disrupted.

"We collected so much data, from the time of injury and over the next several months, that it actually crashed our computers," says Paz. "But it was important to capture all the different stages of sleep and wakefulness to get the whole picture."

During a trauma to the head, the region of the brain called the cerebral cortex is often the primary site of injury, because it sits directly beneath the skull.

But at later time points, the researchers discovered that another region -- the thalamus -- was even more disrupted than the cortex. In particular, they found that a molecule called C1q was present at abnormally high levels in the thalamus for months after the initial injury, and these high levels were associated with inflammation, dysfunctional brain circuits, and the death of neurons.

"The thalamus seems particularly vulnerable, even after a mild traumatic brain injury," says Stephanie Holden, PhD, first author of the study and former graduate student in Paz's lab at Gladstone. "This doesn't mean the cortex isn't affected, but simply that it might have the necessary tools to recover over time. Our findings suggest that the higher levels of C1q in the thalamus could contribute to several long-term effects of brain injury."

The Paz Lab collaborated with Eleonora Aronica, MD, PhD, a neuropathologist at the University of Amsterdam, to validate their findings in human brain tissues obtained from autopsies, in which they found high levels of the C1q molecule in the thalamus 8 days after people had sustained a traumatic brain injury. In addition, by working with fellow Gladstone Assistant Investigator Ryan Corces, PhD, they determined that C1q in the thalamus likely came from microglia, the immune cells in the brain.

"Our study answered some very big questions in the field about where and how changes are happening in the brain after a trauma, and which ones are actually important for causing deficits," says Paz.

The Right Window to Treat Chronic Effects After Traumatic Brain Injury

The C1q molecule, which is part of an immune pathway, has well-documented roles in brain development and normal brain functions. For instance, it protects the central nervous system from infection and helps the brain forget memories -- a process needed to store new memories. The accumulation of C1q in the brain has also been studied in various neurological and psychiatric disorders and is associated, for example, with Alzheimer's disease and schizophrenia.

"C1q can be both good and bad," says Paz. "We wanted to find a way to prevent this molecule's detrimental effect, but without impacting its beneficial role. This is an example of what makes neuroscience a really hard field these days, but it's also what makes it exciting."

She and her group decided to leverage the "latent phase" after a traumatic brain injury, during which changes are occurring in the brain but before long-term symptoms appear.

"My cousin, for example, was hit in the head when he was 10 years old, and the impact broke his skull and damaged his brain," says Paz. "But it wasn't until he was 20 that he developed epilepsy. This latent phase presents a window of opportunity for us to intervene in hopes of modifying the disease and preventing any complications."

Paz reached out to her collaborators at Annexon Biosciences, who produce a clinical antibody that can block the activity of the C1q molecule. Then, her team treated the mice who sustained brain injury with this antibody to see if it might have beneficial effects.

When the researchers studied mice genetically engineered to lack C1q at the time of the trauma, the brain injury appeared much worse. However, when they selectively blocked C1q with the antibody during the latent phase, they prevented chronic inflammation and the loss of neurons in the thalamus.

"This indicates that the C1q molecule shouldn't be blocked at the time of injury, because it's likely very important at this stage for protecting the brain and helping prevent cell death," says Holden. "But at later time points, blocking C1q can actually reduce harmful inflammatory responses. It's a way of telling the brain, 'It's okay, you've done the protective part and you can now turn off the inflammation.'"

"There is a paucity of treatments for patients who have suffered from an acute brain injury," says Ted Yednock, PhD, executive vice president and chief scientific officer at Annexon Biosciences, and an author of the study. "This result is exciting because it suggests that we could treat patients in the hours to days after an acute injury like traumatic brain injury to protect against secondary neuronal damage and provide significant functional benefit."

Path to a Potential Treatment

In addition to chronic inflammation, Paz and her team also uncovered abnormal brain activity in the mice with traumatic brain injury.

First, the researchers noticed disruptions in sleep spindles, which are normal brain rhythms that occur during sleep. These are important for memory consolidation, among other things. The scientists also found epileptic spikes, or abnormal fluctuations in brain activity. These spikes can be disruptive to cognition and normal behavior, and are also indicative of a greater susceptibility to seizures.

The scientists observed that the anti-C1q antibody treatment not only helped restore the sleep spindles, but also prevented the development of epileptic activities.

"Overall, our study indicates that targeting the C1q molecule after injury could avoid some of the most devastating, long-term consequences of traumatic brain injury," says Holden. "We hope this could eventually lead to the development of treatments for traumatic brain injury."

Annexon's anti-C1q inhibitors are designed to treat multiple autoimmune and neurological disorders, and are already being examined in clinical trials, including for an autoimmune disorder known as Guillain-Barré syndrome, where the drug has been shown to be safe in humans.

"The fact that the drug is already in clinical trials may speed the pace at which a treatment could eventually be made available to patients," says Yednock. "We already understand doses of drug that are safe and effective in patients for blocking C1q in the brain, and could move directly into studies that ameliorate the chronic effects after traumatic brain injury."

For Holden, who previously worked with individuals who experienced brain injury and heard many of their personal stories, the impact of this study is particularly meaningful.

"Brain injury is a hidden disability for many of the people I met," she says. "The side effects they experience can be difficult to diagnose and their physicians often can't provide any medical treatment. Being able to contribute to finding ways to treat the detrimental consequences of the injury after it happens is really inspiring."

Paz and her lab are continuing to expand their understanding of what happens in the brain after injury. Next, they will focus on studying whether they can help prevent convulsive seizures, which are often reported by people with severe traumatic brain injuries.

Read more at Science Daily

Sep 9, 2021

How land birds cross the open ocean

Researchers at the Max Planck Institute of Animal Behavior and University of Konstanz in Germany have identified how large land birds fly nonstop for hundreds of kilometers over the open ocean -- without taking a break for food or rest. Using GPS tracking technology, the team monitored the global migration of five species of large land birds that complete long sea crossings. They found that all birds exploited wind and uplift to reduce energy costs during flight -- even adjusting their migratory routes to benefit from the best atmospheric conditions. This is the most wide-ranging study of sea-crossing behavior yet and reveals the important role of the atmosphere in facilitating migration over the open sea for many terrestrial birds.

Flying over the open sea can be dangerous for land birds. Unlike seabirds, land birds are not able to rest or feed on water, and so sea crossings must be conducted as nonstop flights. For centuries, bird-watchers assumed that large land birds only managed short sea crossings of less than 100 kilometers and completely avoided flying over the open ocean.

However, recent advances in GPS tracking technology have overturned that assumption. Data obtained by attaching small tracking devices on wild birds has shown that many land birds fly for hundreds or even thousands of kilometers over the open seas and oceans as a regular part of their migration.

But scientists are still unraveling how land birds are able to accomplish this. Flapping is an energetically costly activity, and trying to sustain nonstop flapping flight for hundreds of kilometers would not be possible for large, heavy land birds. Some studies have suggested that birds sustain such journeys using tailwind, a horizontal wind blowing in the bird's direction of flight, which helps them save energy. Most recently, a study revealed that a single species -- the osprey -- used rising air thermals known as "uplift" to soar over the open sea.

Now, the new study has examined sea-crossing behavior of 65 birds across five species to gain the most wide-ranging insight yet into how land birds survive long flights over the open sea. The researchers analyzed 112 sea-crossing tracks, collected over nine years, with global atmospheric information to pinpoint the criteria that the birds use for selecting their migration routes over the open sea. A large international collaboration of scientists shared their tracking data to make this study possible.

The findings not only confirm the role of tailwind in facilitating sea-crossing behavior, but also reveal the widespread use of uplift for saving energy during these nonstop flights. Suitable uplift means less flapping, making sea crossing less energetically demanding.

"Until recently, uplift was assumed to be weak or absent over the sea surface. We show that is not the case," says first author Elham Nourani, a DAAD PRIME postdoctoral fellow at the Department of Biology at the University of Konstanz, who did the work when she was at the Max Planck Institute of Animal Behavior.

"Instead, we find that migratory birds adjust their flight routes to benefit from the best wind and uplift conditions when they fly over the sea. This helps them sustain flight for hundreds of kilometers," says Nourani.

The oriental honey buzzard, for example, flies 700 kilometers over the East China Sea during its annual migration from Japan to southeast Asia. The roughly 18-hour nonstop sea crossing is conducted in autumn when the air movement conditions are optimal. "By making use of uplift, these birds can soar up to one kilometer above the sea surface," says Nourani.

Read more at Science Daily

Massive new animal species discovered in half-billion-year-old Burgess Shale

Palaeontologists at the Royal Ontario Museum (ROM) have uncovered the remains of a huge new fossil species belonging to an extinct animal group in half-a-billion-year-old Cambrian rocks from Kootenay National Park in the Canadian Rockies. The findings were announced on September 8, 2021, in a study published in Royal Society Open Science.

Named Titanokorys gainesi, this new species is remarkable for its size. With an estimated total length of half a metre, Titanokorys was a giant compared to most animals that lived in the seas at that time, most of which barely reached the size of a pinky finger.

"The sheer size of this animal is absolutely mind-boggling, this is one of the biggest animals from the Cambrian period ever found," says Jean-Bernard Caron, ROM's Richard M. Ivey Curator of Invertebrate Palaeontology.

Evolutionarily speaking, Titanokorys belongs to a group of primitive arthropods called radiodonts. The most iconic representative of this group is the streamlined predator Anomalocaris, which may itself have approached a metre in length. Like all radiodonts, Titanokorys had multifaceted eyes, a pineapple slice-shaped, tooth-lined mouth, a pair of spiny claws below its head to capture prey and a body with a series of flaps for swimming. Within this group, some species also possessed large, conspicuous head carapaces, with Titanokorys being one of the largest ever known.

"Titanokorys is part of a subgroup of radiodonts, called hurdiids, characterized by an incredibly long head covered by a three-part carapace that took on myriad shapes. The head is so long relative to the body that these animals are really little more than swimming heads," added Joe Moysiuk, co-author of the study, and a ROM-based Ph.D. student in Ecology & Evolutionary Biology at the University of Toronto.

Why some radiodonts evolved such a bewildering array of head carapace shapes and sizes is still poorly understood and was likely driven by a variety of factors, but the broad flattened carapace form in Titanokorys suggests this species was adapted to life near the seafloor.

"These enigmatic animals certainly had a big impact on Cambrian seafloor ecosystems. Their limbs at the front looked like multiple stacked rakes and would have been very efficient at bringing anything they captured in their tiny spines towards the mouth. The huge dorsal carapace might have functioned like a plough," added Dr. Caron, who is also an Associate Professor in Ecology & Evolutionary Biology and Earth Sciences at the University of Toronto, and Moysiuk's Ph.D. advisor.

All fossils in this study were collected around Marble Canyon in northern Kootenay National Park by successive ROM expeditions. Discovered less than a decade ago, this area has yielded a great variety of Burgess Shale animals dating back to the Cambrian period, including a smaller, more abundant relative of Titanokorys named Cambroraster falcatus, in reference to its Millennium Falcon-shaped head carapace. According to the authors, the two species might have competed for similar bottom-dwelling prey.

The Burgess Shale fossil sites are located within Yoho and Kootenay National Parks and are managed by Parks Canada. Parks Canada is proud to work with leading scientific researchers to expand knowledge and understanding of this key period of earth history and to share these sites with the world through award-winning guided hikes. The Burgess Shale was designated a UNESCO World Heritage Site in 1980 due to its outstanding universal value and is now part of the larger Canadian Rocky Mountain Parks World Heritage Site.

Read more at Science Daily

Who was king before Tyrannosaurus? Uzbek fossil reveals new top dino

Iconic tyrannosauroids like T. rex famously dominated the top of the food web at the end of the reign of the dinosaurs. But they didn't always hold that top spot.

In a new study published in Royal Society Open Science, a research team led by the University of Tsukuba has described a new genus and species belonging to the Carcharodontosauria, a group of medium- to large-sized carnivorous dinosaurs that preceded the tyrannosauroids as apex predators.

The new dinosaur, named Ulughbegsaurus uzbekistanensis, was found in the lower Upper Cretaceous Bissekty Formation of the Kyzylkum Desert in Uzbekistan, and therefore lived about 90 million years ago. Two separate evolutionary analyses support classification of the new dinosaur as the first definitive carcharodontosaurian discovered in the Upper Cretaceous of Central Asia.

"We described this new genus and species based on a single isolated fossil, a left maxilla, or upper jawbone," explains study first author Assistant Professor Kohei Tanaka. "Among theropod dinosaurs, the size of the maxilla can be used to estimate the animal's size because it correlates with femur length, a well-established indicator of body size. Thus, we were able to estimate that Ulughbegsaurus uzbekistanensis had a mass of over 1,000 kg, and was approximately 7.5 to 8.0 meters in length, greater than the length of a full-grown African elephant."

This size greatly exceeds that of any other carnivore known from the Bissekty Formation, including the small-sized tyrannosauroid Timurlengia described from the same formation. Therefore, the newly named dinosaur likely topped the food web in its early Late Cretaceous ecosystem.

The genus's namesake is fittingly regal; Ulughbegsaurus is named for Ulugh Beg, the 15th century mathematician, astronomer, and sultan of the Timurid Empire of Central Asia. The species is named for the country where the fossil was discovered.

Carcharodontosaurians like Ulughbegsaurus later disappeared from the paleocontinent that included Central Asia. This disappearance is thought to have been related to the rise of tyrannosauroids as apex predators, but this transition has remained poorly understood because of the scarcity of relevant fossils.

Read more at Science Daily

500-million-year-old fossil represents rare discovery of ancient animal in North America

Many scientists consider the "Cambrian explosion" -- which occurred about 530-540 million years ago -- as the first major appearance of many of the world's animal groups in the fossil record. Like adding pieces to a giant jigsaw puzzle, each discovery dating from this time period has added another piece to the evolutionary map of modern animals. Now, researchers at the University of Missouri have found a rare, 500-million-year-old "worm-like" fossil called a palaeoscolecid, which is an uncommon fossil group in North America. The researchers believe this find, from an area in western Utah, can help scientists better understand how diverse the Earth's animals were during the Cambrian explosion.

Jim Schiffbauer, an associate professor of geological sciences in the MU College of Arts and Science and one of the study's co-authors, said that while this fossil has the same anatomical organization as modern worms, it doesn't exactly match with anything we see on modern Earth.

"This group of animals are extinct, so we don't see them, or any modern relatives, on the planet today," Schiffbauer said. "We tend to call them 'worm-like' because it's hard to say that they perfectly fit with annelids, priapulids, or any other types of organism on the planet today that we would generally call a "worm." But palaeoscolecids have the same general body plan, which in the history of life has been an incredibly successful body plan. So, this is a pretty cool addition because it expands the number of worm-like things that we know about from 500 million years ago in North America and adds to our global occurrences and diversity of the palaeoscolecids."

At the time, this palaeoscolecid was likely living on an ocean floor, said Wade Leibach, an MU graduate teaching assistant in the College of Arts and Science, and lead author on the study.

"It is the first known palaeoscolecid discovery in a certain rock formation -- the Marjum Formation of western Utah -- and that's important because this represents one of only a few palaeoscolecid taxa in North America," Leibach said. "Other examples of this type of fossil have been previously found in much higher abundance on other continents, such as Asia, so we believe this find can help us better understand how we view prehistoric environments and ecologies, such as why different types of organisms are underrepresented or overrepresented in the fossil record. So, this discovery can be viewed from not only the perspective of its significance in North American paleontology, but also broader trends in evolution, paleogeography and paleoecology."

Leibach, who switched his major from biology to geology after volunteering to work with the invertebrate paleontology collections at the University of Kansas, began this project as an undergraduate student by analyzing a box of about a dozen fossils in the collections of the KU Biodiversity Institute. Initially, Leibach and one of his co-authors, Anna Whitaker, who was a graduate student at KU at the time and is now at the University of Toronto-Mississauga, examined each fossil under a light microscope, identifying at least one of them as a palaeoscolecid.

Leibach worked with Julien Kimmig, who was at the KU Biodiversity Institute at the time and is now at Penn State University, and determined that confirming their initial findings would require the sophisticated microscopy equipment of the MU X-ray Microanalysis Core, which is directed by Schiffbauer. Using the core facility at MU, Leibach focused his analysis on the indentations left in the fossil by the ancient animal's microscopic plates, which are characteristic of palaeoscolecids.

"These very small mineralized plates are usually nanometers-to-micrometers in size, so we needed the assistance of the equipment in Dr. Schiffbauer's lab to be able to study them in detail because their size, orientation and distribution is how we classify the organism to the genus and species levels," Leibach said.

Leibach said the team identified a couple of reasons why this particular fossil may be found in limited quantities in North America compared to other parts of the world:

  • Geochemical limitations or different environments that may be more predisposed to preserving these types of organisms.
  • Ecological competition, which may have driven this type of organism to be less competitive or less abundant in certain areas.


The new taxon is named Arrakiscolex aasei after the fictional planet Arrakis in Frank Herbert's novel "Dune," which is inhabited by a species of armored worm, and after Arvid Aase, who collected the specimens.

Read more at Science Daily

Sep 8, 2021

Some coral reefs are keeping pace with ocean warming

Some coral communities are becoming more heat tolerant as ocean temperatures rise, offering hope for corals in a changing climate.

After a series of marine heatwaves hit the Phoenix Islands Protected Area (PIPA) in the central Pacific Ocean, a new study finds the impact of heat stress on the coral communities lessened over time.

While a 2002-2003 heatwave devastated coral communities in PIPA, the reefs recovered and experienced minimal losses during a similar event in 2009-2010. Then, in 2015-2016, a massive heatwave put twice as much heat stress on the corals, yet the die-off was much less severe than expected, according to new research published in Geophysical Research Letters, AGU's journal for high-impact reports with immediate implications spanning all Earth and space sciences.

The authors of the new study suspect heat-tolerant offspring from the surviving corals are repopulating the reefs, allowing the community to keep pace with warming seas, at least for the time being.

The new study could help coral reef managers identify coral communities most likely to survive in the warming ocean, improving conservation and restoration outcomes.

"It's easy to lose faith in coral reefs," said first author Michael Fox, a postdoctoral scientist and coral reef ecologist at the Woods Hole Oceanographic Institution (WHOI). "But in PIPA, which is protected from local stressors, and where reefs have enough time to recover between heatwaves, the coral populations are doing better than expected."

UNDERWATER HEATWAVES

Just like on land, heatwaves underwater are becoming more frequent and intense as the world warms, putting stress on ocean ecosystems. High temperatures hit coral reefs especially hard by causing widespread bleaching events, where corals eject the symbiotic algae in their tissues, further weakening the animals. With continued ocean warming, coral reefs face a dim future.

In the new study, researchers monitored coral communities at four islands within PIPA, an area encompassing over 400,000 square kilometers of coral reef and deep-sea habitat. The Republic of Kiribati established the reserve in 2008, and the United Nations Educational, Scientific and Cultural Organization (UNESCO) designated PIPA as a World Heritage Site in 2010. "The protected area gives us a rare opportunity to study pristine and isolated coral reef ecosystems, a privilege for which we thank the people of Kiribati," said co-author Anne Cohen, a marine scientist at WHOI.

The team used daily satellite data and temperature loggers to examine how each heatwave impacted the corals. They ruled out 11 environmental factors that might explain the higher-than-expected survival following the 2009-2010 and 2015-2016 heatwaves, such as greater cloud cover or more gradual warming.

After the 2002-2003 heatwave, the surveyed sites lost more than three-quarters of their coral cover. The reef was beginning to recover when the 2009-2010 heatwave hit, sparking fears of widespread bleaching, but two years later, coral cover had increased by more than 5%. Following the "Super El Niño" in 2015-2016, which raised ocean temperatures by 3 degrees Celsius (5.4 degrees Fahrenheit), the loss of coral cover was 40% -- about half of the 2002 losses -- even though the heatwave imposed twice the level of thermal stress.

A SOURCE OF HOPE FOR CORAL REEFS

Many of the reef-building species survived the heatwaves. "We're seeing areas that were devoid of corals after 2002-2003 that are now flourishing with most of the original species," Fox said.

At other reefs worldwide, sometimes only a handful of especially hardy or fast-growing species recover after a bleaching event. Coral larvae can float long distances on ocean currents, but due to PIPA's isolation, the researchers hypothesize that local heat-tolerant individuals are repopulating the reefs.

Now that the researchers have shown that some coral communities have the potential to keep up with ocean warming, their next step is to figure out how they are doing it.

The findings are "important for giving us hope for the future of coral reefs, and also for helping to maintain support for protecting reefs, including efforts to reduce local threats, like pollution, sedimentation and overfishing that undermine the reefs' ability to adapt," said Lizzie McLeod, the Global Reef Systems Lead at the Nature Conservancy, who was not involved in the study.

She recommends reef conservationists prioritize the conservation of heat-tolerant reefs, because they can act as climate refuges that repopulate other sites decimated by heatwaves.

The study's authors caution that even these remarkable corals have their limits and reversing climate change remains paramount. As heatwaves become more frequent or intense, even heat-tolerant communities could die out.

Read more at Science Daily

Threat of catastrophic supervolcano eruptions is ever-present

Curtin scientists are part of an international research team that studied an ancient supervolcano in Indonesia and found such volcanoes remain active and hazardous for thousands of years after a super-eruption, prompting the need for a rethink of how these potentially catastrophic events are predicted.

Associate Professor Martin Danišík, lead Australian author from the John de Laeter Centre based at Curtin University, said supervolcanoes often erupted several times with intervals of tens of thousands of years between the big eruptions, but it was not known what happened during the dormant periods.

"Gaining an understanding of those lengthy dormant periods will determine what we look for in young active supervolcanoes to help us predict future eruptions," Associate Professor Danišík said.

"Super-eruptions are among the most catastrophic events in Earth's history, venting tremendous amounts of magma almost instantaneously. They can impact global climate to the point of tipping the Earth into a 'volcanic winter', which is an abnormally cold period that may result in widespread famine and population disruption.

"Learning how supervolcanoes work is important for understanding the future threat of an inevitable super-eruption, which happen about once every 17,000 years."

Associate Professor Danišík said the team investigated the fate of magma left behind after the Toba super-eruption 75,000 years ago, using the minerals feldspar and zircon, which hold independent records of time based on the accumulation of the gases argon and helium, acting as time capsules in the volcanic rocks.
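The helium clock referred to here follows the standard (U-Th)/He age equation, in which radiogenic helium accumulates from uranium and thorium decay (this is the textbook form, not a result specific to this study):

```latex
{}^{4}\mathrm{He} \;=\; 8\,{}^{238}\mathrm{U}\left(e^{\lambda_{238}t} - 1\right)
\;+\; 7\,{}^{235}\mathrm{U}\left(e^{\lambda_{235}t} - 1\right)
\;+\; 6\,{}^{232}\mathrm{Th}\left(e^{\lambda_{232}t} - 1\right)
```

where the λ terms are the decay constants of each parent isotope and t is the accumulation age; solving for t dates the time since the mineral last cooled enough to retain helium.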

"Using these geochronological data, statistical inference and thermal modelling, we showed that magma continued to ooze out within the caldera, or deep depression created by the eruption of magma, for 5000 to 13,000 years after the super-eruption, and then the carapace of solidified left-over magma was pushed upward like a giant turtle shell," Associate Professor Danišík said.

"The findings challenged existing knowledge and studying of eruptions, which normally involves looking for liquid magma under a volcano to assess future hazard. We must now consider that eruptions can occur even if no liquid magma is found underneath a volcano -- the concept of what is 'eruptible' needs to be re-evaluated.

"While a super-eruption can be regionally and globally impactful and recovery may take decades or even centuries, our results show the hazard is not over with the super-eruption and the threat of further hazards exists for many thousands of years after.

"Learning when and how eruptible magma accumulates, and in what state the magma is in before and after such eruptions, is critical for understanding supervolcanoes."

Read more at Science Daily

Insect protein has great potential to reduce the carbon footprint of European consumers

Researchers at the University of Helsinki and LUT University, Finland, have analysed the extent to which insect protein could help to reduce global warming associated with food consumption in Europe. They have especially focused on insect protein use and soybean-protein use in the production of broilers.

The results support previous research suggesting that insect protein has the greatest potential to reduce the food-related carbon footprints of European consumers, if edible insects -- such as crickets, flies, and worms -- are consumed directly or processed as food. Preparation methods include eating them fresh, or drying and processing them into flour for use in bread or pasta.

"Our results indeed suggest that it is more sustainable to use insect protein for food rather than to use it to replace soybean meal in animal feed. Yet we found that a shift to using low-value food industry side stream products -- such as catering waste or by-products, for example, from fish processing -- in insect production for chicken feed is key to decisively increasing the carbon footprint benefits of using insect protein over soybean meal protein," says Professor Bodo Steiner from the Faculty of Agriculture and Forestry, University of Helsinki, Finland.

All this is important and timely, because as a part of the current climate change debate, concerns have been raised over the increasing deforestation associated with the rapid expansion of global soybean cultivation, which is a major protein source for feeding livestock raised to be food for humans.

From Science Daily

Study illuminates origins of lung cancer in never smokers

A genomic analysis of lung cancer in people with no history of smoking has found that a majority of these tumors arise from the accumulation of mutations caused by natural processes in the body. This study was conducted by an international team led by researchers at the National Cancer Institute (NCI), part of the National Institutes of Health (NIH), and describes for the first time three molecular subtypes of lung cancer in people who have never smoked.

These insights will help unlock the mystery of how lung cancer arises in people who have no history of smoking and may guide the development of more precise clinical treatments. The findings were published September 6, 2021, in Nature Genetics.

"What we're seeing is that there are different subtypes of lung cancer in never smokers that have distinct molecular characteristics and evolutionary processes," said epidemiologist Maria Teresa Landi, M.D., Ph.D., of the Integrative Tumor Epidemiology Branch in NCI's Division of Cancer Epidemiology and Genetics, who led the study, which was done in collaboration with researchers at the National Institute of Environmental Health Sciences, another part of NIH, and other institutions. "In the future we may be able to have different treatments based on these subtypes."

Lung cancer is the leading cause of cancer-related deaths worldwide. Every year, more than 2 million people around the world are diagnosed with the disease. Most people who develop lung cancer have a history of tobacco smoking, but 10% to 20% of people who develop lung cancer have never smoked. Lung cancer in never smokers occurs more frequently in women and at an earlier age than lung cancer in smokers.

Environmental risk factors, such as exposure to secondhand tobacco smoke, radon, air pollution, and asbestos, or having had previous lung diseases, may explain some lung cancers among never smokers, but scientists still don't know what causes the majority of these cancers.

In this large epidemiologic study, the researchers used whole-genome sequencing to characterize the genomic changes in tumor tissue and matched normal tissue from 232 never smokers, predominantly of European descent, who had been diagnosed with non-small cell lung cancer. The tumors included 189 adenocarcinomas (the most common type of lung cancer), 36 carcinoids, and seven other tumors of various types. The patients had not yet undergone treatment for their cancer.

The researchers combed the tumor genomes for mutational signatures, which are patterns of mutations associated with specific mutational processes, such as damage from natural activities in the body (for example, faulty DNA repair or oxidative stress) or from exposure to carcinogens. Mutational signatures act like a tumor's archive of activities that led up to the accumulation of mutations, providing clues into what caused the cancer to develop. A catalogue of known mutational signatures now exists, although some signatures have no known cause. In this study, the researchers discovered that a majority of the tumor genomes of never smokers bore mutational signatures associated with damage from endogenous processes, that is, natural processes that happen inside the body.
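Signature extraction of this kind is commonly framed as non-negative matrix factorization of a mutation-count matrix (tumours × mutation types). A toy sketch with simulated counts, using random placeholders rather than the study's data, might look like:

```python
import numpy as np

# Toy mutational-signature extraction via NMF: counts ≈ W @ H, where columns
# of W are signatures over 96 trinucleotide mutation channels and H holds
# per-tumour exposures. Data below are simulated, not real tumour counts.
rng = np.random.default_rng(0)
n_channels, n_tumours, n_sigs = 96, 20, 3

true_sigs = rng.dirichlet(np.ones(n_channels), size=n_sigs).T   # 96 x 3
exposures = rng.gamma(2.0, 50.0, size=(n_sigs, n_tumours))      # 3 x 20
counts = rng.poisson(true_sigs @ exposures)                     # 96 x 20

# Multiplicative updates keep W and H non-negative at every step.
W = rng.random((n_channels, n_sigs)) + 1e-3
H = rng.random((n_sigs, n_tumours)) + 1e-3
for _ in range(200):
    H *= (W.T @ counts) / (W.T @ W @ H + 1e-9)
    W *= (counts @ H.T) / (W @ H @ H.T + 1e-9)

err = np.linalg.norm(counts - W @ H)  # reconstruction error after fitting
```

Recovered signature columns are then compared against a reference catalogue to label each one with a known (or novel) mutational process.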

As expected, because the study was limited to never smokers, the researchers did not find any mutational signatures that have previously been associated with direct exposure to tobacco smoking. Nor did they find those signatures among the 62 patients who had been exposed to secondhand tobacco smoke. However, Dr. Landi cautioned that the sample size was small and the level of exposure highly variable.

"We need a larger sample size with detailed information on exposure to really study the impact of secondhand tobacco smoking on the development of lung cancer in never smokers," Dr. Landi said.

The genomic analyses also revealed three novel subtypes of lung cancer in never smokers, to which the researchers assigned musical names based on the level of "noise" (that is, the number of genomic changes) in the tumors. The predominant "piano" subtype had the fewest mutations; it appeared to be associated with the activation of progenitor cells, which are involved in the creation of new cells. This subtype of tumor grows extremely slowly, over many years, and is difficult to treat because it can have many different driver mutations. The "mezzo-forte" subtype had specific chromosomal changes as well as mutations in the growth factor receptor gene EGFR, which is commonly altered in lung cancer, and exhibited faster tumor growth. The "forte" subtype exhibited whole-genome doubling, a genomic change that is often seen in lung cancers in smokers. This subtype of tumor also grows quickly.

"We're starting to distinguish subtypes that could potentially have different approaches for prevention and treatment," said Dr. Landi. For example, the slow-growing piano subtype could give clinicians a window of opportunity to detect these tumors earlier when they are less difficult to treat. In contrast, the mezzo-forte and forte subtypes have only a few major driver mutations, suggesting that these tumors could be identified by a single biopsy and could benefit from targeted treatments, she said.

A future direction of this research will be to study people of different ethnic backgrounds and geographic locations, and whose exposure history to lung cancer risk factors is well described.

"We're at the beginning of understanding how these tumors evolve," Dr. Landi said. "This analysis shows that there is heterogeneity, or diversity, in lung cancers in never smokers."

Stephen J. Chanock, M.D., director of NCI's Division of Cancer Epidemiology and Genetics, noted, "We expect this detective-style investigation of genomic tumor characteristics to unlock new avenues of discovery for multiple cancer types."

Read more at Science Daily

Sep 7, 2021

Conservation commitments should focus on the best places to protect rare species

The Prime Minister of the United Kingdom has pledged to protect 30% of land to support the recovery of nature, but a new study finds that much of the new land that has been allocated to meet this aspiration is not in the highest priority areas for biodiversity conservation.

Currently, only 9% of Britain's land area has a legal status that specifically mandates biodiversity protection.

The UK 30by30 commitment includes land that is currently designated as 'protected landscapes' in England, such as National Parks and Areas of Outstanding Natural Beauty, but these areas were not originally chosen nor managed for biodiversity.

New research by the University of York and Natural England finds that 58% of British 'protected landscapes' lie outside the highest 30% priority land for species conservation.

The study comes in response to the UK Government's pledge to protect 30% of land to support the recovery of nature by 2030, made last September.

The authors of the report say the 30by30 commitment is a positive step for UK conservation, but requires better planning and implementation if it is to deliver its intended goals.

They argue that designating areas with high landscape value does not offer efficient protection of high priority species (such as tree sparrows and white-letter hairstreak butterflies) and habitats. This is because many attractive landscapes are not in the right places to enhance the country's existing protected area network.

The team identified potential areas for nature recovery, which they say could improve species representation outcomes by 68%, compared to only 38% using the pledged landscapes.

The study found the most important areas to prioritise, in a way that is likely to benefit the most species, are largely concentrated in southern and eastern England. Northern and upland areas of Britain have disproportionately larger areas protected for biodiversity, so the greatest gains in species representation can potentially be achieved by increased levels of protection and habitat restoration in southern and lowland areas.

Charles Cunningham, a PhD researcher from the University of York's Leverhulme Centre for Anthropocene Biodiversity who is first author of the study, said: "Increasingly, ambitious conservation pledges that focus on large areas may draw attention away from where threatened species are actually located."

"Our findings show that including all of these landscapes is an inefficient way to expand the existing conservation network, and a mixture of landscapes inside and outside of protected landscapes would result in much better species protection."

Read more at Science Daily

Hydrogen-burning white dwarfs enjoy slow aging

The prevalent view of white dwarfs as inert, slowly cooling stars has been challenged by observations from the NASA/ESA Hubble Space Telescope. An international group of astronomers have discovered the first evidence that white dwarfs can slow down their rate of ageing by burning hydrogen on their surface.

"We have found the first observational evidence that white dwarfs can still undergo stable thermonuclear activity," explained Jianxing Chen of the Alma Mater Studiorum Università di Bologna and the Italian National Institute for Astrophysics, who led this research. "This was quite a surprise, as it is at odds with what is commonly believed."

White dwarfs are the slowly cooling stars which have cast off their outer layers during the last stages of their lives. They are common objects in the cosmos; roughly 98% of all the stars in the Universe will ultimately end up as white dwarfs, including our own Sun. Studying these cooling stages helps astronomers understand not only white dwarfs, but also their earlier stages.

To investigate the physics underpinning white dwarf evolution, astronomers compared cooling white dwarfs in two massive collections of stars: the globular clusters M3 and M13. These two clusters share many physical properties such as age and metallicity but the populations of stars which will eventually give rise to white dwarfs are different. In particular, the overall colour of stars at an evolutionary stage known as the Horizontal Branch are bluer in M13, indicating a population of hotter stars. This makes M3 and M13 together a perfect natural laboratory in which to test how different populations of white dwarfs cool.

"The superb quality of our Hubble observations provided us with a full view of the stellar populations of the two globular clusters," continued Chen. "This allowed us to really contrast how stars evolve in M3 and M13."

Using Hubble's Wide Field Camera 3 the team observed M3 and M13 at near-ultraviolet wavelengths, allowing them to compare more than 700 white dwarfs in the two clusters. They found that M3 contains standard white dwarfs which are simply cooling stellar cores. M13, on the other hand, contains two populations of white dwarfs: standard white dwarfs and those which have managed to hold on to an outer envelope of hydrogen, allowing them to burn for longer and hence cool more slowly.

Comparing their results with computer simulations of stellar evolution in M13, the researchers were able to show that roughly 70% of the white dwarfs in M13 are burning hydrogen on their surfaces, slowing down the rate at which they are cooling.

This discovery could have consequences for how astronomers measure the ages of stars in the Milky Way. The evolution of white dwarfs has previously been modelled as a predictable cooling process. This relatively straightforward relationship between age and temperature has led astronomers to use the white dwarf cooling rate as a natural clock to determine the ages of star clusters, particularly globular and open clusters. However, white dwarfs burning hydrogen could cause these age estimates to be inaccurate by as much as 1 billion years.
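The "natural clock" rests on simple luminosity-age relations; the classic Mestel approximation (a textbook simplification, not the model used in this study) gives

```latex
\frac{L}{L_\odot} \;\approx\; \frac{M}{M_\odot}\left(\frac{t}{t_0}\right)^{-7/5}
\qquad\Longrightarrow\qquad
t \;\propto\; \left(\frac{L/L_\odot}{M/M_\odot}\right)^{-5/7}
```

so any extra luminosity from residual hydrogen burning on the surface makes a white dwarf appear younger than it really is, which is how the bias in cluster age estimates arises.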

Read more at Science Daily

The warming climate is causing animals to 'shapeshift'

Climate change is not only a human problem; animals have to adapt to it as well. Some "warm-blooded" animals are shapeshifting and getting larger beaks, legs, and ears to better regulate their body temperatures as the planet gets hotter. Bird researcher Sara Ryding of Deakin University in Australia describes these changes in a review published September 7th in the journal Trends in Ecology and Evolution.

"A lot of the time when climate change is discussed in mainstream media, people are asking 'can humans overcome this?', or 'what technology can solve this?'. It's high time we recognized that animals also have to adapt to these changes, but this is occurring over a far shorter timescale than would have occurred through most of evolutionary time," says Ryding (@zuuletc). "The climate change that we have created is heaping a whole lot of pressure on them, and while some species will adapt, others will not."

Ryding notes that climate change is a complex and multifaceted phenomenon that's been occurring progressively, so it is difficult to pinpoint just one cause of the shapeshifting. But these changes have been occurring across wide geographical regions and among a diverse array of species, so there is little in common apart from climate change.

Strong shapeshifting has particularly been reported in birds. Several species of Australian parrot have shown, on average, a 4%-10% increase in bill size since 1871, and this is positively correlated with the summer temperature each year. North American dark-eyed juncos, a type of small songbird, showed a link between increased bill size and short-term temperature extremes in cold environments. There have also been reported changes in mammalian species. Researchers have reported tail length increases in wood mice and tail and leg size increases in masked shrews.

"The increases in appendage size we see so far are quite small -- less than 10% -- so the changes are unlikely to be immediately noticeable," says Ryding. "However, prominent appendages such as ears are predicted to increase -- so we might end up with a live-action Dumbo in the not-so-distant future."

Next, Ryding intends to investigate shapeshifting in Australian birds firsthand by 3D scanning museum bird specimens from the past 100 years. It will give her team a better understanding of which birds are changing appendage size due to climate change and why.

Read more at Science Daily

Seven personality and behavior traits identified in cats

Researchers at the University of Helsinki have developed a new comprehensive questionnaire for surveying feline personality and behaviour. A dataset of more than 4,300 cats representing 26 breed groups revealed seven personality and behaviour traits, with significant differences observed between breeds.

Cats are our most common pets, and feline behaviour is increasingly being investigated due to a range of behavioural problems. Another topic of interest in addition to behaviour traits is personality since it can be connected to behavioural problems.

"Compared to dogs, less is known about the behaviour and personality of cats, and there is demand for identifying related problems and risk factors. We need more understanding and tools to weed out problematic behaviour and improve cat welfare. The most common behavioural challenges associated with cats relate to aggression and inappropriate elimination," says doctoral researcher Salla Mikkola from the University of Helsinki and the Folkhälsan Research Center.

Seven feline personality and behaviour traits

In a questionnaire designed by Professor Hannes Lohi's research group, personality and behaviour were surveyed through a total of 138 statements. The questionnaire included comprehensive sections on background and health-related information. By employing, among other means, factor analysis to process the data, seven personality and behaviour traits in all were identified.
 

  • Activity/playfulness
  • Fearfulness
  • Aggression towards humans
  • Sociability towards humans
  • Sociability towards cats
  • Litterbox issues (relieving themselves in inappropriate places, precision in terms of litterbox cleanliness and substrate material)
  • Excessive grooming


"While the number of traits identified in prior research varies, activity/playfulness, fearfulness and aggression are the ones from among the traits identified in our study which occur the most often in prior studies. Litterbox issues and excessive grooming are not personality traits as such, but they can indicate something about the cat's sensitivity to stress," Mikkola adds.
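The extraction step described above, reducing 138 questionnaire statements to a handful of traits, can be illustrated with a principal-axis-style factoring on synthetic data. Everything here (the item count, the two-trait structure, the Kaiser eigenvalue cutoff) is a hypothetical sketch, not the study's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n_cats = 500

# Hypothetical responses: two latent traits (say, activity and fearfulness)
# each drive three questionnaire items, plus measurement noise.
latent = rng.normal(size=(n_cats, 2))
true_loadings = np.zeros((6, 2))
true_loadings[:3, 0] = 0.9
true_loadings[3:, 1] = 0.9
items = latent @ true_loadings.T + 0.4 * rng.normal(size=(n_cats, 6))

# Principal-axis-style factoring: eigendecompose the item correlation matrix
corr = np.corrcoef(items, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]  # sorted descending

# Kaiser criterion: retain factors with eigenvalue above 1
n_factors = int(np.sum(eigvals > 1.0))
```

On real questionnaire data the retained factors would then be rotated and interpreted; in this toy setup, the two simulated traits come back out as two factors above the eigenvalue cutoff.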

Differences in the prevalence of traits seen between breeds

In addition to individuals, clear personality differences can be found between breeds. In other words, certain personality and behaviour traits are more common among certain cat breeds.

"The most fearful breed was the Russian Blue, while the Abyssinian was the least fearful. The Bengal was the most active breed, while the Persian and Exotic were the most passive. The breeds exhibiting the most excessive grooming were the Siamese and Balinese, while the Turkish Van breed scored considerably higher in aggression towards humans and lower in sociability towards cats. We had already observed the same phenomenon in a prior study," says Professor Hannes Lohi from the University of Helsinki and the Folkhälsan Research Center.

The researchers wish to emphasise that no pairwise comparisons between breeds were carried out at this juncture.

"We wanted to obtain a rough idea of whether there are differences in personality traits between breeds. In further studies, we will utilise more complex models to examine factors that affect traits and problematic behaviour. In these models, we will take into consideration, in addition to its breed, the cat's age, gender, health and a wide range of environmental factors," Mikkola says.

Assessing reliability and validity

Feline behaviour and personality can be studied, for example, through questionnaires aimed at cat owners. Such questionnaires can measure feline behaviour in the long term and in everyday circumstances, which is impossible in behavioural tests. Furthermore, cats do not necessarily behave in test settings in a way typical of themselves. Due to their subjective nature, the reliability of the questionnaires must be assessed before the data can be analysed further.

"Internationally speaking, our study is the most extensive and significant survey so far, and it provides excellent opportunities for further research. The reliability of prior feline behavioural questionnaires has not been measured in such a versatile manner, nor are they as comprehensive as this one. Establishing reliability is key to making further analyses worthwhile and enabling the reliable identification of various risk factors," says Lohi.

The researchers reached out to cat owners who had responded to the questionnaire one to three months earlier, asking them to fill it out again or to have another adult living in the same household complete it for the same cat. The two additional datasets accumulated through this method made it possible to evaluate the questionnaire's reliability both over time and between respondents.

"By comparing the responses, we noted that the responses provided for the same cat were very similar, while the personality and behaviour traits were found to be reproducible and reliable. We also examined the validity of the questionnaire or whether it measures what it intended to measure. In these terms, too, the questionnaire functioned well," says Mikkola.
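The test-retest comparison Mikkola describes can be sketched as a correlation between two waves of hypothetical trait scores; the sample size and noise level below are invented, and the study's actual reliability statistics may differ:

```python
import numpy as np

rng = np.random.default_rng(1)
n_cats = 200

# Hypothetical trait scores for the same cats at two time points:
# a stable true trait plus independent measurement noise at each wave.
true_trait = rng.normal(size=n_cats)
wave1 = true_trait + 0.3 * rng.normal(size=n_cats)
wave2 = true_trait + 0.3 * rng.normal(size=n_cats)

# Test-retest reliability as the Pearson correlation between the two waves
reliability = np.corrcoef(wave1, wave2)[0, 1]
```

When the trait itself is stable and measurement noise is modest, the correlation between waves stays high, which is the pattern the researchers report for their questionnaire.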

Read more at Science Daily

Metabolic changes in plasma and immune cells associated with COVID-19 severity, can predict patient survival

COVID-19 patients have differing immune responses that lead to disease outcomes ranging from asymptomatic SARS-CoV-2 infection to death. After examining blood samples from nearly 200 COVID-19 patients, researchers have uncovered underlying metabolic changes that regulate how immune cells react to the disease. These changes are associated with disease severity and could be used to predict patient survival. The findings were published in the journal Nature Biotechnology.

"We know that there are a range of immune responses to COVID-19, and the biological processes underlying those responses are not well understood," said co-first author Jihoon Lee, a graduate student at Fred Hutchinson Cancer Research Center. "We analyzed thousands of biological markers linked to metabolic pathways that underlie the immune system and found some clues as to what immune-metabolic changes may be pivotal in severe disease. Our hope is that these observations of immune function will help others piece together the body's response to COVID-19. The deeper understanding gained here may eventually lead to better therapies that can more precisely target the most problematic immune or metabolic changes."

The researchers collected 374 blood samples -- two draws per patient during the first week after being diagnosed with SARS-CoV-2 infection -- and analyzed their plasma and single immune cells. The analysis included 1,387 genes involved in metabolic pathways and 1,050 plasma metabolites.

In plasma samples, the team found that increased COVID-19 severity is associated with metabolite alterations, suggesting increased immune-related activity. Furthermore, through single-cell sequencing, researchers found that each major immune cell type has a distinct metabolic signature.
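As a rough illustration of how plasma measurements can be predictive of severity, here is a toy logistic-regression classifier fitted to simulated metabolite data. This is a minimal sketch under invented assumptions, not the authors' actual multi-omic model:

```python
import numpy as np

rng = np.random.default_rng(2)
n_patients, n_metabolites = 200, 5

# Simulated plasma metabolite levels; the first metabolite shifts with
# severity, echoing the severity-associated alterations reported above.
X = rng.normal(size=(n_patients, n_metabolites))
severe = X[:, 0] + 0.5 * rng.normal(size=n_patients) > 0

# Minimal logistic regression fitted by gradient descent
w = np.zeros(n_metabolites)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-X @ w))          # predicted probability
    w -= 0.1 * X.T @ (p - severe) / n_patients  # gradient step

# In-sample accuracy of the fitted linear decision boundary
accuracy = np.mean(((X @ w) > 0) == severe)
```

With one informative metabolite, the fitted weights concentrate on that feature and the classifier recovers severity well above chance; the real study's predictive models operate on thousands of features and cell-type-specific signatures.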

"We have found metabolic reprogramming that is highly specific to individual immune cell classes (e.g. "killer" CD8+ T cells, "helper" CD4+ T cells, antibody-secreting B cells, etc.) and even cell subtypes, and the complex metabolic reprogramming of the immune system is associated with the plasma global metabolome and are predictive of disease severity and even patient death," said co-first and co-corresponding author Dr. Yapeng Su, a research scientist at Institute for Systems Biology. "Such deep and clinically relevant insights on sophisticated metabolic reprogramming within our heterogeneous immune systems are otherwise impossible to gain without advanced single-cell multi-omic analysis."

"This work provides significant insights for developing more effective treatments against COVID-19. It also represents a major technological hurdle," said Dr. Jim Heath, president and professor of ISB and co-corresponding author on the paper. "Many of the data sets that are collected from these patients tend to measure very different aspects of the disease, and are analyzed in isolation. Of course, one would like these different views to contribute to an overall picture of the patient. The approach described here allows for the sum of the different data sets to be much greater than the parts, and provides for a much richer interpretation of the disease."

The research was conducted by scientists from ISB, Fred Hutchinson Cancer Research Center, Stanford University, Swedish Medical Center, St. John's Cancer Institute at Saint John's Health Center, the University of Washington, and the Howard Hughes Medical Institute.

Read more at Science Daily

Sep 6, 2021

Astronomers explain origin of elusive ultradiffuse galaxies

As their name suggests, ultradiffuse galaxies, or UDGs, are dwarf galaxies whose stars are spread out over a vast region, resulting in extremely low surface brightness, making them very difficult to detect. Several questions about UDGs remain unanswered: How did these dwarfs end up so extended? Are their dark matter halos -- the halos of invisible matter surrounding the galaxies -- special?

Now an international team of astronomers, co-led by Laura Sales, an astronomer at the University of California, Riverside, reports in Nature Astronomy that it has used sophisticated simulations to detect a few "quenched" UDGs in low-density environments in the universe. A quenched galaxy is one that does not form stars.

"What we have detected is at odds with theories of galaxy formation since quenched dwarfs are required to be in clusters or group environments in order to get their gas removed and stop forming stars," said Sales, an associate professor of physics and astronomy. "But the quenched UDGs we detected are isolated. We were able to identify a few of these quenched UDGs in the field and trace their evolution backward in time to show they originated in backsplash orbits."

Here, "in the field" refers to galaxies isolated in quieter environments and not in a group or cluster environment. Sales explained that a backsplash galaxy is an object that looks like an isolated galaxy today but in the past was a satellite of a more massive system -- similar to a comet, which visits our sun periodically, but spends the bulk of its journey in isolation, far from most of the solar system.

"Isolated galaxies and satellite galaxies have different properties because the physics of their evolution is quite different," she said. "These backsplash galaxies are intriguing because they share properties with the population of satellites in the system to which they once belonged, but today they are observed to be isolated from the system."

Dwarf galaxies are small galaxies that contain anywhere from 100 million to a few billion stars. In contrast, the Milky Way has 200 billion to 400 billion stars. While all UDGs are dwarf galaxies, not all dwarf galaxies are UDGs. For example, at similar luminosity, dwarfs span a very large range of sizes, from compact to diffuse; UDGs are the tail end of the most extended objects at a given luminosity. A UDG has the stellar content of a dwarf galaxy, 10 to 100 times smaller than that of the Milky Way, but its size is comparable to the Milky Way's, giving it the extremely low surface brightness that makes it special.

Sales explained that the dark matter halo of a dwarf galaxy has a mass at least 10 times smaller than the Milky Way, and the size scales similarly. UDGs, however, break this rule and show a radial extension comparable to that of much larger galaxies.

"One of the popular theories to explain this was that UDGs are 'failed Milky Ways,' meaning they were destined to be galaxies like our own Milky Way but somehow failed to form stars," said José A. Benavides, a graduate student at the Institute of Theoretical and Experimental Astronomy in Argentina and the first author of the research paper. "We now know that this scenario cannot explain all UDGs. So theoretical models are arising where more than one formation mechanism may be able to form these ultradiffuse objects."

According to Sales, the value of the new work is twofold. First, the simulation used by the researchers, called TNG50, successfully predicted UDGs with characteristics similar to observed UDGs. Second, the researchers found a few rare quenched UDGs for which they have no formation mechanism.

"Using TNG50 as a 'time machine' to see how the UDGs got to where they are, we found these objects were satellites several billion years before but got expelled into a very elliptical orbit and look isolated today," she said.

The researchers also report that, according to their simulations, quenched UDGs make up about 25% of the ultradiffuse galaxy population. In observations, however, this percentage is much smaller.

"This means a lot of dwarf galaxies lurking in the dark may have remained undetected to our telescopes," Sales said. "We hope our results will inspire new strategies for surveying the low-luminosity universe, which would allow for a complete census of this population of dwarf galaxies."

The study is the first to resolve the myriad environments -- from isolated dwarfs to dwarfs in groups and clusters -- necessary to detect UDGs, with high enough resolution to study their morphology and structure.

Next, the research team will continue its study of UDGs in TNG50 simulations to better understand why these galaxies are so extended compared to other dwarf galaxies with the same stellar content. The researchers will use the Keck Telescope in Hawaii, one of the most powerful telescopes in the world, to measure the dark matter content of UDGs in the Virgo cluster, the closest galaxy cluster to Earth.

Read more at Science Daily

Hummingbirds can smell their way out of danger

In less time than it takes to read this sentence, hummingbirds can catch a whiff of potential trouble. That's the result of new UC Riverside research showing, contrary to popular belief, the tiny birds do have an active sense of smell.

Researchers have known for some time that vultures have a highly sensitive sense of smell, with some species being compared to "airborne bloodhounds." This is due in part to their large olfactory bulbs -- tissue in the brain that controls smell.

However, hummingbirds' olfactory bulbs are, like the rest of their bodies, extremely small. Earlier studies were unable to demonstrate that hummingbirds showed a preference for the smell of flowers containing nectar. In addition, flowers pollinated by birds generally don't have strong odors, unlike those pollinated by insects. For these reasons, scientists did not previously believe the birds possessed the ability to smell things.

UCR scientists have now shown for the first time that not only can hummingbirds smell insects, but also that scent may help them stay out of danger while looking for nectar to eat. A paper describing their experiments has now been published in the journal Behavioral Ecology and Sociobiology.

"This is pretty exciting, as it is the first clear demonstration of hummingbirds using their sense of smell alone to make foraging decisions and avoid contact with potentially dangerous insects at a flower or feeder," said Erin Wilson Rankin, associate entomology professor and study co-author.

For their experiments, the researchers allowed more than 100 hummingbirds to choose between two feeders, either sugar water alone, or sugar water plus one of several chemicals whose scent signaled the presence of an insect. There were no visual differences between the two feeders offered in each of the experiments.

Tests included the scent deposited on flowers by European honeybees, an attraction chemical secreted by Argentine ants, and formic acid, a defensive compound produced by some Formica ants that is known to harm birds as well as mammals.

"If a bird has any exposed skin on their legs, formic acid can hurt, and if they get it in their eyes, it isn't pleasant," Rankin said. "It's also extremely volatile."

The hummingbirds avoided both of the ant-derived chemicals, especially the formic acid. However, they had no reaction at all to the honeybee scent, which is known to deter other bees from visiting flowers.
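Avoidance results like these are typically checked against chance. An exact one-sided binomial test on hypothetical visit counts (the 80-of-100 split below is invented for illustration, not the study's data) looks like this:

```python
from math import comb

# Hypothetical visit counts: of 100 choices between a plain sugar-water
# feeder and one scented with formic acid, suppose 80 went to the plain one.
n_visits, plain_visits = 100, 80

# One-sided exact binomial test against no preference (p = 0.5):
# probability of seeing 80 or more plain-feeder visits by chance alone
p_value = sum(comb(n_visits, k)
              for k in range(plain_visits, n_visits + 1)) / 2**n_visits
```

A split that lopsided is vanishingly unlikely under chance, so the null of "no preference" would be rejected; the published study's own statistics may use a different test and different counts.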

To ensure it was the chemical itself the birds were reacting to, and not simply a fear of new smells, the researchers did an additional test with ethyl butyrate, a common additive in human food.

"It smells like Juicy Fruit gum, which is not a smell known in nature," Rankin said. "I did not enjoy it. The birds did not care about it though and didn't go out of their way to avoid it."

Rankin said the study raises new questions about the underrated importance that scent plays in birds' foraging decisions and specifically, hummingbird foraging.

Ashley Kim, first author on the paper and current ecology doctoral student at UC San Diego, was based in the Rankin Lab at UCR while participating in this project.

"This research made me understand the importance of studying the basic biology and natural history of animals that are commonly overlooked," she said.

Kim's participation was supported by the National Science Foundation, through its Research Experiences for Undergraduates program, which helps undergraduates get hands-on experience conducting research.

Read more at Science Daily

Struggling to learn a new language? Blame it on your stable brain

A study in patients with epilepsy is helping researchers understand how the brain manages the task of learning a new language while retaining our mother tongue. The study, by neuroscientists at UC San Francisco, sheds light on the age-old question of why it's so difficult to learn a second language as an adult.

The somewhat surprising results gave the team a window into how the brain navigates the tradeoff between neuroplasticity -- the ability to grow new connections between neurons when learning new things -- and stability, which allows us to maintain the integrated networks of things we've already learned. The findings appear in the Aug. 30, 2021, issue of Proceedings of the National Academy of Sciences.

"When learning a new language, our brains are somehow accommodating both of these forces as they're competing against each other," said Matt Leonard, PhD, assistant professor of neurological surgery and a member of the UCSF Weill Institute for Neurosciences.

By using electrodes on the surface of the brain to follow high-resolution neural signals, the team found that clusters of neurons scattered throughout the speech cortex appear to fine-tune themselves as a listener gains familiarity with foreign sounds.

"These are our first insights into what's changing in the brain between first hearing the sounds of a foreign language and being able to recognize them," said Leonard, who is a principal investigator on the study.

"That in-between stage is a crucial step in language learning but has been difficult to tackle, because the process is dynamic and unique to the individual," he said. "With this study, we were able to see what's actually happening in the brain regions involved in differentiating sounds during this initial phase of learning."

Brain Activity Shifts as Foreign Sounds Become Familiar

Learning the sounds of a new language is the first step in learning to use that language, said Leonard. So for this study, Leonard and lead author and postdoctoral scholar Han Yi, PhD, investigated how the activity in the dispersed brain regions associated with language shifted as the listener became more familiar with the foreign sounds.

The team worked with 10 patient volunteers, aged 19 to 59, whose native language is English, and asked them to recognize speech sounds in Mandarin. Mandarin is a tonal language in which the meaning of the word relies not only on the vowel and consonant sounds but also on subtle changes in the pitch of the voice, known as tones. Speakers of non-tonal languages like English often find it very challenging to discern these unfamiliar sounds.

Each of the volunteers had previously had brain surgery, during which electrodes were implanted in their brains to locate the source of their seizures. The study included seven patients at the UCSF Epilepsy Center, and three in the Epilepsy Center at the University of Iowa Hospitals and Clinics. The volunteers agreed to allow Leonard and his team to gather data from high-density, 256-channel electrodes placed on the surface of the brain regions that process speech sounds.

Over the course of the next few days, Leonard and Yi worked with the volunteers individually, playing recordings of several native Mandarin speakers of different ages, both male and female, pronouncing syllables like "ma" and "di" using each of the four tones. After each sound, the patient indicated whether they thought the tone was going up, down, up and then down, or staying flat, and received feedback on whether they were correct. Patients repeated this task about 200 times, over several 5- to 10-minute sessions.

After that brief amount of time, Leonard said, people had gotten through the initial learning phase and had become somewhat adept at categorizing the sounds.

"We also saw a lot of variability," he added. "Some people will get a bunch of trials right and then they'll start getting them wrong and then they'll get it right again in this kind of up-and-down that seems to be part of the learning process."
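The noisy, gradually improving performance Leonard describes can be visualized with a running-accuracy window over simulated trials; the learning trajectory below is invented, not the patients' data:

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials = 200

# Invented trial outcomes: the chance of a correct tone judgment drifts
# upward across ~200 trials, with plenty of trial-to-trial noise.
p_correct = np.linspace(0.3, 0.8, n_trials)
correct = rng.random(n_trials) < p_correct

# A 20-trial running-accuracy window shows the up-and-down learning curve
window = 20
running_acc = np.convolve(correct, np.ones(window) / window, mode="valid")
```

Smoothing over a window exposes the overall upward trend while preserving the local dips and recoveries that "seem to be part of the learning process."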

Learning New Sounds Involves Fine-Tuning Neural "Knobs"

When Leonard and Yi looked at the neural signals generated by the language learners, they saw a pattern that both surprised them and explained the performance curve they'd observed.

Data from other published studies suggested that activity across the speech cortex might increase as a person becomes more familiar with the language. What the researchers discovered instead was a spectrum of changes distributed throughout the speech cortex, with activity increasing in some areas and decreasing in others, maintaining a careful balance.

Those changes might be related to a brain area becoming tuned in to a particular tone, said Yi.

"We could see some groups of cells would respond more to the falling tone, and just keep ramping up their response, while right next to it another group of cells would increasingly engage when the person heard the dipping tone," Yi said. "It's as if these small clumps of neurons took on different roles."

In addition, which brain regions were more activated by which tone varied across individuals.

"It's more like each person's brain has a unique set of knobs that are getting fine-tuned while they're becoming familiar with these sounds," Leonard said.

Leonard and Yi think this may explain why some people pick up the sounds much more easily than others, as each unique brain strikes its own balance between maintaining the stability of the native language while calling on the plasticity required to learn a new one.

"The volunteers were able to learn the tones in Mandarin without affecting their ability to perceive pitch in English or in music," said Leonard. "These little neural knobs were all communicating with each other to reach the point where they can do the task correctly by working together."

Read more at Science Daily

Over 200 health journals call on world leaders to address 'catastrophic harm to health' from climate change

Over 200 health journals across the world have come together to simultaneously publish an editorial calling on world leaders to take emergency action to limit global temperature increases, halt the destruction of nature, and protect health.

While recent targets to reduce emissions and conserve biodiversity are welcome, they are not enough and are yet to be matched with credible short- and longer-term plans, it warns.

The editorial is published in leading titles from every continent including The BMJ, The Lancet, the New England Journal of Medicine, the East African Medical Journal, the Chinese Science Bulletin, the National Medical Journal of India, the Medical Journal of Australia, and 50 BMJ specialist journals including BMJ Global Health and Thorax.

Never have so many journals come together to make the same statement, reflecting the severity of the climate change emergency now facing the world.

The editorial is being published ahead of the UN General Assembly next week, one of the last international meetings taking place before the COP26 climate conference in Glasgow, UK, in November. This is a crucial moment to urge all countries to deliver enhanced and ambitious climate plans to honour the goals of the Paris Agreement, the international treaty on climate change adopted by 195 countries in 2015.

For decades, health professionals and health journals have been warning of the severe and growing impacts on health from climate change and the destruction of nature.

The effects on health and survival of extreme temperatures, destructive weather events, and the widespread degradation of essential ecosystems are just some of the impacts we are seeing more of due to a changing climate.

They disproportionately affect the most vulnerable, including children and the elderly, ethnic minorities, poorer communities and those with underlying health conditions.

The editorial urges governments to intervene to transform societies and economies, for example, by supporting the redesign of transport systems, cities, production and distribution of food, markets for financial investments, and health systems.

Substantial investment will be needed, but this will have huge positive health and economic benefits, including high quality jobs, reduced air pollution, increased physical activity, and improved housing and diet, explain the authors.

Crucially, cooperation hinges on wealthy nations doing more, they say. In particular, countries that have disproportionately created the environmental crisis must do more to support low and middle income countries to build cleaner, healthier, and more resilient societies.

"As health professionals, we must do all we can to aid the transition to a sustainable, fairer, resilient, and healthier world," they write. "We, as editors of health journals, call for governments and other leaders to act, marking 2021 as the year that the world finally changes course."

Dr Fiona Godlee, Editor-in-Chief of The BMJ, and one of the co-authors of the editorial, said: "Health professionals have been on the frontline of the covid-19 crisis and they are united in warning that going above 1.5C and allowing the continued destruction of nature will bring the next, far deadlier crisis. Wealthier nations must act faster and do more to support those countries already suffering under higher temperatures. 2021 has to be the year the world changes course -- our health depends on it."

Seye Abimbola, Editor-in-Chief of BMJ Global Health, said: "What we must do to tackle pandemics, health inequities, and climate change is the same - global solidarity and action that recognise that, within and across nations our destinies are inextricably linked, just as human health is inextricably linked to the health of the planet."

Read more at Science Daily

Sep 5, 2021

Astronomers create 3D-printed stellar nurseries

Astronomers can't touch the stars they study, but astrophysicist Nia Imara is using 3-dimensional models that fit in the palm of her hand to unravel the structural complexities of stellar nurseries, the vast clouds of gas and dust where star formation occurs.

Imara and her collaborators created the models using data from simulations of star-forming clouds and a sophisticated 3D printing process in which the fine-scale densities and gradients of the turbulent clouds are embedded in a transparent resin. The resulting models -- the first 3D-printed stellar nurseries -- are highly polished spheres about the size of a baseball (8 centimeters in diameter), in which the star-forming material appears as swirling clumps and filaments.

"We wanted an interactive object to help us visualize those structures where stars form so we can better understand the physical processes," said Imara, an assistant professor of astronomy and astrophysics at UC Santa Cruz and first author of a paper describing this novel approach published August 25 in Astrophysical Journal Letters.

An artist as well as an astrophysicist, Imara said the idea is an example of science imitating art. "Years ago, I sketched a portrait of myself touching a star. Later, the idea just clicked. Star formation within molecular clouds is my area of expertise, so why not try to build one?" she said.

She worked with coauthor John Forbes at the Flatiron Institute's Center for Computational Astrophysics to develop a suite of nine simulations representing different physical conditions within molecular clouds. The collaboration also included coauthor James Weaver at Harvard University's School of Engineering and Applied Sciences, who helped to turn the data from the astronomical simulations into physical objects using high-resolution and photo-realistic multi-material 3D printing.

The results are both visually striking and scientifically illuminating. "Just aesthetically they are really amazing to look at, and then you begin to notice the complex structures that are incredibly difficult to see with the usual techniques for visualizing these simulations," Forbes said.

For example, sheet-like or pancake-shaped structures are hard to distinguish in two-dimensional slices or projections, because a section through a sheet looks like a filament.

"Within the spheres, you can clearly see a two-dimensional sheet, and inside it are little filaments, and that's mind boggling from the perspective of someone who is trying to understand what's going on in these simulations," Forbes said.

The models also reveal structures that are more continuous than they would appear in 2D projections, Imara said. "If you have something winding around through space, you might not realize that two regions are connected by the same structure, so having an interactive object you can rotate in your hand allows us to detect these continuities more easily," she said.

The nine simulations on which the models are based were designed to investigate the effects of three fundamental physical processes that govern the evolution of molecular clouds: turbulence, gravity, and magnetic fields. By changing different variables, such as the strength of the magnetic fields or how fast the gas is moving, the simulations show how different physical environments affect the morphology of substructures related to star formation.

Stars tend to form in clumps and cores located at the intersection of filaments, where the density of gas and dust becomes high enough for gravity to take over. "We think that the spins of these newborn stars will depend on the structures in which they form -- stars in the same filament will 'know' about each other's spins," Imara said.

With the physical models, it doesn't take an astrophysicist with expertise in these processes to see the differences between the simulations. "When I looked at 2D projections of the simulation data, it was often challenging to see their subtle differences, whereas with the 3D-printed models, it was obvious," said Weaver, who has a background in biology and materials science and routinely uses 3D printing to investigate the structural details of a wide range of biological and synthetic materials.

"I'm very interested in exploring the interface between science, art, and education, and I'm passionate about using 3D printing as a tool for the presentation of complex structures and processes in an easily understandable fashion," Weaver said. "Traditional extrusion-based 3D printing can only produce solid objects with a continuous outer surface, and that's problematic when trying to depict gases, clouds, or other diffuse forms. Our approach uses an inkjet-like 3D printing process to deposit tiny individual droplets of opaque resin at precise locations within a surrounding volume of transparent resin to define the cloud's form in exquisite detail."
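The droplet-deposition idea Weaver describes, placing opaque droplets more densely where the simulated gas is denser, can be sketched as probabilistic halftoning of a density cube. The density field and threshold rule here are illustrative stand-ins, not the team's actual printing pipeline:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical density cube standing in for a simulated molecular cloud;
# cubing uniform noise gives mostly diffuse gas with a few dense clumps.
density = rng.random((32, 32, 32)) ** 3

# Probabilistic halftoning: deposit an opaque droplet at each voxel with
# probability proportional to the local density, so denser gas prints darker
# while diffuse regions stay mostly transparent.
droplets = rng.random(density.shape) < density / density.max()
```

Because droplet placement is stochastic rather than a hard threshold, smooth density gradients translate into smooth changes in opacity, which is what lets diffuse cloud structure survive in the printed object.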

He noted that in the future the models could also incorporate additional information through the use of different colors to increase their scientific value. The researchers are also interested in exploring the use of 3D printing to represent observational data from nearby molecular clouds, such as those in the constellation Orion.

Read more at Science Daily