Nov 26, 2022

Pair of studies uncover surprising new roles for spinal cord and brainstem in touch

The sense of touch is essential to almost everything we do, from routine tasks at home to navigating unfamiliar terrains that may conceal dangers. Scientists have long been interested in understanding exactly how the touch information we obtain with our hands and other parts of the body makes its way to the brain to create the sensations we feel.

Yet, key aspects of touch -- including how the spinal cord and brainstem are involved in receiving, processing, and transmitting signals -- have remained poorly understood.

Now, a pair of papers by scientists at Harvard Medical School reveal critical new insights into how the spinal cord and brainstem contribute to the sense of touch.

Specifically, the research shows that the spinal cord and the brainstem, previously thought to be mere relay centers for touch information, are actively involved in processing touch signals as they travel to higher-order brain regions.

One study, published Nov. 4 in Cell, shows that specialized neurons in the spinal cord form a complex network that processes light touch -- think the brush of a hand or a peck on the cheek -- and sends this information to the brainstem.

In another study, published Nov. 23 in Nature, researchers established that direct and indirect touch pathways work together, converging in the brainstem to shape how touch is processed.

"These studies focus the spotlight on the spinal cord and the brainstem as sites where touch information is integrated and processed to convey different types of touch. We hadn't fully appreciated before how these areas contribute to the brain's representation of vibration, pressure, and other features of tactile stimuli," said David Ginty, the Edward R. and Anne G. Lefler Professor of Neurobiology in the Blavatnik Institute at HMS and the senior author on both papers.

Although the studies were conducted in mice, mechanisms for touch are largely conserved across species, including humans, which means the basics of touch processing could be useful for scientists studying human conditions such as neuropathic pain, which is characterized by touch dysfunction.

"This detailed understanding of tactile sensation -- that is, feeling the world through contact with the skin -- may have profound implications for understanding how disease, disorder, and injury can affect our ability to interact with the environment around us," said James Gnadt, program director at the National Institute of Neurological Disorders and Stroke (NINDS), which provided part of the funding for the studies.

Overlooked and underappreciated

The historical view of touch is that sensory neurons in the skin encounter a touch stimulus such as pressure or vibration and send this information in the form of electrical impulses that travel directly from the skin to the brainstem. There, other neurons relay touch information to the brain's primary somatosensory cortex -- the highest level of the touch hierarchy -- where it is processed into sensation.

However, Ginty and his team wondered if and how the spinal cord and brainstem are involved in processing touch information. These areas occupy the lowest level of the touch hierarchy, and combine to form a more indirect touch pathway into the brain.

"People in the field thought that the diversity and richness of touch came just from sensory neurons in the skin, but that thinking bypasses the spinal cord and brainstem," said Josef Turecek, a postdoctoral fellow in the Ginty lab and the first author on the Nature paper.

Many neuroscientists are not familiar with spinal cord neurons, called postsynaptic dorsal column (PSDC) neurons, that project from the spinal cord into the brainstem -- and textbooks tend to leave PSDC neurons out of diagrams depicting the details of touch, Turecek explained.

For Ginty, the way that the spinal cord and brainstem have been overlooked in touch brings to mind early research on the visual system. Initially, scientists studying vision thought that all processing occurred in the visual cortex of the brain. However, it turned out that the retina, which receives visual information long before it reaches the cortex, is heavily involved in processing this information.

"Analogous to research on the visual system, these two papers address how touch information coming from the skin is processed in the spinal cord and brainstem before it moves up the touch hierarchy to more complex brain regions," Ginty said.

Connecting the dots

In the Cell paper, the researchers used a technique they developed to simultaneously record the activity of many different neurons in the spinal cord as mice experienced various types of touch. They discovered that over 90 percent of neurons in the dorsal horn -- the sensory processing area of the spinal cord -- responded to light touch.

"This was surprising because classically it was thought that dorsal horn neurons in the superficial layers of the spinal cord respond mostly to temperature and painful stimuli. We hadn't appreciated how light-touch information is distributed in the spinal cord," said Anda Chirila, a research fellow in the Ginty lab and the co-lead author on the paper with graduate student Genelle Rankin.

Moreover, these responses to light touch varied considerably across genetically different populations of neurons in the dorsal horn, which were found to form a highly interconnected and complex neural network. This variation in responses, in turn, gave rise to a diversity of touch information carried from the dorsal horn to the brainstem by PSDC neurons. In fact, when the researchers silenced various dorsal horn neurons, they saw a reduction in the diversity of light-touch information conveyed by PSDC neurons.

"We think this information on how touch is encoded in the spinal cord, which is the first site in the touch hierarchy, is important for understanding fundamental aspects of touch processing," Chirila said.

In their other study, published in Nature, scientists focused on the next step in the touch hierarchy: the brainstem. They explored the relationship between the direct pathway from sensory neurons in the skin to the brainstem and the indirect pathway that sends touch information through the spinal cord, as described in the Cell paper.

"Brainstem neurons get both direct and indirect input, and we were really curious about what aspects of touch each pathway brings to the brainstem," Turecek said.

To parse this question, the researchers alternately silenced each pathway and recorded the response of neurons in mouse brainstems. The experiments showed that the direct pathway is important for communicating high-frequency vibration, while the indirect pathway is needed to encode the intensity of pressure on the skin.

"The idea is that these two pathways converge in the brainstem with neurons that can encode both vibration and intensity, so you can shape responses of those neurons based on how much direct and indirect input you have," Turecek explained. In other words, if brainstem neurons have more direct than indirect input, they communicate more vibration than intensity, and vice versa.

Additionally, the team discovered that both pathways can convey touch information from the same small area of skin, with information on intensity detouring through the spinal cord before joining information on vibration that travels directly to the brainstem. In this way, the direct and indirect pathways work together, enabling the brainstem to form a spatial representation of different types of touch stimuli from the same area.

Finally on the map

Up until now, "most people have viewed the brainstem as a relay station for touch, and they haven't even had the spinal cord on the map at all," Ginty said. For him, the new studies "demonstrate that there's a tremendous amount of information processing occurring in the spinal cord and brainstem -- and this processing is critical for how the brain represents the tactile world."

Such processing, he added, likely contributes to the complexity and diversity of the touch information that the brainstem sends to the somatosensory cortex.

Next, Ginty and team plan to repeat the experiments in mice that are awake and behaving, to test the findings under more natural conditions. They also want to expand the experiments to include more types of real-world touch stimuli, such as texture and movement.

The researchers are also interested in how information from the brain -- for example, about an animal's level of stress, hunger, or exhaustion -- affects how touch information is processed in the spinal cord and brainstem. Given that touch mechanisms appear to be conserved across species, such information may be especially relevant for human conditions such as autism spectrum disorders or neuropathic pain, in which neural dysfunction causes hypersensitivity to light touch.

"With these studies we've laid the fundamental building blocks for how these circuits work and what their importance is," Rankin said. "Now we have the tools to dissect these circuits to understand how they're functioning normally, and what's changing when something goes wrong."

Read more at Science Daily

Less intensively managed grasslands have higher plant diversity and better soil health

Researchers have shown -- for the first time -- that less intensively managed British grazed grasslands have on average 50% more plant species and better soil health than intensively managed grassland. The new study could help farmers increase both biodiversity and soil health, including the amount of carbon in the soil of the British countryside.

Grazed grassland makes up a large proportion of the British countryside and is vital to farming and rural communities. This land can be perceived as only being about food production, but this study gives more evidence that it could be key to increasing biodiversity and soil health.

Researchers at the UK Centre for Ecology & Hydrology (UKCEH) studied 940 plots of grassland, comparing randomly selected plots that sampled the range of grassland management across Great Britain: from intensively managed land with a few sown grassland species and high levels of soil phosphorus (indicating ploughing/reseeding and fertiliser and slurry application), to grassland with more species and lower levels of soil phosphorus. The plots were sampled as part of the UKCEH Countryside Survey, a nationally representative long-term dataset.

The study counted the number of plant species in sample areas and analysed co-located soil samples for numbers of soil invertebrates and carbon, nitrogen and phosphorus levels.

Researchers found that less intensively managed grassland had greater diversity of plant species and, strikingly, this correlated with better soil health, such as increased nitrogen and carbon levels and increased numbers of soil invertebrates such as springtails and mites.

In the same study, the researchers used the same methods to examine the plant diversity and soil from grasslands on 56 mostly beef farms from the Pasture Fed Livestock Association (PFLA) -- a farmer group that has developed standards to manage and improve soil and pasture health.

The researchers found that plots of land from PFLA farms had greater plant diversity -- on average an additional six plant species, including different types of grasses and herbaceous flowering plants, compared to intensively farmed plots from the Countryside Survey. In addition, grassland plants on these farms were often taller, a quality which is proven to be beneficial to butterflies and bees.

Pasture Fed Livestock Association grasslands did not yet show increased soil health, but the research indicated that this may be due to a time lag between increasing numbers of plant species and changes in soil health, particularly on farms which have been intensively managed in the past.

Lead author Dr Lisa Norton, Senior Scientist at UKCEH, says: "We've shown for the first time, on land managed by farmers for production, that a higher diversity of plants in grasslands is correlated with better soil health. This work also tells us that the Pasture Fed Livestock Association members are on the right track to increase biodiversity, though it may take longer to see improvements in soil health.

"Grassland with different types of plants able to grow tall and flower is associated with improved soil health measures, and is beneficial for creepy crawlies below and above ground. Having this abundance of life in our grasslands can in turn support small mammals and birds of prey, and farmers have told us that they are seeing voles and mice in their fields for the first time."

Dr Norton adds: "My hope for the future is that our grasslands can be managed less intensively -- with all the improvements in plant and animal biodiversity and soil health that brings -- but still remain productive for farmers."

Read more at Science Daily

Nov 25, 2022

Immune cells in ALS patients can predict the course of the disease

By measuring immune cells in the cerebrospinal fluid at the time of ALS diagnosis, it is possible to predict how fast the disease may progress, according to a study from Karolinska Institutet published in Nature Communications.

ALS is a rare, but fatal disease that affects the nerve cells and leads to paralysis of voluntary muscles and death. In a new study, researchers from Karolinska Institutet have discovered a way to predict the course of the disease in ALS patients.

Between March 2016 and March 2020, researchers collected fresh blood and cerebrospinal fluid from 89 patients in Stockholm who had recently been diagnosed with ALS. The patients were followed until October 2020.

The study shows that a high proportion of so-called effector T cells is associated with a low survival rate. At the same time, a high proportion of activated regulatory T cells indicates a protective role against rapid disease progression. The findings provide new evidence for the involvement of T cells in the course of the disease and show that certain types of effector T cells accumulate in the cerebrospinal fluid of ALS patients.

"The study could contribute to the development of new treatments that target immune cells to slow down the course of the disease," says Solmaz Yazdani, a doctoral student at the Institute of Environmental Medicine at Karolinska Institutet and first author of the study.

The next step in her research is to study how T cells contribute to the course of the disease.

"We have plans to collect samples from these individuals to study changes in the immune cells over time. In addition, we want to study effector T cells in more detail to understand their role in ALS."

Read more at Science Daily

Stop counting cups: There's an ocean of difference in our water-drinking needs

A new study of thousands of people reveals a wide range in the amount of water people consume around the globe and over their lifespans, definitively spilling the oft-repeated idea that eight 8-ounce glasses meet the human body's daily needs.

"The science has never supported the old eight glasses thing as an appropriate guideline, if only because it confused total water turnover with water from beverages and a lot of your water comes from the food you eat," says Dale Schoeller, a University of Wisconsin-Madison emeritus professor of nutritional sciences who has been studying water and metabolism for decades. "But this work is the best we've done so far to measure how much water people actually consume on a daily basis -- the turnover of water into and out of the body -- and the major factors that drive water turnover."

That's not to say the new results settle on a new guideline. The study, published today in the journal Science, measured the water turnover of more than 5,600 people from 26 countries, ages ranging from 8 days to 96 years old, and found daily averages ranging from 1 liter to 6 liters per day.

"There are outliers, too, that are turning over as much as 10 liters a day," says Schoeller, a co-author of the study. "The variation means pointing to one average doesn't tell you much. The database we've put together shows us the big things that correlate with differences in water turnover."

Previous studies of water turnover relied largely on volunteers to recall and self-report their water and food consumption, or were focused observations -- of, say, a small group of young, male soldiers working outdoors in desert conditions -- of questionable use as representative of most people.

The new research objectively measured the time it took water to move through the bodies of study participants by following the turnover of "labeled water." Study subjects drank a measured amount of water containing trackable hydrogen and oxygen isotopes. Isotopes are atoms of a single element that have slightly different atomic weights, making them distinguishable from other atoms of the same element in a sample.

"If you measure the rate a person is eliminating those stable isotopes through their urine over the course of a week, the hydrogen isotope can tell you how much water they're replacing and the elimination of the oxygen isotope can tell us how many calories they are burning," says Schoeller, whose UW-Madison lab in the 1980s was the first to apply the labeled-water method to study people.

More than 90 researchers were involved in the study, which was led by a group that includes Yosuke Yamada, a former UW-Madison postdoctoral researcher in Schoeller's lab and now section head of the National Institute of Biomedical Innovation, Health and Nutrition in Japan, and John Speakman, zoology professor at the University of Aberdeen in Scotland. They collected and analyzed data from participants, comparing environmental factors -- such as temperature, humidity and altitude of the participants' hometowns -- to measured water turnover, energy expenditure, body mass, sex, age and athlete status.

The researchers also incorporated the United Nations' Human Development Index, a composite measure of a country that combines life expectancy, schooling and economic factors.

Water turnover volume peaked for men in the study during their 20s, while women held a plateau from 20 through 55 years of age. Newborns, however, turned over the largest proportion daily, replacing about 28 percent of the water in their bodies every day.

Physical activity level and athletic status explained the largest proportion of the differences in water turnover, followed by sex, the Human Development Index, and age.

All things being equal, men and women differ by about half a liter of water turnover. As a baseline of sorts, the study's findings predict that a male non-athlete (but of otherwise average physical activity) who is 20 years old, weighs 70 kg (154 pounds), lives at sea level in a well-developed country with a mean air temperature of 10 degrees C (50 Fahrenheit) and a relative humidity of 50%, would take in and lose about 3.2 liters of water every day. A woman of the same age and activity level, weighing 60 kg (132 pounds) and living in the same spot, would go through 2.7 liters (91 ounces).

Doubling the energy a person uses will push their expected daily water turnover up by about a liter, the researchers found. Fifty kilograms more body weight adds 0.7 liters a day. A 50% increase in humidity pushes water use up by 0.3 liters. Athletes use about a liter more than non-athletes.
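Treated as a back-of-the-envelope model, those quoted effect sizes can be combined additively. The Python sketch below does exactly that; the function, its parameter names, and the additive form are our simplification for illustration, not the published regression.

```python
import math

def predicted_turnover_liters(sex="male", athlete=False, weight_kg=70,
                              humidity_pct=50, energy_multiple=1.0):
    """Toy additive model assembled from the effect sizes quoted in
    the article; the published analysis is a fuller regression."""
    base = 3.2 if sex == "male" else 2.7              # reference 20-year-old
    ref_weight_kg = 70 if sex == "male" else 60
    liters = base
    liters += 1.0 * math.log2(energy_multiple)        # ~+1 L per doubling of energy use
    liters += 0.7 * (weight_kg - ref_weight_kg) / 50  # ~+0.7 L per extra 50 kg
    liters += 0.3 * (humidity_pct - 50) / 50          # ~+0.3 L per +50% humidity
    liters += 1.0 if athlete else 0.0                 # athletes use ~1 L more
    return liters

print(predicted_turnover_liters())                    # 3.2 L/day, reference male
print(predicted_turnover_liters(sex="female"))        # 2.7 L/day, reference female
print(predicted_turnover_liters(athlete=True, weight_kg=90))  # ~4.5 L/day
```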

The researchers found "hunter-gatherers, mixed farmers, and subsistence agriculturalists" all had higher water turnover than people who live in industrialized economies. In all, the lower your home country's Human Development Index, the more water you go through in a day.

"That's representing the combination of several factors," Schoeller says. "Those people in low HDI countries are more likely to live in areas with higher average temperatures, more likely to be performing physical labor, and less likely to be inside in a climate-controlled building during the day. That, plus being less likely to have access to a sip of clean water whenever they need it, makes their water turnover higher."

The measurements will improve our ability to predict more specific and accurate future water needs, especially in dire circumstances, according to Schoeller.

"Look at what's going on in Florida right now, or in Mississippi -- where entire regions have been exposed by a calamity to water shortages," he says. "The better we understand how much they need, the better prepared we are to respond in an emergency."

And the better we can prepare for long-term needs and even notice short-term health concerns, the researchers believe.

"Determining how much water humans consume is of increasing importance because of population growth and growing climate change," says Yamada. "Because water turnover is related to other important indicators of health, like physical activity and body fat percent, it has potential as a biomarker for metabolic health."

Read more at Science Daily

Planet's rarest birds at higher risk of extinction

A new study finds that bird species with extreme or uncommon combinations of traits face the highest risk of extinction. The findings are published in the British Ecological Society journal Functional Ecology.

A new study led by researchers at Imperial College London finds that the most unique birds on the planet are also the most threatened. Losing these species and the unique roles they play in the environment, such as seed dispersal, pollination and predation, could have severe consequences for the functioning of ecosystems.

The study analysed the extinction risk and physical attributes (such as beak shape and wing length) of 99% of all living bird species, making it the most comprehensive study of its kind to date.

The researchers found that in simulated scenarios in which all threatened and near-threatened bird species became extinct, there would be a significantly greater reduction in the physical (or morphological) diversity among birds than in scenarios where extinctions were random.

Bird species that are both morphologically unique and threatened include the Christmas Frigatebird (Fregata andrewsi), which nests only on Christmas Island, and the Bristle-thighed Curlew (Numenius tahitiensis), which migrates from its breeding grounds in Alaska to South Pacific islands every year.

Jarome Ali, a PhD candidate at Princeton University who completed the research at Imperial College London and was the lead author of the research, said: "Our study shows that extinctions will most likely prune a large proportion of unique species from the avian tree. Losing these unique species will mean a loss of the specialised roles that they play in ecosystems.

"If we do not take action to protect threatened species and avert extinctions, the functioning of ecosystems will be dramatically disrupted."

In the study, the authors used a dataset of measurements collected from living birds and museum specimens, totalling 9,943 bird species. The measurements included physical traits like beak size and shape, and the length of wings, tails and legs.

The authors combined the morphological data with extinction risk, based on each species' current threat status on the IUCN Red List. They then ran simulations on what would happen if the most threatened birds were to go extinct.
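The shape of that simulation can be sketched in a few lines of Python. Everything below is synthetic -- random trait values, a made-up threatened fraction, and total trait variance as a crude stand-in for morphological diversity -- but it shows the comparison the authors describe: diversity remaining after threatened-species extinctions versus after the same number of random extinctions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic trait matrix: rows = species, columns = standardized traits
# (e.g., beak size and shape, wing length); flags mark threatened species.
n_species = 1000
traits = rng.normal(size=(n_species, 4))
threatened = rng.random(n_species) < 0.13   # illustrative threatened fraction

def trait_diversity(trait_matrix):
    """Crude stand-in for morphological diversity: summed trait variance."""
    return trait_matrix.var(axis=0).sum()

observed = trait_diversity(traits[~threatened])

# Null model: the same number of extinctions, chosen at random
n_lost = int(threatened.sum())
null_values = []
for _ in range(1000):
    keep = np.ones(n_species, dtype=bool)
    keep[rng.choice(n_species, size=n_lost, replace=False)] = False
    null_values.append(trait_diversity(traits[keep]))

print(f"diversity after threatened extinctions: {observed:.3f}")
print(f"diversity after random extinctions:     {np.mean(null_values):.3f}")
```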

Although the dataset used in the study was able to show that the most unique birds were also classified as threatened on the Red List, it was unable to show what links uniqueness in birds to extinction risk.

Read more at Science Daily

525-million-year-old fossil defies textbook explanation for brain evolution

Fossils of a tiny sea creature that died more than half a billion years ago may compel a science textbook rewrite of how brains evolved.

A study published in Science -- led by Nicholas Strausfeld, a Regents Professor in the University of Arizona Department of Neuroscience, and Frank Hirth, a reader of evolutionary neuroscience at King's College London -- provides the first detailed description of Cardiodictyon catenulum, a wormlike animal preserved in rocks in China's southern Yunnan province. Measuring barely half an inch (less than 1.5 centimeters) long and initially discovered in 1984, the fossil had hidden a crucial secret until now: a delicately preserved nervous system, including a brain.

"To our knowledge, this is the oldest fossilized brain we know of, so far," Strausfeld said.

Cardiodictyon belonged to an extinct group of animals known as armored lobopodians, which were abundant early during a period known as the Cambrian, when virtually all major animal lineages appeared over an extremely short time between 540 million and 500 million years ago. Lobopodians likely moved about on the sea floor using multiple pairs of soft, stubby legs that lacked the joints of their descendants, the euarthropods -- Greek for "real jointed foot." Today's closest living relatives of lobopodians are velvet worms that live mainly in Australia, New Zealand and South America.

A debate going back to the 1800s

Fossils of Cardiodictyon reveal an animal with a segmented trunk in which there are repeating arrangements of neural structures known as ganglia. This contrasts starkly with its head and brain, both of which lack any evidence of segmentation.

"This anatomy was completely unexpected because the heads and brains of modern arthropods, and some of their fossilized ancestors, have for over a hundred years been considered as segmented," Strausfeld said.

According to the authors, the finding resolves a long and heated debate about the origin and composition of the head in arthropods, the world's most species-rich group in the animal kingdom. Arthropods include insects, crustaceans, spiders and other arachnids, plus some other lineages such as millipedes and centipedes.

"From the 1880s, biologists noted the clearly segmented appearance of the trunk typical for arthropods, and basically extrapolated that to the head," Hirth said. "That is how the field arrived at supposing the head is an anterior extension of a segmented trunk."

"But Cardiodictyon shows that the early head wasn't segmented, nor was its brain, which suggests the brain and the trunk nervous system likely evolved separately," Strausfeld said.

Brains do fossilize

Cardiodictyon was part of the Chengjiang fauna, a famous deposit of fossils in the Yunnan Province discovered by paleontologist Xianguang Hou. The soft, delicate bodies of lobopodians are well preserved in the fossil record, but other than Cardiodictyon none have been scrutinized for their head and brain, possibly because lobopodians are generally small. The most prominent parts of Cardiodictyon were a series of triangular, saddle-shaped structures that defined each segment and served as attachment points for pairs of legs. Those had been found in even older rocks dating back to the advent of the Cambrian.

"That tells us that armored lobopodians might have been the earliest arthropods," Strausfeld said, predating even trilobites, an iconic and diverse group of marine arthropods that went extinct around 250 million years ago.

"Until very recently, the common understanding was 'brains don't fossilize,'" Hirth said. "So you would not expect to find a fossil with a preserved brain in the first place. And, second, this animal is so small you would not even dare to look at it in hopes of finding a brain."

However, work over the last 10 years, much of it done by Strausfeld, has identified several cases of preserved brains in a variety of fossilized arthropods.

A common genetic ground plan for making a brain

In their new study, the authors not only identified the brain of Cardiodictyon but also compared it with those of known fossils and of living arthropods, including spiders and centipedes. Combining detailed anatomical studies of the lobopodian fossils with analyses of gene expression patterns in their living descendants, they conclude that a shared blueprint of brain organization has been maintained from the Cambrian until today.

"By comparing known gene expression patterns in living species," Hirth said, "we identified a common signature of all brains and how they are formed."

In Cardiodictyon, three brain domains are each associated with a characteristic pair of head appendages and with one of the three parts of the anterior digestive system.

"We realized that each brain domain and its corresponding features are specified by the same combination of genes, irrespective of the species we looked at," added Hirth. "This suggested a common genetic ground plan for making a brain."

Lessons for vertebrate brain evolution

Hirth and Strausfeld say the principles described in their study probably apply to other creatures outside of arthropods and their immediate relatives. This has important implications when comparing the nervous system of arthropods with those of vertebrates, which show a similar distinct architecture in which the forebrain and midbrain are genetically and developmentally distinct from the spinal cord, they said.

Strausfeld said their findings also offer a message of continuity at a time when the planet is changing dramatically under the influence of climatic shifts.

"At a time when major geological and climatic events were reshaping the planet, simple marine animals such as Cardiodictyon gave rise to the world's most diverse group of organisms -- the euarthropods -- that eventually spread to every emergent habitat on Earth, but which are now being threatened by our own ephemeral species."

Read more at Science Daily

Nov 24, 2022

Astronomers observe intra-group light -- the elusive glow between distant galaxies

An international team of astronomers have turned a new technique onto a group of galaxies and the faint light between them -- known as 'intra-group light' -- to characterise the stars that dwell there.

Lead author of the study published in MNRAS, Dr Cristina Martínez-Lombilla from the School of Physics at UNSW Science, said: "We know almost nothing about intra-group light.

"The brightest parts of the intra-group light are ~50 times fainter than the darkest night sky on Earth. It is extremely hard to detect, even with the largest telescopes on Earth -- or in space."

Using their sensitive technique, which eliminates light from all objects except that from the intra-group light, the researchers not only detected the intra-group light but were able to study and tell the story of the stars that populate it.

"We analysed the properties of the intra-group stars -- those stray stars between the galaxy groups. We looked at the age and abundance of the elements that composed them and then we compared those features with the stars still belonging to galaxy groups," Dr Martínez-Lombilla said.

"We found that the intra-group light is younger and less metal-rich than the surrounding galaxies."

Rebuilding the story of intra-group light

Not only were the orphan stars in the intra-group light 'anachronistic' but they appeared to be of a different origin to their closest neighbours. The researchers found the character of the intra-group stars appeared similar to the nebulous 'tail' of a more distant galaxy.

The combination of these clues allowed the researchers to rebuild the history -- the story -- of the intra-group light and how its stars came to be gathered in their own stellar orphanage.

"We think these individual stars were at some points stripped from their home galaxies and now they float freely, following the gravity of the group," said Dr Martínez-Lombilla. "The stripping, called tidal stripping, is caused by the passage of massive satellite galaxies -- similar to the Milky Way -- that pull stars in their wake."

This is the first time the intra-group light of these galaxies has been observed.

"Unveiling the quantity and origin of the intra-group light provides a fossil record of all the interactions a group of galaxies has undergone and provides a holistic view of the system's interaction history," Dr Martínez-Lombilla said.

"Also, these events occurred a long time ago. The galaxies [we're looking at] are so far away, that we're observing them as they were 2.5 billion years ago. That is how long it takes for their light to reach us."

By observing events from a long time ago, in galaxies so far away, the researchers are contributing vital datapoints to the slow-burning evolution of cosmic events.

Tailored image treatment procedure

The researchers pioneered a unique technique to achieve this penetrating view.

"We have developed a tailored image treatment procedure that allows us to analyse the faintest structures in the Universe," said Dr Martínez-Lombilla.

"It follows the standard steps for the study of faint structures in astronomical images -- which implies 2D modelling and the removal of all light except that coming from the intra-group light. This includes all the bright stars in the images, the galaxies obscuring the intra-group light and a subtraction of the continuum emission from the sky.

"What makes our technique different is that it is fully Python-based so it is very modular and easily applicable to different sets of data from different telescopes rather than being just useful for these images.

"The most important outcome is that when studying very faint structures around galaxies, every step in the process counts and every undesirable light should be accounted for and removed. Otherwise, your measurements will be wrong."
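As a rough illustration of the masking and sky-subtraction steps described above, here is a minimal Python sketch built on astropy's sigma-clipped statistics. The thresholds are arbitrary, the input frame is synthetic, and the 2D galaxy-modelling step of the real pipeline is omitted.

```python
import numpy as np
from astropy.stats import sigma_clipped_stats

def isolate_faint_light(image):
    """Subtract a robust sky level, then mask bright sources so only
    diffuse emission (candidate intra-group light) remains."""
    # Sigma clipping rejects stars/galaxies when estimating the sky
    _mean, sky_median, noise_std = sigma_clipped_stats(image, sigma=3.0)
    sky_subtracted = image - sky_median

    # Mask everything well above the noise: bright stars, galaxy cores
    bright = sky_subtracted > 5.0 * noise_std
    return np.where(bright, np.nan, sky_subtracted)

# Synthetic frame: flat sky + noise + one bright "star" to be masked
rng = np.random.default_rng(1)
frame = 100.0 + rng.normal(0.0, 1.0, size=(256, 256))
frame[120:130, 120:130] += 500.0
diffuse = isolate_faint_light(frame)
```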

The techniques presented in this study are a pilot, encouraging future analyses of intra-group light, Dr Martínez-Lombilla said.

"Our main long-term goal is to extend these results to a large sample of groups of galaxies. Then we can look at statistics and find out the typical properties regarding the formation and evolution of the intra-group light and these extremely common systems of groups of galaxies."

Read more at Science Daily

Picky eaters are put off by food depending on plateware color

Academics have examined the effect of colour among picky and non-picky eaters, in a first-of-its-kind study.

Previous research has demonstrated that the smell and texture of food can affect how it tastes for picky eaters, but little is known about other senses.

A team from the University of Portsmouth has discovered the colour of the bowl in which food is served also influences taste perception.

The experiment involved nearly 50 people, who were assessed for food neophobia -- a reluctance to eat or try new food. The participants, who were divided into picky and non-picky eaters, then tasted the same snacks served in red, white and blue bowls.

Results revealed that both the perceived saltiness and desirability of the foods were influenced by colour in the picky group, but not the non-picky group.

Specifically, the snack was rated as higher in saltiness in the red and blue versus white bowl, and least desirable when served in the red bowl. In the UK, salty snacks are often sold in blue packaging, and the team believe that this might explain some of the saltiness findings.

Dr Lorenzo Stafford, an olfactory (sense of smell) researcher in the Department of Psychology at the University of Portsmouth, said: "Having restricted diets can lead to nutritional deficiencies as well as health problems like heart disease, poor bone health and dental issues. There is also a social cost because normally enjoyable moments between family members can easily turn into stressful, anxious, and conflict-causing situations when picky eaters feel ashamed or pressured to eat food.

"That is why it's important to understand the factors that act to 'push and pull' this behaviour."

Picky eating behaviour is usually characterised by a limited diet, specific food-preparation requirements, strong dislikes and difficulty accepting new foods. Across a lifespan, a picky eater will generally consume fewer than 20 different food items.

The paper, published in the journal Food Quality and Preference, says this study is believed to be the first to provide insight into the interaction between colour and taste perception in adult picky and non-picky eaters, and to reveal a difference in the way that colour affects the perception of food in picky eaters.

It recommends further research to see if these findings extend beyond the food and colours tested here.

"This knowledge could be useful for those trying to expand the repertoire of foods," added Dr Stafford.

"For example, if you wanted to encourage a picky eater to try more vegetables well known to be viewed as bitter, you could attempt to serve them on a plate or bowl that is known to increase sweetness."

Read more at Science Daily

Human evolution wasn't just the sheet music, but how it was played

A team of Duke researchers has identified a group of human DNA sequences driving changes in brain development, digestion and immunity that seem to have evolved rapidly after our family line split from that of the chimpanzees, but before we split with the Neanderthals.

Our brains are bigger, and our guts are shorter, than those of our ape peers.

"A lot of the traits that we think of as uniquely human, and human-specific, probably appear during that time period," in the 7.5 million years since the split with the common ancestor we share with the chimpanzee, said Craig Lowe, Ph.D., an assistant professor of molecular genetics and microbiology in the Duke School of Medicine.

Specifically, the DNA sequences in question, which the researchers have dubbed Human Ancestor Quickly Evolved Regions (HAQERs), pronounced like 'hackers,' regulate genes. They are the switches that tell nearby genes when to turn on and off. The findings appear Nov. 23 in the journal Cell.

The rapid evolution of these regions of the genome seems to have served as a fine-tuning of regulatory control, Lowe said. More switches were added to the human operating system as sequences developed into regulatory regions, and they were more finely tuned to adapt to environmental or developmental cues. By and large, those changes were advantageous to our species.

"They seem especially specific in causing genes to turn on, we think just in certain cell types at certain times of development, or even genes that turn on when the environment changes in some way," Lowe said.

A lot of this genomic innovation was found in brain development and the GI tract. "We see lots of regulatory elements that are turning on in these tissues," Lowe said. "These are the tissues where humans are refining which genes are expressed and at what level."

Today, our brains are larger than those of other apes, and our guts are shorter. "People have hypothesized that those two are even linked, because they are two really expensive metabolic tissues to have around," Lowe said. "I think what we're seeing is that there wasn't really one mutation that gave you a large brain and one mutation that really struck the gut, it was probably many of these small changes over time."

To produce the new findings, Lowe's lab collaborated with Duke colleagues Tim Reddy, an associate professor of biostatistics and bioinformatics, and Debra Silver, an associate professor of molecular genetics and microbiology, to tap their expertise. Reddy's lab is capable of looking at millions of genetic switches at once, and Silver's lab watches switches in action in developing mouse brains.

"Our contribution was, if we could bring both of those technologies together, then we could look at hundreds of switches in this sort of complex developing tissue, which you can't really get from a cell line," Lowe said.

"We wanted to identify switches that were totally new in humans," Lowe said. Computationally, they were able to infer what the human-chimp ancestor's DNA would have been like, as well as the extinct Neanderthal and Denisovan lineages. The researchers were able to compare the genome sequences of these other post-chimpanzee relatives thanks to databases created from the pioneering work of 2022 Nobel laureate Svante Pääbo.

"So, we know the Neanderthal sequence, but let's test that Neanderthal sequence and see if it can really turn on genes or not," which they did dozens of times.

"And we showed that, whoa, this really is a switch that turns on and off genes," Lowe said. "It was really fun to see that new gene regulation came from totally new switches, rather than just sort of rewiring switches that already existed."

Along with the positive traits that HAQERs gave humans, they can also be implicated in some diseases.

Most of us have remarkably similar HAQER sequences, but there are some variances, "and we were able to show that those variants tend to correlate with certain diseases," Lowe said, namely hypertension, neuroblastoma, unipolar depression, bipolar depression and schizophrenia. The mechanisms of action aren't known yet, and more research will have to be done in these areas, Lowe said.

"Maybe human-specific diseases or human-specific susceptibilities to these diseases are going to be preferentially mapped back to these new genetic switches that only exist in humans," Lowe said.

Read more at Science Daily

World's oldest meal helps unravel mystery of our earliest animal ancestors

The contents of the last meal consumed by the earliest animals known to inhabit Earth more than 550 million years ago have unearthed new clues about the physiology of our earliest animal ancestors, according to scientists from The Australian National University (ANU).

Ediacara biota are the world's oldest large organisms and date back 575 million years. ANU researchers found the animals ate bacteria and algae that were sourced from the ocean floor. The findings, published in Current Biology, reveal more about these strange creatures, including how they were able to consume and digest food.

The scientists analysed ancient fossils containing preserved phytosterol molecules -- natural chemical products found in plants -- that remained from the animals' last meal. By examining the molecular remains of what the animals ate, the researchers were able to confirm the slug-like organism, known as Kimberella, had a mouth and a gut and digested food the same way modern animals do. The researchers say it was likely one of the most advanced creatures of the Ediacarans.

The ANU team found that another animal, which grew up to 1.4 metres in length and had a rib-like design imprinted on its body, was less complex and had no eyes, mouth or gut. Instead, the odd creature, called Dickinsonia, absorbed food through its body as it traversed the ocean floor.

"Our findings suggest that the animals of the Ediacara biota, which lived on Earth prior to the 'Cambrian Explosion' of modern animal life, were a mixed bag of outright weirdos, such as Dickinsonia, and more advanced animals like Kimberella that already had some physiological properties similar to humans and other present-day animals," lead author Dr Ilya Bobrovskiy, from GFZ-Potsdam in Germany, said.

Both Kimberella and Dickinsonia, which have a structure and symmetry unlike anything that exists today, are part of the Ediacara biota family that lived on Earth about 20 million years prior to the Cambrian Explosion -- a major event that forever changed the course of evolution of all life on Earth.

"Ediacara biota really are the oldest fossils large enough to be visible with your naked eyes, and they are the origin of us and all animals that exist today. These creatures are our deepest visible roots," Dr Bobrovskiy, who completed the work as part of his PhD at ANU, said.

Study co-author Professor Jochen Brocks, from the ANU Research School of Earth Sciences, said algae are rich in energy and nutrients and may have been instrumental for Kimberella's growth.

"The energy-rich food may explain why the organisms of the Ediacara biota were so large. Nearly all fossils that came before the Ediacara biota were single-celled and microscopic in size," Professor Brocks said.

Using advanced chemical analysis techniques, the ANU scientists were able to extract and analyse the sterol molecules contained in the fossil tissue. Cholesterol is the hallmark of animals and it's how, back in 2018, the ANU team was able to confirm that Ediacara biota are among our earliest known ancestors.

The molecules contained tell-tale signatures that helped the researchers decipher what the animals ate in the lead up to their death. Professor Brocks said the difficult part was differentiating between the signatures of the fat molecules of the creatures themselves, the algal and bacterial remains in their guts, and the decaying algal molecules from the ocean floor that were all entombed together in the fossils.

"Scientists already knew Kimberella left feeding marks by scraping off algae covering the sea floor, which suggested the animal had a gut. But it was only after analysing the molecules of Kimberella's gut that we were able to determine what exactly it was eating and how it digested food," Professor Brocks said.

"Kimberella knew exactly which sterols were good for it and had an advanced fine-tuned gut to filter out all the rest.

"This was a Eureka moment for us; by using preserved chemicals in the fossils, we can now make the gut contents of animals visible even if the gut has long since decayed. We then used this same technique on weirder fossils like Dickinsonia to figure out how it was feeding and discovered that Dickinsonia did not have a gut."

Read more at Science Daily

Nov 23, 2022

International team observes innermost structure of quasar jet

An international team of scientists has observed the narrowing of a quasar jet for the first time by using a network of radio telescopes across the world. The results suggest that the narrowing of the jet is independent of the activity level of the galaxy which launched it.

Nearly every galaxy hosts a supermassive black hole in its center. In some cases, enormous amounts of energy are released by gas falling towards the black hole, creating a phenomenon known as a quasar. Quasars emit narrow, collimated jets of material at nearly the speed of light. But how and where quasar jets are collimated has been a long-standing mystery.

An international team led by Hiroki Okino, a graduate student at the University of Tokyo, and including members from the National Astronomical Observatory of Japan (NAOJ), the Massachusetts Institute of Technology, Kogakuin University, Hachinohe National College of Technology, and Niigata University, captured an image with the highest angular resolution to date that shows the deepest part of the jet in a bright quasar known as 3C 273.

The team found that the jet flowing from the quasar narrows down over a very long distance. This narrowing part of the jet continues incredibly far, well beyond the area where the black hole's gravity dominates. The results show that the structure of the jet is similar to jets launched from nearby galaxies with a low luminosity active nucleus. This would indicate that the collimation of the jet is independent of the activity level in the host galaxy, providing an important clue to unravelling the inner workings of jets.

Read more at Science Daily

An exoplanet atmosphere as never seen before

The James Webb Space Telescope (JWST) just scored another first: a detailed molecular and chemical portrait of a distant world's skies.

The telescope's array of highly sensitive instruments was trained on the atmosphere of a "hot Saturn" -- a planet about as massive as Saturn orbiting a star some 700 light-years away -- known as WASP-39 b. While JWST and other space telescopes, including Hubble and Spitzer, previously have revealed isolated ingredients of this broiling planet's atmosphere, the new readings provide a full menu of atoms, molecules, and even signs of active chemistry and clouds.

"The clarity of the signals from a number of different molecules in the data is remarkable," says Mercedes López-Morales, an astronomer at the Center for Astrophysics | Harvard & Smithsonian and one of the scientists who contributed to the new results.

"We had predicted that we were going to see many of those signals, but still, when I first saw the data, I was in awe," López-Morales adds.

The latest data also give a hint of how these clouds in exoplanets might look up close: broken up rather than a single, uniform blanket over the planet.

The findings bode well for the capability of JWST to conduct the broad range of investigations of exoplanets -- planets around other stars -- that scientists have hoped for. That includes probing the atmospheres of smaller, rocky planets like those in the TRAPPIST-1 system.

"We observed the exoplanet with multiple instruments that, together, provide a broad swath of the infrared spectrum and a panoply of chemical fingerprints inaccessible until JWST," said Natalie Batalha, an astronomer at the University of California, Santa Cruz, who contributed to and helped coordinate the new research. "Data like these are a game changer."

The suite of discoveries is detailed in a set of five newly submitted scientific papers, available on the preprint website arXiv. Among the unprecedented revelations is the first detection in an exoplanet atmosphere of sulfur dioxide, a molecule produced from chemical reactions triggered by high-energy light from the planet's parent star. On Earth, the protective ozone layer in the upper atmosphere is created in a similar way.

"The surprising detection of sulfur dioxide finally confirms that photochemistry shapes the climate of 'hot Saturns,'" says Diana Powell, a NASA Hubble fellow, astronomer at the Center for Astrophysics and core member of the team that made the sulfur dioxide discovery. "Earth's climate is also shaped by photochemistry, so our planet has more in common with 'hot Saturns' than we previously knew!"

Jea Adams, a graduate student at Harvard and researcher at the Center for Astrophysics, analyzed the data that confirmed the sulfur dioxide signal.

"As an early career researcher in the field of exoplanet atmospheres, it's so exciting to be a part of a detection like this," Adams says. "The process of analyzing this data felt magical. We saw hints of this feature in early data, but this higher precision instrument revealed the signature of SO2 clearly and helped us solve the puzzle."

With an estimated temperature of 1,600 degrees Fahrenheit and an atmosphere made mostly of hydrogen, WASP-39 b is not believed to be habitable. The exoplanet has been compared to both Saturn and Jupiter, with a mass similar to Saturn's, but an overall size as big as Jupiter's. But the new work points the way to finding evidence of potential life on a habitable planet.

The planet's proximity to its host star -- eight times closer than Mercury is to our Sun -- also makes it a laboratory for studying the effects of radiation from host stars on exoplanets. Better knowledge of the star-planet connection should bring a deeper understanding of how these processes create the diversity of planets observed in the galaxy.

Other atmospheric constituents detected by JWST include sodium, potassium, and water vapor, confirming previous space and ground-based telescope observations as well as finding additional water features, at longer wavelengths, that haven't been seen before.

JWST also saw carbon dioxide at higher resolution, providing twice as much data as reported from its previous observations. Meanwhile, carbon monoxide was detected, but obvious signatures of both methane and hydrogen sulfide were absent from the data. If present, these molecules occur at very low levels, a significant finding for scientists making inventories of exoplanet chemistry in order to better understand the formation and development of these distant worlds.

Capturing such a broad spectrum of WASP-39 b's atmosphere was a scientific tour de force, as an international team numbering in the hundreds independently analyzed data from four of JWST's finely calibrated instrument modes. They then made detailed inter-comparisons of their findings, yielding yet more scientifically nuanced results.

JWST views the universe in infrared light, on the red end of the light spectrum beyond what human eyes can see; that allows the telescope to pick up chemical fingerprints that can't be detected in visible light.

Each of the three instruments even has some version of the "IR" of infrared in its name: NIRSpec, NIRCam, and NIRISS.

To see light from WASP-39 b, JWST tracked the planet as it passed in front of its star, allowing some of the star's light to filter through the planet's atmosphere. Different types of chemicals in the atmosphere absorb different colors of the starlight spectrum, so the colors that are missing tell astronomers which molecules are present.
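In code, that inference amounts to looking for wavelengths where the transit blocks measurably more starlight than the smooth baseline. The Python sketch below fabricates a toy spectrum with one Gaussian feature near 4 microns, roughly where the sulfur dioxide signature was reported; every number in it is illustrative.

```python
import numpy as np

wavelengths_um = np.linspace(3.0, 5.0, 200)

def transit_depth(wl, base_depth=0.021, center_um=4.05, width_um=0.05,
                  strength=0.0008):
    """Toy transmission spectrum: fraction of starlight blocked, with a
    single Gaussian bump standing in for a molecular absorption band."""
    return base_depth + strength * np.exp(-((wl - center_um) / width_um) ** 2)

depth = transit_depth(wavelengths_um)

# A molecule is inferred where the depth rises above the baseline by
# more than the measurement uncertainty (assumed ~1e-4 here).
excess = depth - 0.021
feature = wavelengths_um[excess > 3e-4]
print(f"absorption feature spans {feature.min():.2f}-{feature.max():.2f} microns")
```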

Read more at Science Daily

Limiting global warming now can preserve valuable freshwater resource

Snowcapped mountains not only look majestic -- they're vital to a delicate ecosystem that has existed for tens of thousands of years. Mountain water runoff and snowmelt flow down to streams, rivers, lakes, and oceans -- and today, around a quarter of the world depends on these natural "water towers" to replenish downstream reservoirs and groundwater aquifers for urban water supplies, agricultural irrigation, and ecosystem support.

But this valuable freshwater resource is in danger of disappearing. The planet is now around 1.1 degrees Celsius (1.9 degrees Fahrenheit) warmer than pre-industrial levels, and mountain snowpacks are shrinking. Last year, a study co-led by Alan Rhoades and Erica Siirila-Woodburn, research scientists in the Earth and Environmental Sciences Area of Lawrence Berkeley National Laboratory (Berkeley Lab), found that if global warming continues along the high-emissions scenario, low-to-no-snow winters will become a regular occurrence in the mountain ranges of the western U.S. in 35 to 60 years.

Now, in a recent Nature Climate Change study, a research team led by Rhoades found that if global warming reaches around 2.5 degrees Celsius compared to pre-industrial levels, mountain ranges in the southern midlatitudes, the Andean region of Chile in particular, will face a low-to-no-snow future between the years 2046 and 2051 -- or 20 years earlier than mountain ranges in the northern midlatitudes such as the Sierra Nevada or Rockies. (Low-to-no-snow occurs when the annual maximum water stored as snowpack is within the bottom 30% of historical conditions for a decade or more.) The researchers also found that low-to-no-snow conditions would emerge in the southern midlatitudes at a third of the warming required in the northern midlatitudes.
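The paper's low-to-no-snow definition is mechanical enough to code directly. Below is a minimal Python sketch with invented snowpack numbers; the study applies the same test to Earth system model output rather than toy series.

```python
import numpy as np

def first_persistent_low_snow_year(annual_max_swe, historical, run_length=10):
    """Return the index of the first year starting a run of >= run_length
    years whose peak snow-water equivalent falls in the bottom 30% of
    historical conditions (the paper's low-to-no-snow definition)."""
    threshold = np.percentile(historical, 30)
    run = 0
    for year, peak in enumerate(annual_max_swe):
        run = run + 1 if peak < threshold else 0
        if run >= run_length:
            return year - run_length + 1
    return None

# Illustrative series: historical peaks (mm), then a warming-era decline
rng = np.random.default_rng(2)
historical = rng.normal(500.0, 100.0, size=50)
future = np.linspace(450.0, 120.0, num=40)
print(first_persistent_low_snow_year(future, historical))
```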

"These findings are pretty shocking. We assumed that both regions in the southern and northern hemispheres would respond similarly to climate change, and that the Andes would be more resilient given its high elevation," said Alan Rhoades, a hydroclimate research scientist in Berkeley Lab's Earth and Environmental Sciences Area and lead author of the new study. "This shows that not every degree of warming has the same effect in one region as another."

In another major finding, the researchers learned that such a low-to-no-snow future coincides with roughly 10% less mountain runoff in both hemispheres, during wet and dry years.

"If you expect 10% less runoff, that means there's at least 10% less water available every year to refill reservoirs in the summer months when agriculture and mountain ecosystems most need it," Rhoades said.

Such diminished runoff would be particularly devastating for agricultural regions already parched by multiyear droughts.

California's current drought is entering its fourth year. According to the U.S. Drought Monitor, more than 94 percent of the state is in severe, extreme, or exceptional drought. Shrinking groundwater supplies and municipal wells throughout the state are severely impacting the San Joaquin Valley, the state's agricultural heartland.

And Chile -- which exports approximately 30% of its fresh fruit production every year, with much of it shipped to the United States -- is in the midst of a historic 13-year drought.

Saving snow, freshwater by curbing greenhouse gas emissions

But the new study also suggests that low-to-no-snow in both the northern and southern midlatitude mountain ranges can be prevented if global warming is limited to essentially 2.5 degrees Celsius (4.5 degrees Fahrenheit), the researchers said.

Their analysis is based on Earth system models that simulate the various components of the climate, such as the atmosphere and land surface, to identify how mountain water cycles could continue to change through the 21st century, and what warming levels might give rise to a widespread and persistent low-to-no-snow future across the American Cordillera -- a chain of mountain ranges spanning the western "backbone" of North America, Central America, and South America.

The researchers used computing resources at Berkeley Lab's National Energy Research Scientific Computing Center (NERSC) to process and analyze data collected by climate researchers from all over the world through the Department of Energy's CASCADE (Calibrated & Systematic Characterization, Attribution, & Detection of Extremes) project. (Post-analysis data from the study is available to the research community at NERSC.)

The closest thing to what Rhoades and his team considered "episodic low-to-no-snow" conditions occurred in California between 2012 and 2016. The lack of snow and the drought conditions in those years demonstrated the vulnerability of the state's water supply and, in part, led to the passage of the California Sustainable Groundwater Management Act, new approaches to water and agricultural management practices, and mandatory water cuts, Rhoades said.

Persistent low-to-no snow (10 years in a row) has yet to occur, but Rhoades said that water managers are already thinking about such a future. "They're collaborating with scientists to come up with strategies to proactively rather than reactively manage water resources for the worst-case scenarios if we can't mitigate greenhouse gas emissions to avoid certain warming levels. But the better strategy would be to prevent further warming by cutting greenhouse gas emissions," he said.

For future studies, Rhoades plans to run new Earth system model simulations at even higher resolution "to give more spatial context of when and where snow loss might occur and what causes it," he said, and to investigate how every degree of warming might change other key drivers of the mountain-water cycle, such as the landfall location and intensity of atmospheric rivers, as well as mountain ecosystem responses.

He also plans to continue working with water managers through the Department of Energy-funded HyperFACETS project to identify ways to better prepare for a low-to-no-snow future through new management strategies, such as hardening infrastructure against droughts and floods and managed aquifer recharge.

Rhoades is optimistic, citing research from another Berkeley Lab-led study that found reaching zero net emissions of carbon dioxide from energy and industry by 2050 can be accomplished by rebuilding the U.S. energy infrastructure to run primarily on renewable energy.

"It just requires the will and initiative to invest financial resources at the level of urgency that climate change demands, which means we need to start doing this today," he said.

Read more at Science Daily

Earth might be experiencing 7th mass extinction, not 6th

Earth is currently in the midst of a mass extinction, losing thousands of species each year. New research suggests environmental changes caused the first such event in history, which occurred millions of years earlier than scientists previously realized.

Most dinosaurs famously disappeared 66 million years ago at the end of the Cretaceous period. Prior to that, a majority of Earth's creatures were snuffed out between the Permian and Triassic periods, roughly 252 million years ago.

Thanks to the efforts of researchers at UC Riverside and Virginia Tech, it's now known that a similar extinction occurred 550 million years ago, during the Ediacaran period. This discovery is documented in a Proceedings of the National Academy of Sciences paper.

Although it is unclear whether this represents a true "mass extinction," the percentage of organisms lost is similar to that of these other events, including the current, ongoing one.

The researchers believe environmental changes are to blame for the loss of approximately 80% of all Ediacaran creatures, which were the first complex, multicellular life forms on the planet.

"Geological records show that the world's oceans lost a lot of oxygen during that time, and the few species that did survive had bodies adapted for lower oxygen environments," said Chenyi Tu, UCR paleoecologist and study co-author.

Unlike later events, this earliest one was more difficult to document because the creatures that perished were soft bodied and did not preserve well in the fossil record.

"We suspected such an event, but to prove it we had to assemble a massive database of evidence," said Rachel Surprenant, UCR paleoecologist and study co-author. The team documented nearly every known Ediacaran animal's environment, body size, diet, ability to move, and habits.

With this project, the researchers sought to rule out explanations other than extinction for the major loss of animal life at the end of the Ediacaran period. Some had previously suggested the apparent die-off could be explained by gaps in data collection or by a change in animal behavior, such as the arrival of predators.

"We can see the animals' spatial distribution over time, so we know they didn't just move elsewhere or get eaten -- they died out," said Chenyi. "We've shown a true decrease in the abundance of organisms."

They also tracked creatures' surface area to volume ratios, a measurement that suggests declining oxygen levels were to blame for the deaths. "If an organism has a higher ratio, it can get more nutrients, and the bodies of the animals that did live into the next era were adapted in this way," said UCR paleoecologist Heather McCandless, study co-author.
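The logic behind that measurement is simple geometry: for an idealized body, surface area grows with the square of its size while volume grows with the cube, so smaller or flatter organisms have proportionally more surface across which to absorb scarce oxygen. A toy calculation for spheres (idealized shapes, not actual fossil measurements) makes the point:

import math

def sphere_sa_to_volume(radius):
    surface = 4 * math.pi * radius ** 2
    volume = (4 / 3) * math.pi * radius ** 3
    return surface / volume  # simplifies to 3 / radius

for r in (1.0, 2.0, 4.0):
    print(f"radius {r}: SA:V = {sphere_sa_to_volume(r):.2f}")
# The ratio halves each time the radius doubles, so larger-bodied
# organisms fare worse when dissolved oxygen declines.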

This project came from a graduate class led by UCR paleoecologist Mary Droser and her former graduate student Scott Evans, now at Virginia Tech. For the next class, the students will investigate the origin of these animals, rather than their extinction.

Ediacaran creatures would be considered strange by today's standards. Many of the animals could move, but they were unlike anything now living. Among them were Obamus coronatus, a disc-shaped creature named for the former president, and Attenborites janeae, a tiny ovoid resembling a raisin named for English naturalist Sir David Attenborough.

"These animals were the first evolutionary experiment on Earth, but they only lasted about 10 million years. Not long at all, in evolutionary terms," Droser said.

Though it's not clear why oxygen levels declined so precipitously at the end of the era, it is clear that environmental change can destabilize and destroy life on Earth at any time. Such changes have driven all mass extinctions including the one currently occurring.

"There's a strong correlation between the success of organisms and, to quote Carl Sagan, our 'pale blue dot,'" said Phillip Boan, UC Riverside geologist and study co-author.

Read more at Science Daily

Nov 22, 2022

Short gamma-ray bursts traced farther into distant universe

A Northwestern University-led team of astronomers has developed the most extensive inventory to date of the galaxies where short gamma-ray bursts (SGRBs) originate.

Using several highly sensitive instruments and sophisticated galaxy modeling, the researchers pinpointed the galactic homes of 84 SGRBs and probed the characteristics of 69 of the identified host galaxies. Among their findings, they discovered that about 85% of the studied SGRBs come from young, actively star-forming galaxies.

The astronomers also found that more SGRBs occurred at earlier times, when the universe was much younger -- and with greater distances from their host galaxies' centers -- than previously known. Surprisingly, several SGRBs were spotted far outside their host galaxies -- as if they were "kicked out," a finding that raises questions as to how they were able to travel so far away.

"This is the largest catalog of SGRB host galaxies to ever exist, so weexpect it to be the gold standard for many years to come," said Anya Nugent, a Northwestern graduate student who led the study focused on modeling host galaxies. "Building this catalog and finally having enough host galaxies to see patterns and draw significant conclusions is exactly what the field needed to push our understanding of these fantastic events and what happens to stars after they die."

The team details the new catalog in two papers, both published Monday, Nov. 21, in The Astrophysical Journal. Because SGRBs are among the brightest explosions in the universe, the team calls its catalog BRIGHT (Broadband Repository for Investigating Gamma-ray burst Host Traits). All of BRIGHT's data and modeling products are publicly available online for community use.

Nugent is a graduate student in physics and astronomy at Northwestern's Weinberg College of Arts and Sciences and a member of the Center for Interdisciplinary Exploration and Research in Astrophysics (CIERA). She is advised by Wen-fai Fong, an assistant professor of physics and astronomy at Weinberg and a key member of CIERA, who led a second study focused on SGRB host observations.

Benchmark for future comparisons

When two neutron stars collide, they generate momentary flashes of intense gamma-ray light, known as SGRBs. While the gamma rays last mere seconds, the optical light can continue for hours before fading below detection levels (an event called an afterglow). SGRBs are some of the most luminous explosions in the universe with, at most, a dozen detected and pinpointed each year. They currently represent the only way to study and understand a large population of merging neutron star systems.

Since NASA's Neil Gehrels Swift Observatory first discovered an SGRB afterglow in 2005, astronomers have spent the last 17 years trying to understand which galaxies produce these powerful bursts. Stars within a galaxy can give insight into the environmental conditions needed to produce SGRBs and can connect the mysterious bursts to their neutron-star merger origins. So far, only one SGRB (GRB 170817A) has a confirmed neutron-star merger origin -- as it was detected just seconds after gravitational wave detectors observed the binary neutron-star merger (GW170817).

"In a decade, the next generation of gravitational wave observatories will be able to detect neutron star mergers out to the same distances as we do SGRBs today," Fong said. "Thus, our catalog will serve as a benchmark for comparison to future detections of neutron star mergers."

"The catalog can really make impacts beyond just a single class of transients like SGRBs," said Yuxin "Vic" Dong, study co-author and astrophysics Ph.D. student at Northwestern. "With the wealth of data and results presented in the catalog, I believe a variety of research projects will make use of it, maybe even in ways we have yet not thought of."

Insight into neutron-star systems

To create the catalog, the researchers used several highly sensitive instruments at W.M. Keck Observatory, the Gemini Observatories, the MMT Observatory, the Large Binocular Telescope Observatory and the Magellan Telescopes at Las Campanas Observatory to capture deep imaging and spectroscopy of some of the faintest galaxies identified in the survey of SGRB hosts. The team also used data from two of NASA's Great Observatories, the Hubble Space Telescope and Spitzer Space Telescope.

Prior to these new studies, astronomers had characterized the host galaxies of only a couple dozen SGRBs. The new catalog quadruples that sample. With the advantage of a much larger dataset, the catalog shows that SGRB host galaxies can be either young and star-forming or old and approaching death. This means neutron-star systems form in a broad range of environments, and many of them have quick formation-to-merger timescales. Because neutron-star mergers create heavy elements like gold and platinum, the catalog's data also will deepen scientists' understanding of when precious metals were first created in the universe.

"We suspect that the younger SGRBs we found in younger host galaxies come from binary stellar systems that formed in a star formation 'burst' and are so tightly bound that they can merge very fast," Nugent said. "Long-standing theories have suggested there must be ways to merge neutron stars quickly, but, until now, we have not been able to witness them. We find evidence for older SGRBs in the galaxies that are much older and believe the stars in those galaxies either took a longer time to form a binary or were a binary system that was further separated. Hence, those took longer to merge."

Potential of JWST

With the ability to detect the faintest host galaxies from very early times in the universe, NASA's new infrared flagship observatory, the James Webb Space Telescope (JWST), is poised to further advance the understanding of neutron star mergers and how far back in time they began.

"I'm most excited about the possibility of using JWST to probe deeper into the homes of these rare, explosive events," Nugent said. "JWST's ability to observe faint galaxies in the universe could uncover more SGRB host galaxies that are currently evading detection, perhaps even revealing a missing population and a link to the early universe."

"I started observations for this project 10 years ago, and it was so gratifying to be able to pass the torch onto the next generation of researchers," Fong said. "It is one of my career's greatest joys to see years of work come to life in this catalog, thanks to the young researchers who really took this study to the next level."

Read more at Science Daily

Arctic carbon conveyor belt discovered

Every year, the cross-shelf transport of carbon-rich particles from the Barents and Kara Seas could bind up to 13.6 million metric tons of CO2 in the Arctic deep sea for millennia. In this region alone, a previously unknown transport route uses the biological carbon pump and ocean currents to absorb atmospheric CO2 on the scale of Iceland's total annual emissions, as researchers from the Alfred Wegener Institute and partner institutes report in the current issue of the journal Nature Geoscience.

Compared to other oceans, the biological productivity of the central Arctic Ocean is limited, since sunlight is often in short supply -- either due to the Polar Night or to sea-ice cover -- and the available nutrient sources are scarce. Consequently, microalgae (phytoplankton) in the upper water layers have access to less energy than their counterparts in other waters. As such, the surprise was great when, on the expedition ARCTIC2018 in August and September 2018 on board the Russian research vessel Akademik Tryoshnikov, large quantities of particulate carbon -- that is, carbon stored in plant remains -- were discovered in the Nansen Basin of the central Arctic. Subsequent analyses revealed a body of water with large amounts of particulate carbon at depths of up to two kilometres, composed of bottom water from the Barents Sea. The latter forms when sea ice is produced in winter: the resulting cold, heavy water sinks and flows from the shallow coastal shelf down the continental slope into the deep Arctic Basin.

"Based on our measurements, we calculated that through this water-mass transport, more than 2,000 metric tons of carbon flow into the Arctic deep sea per day, the equivalent of 8,500 metric tons of atmospheric CO2. Extrapolated to the total annual amount revealed even 13.6 million metric tons of CO2, which is on the same scale as Iceland's total annual emissions," explains Dr Andreas Rogge, first author of the Nature Geoscience study and an oceanographer at the Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research (AWI). This plume of carbon-rich water spans from the Barents- and Kara Sea shelf to roughly 1,000 kilometres into the Arctic Basin. In light of this newly discovered mechanism, the Barents Sea -- already known to be the most productive marginal sea in the Arctic -- would appear to effectively remove roughly 30 percent more carbon from the atmosphere than previously believed. Moreover, model-based simulations determined that the outflow manifests in seasonal pulses, since in the Arctic's coastal seas, the absorption of CO2 by phytoplankton only takes place in summer.

Understanding transport and transformation processes within the carbon cycle is essential to creating global carbon dioxide budgets and therefore also projections for global warming. At the ocean's surface, single-celled algae absorb CO2 from the atmosphere and, when they die, sink towards the deep sea. Once carbon bound in this manner reaches the deep water, it stays there until overturning currents bring the water back to the ocean's surface, which takes several thousand years in the Arctic. And if the carbon is deposited in deep-sea sediments, it can even be trapped there for millions of years, as only volcanic activity can release it. This process, known as the biological carbon pump, can remove carbon from the atmosphere for long periods of time and represents a vital sink in our planet's carbon cycle. The sinking material is also a food source for local deep-sea fauna such as sea stars, sponges, and worms. What percentage of the carbon is actually absorbed by the ecosystem is something only further research can tell us.

Read more at Science Daily

Solid salamander: Prehistoric amphibian was as heavy as a pygmy hippo

The last of the temnospondyls -- amphibians that looked more like crocodiles -- became extinct during the Cretaceous period, about 120 million years ago, after thriving on Earth for more than 200 million years.

Now a team of scientists led by Lachlan Hart, a palaeontologist and PhD candidate in the School of Biological, Earth & Environmental Sciences at UNSW Sydney, has assessed various methods of estimating the weight of these unique extinct animals. The team's study is published in Palaeontology.

"Estimating mass in extinct animals presents a challenge, because we can't just weigh them like we could with a living thing," said Mr Hart. "We only have the fossils to tell us what an animal looked like, so we often need to look at living animals to get an idea about soft tissues, such as fat and skin."

Temnospondyls as case studies

Mr Hart said temnospondyls were "very strange animals."

"Some grew to enormous sizes, six or seven metres long. They went through a larval (tadpole) stage just like living amphibians. Some had very broad and round heads -- such as Australia's Koolasuchus, recently named as the Victorian State Fossil Emblem -- and others, like the temnospondyls we used in this study, had heads that were more croc-like."

The 1.8-metre-long Eryops megacephalus lived during the Permian period in what is now the USA, while the slightly longer Paracyclotosaurus davidi is known from the Triassic of Australia. The more aquatically inclined Paracyclotosaurus was the heftier of the two, tipping the scales at roughly 260 kilograms, while Eryops was a more modest 160 kilograms.

"The size of an animal is important for many aspects of their life," said Mr Hart. "It impacts what they feed on, how they move and even how they handle cold temperatures. So naturally, palaeontologists are interested in calculating the body mass of extinct creatures so we can learn more about how they lived.

"There have been several studies on body mass estimation in other groups of extinct animals, such as dinosaurs, but not extensively on temnospondyls.

"They survived two of Earth's Big Five mass extinction events which makes them a very interesting case study on how animals adapted following these global catastrophes," Mr Hart said.

Because temnospondyls have no direct living relatives, the team of scientists had to assemble a selection of five modern 'analogues' (such as the Chinese Giant Salamander and the Saltwater Crocodile) to test a total of 19 different body mass estimation techniques to determine their suitability for use in temnospondyls.

"We found several methods which gave us consistently accurate body mass estimations in our five living animals, which included using mathematical equations and 3-dimensional digital models of the animals," said Dr Nicolas Campione from the University of New England, Armidale, an authority on body mass estimation who was also involved in the study. "We hypothesised that as these methods are accurate for animals which lived and looked like temnospondyls, they would also be appropriate for use with temnospondyls."

Dr Matthew McCurry, Senior Lecturer in Earth Science at UNSW and co-author on the study, said, "This work has shown there are multiple methods for estimating mass in temnospondyls."
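One example of the "mathematical equations" category is the widely cited limb-bone allometry for quadrupeds published by Campione and Evans in 2012, which estimates body mass from the combined midshaft circumference of the humerus and femur. The sketch below uses the commonly quoted coefficients; whether this exact formula was among the 19 techniques the team tested is our assumption, offered only as an illustration:

import math

def quadruped_mass_kg(humerus_circumference_mm, femur_circumference_mm):
    """Campione & Evans (2012) scaling, as commonly quoted:
    log10(mass in g) = 2.749 * log10(humerus + femur circumference in mm) - 1.104."""
    combined = humerus_circumference_mm + femur_circumference_mm
    mass_grams = 10 ** (2.749 * math.log10(combined) - 1.104)
    return mass_grams / 1000.0

# Hypothetical circumferences for a mid-sized quadruped:
print(f"{quadruped_mass_kg(150, 180):.0f} kg")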

Read more at Science Daily

New study shows repeated stress accelerates aging of the eye

New research from the University of California, Irvine, suggests aging is an important component of retinal ganglion cell death in glaucoma, and that novel pathways can be targeted when designing new treatments for glaucoma patients.

The study, titled "Stress induced aging in mouse eye," was published today in Aging Cell. Along with her colleagues, Dorota Skowronska-Krawczyk, PhD, assistant professor in the Departments of Physiology & Biophysics and Ophthalmology and a faculty member of the Center for Translational Vision Research at the UCI School of Medicine, describes the transcriptional and epigenetic changes happening in the aging retina. The team shows how stress, such as intraocular pressure (IOP) elevation in the eye, causes retinal tissue to undergo epigenetic and transcriptional changes similar to natural aging -- and how, in young retinal tissue, repetitive stress induces features of accelerated aging, including accelerated epigenetic age.

Aging is a universal process that affects all cells in an organism. In the eye, it is a major risk factor for a group of neuropathies called glaucoma. Because populations are aging worldwide, current estimates suggest that the number of people with glaucoma (aged 40-80) will grow to over 110 million by 2040.

"Our work emphasizes the importance of early diagnosis and prevention as well as age-specific management of age-related diseases, including glaucoma," said Skowronska-Krawczyk. "The epigenetic changes we observed suggest that changes on the chromatin level are acquired in an accumulative way, following several instances of stress. This provides us with a window of opportunity for the prevention of vision loss, if and when the disease is recognized early."

In humans, IOP follows a circadian rhythm. In healthy individuals, it typically oscillates within the 12-21 mmHg range and, in approximately two-thirds of individuals, tends to peak during the nocturnal period. Because of these fluctuations, a single IOP measurement is often insufficient to characterize the real pathology and the risk of disease progression in glaucoma patients. Long-term IOP fluctuation has been reported to be a strong predictor of glaucoma progression. The new study suggests that the cumulative impact of these fluctuations is directly responsible for the aging of the tissue.

"Our work shows that even moderate hydrostatic IOP elevation results in retinal ganglion cell loss and corresponding visual defects when performed on aged animals," said Skowronska-Krawczyk. "We are continuing to work to understand the mechanism of accumulative changes in aging in order to find potential targets for therapeutics. We are also testing different approaches to prevent the accelerated aging process resulting from stress."

A new tool for estimating the impact of stress and treatment on the aging status of retinal tissue made these discoveries possible. In collaboration with the Clock Foundation and Steve Horvath, PhD, of Altos Labs, who pioneered the development of epigenetic clocks that can measure age based on methylation changes in the DNA of tissues, the researchers were able to show that repetitive, mild IOP elevation can accelerate the epigenetic age of retinal tissue.
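At their core, epigenetic clocks of this kind are linear models: age is predicted as a weighted sum of DNA methylation levels (beta values between 0 and 1) at a fixed panel of CpG sites. The sketch below shows only the structure -- the weights are random placeholders, not published clock coefficients, and real clocks such as Horvath's also apply a nonlinear age transform:

import numpy as np

rng = np.random.default_rng(42)

N_CPGS = 353                              # Horvath's original clock used 353 CpG sites
weights = rng.normal(0.0, 0.5, N_CPGS)    # placeholder coefficients, not the real ones
intercept = 0.7                           # placeholder intercept

def epigenetic_age(betas):
    """betas: methylation fractions in [0, 1] at the clock's CpG sites."""
    return intercept + betas @ weights

sample = rng.random(N_CPGS)               # one hypothetical methylation profile
print(f"predicted age score: {epigenetic_age(sample):.2f}")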

"In addition to measuring vision decline and some structural changes due to stress and potential treatment, we can now measure the epigenetic age of retinal tissue and use it to find the optimal strategy to prevent vision loss in aging," said Skowronska-Krawczyk.

Read more at Science Daily

1,700-year-old spider monkey remains discovered in Teotihuacán, Mexico

The complete skeletal remains of a spider monkey -- seen as an exotic curiosity in pre-Hispanic Mexico -- grant researchers new evidence of social-political ties between two ancient powerhouses: Teotihuacán and Maya Indigenous rulers.

The discovery was made by Nawa Sugiyama, a UC Riverside anthropological archaeologist, and a team of archaeologists and anthropologists who since 2015 have been excavating at Plaza of Columns Complex, in Teotihuacán, Mexico. The remains of other animals were also discovered, as well as thousands of Maya-style mural fragments and over 14,000 ceramic sherds from a grand feast. These pieces are more than 1,700 years old.

The spider monkey is the earliest evidence of primate captivity, translocation, and gift diplomacy between Teotihuacán and the Maya. Details of the discovery will be published in the journal PNAS. The finding allows researchers to piece together evidence of high-level diplomatic interactions and debunks previous beliefs that the Maya presence in Teotihuacán was restricted to migrant communities, said Sugiyama, who led the research.

"Teotihuacán attracted people from all over, it was a place where people came to exchange goods, property, and ideas. It was a place of innovation," said Sugiyama, who is collaborating with other researchers, including Professor Saburo Sugiyama, co-director of the project and a professor at Arizona State University, and Courtney A. Hofman, a molecular anthropologist with the University of Oklahoma. "Finding the spider monkey has allowed us to discover reassigned connections between Teotihuacán and Maya leaders. The spider monkey brought to life this dynamic space, depicted in the mural art. It's exciting to reconstruct this live history."

Researchers applied a multimethod archaeometric (zooarchaeology, isotopes, ancient DNA, paleobotany, and radiocarbon dating) approach to detail the life of this female spider monkey. The animal was likely between 5 and 8 years old at the time of death.

Its skeletal remains were found alongside a golden eagle and several rattlesnakes, surrounded by unique artifacts, such as fine greenstone figurines made of jade from the Motagua Valley in Guatemala, copious shell and snail artifacts, and lavish obsidian goods such as blades and projectile points. This is consistent with evidence of live sacrifice of symbolically potent animals participating in state rituals, as observed in the Moon and Sun Pyramid dedicatory caches, the researchers stated in the paper.

Results from the examination of two teeth, the upper and lower canines, indicate the spider monkey in Teotihuacán ate maize and chili peppers, among other food items. The bone chemistry, which offers insight into diet and environment, indicates at least two years of captivity. Prior to arriving in Teotihuacán, the monkey lived in a humid environment, eating primarily plants and roots.

The research is primarily funded by grants awarded to Sugiyama from the National Science Foundation and the National Endowment for the Humanities. Teotihuacán is a pre-Hispanic city recognized as a UNESCO World Heritage site and receives more than three million visitors annually.

In addition to studying ancient rituals and uncovering pieces of history, the finding allows for a reconstruction of greater narratives, of understanding how these powerful, advanced societies dealt with social and political stressors that very much reflect today's world, Sugiyama said.

Read more at Science Daily

Nov 20, 2022

Mars was once covered by 300-meter deep oceans, study shows

Mars is called the red planet. But once, it was actually blue and covered in water. The finding brings us closer to learning whether Mars ever harboured life.

Most researchers agree that there has been water on Mars, but just how much water is still debated.

Now a study from the University of Copenhagen shows that some 4.5 billion years ago, there was enough water for the entire planet to be covered in a 300-metre-deep ocean.

"At this time, Mars was bombarded with asteroids filled with ice. It happened in the first 100 million years of the planet's evolution. Another interesting angle is that the asteroids also carried organic molecules that are biologically important for life," says Professor Martin Bizzarro from the Centre for Star and Planet Formation.

In addition to water, the icy asteroids also brought biologically relevant molecules such as amino acids to the Red Planet. Amino acids are the building blocks of proteins and, together with DNA and RNA, are part of the essential toolkit of every cell.

The study was published in the journal Science Advances.

Mars may have had the conditions for life before Earth

The new study indicates that the oceans covering the entire planet were at least 300 metres deep, and they may have been up to one kilometre deep. In comparison, there is actually very little water on Earth, Martin Bizzarro explains.
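For a sense of scale, treating Mars as a sphere of radius roughly 3,389.5 km shows what such depths imply in volume. This is our own illustrative arithmetic, not a figure from the study:

import math

R_MARS_KM = 3389.5
surface_area_km2 = 4 * math.pi * R_MARS_KM ** 2    # ~1.45e8 km^2

for depth_km in (0.3, 1.0):                        # 300 m and 1 km global oceans
    volume_km3 = surface_area_km2 * depth_km
    print(f"{depth_km} km deep: ~{volume_km3:.2e} km^3 of water")
# Roughly 4e7 to 1.4e8 cubic kilometres, ignoring topography.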

"This happened within Mars's first 100 million years. After this period, something catastrophic happened for potential life on Earth. It is believed that there was a gigantic collision between the Earth and another Mars-sized planet. It was an energetic collision that formed the Earth-Moon system and, as the same time, wiped out all potential life on Earth," says Martin Bizzarro.

The researchers therefore have strong evidence that conditions allowing the emergence of life were present on Mars long before they were on Earth.

Billion-year-old meteorite

A meteorite billions of years old has allowed the researchers to look into Mars's past. The meteorite was once part of Mars's original crust and offers a unique insight into what happened when the solar system formed.

The secret is hidden in the way Mars's surface -- of which the meteorite was once a part -- was created: it is a surface that does not move. On Earth, the opposite is true: the tectonic plates are in perpetual motion and are recycled into the planet's interior.

Read more at Science Daily

Plants use their epigenetic memories to adapt to climate change

Animals can adapt quickly to survive adverse environmental conditions. Evidence is mounting to show that plants can, too. A paper publishing in the journal Trends in Plant Science on November 17 details how plants are rapidly adapting to the adverse effects of climate change, and how they are passing down these adaptations to their offspring.

"One day I thought how the living style and experience of a person can affect his or her gametes transmitting molecular marks of their life into their children," says Federico Martinelli, a plant geneticist at the University of Florence. "Immediately I thought that even more epigenetic marks must be transmitted in plants, being that plants are sessile organisms that are subjected to many more environmental stresses than animals during their life."

Plants are facing more environmental stressors than ever. For example, climate change is making winters shorter and less severe in many locations, and plants are responding. "Many plants require a minimum period of cold in order to set up their environmental clock to define their flowering time," says Martinelli. "As cold seasons shorten, plants have adapted to require a shorter period of cold to delay flowering. These mechanisms allow plants to avoid flowering in periods when they have fewer chances to reproduce."

Because plants don't have neural networks, their memory is based entirely on cellular, molecular, and biochemical networks. These networks make up what the researchers term somatic memory. "These mechanisms allow plants to recognize the occurrence of a previous environmental condition and to react more promptly in presence of the same consequential condition," says Martinelli.

These somatic memories can then be passed to the plants' progeny via epigenetics. "We have highlighted key genes, proteins, and small oligonucleotides, which previous studies have shown play a key role in the memory of abiotic stresses such as drought, salinity, cold, heat, and heavy metals and pathogen attacks," says Martinelli. "In this peer-reviewed opinion piece, we provide several examples that demonstrate the existence of molecular mechanisms modulating plant memory to environmental stresses and affecting the adaptation of offspring to these stresses."

Read more at Science Daily

Artificial neural networks learn better when they spend time not learning at all

Depending on age, humans need 7 to 13 hours of sleep per 24 hours. During this time, a lot happens: Heart rate, breathing and metabolism ebb and flow; hormone levels adjust; the body relaxes. Not so much in the brain.

"The brain is very busy when we sleep, repeating what we have learned during the day," said Maxim Bazhenov, PhD, professor of medicine and a sleep researcher at University of California San Diego School of Medicine. "Sleep helps reorganize memories and presents them in the most efficient way."

In previous published work, Bazhenov and colleagues have reported how sleep builds rational memory, the ability to remember arbitrary or indirect associations between objects, people or events, and protects against forgetting old memories.

Artificial neural networks leverage the architecture of the human brain to improve numerous technologies and systems, from basic science and medicine to finance and social media. In some ways, they have achieved superhuman performance, such as computational speed, but they fail in one key aspect: When artificial neural networks learn sequentially, new information overwrites previous information, a phenomenon called catastrophic forgetting.

"In contrast, the human brain learns continuously and incorporates new data into existing knowledge," said Bazhenov, "and it typically learns best when new training is interleaved with periods of sleep for memory consolidation."

Writing in the November 18, 2022 issue of PLOS Computational Biology, senior author Bazhenov and colleagues discuss how biological models may help mitigate the threat of catastrophic forgetting in artificial neural networks, boosting their utility across a spectrum of research interests.

The scientists used spiking neural networks that artificially mimic natural neural systems: Instead of information being communicated continuously, it is transmitted as discrete events (spikes) at certain time points.

They found that when the spiking networks were trained on a new task, but with occasional off-line periods that mimicked sleep, catastrophic forgetting was mitigated. Like the human brain, said the study authors, "sleep" for the networks allowed them to replay old memories without explicitly using old training data.
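A cartoon of that training schedule, in the spirit of the paper rather than a reproduction of it: a toy (non-spiking) network first learns task A, then learns task B with interleaved "sleep" phases, during which random input drives spontaneous activity and a local Hebbian rule reinforces whatever input-output mappings the current weights already encode -- no stored training data involved. All network details here are our own simplifications:

import numpy as np

rng = np.random.default_rng(0)
N_IN, N_OUT = 20, 2
W = rng.normal(0.0, 0.1, (N_OUT, N_IN))

def forward(W, x):
    return (W @ x > 0.5).astype(float)        # crude threshold units

def train(W, X, Y, lr=0.05, epochs=30):
    for _ in range(epochs):
        for x, t in zip(X, Y):
            W = W + lr * np.outer(t - forward(W, x), x)   # delta rule
    return W

def sleep(W, steps=300, lr=0.01, decay=0.999):
    # No labels, no stored data: random "spiking" input elicits output from
    # the current weights, and a Hebbian update strengthens the co-active
    # input-output pairs, replaying what the network already knows.
    for _ in range(steps):
        x = (rng.random(N_IN) < 0.3).astype(float)
        W = decay * (W + lr * np.outer(forward(W, x), x))
    return W

# Hypothetical tasks: task A keys on the first half of the inputs,
# task B on the second half.
X_a = (rng.random((50, N_IN)) < 0.5).astype(float)
Y_a = np.stack([X_a[:, :10].sum(1) > 5, X_a[:, :10].sum(1) <= 5], 1).astype(float)
X_b = (rng.random((50, N_IN)) < 0.5).astype(float)
Y_b = np.stack([X_b[:, 10:].sum(1) > 5, X_b[:, 10:].sum(1) <= 5], 1).astype(float)

W = train(W, X_a, Y_a)                 # learn task A
for _ in range(5):                     # task B interleaved with "sleep"
    W = train(W, X_b, Y_b, epochs=6)
    W = sleep(W)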

Memories are represented in the human brain by patterns of synaptic weight -- the strength or amplitude of a connection between two neurons.

"When we learn new information," said Bazhenov, "neurons fire in specific order and this increases synapses between them. During sleep, the spiking patterns learned during our awake state are repeated spontaneously. It's called reactivation or replay.

"Synaptic plasticity, the capacity to be altered or molded, is still in place during sleep and it can further enhance synaptic weight patterns that represent the memory, helping to prevent forgetting or to enable transfer of knowledge from old to new tasks."

When Bazhenov and colleagues applied this approach to artificial neural networks, they found that it helped the networks avoid catastrophic forgetting.

"It meant that these networks could learn continuously, like humans or animals. Understanding how human brain processes information during sleep can help to augment memory in human subjects. Augmenting sleep rhythms can lead to better memory.

"In other projects, we use computer models to develop optimal strategies to apply stimulation during sleep, such as auditory tones, that enhance sleep rhythms and improve learning. This may be particularly important when memory is non-optimal, such as when memory declines in aging or in some conditions like Alzheimer's disease."

Read more at Science Daily