Mar 15, 2024

Do astronauts experience 'space headaches'?

Space travel and zero gravity can take a toll on the body. A new study has found that astronauts with no prior history of headaches may experience migraine and tension-type headaches during long-haul space flight, defined in the study as more than 10 days in space. The study was published in the March 13, 2024, online issue of Neurology®, the medical journal of the American Academy of Neurology.

"Changes in gravity caused by space flight affect the function of many parts of the body, including the brain," said study author W. P. J. van Oosterhout, MD, PhD, of Leiden University Medical Center in the Netherlands.

"The vestibular system, which affects balance and posture, has to adapt to the conflict between the signals it is expecting to receive and the actual signals it receives in the absence of normal gravity. This can lead to space motion sickness in the first week, of which headache is the most frequently reported symptom. Our study shows that headaches also occur later in space flight and could be related to an increase in pressure within the skull."

The study involved 24 astronauts from the European Space Agency, the U.S. National Aeronautics and Space Administration (NASA) and the Japan Aerospace Exploration Agency.

They were assigned to International Space Station expeditions for up to 26 weeks from November 2011 to June 2018.

Prior to the study, nine of the astronauts reported never having had a headache, and three had experienced a headache that interfered with daily activities in the previous year.

None of them had a history of recurrent headaches or had ever been diagnosed with migraine.

Of the 24 participants, 22 experienced one or more episodes of headache during a combined total of 3,596 days in space.

Astronauts completed health screenings and a questionnaire about their headache history before the flight.

During space flight, astronauts filled out a daily questionnaire for the first seven days and then a weekly questionnaire for the remainder of their stay on the space station.

The astronauts reported 378 headaches in flight.

Researchers found that 92% of astronauts experienced headaches during flight, compared with just 38% who experienced headaches prior to flight.

Of the total headaches, 170, or 90%, were tension-type headaches and 19, or 10%, were migraines.

Researchers also found that headaches were of a higher intensity and more likely to be migraine-like during the first week of space flight.

During this time, 21 astronauts had one or more headaches for a total of 51 headaches.

Of the 51 headaches, 39 were considered tension-type headaches and 12 were migraine-like or probable migraine.

In the three months after return to Earth, none of the astronauts reported any headaches.

"Further research is needed to unravel the underlying causes of space headache and explore how such discoveries may provide insights into headaches occurring on Earth," said Van Oosterhout.

"Also, more effective therapies need to be developed to combat space headaches as for many astronauts this a major problem during space flights."

This research does not prove that going into space causes headaches; it only shows an association.

A limitation of the study was that astronauts reported their own symptoms, so they may not have remembered all the information accurately.

Read more at Science Daily

Tropical birds could tolerate warming better than expected, study suggests

Consider the globe, spinning silently in space. Its poles and its middle, the equator, remain relatively stable, thermally speaking, for the duration of Earth's annual circuit around the sun. The spaces between -- Earth's temperate zones -- experience seasons, with their characteristic temperature extremes.

It would follow that animals that evolved in each of these zones should match them, physiologically. We expect tropical animals to handle a certain degree of heat, but not wild swings in temperature. That seems to be the case for tropical ectotherms, or "cold-blooded" animals such as amphibians, reptiles, and insects. However, in a first-of-its-kind study of "warm-blooded" endotherms, a University of Illinois Urbana-Champaign team found tropical birds can handle thermal variation just fine.

"We tested the climate variability hypothesis, which predicts that organisms can't handle variation because they haven't seen it over evolutionary time," said study co-author Jeff Brawn, professor emeritus in the Department of Natural Resources and Environmental Sciences (NRES), part of the College of Agricultural, Consumer and Environmental Sciences (ACES) at Illinois. "That may be true for ectotherms, but the evidence is just not there yet for birds in the Neotropics. Now we know they're able to handle it."

Climate change may increase the average annual temperature in the tropics, as well as in microclimates like forest edges or tree canopies. The study provides some reassurance that, at least when looking at temperature alone, tropical birds should be okay. Why does that matter?

"The Neotropics alone are home to 40% of the world's bird species. Anyone who cares about birds should care about what's happening in the tropics," Brawn said. "Also, birds are important for the overall integrity of tropical forest systems, holding down insect populations that could damage trees."

Brawn and co-author Henry Pollock, who did postdoctoral research in NRES, had already shown that both temperate and tropical birds can withstand temperature extremes, disproving the climate variability hypothesis across latitudes. Their new study examines whether variation within habitats matters for specific groups of tropical birds.

Many tropical birds spend their lives deep in the forest understory. Their large eyes suggest they're well adapted to the dark, where temperatures stay relatively cool and stable. Conversely, other bird groups zip between the forest canopy and its floor, or in and out of forest gaps and edges. These birds, Pollock reasoned, might have more tolerance to temperature fluctuations than their understory counterparts.

He captured birds from 89 species in Panama and, using a technique called respirometry, measured their metabolic rates across a range of temperatures. The birds were safely cooled and returned to their habitats after testing. He also took advantage of long-term weather station data provided by the Smithsonian Tropical Research Institute to document temperature differences across forest microclimates.

"If you measure temperature in an open area versus in the forest, there are large differences," said Pollock, now the executive director of the Southern Plains Land Trust. "But we did not find any evidence that those differences translated into greater temperature tolerance among groups of tropical birds."

Long-term observations indicate that when tropical forests become fragmented due to deforestation, an increasing phenomenon, certain groups of birds are more likely to decline. Insect-eating understory birds are among the hardest hit. For decades, tropical ornithologists believed narrow temperature tolerances may have been to blame for the declines of understory birds, but this study suggests otherwise.

Pollock is quick to point out that he only measured one aspect of an organism's thermal environment. In the real world, temperature doesn't increase in isolation; typically, when temperature goes up, so does solar radiation. Humidity and precipitation come into play, as well. And all of these things are part of the equation with habitat loss and climate change.

Still, one aspect of the climate variability and microclimate hypotheses can, for now, be put to rest for tropical birds.

"There's very little good news for tropical birds these days, but it's comforting that we've eliminated one factor as to what may go wrong with climate change. It's actually not a surprise; birds are very adaptable," Brawn said. "Heat tolerance alone presents an incomplete situation, but this is further empirical evidence that, if it does get warmer, tropical birds may be able to tolerate a certain level of that."

Read more at Science Daily

Chimp moms play with their offspring through good times and bad

When it comes to nurturing their young, mother chimpanzees go the extra mile, according to a new study. Using 10 years of observational data on wild chimpanzees, researchers found that while adults often play, and young chimps play a lot, when food gets scarce, the adults put mutual play aside and focus on survival.

But in the meantime, mother chimps continue to be their offspring's primary playmates, tickling, chasing, and playing 'airplane'. That suggests mother chimps take on an indispensable role in fostering their young's physical and social development even when they are under food stress.

The study observations took place in Kibale National Park in Uganda, and the study analysis, published in Current Biology, was led by Zarin Machanda, an assistant professor of anthropology and biology, and her former postdoctoral associate Kris Sabbi, who is currently a college fellow in human evolutionary biology at Harvard University.

Kibale is the most primate-dense forest in the world, with 13 primate species living there, including more than 1,000 chimpanzees. Researchers started habituating the chimps to the presence of humans in 1987. Over the decades, teams of researchers took detailed field notes of almost every observable behavior -- including climbing, feeding, grooming, calling, aggression, and play.

Through their previous work, Machanda and Sabbi were familiar with the playfulness of chimpanzees and decided to look deeper into the patterns of play behavior. They expected seasonal variations in food availability would affect adult chimps' time spent playing.

For example, when supplies of quality fruits were low, the chimps focused on finding and gathering figs and leaves, and put play time aside. Surprisingly, although chimp mothers had the same challenge in finding food, they continued devoting a lot of their time to nurturing their offspring's development through play.

Learning Lessons from Play

"The research on play ties into an effort to understand the evolution of leadership among chimps," said Machanda. "We were trying to see whether chimps have only one pathway to leadership, which has always been assumed to be aggressiveness, or whether play and other behaviors build multiple dimensions of character that might make them more or less successful."

Play is not very common in the wild, at least among adult animals. Young mammals do play often, but mostly with each other, or at the expense of an exasperated and passive adult. Exceptions include dolphins, monkeys, and apes. Natural selection tends to suppress this costly exercise once it has served its purpose for development and the time comes to focus on finding food, watching out for predators, and mating. With chimps, however, adult play serves to cement social bonds.

Why do some primates play throughout life and other mammals don't? "I think what sets primates apart is that they spend more time growing up compared to other mammals," said Machanda. "They also have highly developed brains and live in structured groups, with very specific rules governing interactions between individuals. Play permits them to build not only physical skills, but also the skills of social interaction."

Social structure in the chimpanzee world may also explain why mother chimps sometimes become the primary play partners for their young. The chimpanzees have a very fluid social system called fission-fusion, which means a group of 60 chimps, for example, may have smaller groups break away for days or weeks, which then merge again while other groups break off.

When food becomes scarce, chimp mothers tend to break away into smaller groups or go off alone with their babies. "But when they're doing that, they are also limiting the ability of their young ones to play with others, and the moms become the primary playmates," said Sabbi. "They're trading off that lower feeding competition in the larger group for more time and energy being spent playing with their little ones."

By comparison, a troop of 60 baboons always sticks together, so baby baboons always have other baboons close to their age nearby to play with. Baboon mothers usually do not play with their babies.

Types of Play


Play among the chimps often divides depending on their sex. "It's not uncommon to see male chimps engage in more aggressive types of play, while females are doing a type of play related to parenting," said Machanda. "You see them practice carrying things -- a kind of preparation for future maternal behavior. Males often size each other up, and when they hit their second birthday, play style changes and can get rougher."

Mothers are often the ones that juveniles and older infants come back to. "If they're playing with somebody and it starts to get a little bit too rough, they'll switch it up and go back to playing with mom, because at the end of the day it's a very safe place," said Sabbi.

"If we compare to humans, it's very easy to find lots of evidence in the child psychology literature for how important it is for human mothers and fathers to be playing with their children, especially at really young ages. Moms and dads are important first play partners before kids branch out into their own social networks," she said.

Read more at Science Daily

Study: Best way to memorize stuff? It depends...

Recent experiments by psychologists at Temple University and the University of Pittsburgh shed new light on how we learn and how we remember our real-world experiences.

The research, described in the March 12 online edition of Proceedings of the National Academy of Sciences (PNAS), suggests that varying what we study and spacing out our learning over time can both be helpful for memory -- it just depends on what we're trying to remember.

"Lots of prior research has shown that learning and memory benefit from spacing study sessions out," said Benjamin Rottman, an associate professor of psychology and director of the Causal Learning and Decision-Making Lab at Pitt.

"For example, if you cram the night before a test, you might remember the information the next day for the test, but you will probably forget it fairly soon," he added. "In contrast, if you study the material on different days leading up to the test, you will be more likely to recall it for a longer period of time."

But while the "spacing effect" is one of the most replicated findings in psychological research, much of this work has been predicated on the idea that what you are trying to learn -- the content of the experience itself -- repeats identically each time. Yet that is rarely the case in real life, when some features of our experiences may stay the same, but others are likely to change. For example, imagine repeat trips to your local coffeeshop. While many features may stay the same on each visit, a new barista may be serving you. How does the spacing effect work in light of such variation across experiences?

In two experiments, Temple and Pitt researchers asked participants to repeatedly study pairs of items and scenes that were either identical on each repetition or in which the item stayed the same but the scene changed each time.

One of the experiments asked participants to learn and to test their memory via their smartphones -- an unusual approach for learning and memory research. This enabled researchers to ask participants to learn pairs at various times of the day across 24 hours, more accurately representing how people actually learn information.

In the second experiment, researchers collected data online in a single session.

Emily Cowan, lead author on the PNAS paper and a postdoctoral fellow in Temple's Adaptive Memory Lab, explained: "The combination of these two large-scale experiments allowed us to look at the timing of these 'spacing effects' across both long timescales -- for example, hours to days -- in Experiment No. 1 versus short timescales -- for example, seconds to minutes -- in Experiment No. 2. With this, we were able to ask how memory is impacted both by what is being learned -- whether that is an exact repetition or instead, contains variations or changes -- as well as when it is learned over repeated study opportunities.

"In other words, using these two designs, we could examine how having material that more closely resembles our experiences of repetition in the real world -- where some aspects stay the same but others differ -- impacts memory if you are exposed to that information in quick succession versus over longer intervals… from seconds to minutes, or hours to days."

As in prior experiments, researchers found that spaced learning benefited item memory. But they also found that memory was better for the items that had been paired with different scenes compared with those shown with the same scene each time. For example, if you want to remember a new person's name, repeating the name but associating it with different information about the person can actually be helpful.

"In contrast," Rottman said, "we found that for associative memory -- memory for the item and which scene it was paired with -- benefited from stability. Spacing only benefited memory for the pairs that were repeated exactly, and only if there were pretty long gaps -- hours to days -- between study opportunities. For example, if you are trying to remember the new person's name and something about them, like their favorite food, it is more helpful to repeat that same exact name-food pairing multiple times with spacing between each."

The Pitt-Temple experiments represent basic memory research. "Because of how nuanced memory is, it is hard to provide clear advice for things like studying for a test because the sort of material can be so different," Rottman said. "But in theory our findings should be broadly relevant to different sorts of tasks, like remembering someone's name and things about them, studying for a test, and learning new vocabulary in a foreign language.

"At the same time, because all these sorts of tasks have lots of differences, it is hard to make really concrete advice for them. We would need to do follow-up research to provide more concrete guidance for each case."

Cowan continued: "This work demonstrates the benefits of spaced learning on memory are not absolute, instead depending on the variability present in the content across repetitions and the timing between learning opportunities, expanding our current understanding of how the way in which we learn information can impact how it is remembered. Our work suggests that both variability and spacing may present methods to improve our memory for isolated features and associative information, respectively, raising important applications for future research, education, and our everyday lives."

Read more at Science Daily

Mar 14, 2024

Ancient ice may still exist in distant space objects, researchers find

A paper recently accepted by Icarus presents findings about the Kuiper Belt Object 486958 Arrokoth, shedding new light on the preservation of volatile substances like carbon monoxide (CO) in such distant celestial bodies.

Co-authored by Dr. Samuel Birch at Brown University and SETI Institute senior research scientist Dr. Orkan Umurhan, the paper "Retention of CO Ice and Gas Within 486958 Arrokoth" uses Arrokoth as a case study to propose that many Kuiper Belt Objects (KBOs) -- remnants from the dawn of our solar system -- could still retain their original volatile ices, challenging previous notions about the evolutionary path of these ancient entities.

Previous models of KBO evolution have struggled to predict the fate of volatiles in these cold, distant objects.

Many relied on cumbersome simulations or flawed assumptions, underestimating how long these substances could last.

The new research offers a simpler yet effective approach, likening the process to how gas escapes through porous rock.

It suggests that KBOs like Arrokoth can maintain their volatile ices for billions of years, forming a kind of subsurface atmosphere that slows further ice loss.

"I want to emphasize that the key thing is that we corrected a deep error in the physical model people had been assuming for decades for these very cold and old objects," said Umurhan.

"This study could be the initial mover for re- evaluating the comet interior evolution and activity theory."

This study challenges existing predictions and opens up new avenues for understanding the nature of comets and their origins.

The presence of such volatile ices in KBOs supports a fascinating narrative of these objects as "ice bombs," which activate and display cometary behavior upon altering their orbit closer to the sun.

This hypothesis could help explain phenomena like the intense outburst activity of comet 29P/Schwassmann-Wachmann, potentially changing the understanding of comets.

Read more at Science Daily

Multiple air pollutants linked to asthma symptoms in children

Exposure to several combinations of toxic atmospheric pollutants may be triggering asthma symptoms among children, a recent analysis suggests.

The study, published in the journal Science of the Total Environment, showed that 25 different combinations of air pollutants were associated with asthma symptoms among 269 elementary school children diagnosed with asthma in Spokane, Washington. In line with previous research, the Washington State University-led study revealed a socioeconomic disparity -- with one group of children from a lower-income neighborhood exposed to more toxic combinations, a total of 13 of the 25 identified in this research.

"It's not just one pollutant that can be linked to asthma outcomes. This study examined the variety and combinations of air toxics that may be associated with asthma symptoms," said lead author Solmaz Amiri, a WSU researcher in the Elson S. Floyd College of Medicine.

While other studies have focused on a limited number of pollutants, Amiri and her colleagues used the data-crunching power of machine learning techniques to analyze the potential exposure effects of 109 air pollutants and their combinations on asthma outcomes.
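To make the idea of screening many pollutants and their combinations concrete, here is a minimal sketch in plain Python. It is purely illustrative: the data are random stand-ins, only five example pollutants are used, and the model choice (interaction terms plus logistic regression) is an assumption, not the study's actual machine-learning pipeline, which is not described here.

    # Hypothetical sketch: screening pollutant combinations for association with
    # asthma symptoms. Data, variable names, and model choice are illustrative.
    import numpy as np
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_children, n_pollutants = 269, 5                 # 5 example pollutants, not all 109
    exposure = rng.lognormal(size=(n_children, n_pollutants))  # stand-in exposure levels
    symptoms = rng.integers(0, 2, size=n_children)              # 1 = symptoms reported

    # Pairwise "combinations" of pollutants expressed as interaction terms
    combos = PolynomialFeatures(degree=2, interaction_only=True, include_bias=False)
    X = combos.fit_transform(exposure)

    model = LogisticRegression(max_iter=1000).fit(X, symptoms)
    for name, coef in zip(combos.get_feature_names_out(), model.coef_[0]):
        print(f"{name}: {coef:+.2f}")   # sign and size hint at direction of association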

The researchers drew on data collected and modeled by the Environmental Protection Agency on air toxics present in individual neighborhoods surrounding 10 Spokane elementary schools. They also accessed anonymized data from the elementary schools for reports of students diagnosed with asthma who experienced symptoms such as coughing, wheezing, difficulty breathing and the need to use an inhaler.

The study looked at asthma symptoms occurring in 2019 and 2020 in the six months before the pandemic lockdowns started in March 2020. The researchers then associated these data with air pollutant exposures that occurred within those six months and with two longer-term exposure periods of three years and five years prior to the asthma symptoms.

The researchers found that three specific pollutants were significantly associated with asthma symptoms across all three exposure periods.

The toxicants involved may have unfamiliar names -- 1,1,1-trichloroethane, 2-nitropropane and 2,4,6-trichlorophenol -- but they derive from commonly used materials. The first is a widely used solvent in industry but was formerly used in household cleaners and glues. The second is an additive to paints and other finishes, and the third is an antiseptic and anti-mildew agent that was banned in the 1980s but may still be found in some pesticides and preservatives made before then.

"Some of these air toxics were discontinued in the U.S., but they can still be found in materials that may be in storage or people have in their backyard or garage. Other air toxics still exist at least in the environment," said Amiri.

This study did not intend to pinpoint the source of any one air pollutant or the exact reason why one group of children from a lower-income neighborhood was highly exposed to air pollutants. However, proximity to known air pollution sources may play a role, Amiri said, such as living close to a highway with heavy traffic or to facilities that use solvents, such as paint producers or factories.

The finding of a likely socioeconomic disparity in air toxic exposures is consistent with previous research showing that children from lower-income areas, often indicated by schools with a higher percentage of students who qualify for free or reduced meals, are exposed to a wide variety of air pollutants in the neighborhoods where they live.

While the current study is limited to the mid-sized city of Spokane, Amiri noted that the findings align with another study in New York City which found similar air pollutants significantly associated with asthma outcomes.

"Both in Spokane and New York City, regardless of the setting -- how large or small the cities are -- these air toxics appear to be influencing asthma among children," she said.

Read more at Science Daily

How fear unfolds inside our brains

Our nervous systems are naturally wired to sense fear. Whether prompted by the eerie noises we hear alone in the dark or the approaching growl of a threatening animal, our fear response is a survival mechanism that tells us to remain alert and avoid dangerous situations.

But if fear arises in the absence of tangible threats, it can be harmful to our well-being. Those who have suffered episodes of severe or life-threatening stress can later experience intense feelings of fear, even during situations that lack a real threat. Experiencing this generalization of fear is psychologically damaging and can result in debilitating long-term mental health conditions such as post-traumatic stress disorder (PTSD).

The stress-induced mechanisms that cause our brain to produce feelings of fear in the absence of threats have been mostly a mystery. Now, neurobiologists at the University of California San Diego have identified the changes in brain biochemistry and mapped the neural circuitry that cause such a generalized fear experience. Their research, published in the journal Science on March 15, 2024, provides new insights into how fear responses could be prevented.

In their report, former UC San Diego Assistant Project Scientist Hui-quan Li (now a senior scientist at Neurocrine Biosciences), Atkinson Family Distinguished Professor Nick Spitzer of the School of Biological Sciences and their colleagues describe the research behind their discovery of the neurotransmitters -- the chemical messengers that allow the brain's neurons to communicate with one another -- at the root of stress-induced generalized fear.

Studying the brains of mice in an area known as the dorsal raphe (located in the brainstem), the researchers found that acute stress induced a switch in the chemical signals in the neurons, flipping from excitatory "glutamate" to inhibitory "GABA" neurotransmitters, which led to generalized fear responses.

"Our results provide important insights into the mechanisms involved in fear generalization," said Spitzer, a member of UC San Diego's Department of Neurobiology and Kavli Institute for Brain and Mind. "The benefit of understanding these processes at this level of molecular detail -- what is going on and where it's going on -- allows an intervention that is specific to the mechanism that drives related disorders."

Building upon this new finding of a stress-induced switch in neurotransmitters, considered a form of brain plasticity, the researchers then examined the postmortem human brains of individuals who had suffered from PTSD. A similar glutamate-to-GABA neurotransmitter switch was confirmed in their brains as well.

The researchers next found a way to stop the production of generalized fear. Prior to the experience of acute stress, they injected the dorsal raphe of the mice with an adeno-associated virus (AAV) to suppress the gene responsible for synthesis of GABA. This method prevented the mice from acquiring generalized fear.

Further, when mice were treated with the antidepressant fluoxetine (branded as Prozac) immediately after a stressful event, the transmitter switch and subsequent onset of generalized fear were prevented.

Not only did the researchers identify the location of the neurons that switched their transmitter, but they also demonstrated the connections of these neurons to the central amygdala and lateral hypothalamus, brain regions previously linked to the generation of other fear responses.

Read more at Science Daily

Menopause explains why some female whales live so long

Females of some whale species have evolved to live drastically longer lives so they can care for their families, new research shows.

The study focussed on five whale species that -- along with humans -- are the only mammals known to go through menopause.

The findings show that females of these whale species that experience menopause live around 40 years longer than other female whales of a similar size.

By living longer without extending their "reproductive lifespan" (the years in which they breed), these females have more years to help their children and grandchildren, without increasing the "overlap" period when they compete with their daughters by breeding and raising calves at the same time.

This new research shows that -- despite being separated by 90 million years of evolution -- whales and humans show remarkably similar life histories, which have evolved independently.

The study was carried out by the universities of Exeter and York, and the Center for Whale Research.

"The process of evolution favours traits and behaviours by which an animal passes its genes to future generations," said lead author Dr Sam Ellis, from the University of Exeter.

"The most obvious way for a female to do this is to breed for the entire lifespan -- and this is what happens in almost all animal species. There are more than 5,000 mammal species, and only six are known to go through menopause.

"So the question is: how and why did menopause evolve? Our study provides some of the answers to this fascinating puzzle."

Menopause is known to exist in five species of toothed whale: short-finned pilot whales, false killer whales, killer whales, narwhals and beluga whales.

As well as outliving females of other similar-sized species, females in these five species outlive the males of their own species. For example, female killer whales can live into their 80s, while males are typically dead by 40.

"The evolution of menopause and a long post-reproductive life could only happen in very specific circumstances," said Professor Darren Croft, of the University of Exeter and Executive Director at the Center for Whale Research

"Firstly, a species must have a social structure in which females spend their lives in close contact with their offspring and grand-offspring.

"Secondly, the females must have an opportunity to help in ways that improve the survival chances of their family. For example, female toothed whales are known to share food and use their knowledge to guide the group to find food when it is in short supply."

Professor Dan Franks, from the University of York, said: "Previous research on menopause evolution has tended to focus on single species, typically humans or killer whales.

"This study is the first to cross several species, enabled by the recent discovery of menopause in multiple species of toothed whales.

"Our study provides evidence that menopause evolved by expanding female lifespan beyond their reproductive years, rather than from reduced reproductive lifespan.

"This is a question that has long been asked in anthropology, but can only be directly answered with a comparative study."

Commenting on parallels with the evolution of menopause in humans, Professor Croft added: "It's fascinating that we share this life history with a taxonomic group we're so different from.

"Despite these differences, our results show that humans and toothed whales show convergent life history -- just like in humans, menopause in toothed whales evolved by selection to increase the total lifespan without also extending their reproductive lifespan."

Read more at Science Daily

Mar 13, 2024

Giant volcano discovered on Mars

In a groundbreaking announcement at the 55th Lunar and Planetary Science Conference held in The Woodlands, Texas, scientists revealed the discovery of a giant volcano and a possible sheet of buried glacier ice in the eastern part of Mars' Tharsis volcanic province, near the planet's equator. Imaged repeatedly by orbiting spacecraft around Mars since Mariner 9 in 1971 -- but deeply eroded beyond easy recognition -- the giant volcano had been hiding in plain sight for decades in one of Mars' most iconic regions, at the boundary between the heavily fractured, maze-like Noctis Labyrinthus (Labyrinth of the Night) and the monumental canyons of Valles Marineris (Valleys of Mariner).

Provisionally designated "Noctis volcano" pending an official name, the structure is centered at 7° 35' S, 93° 55' W. It reaches +9022 meters (29,600 feet) in elevation and spans 450 kilometers (280 miles) in width. The volcano's gigantic size and complex modification history indicate that it has been active for a very long time. In its southeastern part lies a thin, recent volcanic deposit beneath which glacier ice is likely still present. This combined giant volcano and possible glacier ice discovery is significant, as it points to an exciting new location to study Mars' geologic evolution through time, search for life, and explore with robots and humans in the future.

"We were examining the geology of an area where we had found the remains of a glacier last year when we realized we were inside a huge and deeply eroded volcano," said Dr. Pascal Lee, planetary scientist with the SETI Institute and the Mars Institute based at NASA Ames Research Center, and the lead author of the study.

Several clues, taken together, give away the volcanic nature of the jumble of layered mesas and canyons in this eastern part of Noctis Labyrinthus. The central summit area is marked by several elevated mesas forming an arc, reaching a regional high and sloping downhill away from the summit area. The gentle outer slopes extend out to 225 kilometers (140 miles) away in different directions. A caldera remnant -- the remains of a collapsed volcanic crater once host to a lava lake -- can be seen near the center of the structure. Lava flows, pyroclastic deposits (made of volcanic particulate materials such as ash, cinders, pumice and tephra) and hydrated mineral deposits occur in several areas within the structure's perimeter.

"This area of Mars is known to have a wide variety of hydrated minerals spanning a long stretch of Martian history. A volcanic setting for these minerals had long been suspected. So, it may not be too surprising to find a volcano here," explained Sourabh Shubham, a graduate student at the University of Maryland's Department of Geology and the study's co-author. "In some sense, this large volcano is a long-sought 'smoking gun'."

In addition to the volcano, the study reports the discovery of a large, 5000 square kilometer (1930 square mile) area of volcanic deposits within the volcano's perimeter presenting a large number of low, rounded and elongated, blister-like mounds. This "blistered terrain" is interpreted to be a field of "rootless cones," mounds produced by explosive steam venting or steam swelling when a thin blanket of hot volcanic materials comes to rest on top of a water or ice-rich surface.

Just a year ago, Lee, Shubham and their colleague John W. Schutt had identified the spectacular remains of a glacier -- or "relict glacier" -- through a sizeable erosional opening in the same volcanic blanket, in the form of a light-toned deposit (LTD) of sulfate salt with the morphologic traits of a glacier. The sulfate deposit, made mainly of jarosite, a hydrous sulfate, was interpreted to have formed when the blanket of volcanic pyroclastic materials came to rest on a glacier and reacted chemically with the ice. Breached rootless cones identified in the current study show similar occurrences of polyhydrated sulfates, further suggesting the blistered volcanic blanket may be hiding a vast sheet of glacier ice underneath it.

The Noctis volcano presents a long and complex history of modification, possibly from a combination of fracturing, thermal erosion, and glacial erosion. Researchers interpret the volcano to be a vast shield made of layered accumulations of pyroclastic materials, lavas, and ice, the latter resulting from repeated buildups of snow and glaciers on its flanks through time. As fractures and faults eventually developed, in particular in connection with the uplift of the broader Tharsis region on which the volcano sits, lavas began to rise through different parts of the volcano, leading to thermal erosion and removal of vast amounts of buried ice and the catastrophic collapse of entire sections of the volcano.

Subsequent glaciations continued their erosion, giving many canyons within the structure their present distinctive shape. In this context, the "relict glacier" and the possible buried sheet of glacier ice around it might be remnants of the latest glaciation episode affecting the Noctis volcano.

But much about the newly discovered giant volcano remains a mystery. Although it is clear that it has been active for a long time and began to build up early in Mars' history, it is unknown how early exactly. Similarly, although it has experienced eruptions even in modern times, it is unknown if it is still volcanically active and might erupt again. And if it has been active for a very long time, could the combination of sustained warmth and water from ice have allowed the site to harbor life?

As mysteries surrounding the Noctis volcano continue to puzzle scientists, the site is already emerging as an exciting new location to study Mars' geologic evolution, search for life, and plan future robotic and human exploration. The possible presence of glacier ice at shallow depths near the equator means that humans could potentially explore a less frigid part of the planet while still being able to extract water for hydration and manufacturing rocket fuel (by breaking down H2O into hydrogen and oxygen).

"It's really a combination of things that makes the Noctis volcano site exceptionally exciting. It's an ancient and long-lived volcano so deeply eroded that you could hike, drive, or fly through it to examine, sample, and date different parts of its interior to study Mars' evolution through time. It has also had a long history of heat interacting with water and ice, which makes it a prime location for astrobiology and our search for signs of life. Finally, with glacier ice likely still preserved near the surface in a relatively warm equatorial region on Mars, the place is looking very attractive for robotic and human exploration," said Lee.

Read more at Science Daily

Vehicle brakes produce charged particles that may harm public health

Scientists know relatively little about particles released into the air when a vehicle driver brakes, though evidence suggests those particles may be more harmful to health than particles exiting the tailpipe.

In a new study in Proceedings of the National Academy of Sciences, University of California, Irvine researchers show how most of these particles emitted during light braking carry an electric charge -- something that could potentially be exploited to help reduce air pollution from vehicles.

"We found that up to 80% of aerosol particles emitted from braking are electrically charged, and that many of them are in fact highly charged," saidAdam Thomas, a doctoral candidate in the lab of Jim Smith, professor of chemistry, who led the study alongside UCI postdoctoral researcher Paulus Bauer.

To do the work, the team used a large lathe to spin a detached brake rotor and caliper.

They then measured the electric charge of the aerosols emitted into the air and discovered the 80 percent figure.

"I was very surprised," said Smith. "We were also surprised that this has not really been studied given how common cars are in human societies."

The research is part of a broader team effort at UCI to understand the public health impacts of non-tailpipe emissions in areas beset by car traffic, including many areas in Southern California.

"The toxicity and health effects of brake wear particles are largely unknown," said Manabu Shiraiwa, professor of aerosol chemistry at UCI and one of the researchers behind the university-wide project.

"Recent results from my lab indicate that they may induce oxidative stress, but more research is needed."

The new study reveals a problem that may grow as electric cars become more and more common over the next several decades.

Electric cars, Smith explained, are not truly zero-emission vehicles, so municipalities need to think about strategies to reduce emissions from brake use as well as tailpipes.

The team found that the percentage of charged particles emitted largely depended on the material makeup of brake pads.

Because the particles carry an electric charge, they should be relatively easy to remove from the air.

"If they are charged, they can be removed easily from the air before they have a chance to have an impact at all on health," said Smith.

"All you would need to do is to collect them with an electrostatic precipitator -- a device that exposes the charged particles to an electric field and efficiently sweeps them away."

The public health risk posed by brake emissions is not borne equally by a population -- lower-income parts of cities tend to be more traffic-heavy than others, which creates an environmental justice issue wherein certain socioeconomic classes are more exposed to brake emissions than others.

According to Professor Barbara Finlayson-Pitts, Distinguished Emeritus Professor of chemistry and the principal investigator of the project at UCI, emissions from braking are not well-characterized but are potentially significant in high-traffic areas.

"These areas are often in poorer communities and highlight an important aspect of environmental justice that has been largely overlooked," Finlayson-Pitts said.

Read more at Science Daily

First recognition of self in the mirror is spurred by touch

Most babies begin recognizing themselves in mirrors when they are about a year and a half old. This kind of self-recognition is an important developmental milestone, and now scientists at The University of Texas at Austin have discovered a key driver for it: experiences of touch.

Their new study found babies who were prompted to touch their own faces developed self-recognition earlier than those who did not.

The research was published this month in the journal Current Biology.

"This suggests that babies pulling on their toes or tapping their fingers are not just playing," said Jeffrey Lockman, a professor of human development and family sciences at UT and senior author on the paper.

"They are developing self-awareness through self-directed activity. I think this work demonstrates a possible mechanism by which self-recognition can develop based on active experience that human babies naturally generate."

Researchers began by placing small vibrating discs on the foreheads and cheeks of toddlers when they were around 14 months old, before the usual age at which self-recognition occurs.

In response to the vibration, the children would reach up and touch the disc.

Next, researchers turned the children to face a mirror and watched as they reached up to touch the discs.

Researchers then had the children perform the standard mirror-mark test for self-recognition in which a small mark of paint or makeup was placed on each child's face.

If the child looked in the mirror and touched the mark on their own face or said words like their name or "me," they demonstrated self-recognition.

Researchers also observed a control group of children who were exposed to the laboratory experience with mirrors but not the vibrating discs.

Both groups were comparable at the beginning of the study and observed monthly until they recognized themselves or reached 21 months.

On average, the children who touched their faces more frequently recognized themselves in the mirror about two months earlier than the age at which children typically first begin to recognize themselves in a mirror.

The study challenges a longstanding assumption that self-recognition in early childhood is somehow hardwired.

For a long time, scientists believed early mirror self-recognition was a built-in function of human brains and those of our closest primate relatives, rather than something linked to sensory or motor experiences.

The researchers said the findings may have implications for interventions for children with motor development delays.

"Interventions for infants who have issues related to motor skills are typically focused on reaching for objects in the external world and manipulating them," Lockman said.

"These findings suggest that reaching to the body may be equally important and that exploring the body is the gateway to self-knowledge."

Read more at Science Daily

Alaska dinosaur tracks reveal a lush, wet environment

A large find of dinosaur tracks and fossilized plants and tree stumps in far northwestern Alaska provides new information about the climate and movement of animals near the time when they began traveling between the Asian and North American continents roughly 100 million years ago.

The findings by an international team of scientists led by paleontologist Anthony Fiorillo were published Jan. 30 in the journal Geosciences. Fiorillo conducted the Alaska research while at Southern Methodist University. He is now executive director of the New Mexico Museum of Natural History and Science.

University of Alaska Fairbanks geology professor Paul McCarthy, with the UAF Geophysical Institute and UAF College of Natural Science and Mathematics, was a leading contributor to the research. He and UAF graduate student Eric Orphys are among the eight co-authors.

Fiorillo and McCarthy are longtime collaborators.

"We've had projects for the last 20 years in Alaska trying to integrate sedimentology, dinosaur paleontology and the paleoclimate indicators," McCarthy said. "We've done work in three other formations -- in Denali, on the North Slope and in Southwest Alaska -- and they're about 70 million years old."

"This new one is in a formation that's about 90 to 100 million years old," he said.

Fiorillo said the additional age is notable.

"What interested us about looking at rocks of this age is this is roughly the time that people think of as the beginning of the Bering Land Bridge -- the connection between Asia and North America," he said. "We want to know who was using it, how they were using it and what the conditions were like."

Research into the paleoclimate can help scientists understand the warming world of today, the authors write.

"The mid-Cretaceous was the hottest point in the Cretaceous," said McCarthy, a sedimentologist and fossil soils specialist. "The Nanushuk Formation gives us a snapshot of what a high-latitude ecosystem looks like on a warmer Earth."

A rich find of evidence


The Nanushuk Formation is an outcropped layer of sedimentary rock 800 to 5,000 feet thick across the central and western North Slope. It dates to roughly 94 million to 113 million years ago in the mid-Cretaceous Period and about when the Bering Land Bridge began.

The fieldwork occurred in 2015-2017 and centered on Coke Basin, a circular geologic feature of the Nanushuk Formation. The basin is in the DeLong Mountains foothills along the Kukpowruk River, about 60 miles south of Point Lay and 20 miles inland from the Chukchi Sea.

In the area, Fiorillo and McCarthy found approximately 75 fossil tracks and other indicators attributed to dinosaurs living in a riverine or delta setting.

"This place was just crazy rich with dinosaur footprints," Fiorillo said.

One site stands out, Fiorillo said.

"We were at a spot where we eventually realized that for at least 400 yards we were walking on an ancient landscape," he said. "On that landscape we found large upright trees with little trees in between and leaves on the ground. We had tracks on the ground and fossilized feces."

They found numerous fossilized tree stumps, some 2 feet in diameter.

"It was just like we were walking through the woods of millions of years ago," he said.

The Nanushuk Formation encompasses rock of marine and non-marine characteristics and composition, but the authors' research focuses primarily on the non-marine sediments exposed along the upper Kukpowruk River.

"One of the things we did in our paper was look at the relative frequencies of the different kinds of dinosaurs," Fiorillo said. "What was interesting to us was that the bipedal plant eaters were clearly the most abundant."

Two-legged plant eaters accounted for 59% of the total tracks discovered. Four-legged plant eaters accounted for 17%, with birds accounting for 15% and non-avian, mostly carnivorous, bipedal dinosaurs at 9%.

"One of the things that was interesting is the relative frequency of bird tracks," Fiorillo said.

The authors point out that nearly half of North America's shorebirds breed in the warm months of today's Arctic. They suggest that the high number of fossil bird tracks along the Kukpowruk River indicates the warm paleoclimate was a similar driver for Cretaceous Period birds.

A wet and warm place

Carbon isotope analysis of wood samples led to a determination that the region received about 70 inches of precipitation annually. This record of increased precipitation during the mid-Cretaceous provides new data that supports global precipitation patterns associated with the Cretaceous Thermal Maximum, the authors write.

The Cretaceous Thermal Maximum was a long-term trend approximately 90 million years ago in which average global temperatures were significantly higher than those of today.

"The temperature was much warmer than it is today, and what's possibly more interesting is that it rained a lot," Fiorillo said. "The samples we analyzed indicate it was roughly equivalent to modern-day Miami. That's pretty substantial."

Of note is that the Alaska site investigated by Fiorillo and McCarthy was about 10 to 15 degrees latitude farther north in the mid-Cretaceous than it is today.

McCarthy's role as a fossil soils expert was to analyze old rocks and sediments to interpret the type of environment that existed at the time.

"We can say here's a river channel, here's a flood deposit, here's a levee, here's the floodplain, here's a swamp," he said. "And so if we're able to find tracks in that section, then you can sometimes say that a group of dinosaurs seems to have really liked being here as opposed to there."

Fiorillo said the site indicates there's much more work to be done.

Read more at Science Daily

Mar 12, 2024

NASA's Webb, Hubble telescopes affirm universe's expansion rate, puzzle persists

When you are trying to solve one of the biggest conundrums in cosmology, you should triple check your homework. The puzzle, called the "Hubble Tension," is that the current rate of the expansion of the universe is faster than what astronomers expect it to be, based on the universe's initial conditions and our present understanding of the universe's evolution.

Scientists using NASA's Hubble Space Telescope and many other telescopes consistently find a number that does not match predictions based on observations from ESA's (European Space Agency's) Planck mission. Does resolving this discrepancy require new physics? Or is it a result of measurement errors between the two different methods used to determine the rate of expansion of space?
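For orientation, the quantity under dispute is the Hubble constant H0 in Hubble's law, v = H0 x d. The short sketch below, in plain Python, contrasts two approximate, widely quoted values -- an early-universe inference near 67 km/s/Mpc and a local distance-ladder measurement near 73 km/s/Mpc. These figures are cited here only for illustration and are not taken from this article.

    # Hubble's law: recession velocity v = H0 * d.
    # The two H0 values are approximate, widely quoted figures used only to
    # illustrate the size of the "Hubble Tension"; they are not from the article.
    H0_planck = 67.0   # km/s per megaparsec (early-universe inference, approx.)
    H0_local  = 73.0   # km/s per megaparsec (distance-ladder measurement, approx.)

    d_mpc = 100.0      # a galaxy 100 megaparsecs away (illustrative)
    for label, H0 in [("Planck-based", H0_planck), ("ladder-based", H0_local)]:
        print(f"{label}: predicted recession velocity ~ {H0 * d_mpc:.0f} km/s")

    print(f"difference: ~{(H0_local - H0_planck) / H0_planck * 100:.0f}%")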

Hubble has been measuring the current rate of the universe's expansion for 30 years, and astronomers want to eliminate any lingering doubt about its accuracy. Now, Hubble and NASA's James Webb Space Telescope have tag-teamed to produce definitive measurements, furthering the case that something else -- not measurement errors -- is influencing the expansion rate.

"With measurement errors negated, what remains is the real and exciting possibility we have misunderstood the universe," said Adam Riess, a physicist at Johns Hopkins University in Baltimore. Riess holds a Nobel Prize for co-discovering the fact that the universe's expansion is accelerating, due to a mysterious phenomenon now called "dark energy."

As a crosscheck, an initial Webb observation in 2023 confirmed that Hubble measurements of the expanding universe were accurate. However, hoping to relieve the Hubble Tension, some scientists speculated that unseen errors in the measurement may grow and become visible as we look deeper into the universe. In particular, stellar crowding could affect brightness measurements of more distant stars in a systematic way.

The SH0ES (Supernova H0 for the Equation of State of Dark Energy) team, led by Riess, obtained additional observations with Webb of objects that are critical cosmic milepost markers, known as Cepheid variable stars, which now can be correlated with the Hubble data.

"We've now spanned the whole range of what Hubble observed, and we can rule out a measurement error as the cause of the Hubble Tension with very high confidence," Riess said.

The team's first few Webb observations in 2023 were successful in showing Hubble was on the right track in firmly establishing the fidelity of the first rungs of the so-called cosmic distance ladder.

Astronomers use various methods to measure relative distances in the universe, depending upon the object being observed. Collectively these techniques are known as the cosmic distance ladder -- each rung or measurement technique relies upon the previous step for calibration.
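As a concrete illustration of one rung, Cepheids serve as "standard candles": once a Cepheid's intrinsic brightness has been calibrated on a lower rung, comparing it with the star's apparent brightness gives a distance through the distance modulus. The plain Python sketch below shows that step; both magnitudes are invented for illustration, not measurements from the study.

    # Distance from the distance modulus: m - M = 5*log10(d_parsecs) - 5,
    # so d = 10**((m - M + 5) / 5) parsecs. Both magnitudes are invented.
    apparent_mag = 25.0   # m: how bright the Cepheid looks from Earth (assumed)
    absolute_mag = -6.0   # M: intrinsic brightness, calibrated on a lower rung (assumed)

    d_parsec = 10 ** ((apparent_mag - absolute_mag + 5) / 5)
    d_lightyears = d_parsec * 3.26156   # 1 parsec ~ 3.26 light-years

    print(f"distance ~ {d_parsec:.3g} pc ~ {d_lightyears/1e6:.0f} million light-years")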

But some astronomers suggested that, moving outward along the "second rung," the cosmic distance ladder might get shaky if the Cepheid measurements become less accurate with distance. Such inaccuracies could occur because the light of a Cepheid could blend with that of an adjacent star -- an effect that could become more pronounced with distance as stars crowd together and become harder to distinguish from one another.

The observational challenge is that past Hubble images of these more distant Cepheid variables look more huddled and overlapping with neighboring stars at ever farther distances between us and their host galaxies, requiring careful accounting for this effect. Intervening dust further complicates the certainty of the measurements in visible light. Webb slices through the dust and naturally isolates the Cepheids from neighboring stars because its vision is sharper than Hubble's at infrared wavelengths.

"Combining Webb and Hubble gives us the best of both worlds. We find that the Hubble measurements remain reliable as we climb farther along the cosmic distance ladder," said Riess.

The new Webb observations include five host galaxies of eight Type Ia supernovae containing a total of 1,000 Cepheids, and reach out to the farthest galaxy where Cepheids have been well measured -- NGC 5468 -- at a distance of 130 million light-years. "This spans the full range where we made measurements with Hubble. So, we've gone to the end of the second rung of the cosmic distance ladder," said co-author Gagandeep Anand of the Space Telescope Science Institute in Baltimore, which operates the Webb and Hubble telescopes for NASA.

Hubble and Webb's further confirmation of the Hubble Tension sets up other observatories to possibly settle the mystery. NASA's upcoming Nancy Grace Roman Space Telescope will do wide celestial surveys to study the influence of dark energy, the mysterious energy that is causing the expansion of the universe to accelerate. ESA's Euclid observatory, with NASA contributions, is pursuing a similar task.

Read more at Science Daily

Study explores impacts of Arctic warming on daily weather patterns in the U.S.

Arctic sea ice is shrinking as the world continues to warm, and a new study led by researchers at Penn State may provide a better understanding of how the loss of this ice may impact daily weather in the middle latitudes, like the United States.

The researchers used climate models and a machine learning approach to tease out the impacts of sea ice loss on the future of large-scale meteorological patterns over North America. They reported in the Journal of Climate that sea ice loss de-amplified these patterns and their impacts on temperature near the surface -- meaning, for example, cold weather events may be less cold.

"The Arctic in general is the source of cold air for us when we have these really cold events," said Melissa Gervais, assistant professor in the Department of Meteorology and Atmospheric Science at Penn State and lead author of the study. "As warming continues, we know that the Arctic is going to be less cold. What this work shows us is that the loss of sea ice also changes weather patterns that bring cold air to the middle latitudes. So, warming both depletes your source of cold air and makes it harder to transport."

Sea ice acts like a blanket over the ocean, keeping warmer water from losing heat to the atmosphere, Gervais said. Once the ice is gone, heat from the ocean can enter the atmosphere and create a low-pressure system over where the ice had been, resulting in less transport of cold Arctic air to other parts of Earth, the scientists said.

As sea ice melts, the Arctic is warming at a faster rate than the rest of the planet, a process called Arctic amplification. And while it would be expected that less cold air would be transported from the Arctic to the middle latitudes under these conditions, the new study allowed the researchers to probe more deeply into the mechanisms responsible for these changes.

"Our research allowed us to dig a little bit deeper into what is going on," Gervais said. "We were able to see that in addition to the impact of Arctic amplification, there also is an impact on the actual circulation or flow in the atmosphere."

To test the impact on weather patterns, the scientists ran a climate model under two scenarios -- one with ice levels consistent with the 1980s and 1990s, and the other with reduced ice levels expected by the end of the century.

They used self-organizing maps, a machine learning method, to classify patterns of daily weather in the troposphere, the lowest layer of Earth's atmosphere where most weather occurs. They then explored how those general weather patterns translate into variables that are closer to the surface.

"Without using this machine learning method, we would not have been able to really robustly understand the processes involved," Gervais said. "For studies like this, where we're using a large volume of climate model simulations, we can't find these patterns by hand."

One weather pattern particularly impacted by the loss of sea ice involved cold weather anomalies over North America. The pattern is associated with strong cold anomalies, which reached roughly 29 degrees Fahrenheit under current sea ice conditions but warmed significantly under the scenarios with less sea ice, the scientists said.

"We found that when we lose sea ice, not only is that anomaly reduced, but it also actually becomes a warm pattern," Gervais said. "So, the same pattern in the upper atmosphere is now actually bringing warmer temperatures near the surface."

Read more at Science Daily

Researchers identify gene involved in neuronal vulnerability in Alzheimer's disease

Early stages of neurodegenerative disorders are characterized by the accumulation of proteins in discrete populations of brain cells and degeneration of these cells. For most diseases, this selective vulnerability pattern is unexplained, yet it could yield major insight into pathological mechanisms. Alzheimer's disease (AD), the leading cause of dementia worldwide, is defined by the appearance of two hallmark pathological lesions, amyloid plaques (extracellular aggregates of Aβ peptides) and neurofibrillary tangles (intracellular aggregates of hyperphosphorylated tau, or NFTs). While plaques are widespread in the neocortex and hippocampus, NFTs follow a well-defined regional pattern that starts in principal neurons from the entorhinal cortex.

In a new study from Boston University Chobanian & Avedisian School of Medicine, researchers have identified a gene they believe may lead to the degeneration of the neurons that are most vulnerable to AD.

"We are trying to understand why certain neurons in the brain are particularly vulnerable during the earliest stages of AD. Why they accumulate and degenerate very early is unknown. We believe elucidating this vulnerability would allow for a new therapeutic avenue for AD," said corresponding author Jean-Pierre Roussarie, PhD, assistant professor of anatomy & neurobiology at the school.

In collaboration with leading computational genomic experts from Rice University, the BU researchers along with co-corresponding author, Patricia Rodriguez-Rodriguez, PhD, from Karolinska Institute, used cutting-edge analysis tools with machine learning to identify the gene DEK as possibly responsible for vulnerability of entorhinal cortex neurons.

They injected viruses into the entorhinal cortex of experimental models and neurons grown in the lab to manipulate levels of the DEK gene.

When they reduced the levels of the DEK gene, vulnerable neurons started to accumulate tau and to degenerate.

According to the researchers, preventing these neurons from degenerating by targeting DEK, or the proteins that work with it, could prevent patients from developing memory loss and curtail the disease before it spreads to larger areas of the brain.

"Given that entorhinal cortex neurons are necessary for the formation of new memories and since they are so vulnerable and the first to die, this explains why the first symptom of AD is the inability to form new memories," said Roussarie.

The researchers believe these findings are the first step in understanding how these fragile neurons die, yet they hope to uncover additional genes to fully understand what leads to the death of critical memory-forming neurons.

Read more at Science Daily

For people who speak many languages, there's something special about their native tongue

A new study of people who speak many languages has found that there is something special about how the brain processes their native language.

In the brains of these polyglots -- people who speak five or more languages -- the same language regions light up when they listen to any of the languages that they speak. In general, this network responds more strongly to languages in which the speaker is more proficient, with one notable exception: the speaker's native language. When listening to one's native language, language network activity drops off significantly.

The findings suggest there is something unique about the first language one acquires, which allows the brain to process it with minimal effort, the researchers say.

"Something makes it a little bit easier to process -- maybe it's that you've spent more time using that language -- and you get a dip in activity for the native language compared to other languages that you speak proficiently," says Evelina Fedorenko, an associate professor of neuroscience at MIT, a member of MIT's McGovern Institute for Brain Research, and the senior author of the study.

Saima Malik-Moraleda, a graduate student in the Speech and Hearing Bioscience and Technology Program at Harvard University, and Olessia Jouravlev, a former MIT postdoc who is now an associate professor at Carleton University, are the lead authors of the paper, which appears today in the journal Cerebral Cortex.

Many languages, one network

The brain's language processing network, located primarily in the left hemisphere, includes regions in the frontal and temporal lobes. In a 2021 study, Fedorenko's lab found that when polyglots listened to their native language, their language networks were less active than those of people who speak only one language listening to theirs.

In the new study, the researchers wanted to expand on that finding and explore what happens in the brains of polyglots as they listen to languages in which they have varying levels of proficiency. Studying polyglots can help researchers learn more about the functions of the language network, and how languages learned later in life might be represented differently than a native language or languages.

"With polyglots, you can do all of the comparisons within one person. You have languages that vary along a continuum, and you can try to see how the brain modulates responses as a function of proficiency," Fedorenko says.

For the study, the researchers recruited 34 polyglots, each of whom had at least some degree of proficiency in five or more languages and none of whom was bilingual or multilingual from infancy. Sixteen of the participants spoke 10 or more languages, including one who spoke 54 languages with at least some proficiency.

Each participant was scanned with functional magnetic resonance imaging (fMRI) as they listened to passages read in eight different languages. These included their native language, a language they were highly proficient in, a language they were moderately proficient in, and a language in which they described themselves as having low proficiency.

They were also scanned while listening to four languages they didn't speak at all. Two of these were languages from the same family (such as Romance languages) as a language they could speak, and two were languages completely unrelated to any languages they spoke.

The passages used for the study came from two different sources, which the researchers had previously developed for other language studies. One was a set of Bible stories recorded in many different languages, and the other consisted of passages from "Alice in Wonderland" translated into many languages.

Brain scans revealed that the language network lit up the most when participants listened to languages in which they were the most proficient. However, that did not hold true for the participants' native languages, which activated the language network much less than non-native languages in which they had similar proficiency. This suggests that people are so proficient in their native language that the language network doesn't need to work very hard to interpret it.

"As you increase proficiency, you can engage linguistic computations to a greater extent, so you get these progressively stronger responses. But then if you compare a really high-proficiency language and a native language, it may be that the native language is just a little bit easier, possibly because you've had more experience with it," Fedorenko says.

Brain engagement

The researchers saw a similar phenomenon when polyglots listened to languages that they don't speak: Their language network was more engaged when listening to languages related to a language they could understand than when listening to completely unfamiliar languages.

"Here we're getting a hint that the response in the language network scales up with how much you understand from the input," Malik-Moraleda says. "We didn't quantify the level of understanding here, but in the future we're planning to evaluate how much people are truly understanding the passages that they're listening to, and then see how that relates to the activation."

The researchers also found that a brain network known as the multiple demand network, which turns on whenever the brain performs a cognitively demanding task, becomes activated when listening to languages other than one's native language.

"What we're seeing here is that the language regions are engaged when we process all these languages, and then there's this other network that comes in for non-native languages to help you out because it's a harder task," Malik-Moraleda says.

In this study, most of the polyglots began studying their non-native languages as teenagers or adults, but in future work, the researchers hope to study people who learned multiple languages from a very young age. They also plan to study people who learned one language from infancy but moved to the United States at a very young age and began speaking English as their dominant language, while becoming less proficient in their native language, to help disentangle the effects of proficiency versus age of acquisition on brain responses.

Read more at Science Daily

Mar 11, 2024

CSI in space: Analyzing bloodstain patterns in microgravity

As more people seek to go where no man has gone before, researchers are exploring how forensic science can be adapted to extraterrestrial environments.

A new study by Staffordshire University and the University of Hull highlights the behaviour of blood in microgravity and the unique challenges of bloodstain pattern analysis aboard spacecraft.

Bloodstain expert Zack Kowalske is a Crime Scene Investigator based in Atlanta, USA, and led the study as part of his PhD research at Staffordshire University.

"Studying bloodstain patterns can provide valuable reconstructive information about a crime or accident. However, little is known about how liquid blood behaves in an altered gravity environment. This is an area of study that, while novel, has implications for forensic investigations in space," he commented.

"Forensic science is more than just trying to solve crimes; it additionally has a role in accident reconstruction or failure analysis. With this concept, consider how various forensic disciplines could be utilized in a critical accident onboard a space station or shuttle."

Experiments were conducted aboard a Zero Gravity Corporation modified Boeing 727 parabolic aircraft.

A mixture of 40% glycerin and 60% red food colouring was used, simulating the relative density and viscosity of human blood.

Blood droplets were propelled from a hydraulic syringe toward a target during periods of reduced gravity between 0.00 and 0.05 g. From the resulting bloodstains, the researchers reconstructed the angle of impact.
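
The reconstruction itself rests on a standard trigonometric relation from bloodstain pattern analysis: an obliquely impacting drop leaves an elliptical stain, and the ratio of its width to its length gives the impact angle. A minimal sketch (the relation is textbook, not necessarily the study's exact procedure):

    # Impact angle from the width and length of an elliptical bloodstain
    import math

    def impact_angle_deg(width_mm: float, length_mm: float) -> float:
        if not 0 < width_mm <= length_mm:
            raise ValueError("width must be positive and no larger than length")
        return math.degrees(math.asin(width_mm / length_mm))

    print(impact_angle_deg(2.1, 4.2))  # a 2:1 ellipse corresponds to a 30-degree impact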

Co-author Professor Graham Williams, from the University of Hull, explained: "With the lack of gravitational influence, surface tension and cohesion of blood droplets are amplified. What this means is that blood in space has a higher tendency to stick to surfaces until a greater force causes detachment. Within the application of bloodstain formation, it means that blood drops exhibit a slower spread rate and, therefore, have shapes and sizes that would not be reflective of those on Earth.

"On Earth, gravity and air drag have a noticeable influence on skewing the calculated angle. The initial hypothesis was that because of the absence of gravity, certain mathematical calculations would be more accurate. However, the amplified effect of surface tension became a predominant factor that caused the calculation to have greater variance, even in the absence of gravity."

This is the first study of the behaviour of blood in free flight.

With the rate of technological evolution in space exploration, the authors say that the need for reliable forensic science techniques will become increasingly important.

Zack added: "We find ourselves in a new era of forensic science; just as mid-19th century research asked the question of what a bloodstain meant in relation to cause; we are once again at the beginning of new questions that tie in how new environments influence forensic science.

Read more at Science Daily

New study reveals insight into which animals are most vulnerable to extinction due to climate change

In a new study, researchers have used the fossil record to better understand what factors make animals more vulnerable to extinction from climate change. The results could help to identify species most at risk today from human-driven climate change. The findings have been published today in the journal Science.

Past climate change (often caused by natural changes in greenhouse gases due to volcanic activity) has been responsible for countless species' extinctions during the history of life on Earth. But, to date, it has not been clear what factors cause species to be more or less resilient to such change, and how the magnitude of climate change affects extinction risk.

Led by researchers at the University of Oxford, this new study sought to answer this question by analysing the fossil record for marine invertebrates (such as sea urchins, snails, and shellfish) over the past 485 million years. Marine invertebrates have a rich and well-studied fossil record, making it possible to identify when, and potentially why, species become extinct.

Using over 290,000 fossil records covering more than 9,200 genera, the researchers collated a dataset of key traits that may affect resilience to extinction, including traits not studied in depth previously, such as preferred temperature. This trait information was integrated with climate simulation data to develop a model to understand which factors were most important in determining the risk of extinction during climate change.

Key findings:

  • The authors found that species exposed to greater climate change were more likely to become extinct. In particular, species that experienced temperature changes of 7°C or more across geological stages were significantly more vulnerable to extinction.
  • The authors also found that species occupying climatic extremes (for instance in polar regions) were disproportionately vulnerable to extinction, and animals that could only live in a narrow range of temperatures (especially ranges less than 15°C) were significantly more likely to become extinct.
  • However, geographic range size was the strongest predictor of extinction risk. Species with larger geographic ranges were significantly less likely to go extinct. Body size was also important, with smaller-bodied species more likely to become extinct.
  • All of the traits studied had a cumulative impact on extinction risk. For instance, species with both small geographic ranges and narrow thermal ranges were even more susceptible to extinction than species that had only one of these traits.
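
As a purely illustrative sketch of the trait-based modelling behind these findings -- the synthetic data, the trait effects and the choice of a random forest are assumptions, not the authors' dataset or pipeline -- one can fit a classifier to trait records and rank which traits best predict extinction:

    # Illustrative only: synthetic genera with four traits, a made-up extinction
    # process, and a random forest ranking trait importance.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(1)
    n = 5000
    geographic_range = rng.lognormal(0.0, 1.0, n)   # arbitrary range-size units
    thermal_range = rng.uniform(2.0, 30.0, n)       # degrees C tolerated
    body_size = rng.lognormal(0.0, 0.5, n)
    temp_change = rng.uniform(0.0, 12.0, n)         # degrees C change across a stage

    # Made-up rule: small ranges, narrow tolerances and large climate shifts raise risk
    logit = (-1.5 * np.log(geographic_range) - 0.1 * thermal_range
             + 0.4 * temp_change - 0.3 * np.log(body_size) - 1.0)
    extinct = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

    X = np.column_stack([geographic_range, thermal_range, body_size, temp_change])
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, extinct)
    for name, imp in zip(["geographic range", "thermal range", "body size", "temperature change"],
                         model.feature_importances_):
        print(f"{name:18s} importance: {imp:.2f}")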


Cooper Malanoski (Department of Earth Sciences, University of Oxford), first author of the study, said: 'Our study revealed that geographic range was the strongest predictor of extinction risk for marine invertebrates, but that the magnitude of climate change is also an important predictor of extinction, which has implications for biodiversity today in the face of climate change.'

With current human-driven climate change already pushing many species up to and beyond the brink of extinction, these results could help identify the animals that are most at risk, and inform strategies to protect them.

Lead author Professor Erin Saupe (Department of Earth Sciences, University of Oxford) said: 'The evidence from the geological past suggests that global biodiversity faces a harrowing future, given projected climate change estimates. In particular, our model suggests that species with restricted thermal ranges of less than 15°C, living at the poles or in the tropics, are likely to be at the greatest risk of extinction. However, if the localized climate change is large enough, it could lead to significant extinction globally, potentially pushing us closer to a sixth mass extinction.'

According to the research team, future work should explore how climate change interacts with other potential drivers of extinction, such as ocean acidification and anoxia (where seawater becomes depleted of oxygen).

Read more at Science Daily

Loss of nature costs more than previously estimated

Researchers propose that governments apply a new method for calculating the benefits that arise from conserving biodiversity and nature for future generations.

The method can be used by governments in cost-benefit analyses for public infrastructure projects, in which the loss of animal and plant species and 'ecosystem services' -- such as filtering air or water, pollinating crops or the recreational value of a space -- are converted into a current monetary value.

This process is designed to make biodiversity loss and the benefits of nature conservation more visible in political decision-making.

However, the international research team says current methods for calculating the values of ecosystem services "fall short" and have devised a new approach, which they believe could easily be deployed in Treasury analysis underpinning future Budget statements.

Their approach, published in the journal Science, takes into consideration the increase in monetary value of nature over time as human income increases, as well as the likely deterioration in biodiversity, making it more of a scarce resource.

This contrasts with current methods, which do not consider how the value of ecosystem services changes over time.

"Our study provides governments with a formula to estimate the future values of scarce ecosystem services that can be used in decision-making processes," said Moritz Drupp, Professor of Sustainability Economics at the University of Hamburg and lead author on this study.

Two factors play a key role in this value adjustment: on the one hand, income will rise and with it the prosperity of the world's population -- by an estimated two percent per year after adjusting for inflation.

As incomes go up, people are willing to pay more to conserve nature.

"On the other hand, the services provided by ecosystems will become more valuable the scarcer they become," said Professor Drupp. "The fact that scarce goods become more expensive is a fundamental principle in economics, and it also applies here. And in view of current developments, unfortunately, we must expect the loss of biodiversity to continue."

According to the researchers, the present value of ecosystem services must therefore be set much higher in today's cost-benefit analyses -- an upward adjustment of more than 130 percent if only the rise in incomes is taken into account.

If the decline of endangered species tracked by the Red List Index is also factored in, the adjustment amounts to more than 180 percent.

Accounting for these effects will increase the likelihood of projects that conserve ecosystem services passing a cost-benefit test.
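
As a back-of-envelope sketch of the income effect alone (not the authors' published method, and ignoring the scarcity and discounting effects that also enter their figures), a willingness to pay that tracks roughly two percent annual income growth compounds as follows:

    # How a ~2% annual rise in willingness to pay compounds over time,
    # relative to assuming a constant value for the ecosystem service.
    growth = 0.02
    for years in (20, 40, 60):
        uplift = (1 + growth) ** years - 1
        print(f"after {years} years: value up {uplift:.0%} versus a constant-value assumption")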

The research team includes three UK-based authors: Professor Mark Freeman (University of York), Dr. Frank Venmans (LSE), and Professor Ben Groom (University of Exeter).

"The monetary values for the environment that are currently used by policy makers in the appraisal of public investments and regulatory change mean that nature becomes relatively less valuable over time compared to other goods and services," said Professor Groom.

"Our work shows this is wrong. We propose an uplift in the values of ecosystems over time. This proposal could easily be deployed in the Treasury's analysis that will underpin future Budget statements."

Dr Venmans added: "Take coral reefs as a specific example. These are expected to decline in area and biodiversity as the climate changes, meaning that the remaining reefs will be much more valuable than today, and even more so as household incomes rise. This matters when we assess coral reef preservation with long-lasting effects."

Professor Freeman said: "The government is under considerable pressure from many sides for additional public investment. Ensuring that the protection of ecosystems is appraised in a way that is consistent with other public projects, including HS2 and other infrastructure spending, is critical. This is what our work aims to achieve."

The researchers say that as political decisions can alleviate the loss of biodiversity, it is important that governments are able to adequately assess the consequences of their decisions today and in the future.

Read more at Science Daily

Lost tombs and quarries rediscovered on British military base in Cyprus

More than forty archaeological sites in Cyprus dating potentially as far back as the Bronze Age that were thought lost to history have been relocated by University of Leicester scientists working for the Ministry of Defence.

A small team of archaeologists from University of Leicester Archaeological Services, funded by the DIO Overseas Stewardship Project, undertook a 'walkover survey' -- a systematic surveying and recording of visible archaeological remains -- of the Eastern Sovereign Base Area at Dhekelia (ESBA) on the south coast of the island. The work, licensed by Cyprus' Department of Antiquities in Nicosia, is to inform site management by the DIO, which is the custodian of the UK and overseas Defence estate.

Dhekelia is about 30km south-east of Nicosia, and 80km north-east of the Western Sovereign Base Area (WSBA) at Akrotiri where the University of Leicester has been working since 2015.

The task of the walkover was to relocate around 60 possible archaeological sites that had been recorded in the early 1960s prior to the development of the garrison within the Dhekelia base, and the laying out of the Kingsfield Airstrip at the western end of the area.

In preparation for the survey, a Geographic Information System (GIS) record was compiled that included all the known information, and from it co-ordinate points for the possible sites were exported to standard handheld GPS units. Archaeologists then visited each site and searched for the evidence that had been previously recorded. When a site was successfully found, it was photographed, located by GPS and recorded on pro forma sheets.
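
As a small illustration of that export step -- the gpxpy library, the file name and the example waypoint are assumptions for illustration, not the team's actual toolchain -- site co-ordinates held in a GIS can be written out as GPX waypoints that standard handheld GPS units read:

    # Write hypothetical site co-ordinates to a GPX file for a handheld GPS
    import gpxpy.gpx

    sites = [("Possible rock-cut tomb (hypothetical example)", 34.98, 33.74)]

    gpx = gpxpy.gpx.GPX()
    for name, lat, lon in sites:
        gpx.waypoints.append(gpxpy.gpx.GPXWaypoint(latitude=lat, longitude=lon, name=name))

    with open("dhekelia_sites.gpx", "w") as f:
        f.write(gpx.to_xml())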

In total, 51 sites including 5 historic buildings were located. Some records survived for 47 of the sites, but a further four were known only from labels on a 1:25,000 scale plan. Although the dating of most of the sites is currently unknown, they are likely to span from the Bronze Age, which started c.2500 BC, to the Byzantine period, which ended in the 12th century AD, and to include sites from the Hellenistic (312 -- 58 BC) and Roman (58 BC -- 395 AD) periods.

Particular highlights included three coastal quarries where stone was being taken off low spits running out into the sea. One quarry had a little ramp that looked like it was used for loading slabs of quarried rock into boats tied up in deep water alongside, and another had dozens of very clear circular grinding-stone removals which, where immediately adjacent to each other, left behind distinct clover-leaf shapes in the bedrock.

Large areas of rock-cut tombs extended over several hectares in one part of the inland plateau. Most of these tombs were in a very poor state and some bore clear signs of looting in the form of adjacent mounds of earth. Many tombs have been used as convenient spots for fly-tipping. One tomb, part of a substantial cemetery surrounding a monastery to the west of Xylotymbou village, was being used for caging cats.

Matt Beamish from University of Leicester Archaeological Services, who led the survey, said:

"Our GIS and survey methods had worked well when used for a similar survey of the Akrotiri peninsula in 2019. Many of the sites we were planning to survey had been last visited over 20 years ago, and in many instances had been reported as no longer existing or being unfindable. On reflection this had more to do with inadequate mapping, lack of preparation and lack of satellite location technologies: we found that many of the sites could be re-found with a little bit of patience.

"There were undoubtedly problems with some of the archive information which was incomplete and had been inaccurately redrawn at some stage in the past. Some sites had clearly been lost to the subsequent development of roads and buildings."

The Dhekelia Sovereign base is around 20km wide and 7km deep and sits on the east side of Larnaca Bay. The topography is varied including a flat coastal strip meeting steep limestone cliffs and hills, with a broadly flat plateau on the interior which includes more areas of rocky outcrop and is bisected by rivers which are generally dry beds under cultivation. The coastal strip and plateau include areas of agriculture and horticulture, and areas of olive and citrus grove and scrub. In the north of the area there are large dairy and livestock farms.

Cyprus' position on Mediterranean sea routes has led to a rich and diverse cultural heritage, and it is famed for the preservation of many archaeological sites from the Bronze Age, Hellenistic/Iron Age, Roman, and Byzantine or medieval periods. At the western end of the Dhekelia area this occupation is represented in a significant archaeological landscape comprising a large Bronze Age defended hilltop settlement at Kokkinokremnos and an adjacent Iron Age hillfort at Vikla, both sitting above the Roman harbour town of Koutsopetria: all these protected sites are subject to recent research excavations. The Roman harbour is now entirely infilled, possibly as the result of a catastrophic tsunami.

Much of the known archaeology across Dhekelia is funerary, and this mostly comprises rock-cut tombs, some of which were built into the limestone caves (generally Hellenistic/Iron Age), and rock-cut shaft graves (generally Byzantine/Roman-Medieval).

Matt Beamish added: "The survey was very successful with the identification of significant archaeological areas. We know that many more archaeological sites will exist which are not obvious to the naked eye. Much of the area has seen no systematic archaeological survey, and the application of remote sensing or aerial survey perhaps using LiDAR would enable a wider picture of previous human activity to be drawn. The information will enable the DIO to better manage the archaeological sites within the Sovereign Base Administration Area, and allow a wider understanding of Dhekelia's archaeological heritage."

Alex Sotheran, Archaeology Advisor, DIO, praised the survey and the results:

"The work carried out by Matt and the team has really improved our knowledge and understanding of the archaeology across the Dhekelia area and will allow for an improved system of management of these vital and important heritage assets going forward."

The data created during the survey has been entered into DIO's Historic Buildings, Sites and Monuments Record, which in turn is vital for helping to protect the historic environment across the Ministry of Defence's UK and overseas estate.

Read more at Science Daily

Mar 10, 2024

Researchers develop artificial building blocks of life

DNA carries the genetic information of all living organisms and consists of only four different building blocks, the nucleotides. Nucleotides are composed of three distinctive parts: a sugar molecule, a phosphate group and one of the four nucleobases adenine, thymine, guanine and cytosine. The nucleotides are lined up millions of times and form the DNA double helix, similar to a spiral staircase. Scientists from the UoC's Department of Chemistry have now shown that the structure of nucleotides can be modified to a great extent in the laboratory.

The researchers developed so-called threofuranosyl nucleic acid (TNA) with a new, additional base pair.

These are the first steps on the way to fully artificial nucleic acids with enhanced chemical functionalities.

The study 'Expanding the Horizon of the Xeno Nucleic Acid Space: Threose Nucleic Acids with Increased Information Storage' was published in the Journal of the American Chemical Society.

Artificial nucleic acids differ in structure from their originals.

These changes affect their stability and function. "Our threofuranosyl nucleic acid is more stable than the naturally occurring nucleic acids DNA and RNA, which brings many advantages for future therapeutic use," said Professor Dr Stephanie Kath-Schorr.

For the study, the 5-carbon sugar deoxyribose, which forms the backbone in DNA, was replaced by a 4-carbon sugar.

In addition, the number of nucleobases was increased from four to six.

By exchanging the sugar, the TNA is not recognized by the cell's own degradation enzymes.

This has been a problem with nucleic acid-based therapeutics, as synthetically produced RNA that is introduced into a cell is rapidly degraded and loses its effect.

TNAs introduced into cells would remain undetected by these enzymes and could therefore maintain their effect for longer.

Read more at Science Daily

Nanodevices can produce energy from evaporating tap or seawater

Evaporation is a natural process so ubiquitous that most of us take it for granted. In fact, roughly half of the solar energy that reaches the earth drives evaporative processes. Since 2017, researchers have been working to harness the energy potential of evaporation via the hydrovoltaic (HV) effect, which allows electricity to be harvested when fluid is passed over the charged surface of a nanoscale device. Evaporation establishes a continuous flow within nanochannels inside these devices, which act as passive pumping mechanisms. This effect is also seen in the microcapillaries of plants, where water transport occurs thanks to a combination of capillary pressure and natural evaporation.

Although hydrovoltaic devices currently exist, there is very little functional understanding of the conditions and physical phenomena that govern HV energy production at the nanoscale. It's an information gap that Giulia Tagliabue, head of the Laboratory of Nanoscience for Energy Technology (LNET) in the School of Engineering, and PhD student Tarique Anwar wanted to fill. They leveraged a combination of experiments and multiphysics modelling to characterize fluid flows, ion flows, and electrostatic effects due to solid-liquid interactions, with the goal of optimizing HV devices.

"Thanks to our novel, highly controlled platform, this is the first study that quantifies these hydrovoltaic phenomena by highlighting the significance of various interfacial interactions. But in the process, we also made a major finding: that hydrovoltaic devices can operate over a wide range of salinities, contradicting prior understanding that highly purified water was required for best performance," says Tagliabue.

The LNET study has recently been published in the Cell Press journal Device.

A revealing multiphysics model


The researchers' device represents the first hydrovoltaic application of a technique called nanosphere colloidal lithography, which allowed them to create a hexagonal network of precisely spaced silicon nanopillars. The spaces between the nanopillars created the perfect channels for evaporating fluid samples, and could be finely tuned to better understand the effects of fluid confinement and the solid/liquid contact area.

"In most fluidic systems containing saline solutions, you have an equal number of positive and negative ions. However, when you confine the liquid to a nanochannel, only ions with a polarity opposite to that of the surface charge will remain," Anwar explains. "This means that if you allow liquid to flow through the nanochannel, you will generate current and voltages."

"This goes back to our major finding that the chemical equilibrium for the surface charge of the nanodevice can be exploited to extend the operation of hydrovoltaic devices across the salinity scale," adds Tagliabue. "Indeed, as the fluid ion concentration increases, so does the surface charge of the nanodevice. As a result, we can use larger fluid channels while working with higher-concentration fluids. This makes it easier to fabricate devices for use with tap or seawater, as opposed to only purified water."

Water, water everywhere

Because evaporation can occur continuously over a wide range of temperatures and humidities -- and even at night -- there are many exciting potential applications for more efficient HV devices. The researchers hope to explore this potential with the support of a Swiss National Science Foundation Starting Grant, which aims to develop "a completely new paradigm for waste-heat recovery and renewable energy generation at large and small scales," including a prototype module under real-world conditions on Lake Geneva.

And because HV devices could theoretically be operated anywhere there is liquid -- or even moisture, like sweat -- they could also be used to power sensors for connected devices, from smart TVs to health and fitness wearables. With the LNET's expertise in light energy harvesting and storage systems, Tagliabue is also keen to see how light and photothermal effects could be used to control surface charges and evaporation rates in HV systems.

Finally, the researchers also see important synergies between HV systems and clean water generation.

Read more at Science Daily

Lack of focus doesn't equal lack of intelligence -- it's proof of an intricate brain

Imagine a busy restaurant: dishes clattering, music playing, people talking loudly over one another. It's a wonder that anyone in that kind of environment can focus enough to have a conversation. A new study by researchers at Brown University's Carney Institute for Brain Science provides some of the most detailed insights yet into the brain mechanisms that help people pay attention amid such distraction, as well as what's happening when they can't focus.

In an earlier psychology study, the researchers established that people can separately control how much they focus (by enhancing relevant information) and how much they filter (by tuning out distraction). The team's new research, published in Nature Human Behaviour, unveils the process by which the brain coordinates these two critical functions.

Lead author and neuroscientist Harrison Ritz likened the process to how humans coordinate muscle activity to perform complex physical tasks.

"In the same way that we bring together more than 50 muscles to perform a physical task like using chopsticks, our study found that we can coordinate multiple different forms of attention in order to perform acts of mental dexterity," said Ritz, who conducted the study while a Ph.D. student at Brown.

The findings provide insight into how people use their powers of attention as well as what makes attention fail, said co-author Amitai Shenhav, an associate professor in Brown's Department of Cognitive, Linguistic and Psychological Sciences.

"These findings can help us to understand how we as humans are able to exhibit such tremendous cognitive flexibility -- to pay attention to what we want, when we want to," Shenhav said. "They can also help us better understand limitations on that flexibility, and how limitations might manifest in certain attention-related disorders such as ADHD."

The focus-and-filter test

To conduct the study, Ritz administered a cognitive task to participants while measuring their brain activity in an fMRI machine. Participants saw a swirling mass of green and purple dots moving left and right, like a swarm of fireflies. The tasks, which varied in difficulty, involved distinguishing between the movement and colors of the dots. For example, participants in one exercise were instructed to select which color was in the majority for the rapidly moving dots when the ratio of purple to green was almost 50/50.

Ritz and Shenhav then analyzed participants' brain activity in response to the tasks.

Ritz, who is now a postdoctoral fellow at the Princeton Neuroscience Institute, explained how the two brain regions work together during these types of tasks.

"You can think about the intraparietal sulcus as having two knobs on a radio dial: one that adjusts focusing and one that adjusts filtering," Ritz said. "In our study, the anterior cingulate cortex tracks what's going on with the dots. When the anterior cingulate cortex recognizes that, for instance, motion is making the task more difficult, it directs the intraparietal sulcus to adjust the filtering knob in order to reduce the sensitivity to motion.

"In the scenario where the purple and green dots are almost at 50/50, it might also direct the intraparietal sulcus to adjust the focusing knob in order to increase the sensitivity to color. Now the relevant brain regions are less sensitive to motion and more sensitive to the appropriate color, so the participant is better able to make the correct selection."

Ritz's description highlights the importance of mental coordination over mental capacity, revealing an often-expressed idea to be a misconception.

"When people talk about the limitations of the mind, they often put it in terms of, 'humans just don't have the mental capacity' or 'humans lack computing power,'" Ritz said. "These findings support a different perspective on why we're not focused all the time. It's not that our brains are too simple, but instead that our brains are really complicated, and it's the coordination that's hard."

Ongoing research projects are building on these study findings. A partnership with physician-scientists at Brown University and Baylor College of Medicine is investigating focus-and-filter strategies in patients with treatment-resistant depression. Researchers in Shenhav's lab are looking at the way motivation drives attention; one study co-led by Ritz and Brown Ph.D. student Xiamin Leng examines the impact of financial rewards and penalties on focus-and-filter strategies.

Read more at Science Daily