Oct 22, 2021

Astronomers provide 'field guide' to exoplanets known as hot Jupiters

Hot Jupiters -- giant gas planets that race around their host stars in extremely tight orbits -- have become a little bit less mysterious thanks to a new study combining theoretical modeling with observations by the Hubble Space Telescope.

While previous studies mostly focused on individual worlds classified as "hot Jupiters" due to their superficial similarity to the gas giant in our own solar system, the new study is the first to look at a broader population of the strange worlds. Published in Nature Astronomy, the study, led by a University of Arizona researcher, provides astronomers with an unprecedented "field guide" to hot Jupiters and offers insight into planet formation in general.

Although astronomers think that only about 1 in 10 stars hosts an exoplanet in the hot Jupiter class, the peculiar planets make up a sizeable portion of exoplanets discovered to date because they are bigger and brighter than other types of exoplanets, such as rocky, more Earthlike planets or smaller, cooler gas planets. Ranging from about one-third of Jupiter's mass to 10 Jupiter masses, all hot Jupiters orbit their host star at an extremely close range, usually much closer than Mercury, the innermost planet in our solar system, is to the sun. A "year" on a typical hot Jupiter lasts hours, or at most a few days. For comparison, Mercury takes almost three months to complete a trip around the sun.
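
For a sense of the arithmetic behind those short "years," Kepler's third law ties orbital period to orbital distance. The following minimal Python sketch uses illustrative values only -- a Sun-like host star and a 0.02 AU orbit are assumptions typical of the hot-Jupiter regime, not figures from the study:

```python
import math

# Kepler's third law for a circular orbit: P^2 = 4*pi^2 * a^3 / (G * M)
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30  # solar mass, kg
AU = 1.496e11     # astronomical unit, m

def orbital_period_days(a_au: float, m_star: float = M_SUN) -> float:
    """Orbital period in days for a circular orbit of radius a_au around a star of mass m_star."""
    a = a_au * AU
    period_s = 2 * math.pi * math.sqrt(a**3 / (G * m_star))
    return period_s / 86400

print(f"Hot Jupiter at 0.02 AU: {orbital_period_days(0.02):.1f} days")  # ~1 day
print(f"Mercury at 0.39 AU:     {orbital_period_days(0.39):.0f} days")  # ~88 days
```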

Because of their close orbits, most, if not all, hot Jupiters are thought to be locked in a high-speed embrace with their host stars, with one side eternally exposed to the star's radiation and the other shrouded in perpetual darkness. The surface of a typical hot Jupiter can get as hot as almost 5,000 degrees Fahrenheit, with "cooler" specimens reaching 1,400 degrees -- hot enough to melt aluminum.

The research, which was led by Megan Mansfield, a NASA Sagan Fellow at the University of Arizona's Steward Observatory, used observations made with the Hubble Space Telescope that allowed the team to directly measure emission spectra from hot Jupiters, despite the fact that Hubble can't image any of these planets directly.

"These systems, these stars and their hot Jupiters, are too far away to resolve the individual star and its planet," Mansfield said. "All we can see is a point -- the combined light source of the two."

Mansfield and her team used a method known as secondary eclipse observation to tease out information from the observations that allowed them to peer deep into the planets' atmospheres and gain insights into their structure and chemical makeup. The technique involves repeated observations of the same system, catching the planet at various places in its orbit, including when it dips behind the star.

"We basically measure the combined light coming from the star and its planet and compare that measurement with what we see when the planet is hidden behind its star," Mansfield said. "This allows us to subtract the star's contribution and isolate the light emitted by the planet, even though we can't see it directly."

The eclipse data provided the researchers with insight into the thermal structure of the atmospheres of hot Jupiters and allowed them to construct individual profiles of temperatures and pressures for each one. The team then analyzed the near-infrared light -- a band of wavelengths just beyond the range humans can see -- coming from each hot Jupiter system, looking for so-called absorption features. Because each molecule or atom has its own specific absorption profile, like a fingerprint, looking at different wavelengths allows researchers to obtain information about the chemical makeup of hot Jupiters. For example, if water is present in the planet's atmosphere, it will absorb light at 1.4 microns, which falls into the range of wavelengths that Hubble can see very well.

"In a way, we use molecules to scan through the atmospheres on these hot Jupiters," Mansfield said. "We can use the spectrum we observe to get information on what the atmosphere is made of, and we can also get information on what the structure of the atmosphere looks like."

The team went a step further by quantifying the observational data and comparing it to models of the physical processes believed to be at work in the atmospheres of hot Jupiters. The two sets matched very well, confirming that many predictions about the planets' nature based on theoretical work appear to be correct, according to Mansfield, who said the findings are "exciting because they were anything but guaranteed."

The results suggest that all hot Jupiters, not just the 19 included in the study, are likely to contain similar sets of molecules, like water and carbon monoxide, along with smaller amounts of other molecules. The differences among individual planets should mostly amount to varying relative amounts of these molecules. The findings also revealed that the observed water absorption features varied slightly from one hot Jupiter to the next.

"Taken together, our results tell us there is a good chance we have the big picture items figured out that are happening in the chemistry of these planets," Mansfield said. "At the same time, each planet has its own chemical makeup, and that also influences what we see in our observations."

According to the authors, the results can be used to guide expectations of what astronomers might be able to see when looking at a hot Jupiter that hasn't been studied before. The launch of NASA's new flagship telescope, the James Webb Space Telescope, slated for Dec. 18, has exoplanet hunters excited because Webb can see in a much broader range of infrared light and will allow a much more detailed look at exoplanets, including hot Jupiters.

Read more at Science Daily

Climate change affects animal behavior

Humans are shaping environments at an accelerating rate. Indeed, one of the most important current topics of research is the capacity of animals to adapt to human-induced environmental change and how that change affects the expression of animal traits.

With the help of data collected on a little over one hundred animal species, researchers from the University of Helsinki and Lancaster University studied which behavioural traits are the most sensitive to human-induced environmental change, and to which human-induced changes in the environment animals respond the most sensitively. From the largest to the smallest, the groups of organisms included in the study were fish, birds, crustaceans and mammals. In addition, insects, amphibians and lizards were represented.

All the behavioural traits included in the study -- aggression, activity, boldness, sociability and exploration of their environment -- changed markedly due to environmental change brought about by humans.

"The biggest change was seen in the animals' activity in exploring their environment. Animals have a strong response to all forms of environmental change, but climate change engendered the greatest change in animal behaviour," says Postdoctoral Researcher Petri Niemelä from the Faculty of Biological and Environmental Sciences, University of Helsinki.

In addition to climate change, the other forms of human-induced environmental change included in the modelling were changes in carbon dioxide concentration and nutrient levels, alien species and other biotic changes caused by humans, as well as direct human impact through, for example, urbanisation or other human disturbances.

Changes in activity or other behaviour are often an animal's first response to climate change.

"Behavioural change can serve as a buffer with which animals avoid the immediate negative effects of environmental change. For instance, such change can compensate for low reproductive success or increased mortality caused by environmental change. By changing their behaviour, animals can also gain more information on the altered environment."

Read more at Science Daily

Savannah chimpanzees, a model for the understanding of human evolution

To prosper, most great apes need lush forests in Africa (bonobos, chimpanzees, and gorillas) or Southeast Asia (orangutans), except for some groups of chimpanzees that live in savannahs, habitats characterised by high temperatures and low, strongly seasonal rainfall.

Adriana Hernández, Serra Hunter professor at the Faculty of Psychology of the University of Barcelona, co-led the study conducted by an international team of primatologists who reviewed the existing research on the behaviour and ecology of savannah chimpanzees to understand how these apes adapt to extreme conditions.

According to the researchers, the environmental conditions of these places lead to specific behaviours and physiological responses in these chimpanzees -- such as resting in caves or digging for water -- which are not observed in their counterparts living in more forested areas, where they do not face such extreme environmental conditions.

"The study on savannah chimpanzees and what we call the landscape savannah effect have important implications for reconstructing the behaviour of the first hominis who lived in similar habitats and therefore, it helps us to better understand our own evolution," notes Adriana Hernández, who co-led the study, published in the journal Evolutionary Anthropology, together with Stacy Lindshield, from the University of Purdue (United States).

Our closest living evolutionary relatives

Chimpanzees (Pan troglodytes) are our closest living relatives, since they share 98.7% of their DNA with humans and have a common ancestor that lived between 4.5 and 6 million years ago. Despite this proximity, they lack some of the biological and cultural traits that humans possess to adapt to extreme heat, such as numerous eccrine sweat glands, relative lack of hair, or the ability to create artefacts such as water containers and sun hats to mitigate dehydration and sunstroke.

The chimpanzees that live in the savannah are taxonomically indistinguishable from other chimpanzees. For this reason, comparisons of behaviour, morphology and ecology with chimpanzees that live in more forested landscapes provide key information for hypothesising how early humans may have adapted millions of years ago as African forests receded and gave way to savannahs.

"We know that early hominins adapted to savannah environments similar to those occupied by chimpanzees today, and researchers think that savannah conditions caused adaptations in our ancestors, such as brain expansion or tolerance to high temperatures," says Adriana Hernández, who is also the co-director of research at the Jane Goodall Institute Spain. "Therefore -she continues-, understanding how our genetically closest living relatives adapt to a dry, hot, seasonal and open environment, very similar to those where early hominins lived, helps us to model how our ancestors might have adapted and how the features that define us as humans might have emerged."

Strategies to adapt to high temperatures

Among the different characteristics of savannah chimpanzees described in the study, their strategies to deal with high temperatures stand out. "Understanding how they deal with heat can help us better understand what strategies human ancestors may have used to cope with high temperatures. Some strategies are probably the same for chimpanzees and hominins, such as the use of caves or going into water pools to cool down," notes the researcher. Another example the researcher highlights is the ways in which these chimpanzees try to hydrate themselves during the advanced dry season, such as digging for water when this resource is reduced to just a few spots in the landscape. "Early hominins also had to deal with low water availability during part of the year," Hernández adds.

Groups distributed over larger areas

The study also confirmed that chimpanzee social groups in the savannah are distributed over unusually large areas of around 100 km², while chimpanzees living in more forested areas have ranges between 3 and 30 km², approximately. "However, although group sizes are similar in different habitats, chimpanzees in the savannah have a much lower population density, which could be explained by the low availability of food in this habitat."

Although we now know much more about savannah chimpanzees than ever before, their exact numbers remain unknown; according to the researchers, "there are fewer than those living in the forest areas, as the total area they occupy is much smaller." In addition, because they have a lower population density, there are far fewer individuals in areas of the same size than in the forest. "It should be noted that there are far fewer sites where savannah chimpanzees have been studied, as there are only two study sites where savannah chimpanzees are habituated to humans and their behaviour can be observed directly. In contrast, there are many study sites where chimpanzees are fully habituated to researchers in the forest, a habitat where these primates have been studied for decades," explains Adriana Hernández.

Keys to understanding adaptation to climate change

Another important contribution of this study is that it helps to understand the potential effects of climate change on the species. "The adaptation of savannah chimpanzees to extreme climates can help us model how chimpanzees that currently inhabit forests might adapt to changes that climate studies project will make their environments drier and warmer. This is important, since the species is categorized as Endangered and the West African subspecies (Pan troglodytes verus) is Critically Endangered," says the expert.

Read more at Science Daily

Scientists look beyond the individual brain to study the collective mind

In a new paper, scientists suggest that efforts to understand human cognition should expand beyond the study of individual brains. They call on neuroscientists to incorporate evidence from social science disciplines to better understand how people think.

"Accumulating evidence indicates that memory, reasoning, decision-making and other higher-level functions take place across people," the researchers wrote in a review in the journal Frontiers in Systems Neuroscience. "Cognition extends into the physical world and the brains of others."

The co-authors -- neuroscientist Aron Barbey, a professor of psychology at the University of Illinois Urbana-Champaign; Richard Patterson, a professor emeritus of philosophy at Emory University; and Steven Sloman, a professor of cognitive, linguistic and psychological sciences at Brown University -- wanted to address the limitations of studying brains in isolation, out of the context in which they operate and stripped of the resources they rely on for optimal function.

"In cognitive neuroscience, the standard approach is essentially to assume that knowledge is represented in the individual brain and transferred between individuals," Barbey said. "But there are, we think, important cases where those assumptions begin to break down."

Take, for instance, the fact that people often "outsource" the task of understanding or coming to conclusions about complex subject matter, using other people's expertise to guide their own decision-making.

"Most people will agree that smoking contributes to the incidence of lung cancer -- without necessarily understanding precisely how that occurs," Barbey said. "And when doctors diagnose and treat disease, they don't transfer all of their knowledge to their patients. Instead, patients rely on doctors to help them decide the best course of action.

"Without relying on experts in our community, our beliefs would become untethered from the social conventions and scientific evidence that are necessary to support them," he said. "It would become unclear, for example, whether 'smoking causes lung cancer,' bringing into question the truth of our beliefs, the motivation for our actions."

To understand the role that knowledge serves in human intelligence, the researchers wrote that it is necessary to look beyond the individual and to study the community.

"Cognition is, to a large extent, a group activity, not an individual one," Sloman said. "People depend on others for their reasoning, judgment and decision-making. Cognitive neuroscience is not able to shed light on this aspect of cognitive processing."

The limitations of individual knowledge and human dependence on others for understanding are the themes of "The Knowledge Illusion: Why We Never Think Alone," a book Sloman wrote with Phil Fernbach, a cognitive scientist and professor of marketing at the University of Colorado.

"The challenge for cognitive neuroscience becomes how to capture knowledge that does not reside in the individual brain but is outsourced to the community," Barbey said.

Neuroscientific methods such as functional MRI were designed to track activity in one brain at a time and have limited capacity for capturing the dynamics that occur when individuals interact in large communities, he said.

Some neuroscientists are trying to overcome this limitation. In a recent study, researchers placed two people face-to-face in a scanner and tracked their brain activity and eye movements while they interacted. Other teams use a technique called "hyperscanning," which allows the simultaneous recording of brain activity in people who are physically distant from one another but interacting online.

Such efforts have found evidence suggesting that the same brain regions are activated in people who are effectively communicating with one another or cooperating on a task, Barbey said. These studies are also showing how brains operate differently from one another, depending on the type of interaction and the context.

Several fields of research are ahead of neuroscience in understanding and embracing the collective, collaborative nature of knowledge, Patterson said. For example, "social epistemology" recognizes that knowledge is a social phenomenon that depends on community norms, a shared language and a reliable method for testing the trustworthiness of potential sources.

"Philosophers studying natural language also illustrate how knowledge relies on the community," Patterson said. "For example, according to 'externalism,' the meaning of words depends on how they are used and represented within a social context. Thus, the meaning of the word and its correct use depends on collective knowledge that extends beyond the individual."

To address these shortfalls, neuroscientists can look to other social science fields, Barbey said.

Read more at Science Daily

Researchers map neurons in the brain involved with social interactions with others in groups

Meaningful social interactions are critical to an individual's well-being, and such interactions rely on people's behaviors towards one another. In research published in Science, investigators at Massachusetts General Hospital (MGH) have mapped the neurons in the brain that allow a monkey to process and remember the interactions and behaviors of another monkey to influence the animal's own actions. The findings might be used to develop treatment strategies for people with neuropsychiatric conditions.

The study had three Rhesus monkeys sit around a rotary table and take turns to offer an apple slice to one of the other two monkeys. At the same time, the researchers recorded the activity of individual neurons in a brain area known to play a role in social cognition, called the dorsomedial prefrontal cortex (dmPFC).

During these interactions, the monkeys reciprocated past offers of an apple slice and retaliated when they did not receive a slice from another. The researchers' recordings identified distinct neurons in the dmPFC that responded to the actions of other monkeys in the group. Certain neurons were activated by a particular action and outcome of specific individuals within the group (for example, a neighboring monkey offering an apple slice and the recipient receiving the reward). Many of the neurons encoded information not only about the actions and outcomes of specific individuals but also about their past behavior. This information about past interactions with group members influenced an animal's upcoming decisions to reciprocate or retaliate, and investigators could use the neuronal information to predict which monkey would receive an apple slice from a particular monkey even before it was offered.
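
To make "using the neuronal information to predict" concrete, here is a purely illustrative decoding sketch on synthetic firing rates -- not the paper's analysis pipeline or data. A linear classifier is trained to predict an upcoming binary choice (reciprocate vs. retaliate) from trial-by-trial activity of a recorded population:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic data: 200 trials, 50 neurons; a handful of neurons carry choice information.
rng = np.random.default_rng(0)
n_trials, n_neurons = 200, 50
choice = rng.integers(0, 2, n_trials)                  # 1 = reciprocate, 0 = retaliate
rates = rng.normal(10.0, 2.0, (n_trials, n_neurons))   # firing rates (Hz)
rates[:, :5] += 1.5 * choice[:, None]                  # choice-selective neurons

# Cross-validated decoding accuracy well above 0.5 means the population
# activity predicts the upcoming choice before it is acted out.
decoder = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(decoder, rates, choice, cv=5).mean()
print(f"Cross-validated decoding accuracy: {accuracy:.2f}")
```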

"This finding suggested that the dmPFC plays a role in strategic decisions. To test this idea, we disrupted the normal activity in this area and found that the animals were less likely to reciprocate," says lead author Raymundo Báez-Mendoza, PhD, an investigator in the Department of Neurosurgery at MGH.

The results suggest that the dmPFC plays an important role in mapping out our actions and outcomes as well as the actions of others. "In neuropsychiatric conditions in which this ability is compromised, treatments aimed at improving the functioning of this brain area, either directly or indirectly, might improve peoples' lives," says senior author Ziv Williams, MD.

Read more at Science Daily

Oct 21, 2021

New galaxy images reveal a fitful start to the Universe

New images have revealed detailed clues about how the first stars and structures were formed in the Universe and suggest that galaxy formation got off to a fitful start.

An international team of astronomers from the University of Nottingham and the Centro de Astrobiología (CAB, CSIC-INTA) used data from the Hubble Space Telescope (HST) and the Gran Telescopio Canarias (GTC), the so-called Frontier Fields, to locate and study some of the smallest, faintest galaxies in the nearby universe. This revealed that galaxy formation was likely fitful. The first results have just been published in the journal Monthly Notices of the Royal Astronomical Society (MNRAS).

One of the most interesting questions that astronomers have been trying to answer for decades is how and when the first galaxies formed. Concerning the how, one possibility is that the formation of the first stars within galaxies started at a steady pace, slowly building up a more and more massive system. Another possibility is that the formation was more violent and discontinuous, with intense, but short lived bursts of star formation triggered by events such as mergers and enhanced gas accretion.

"Galaxy formation can be compared to a car," explains Pablo G. Pérez-González, one of the co-authors of the paper, affiliated to the Centro de Astrobiología (CAB/CSIC-INTA) in Spain, and principal investigator of the international collaboration behind this study. "The first galaxies might have had a 'diesel' star-forming engine, slowly but continuously adding up new stars, without much acceleration and gently turning gas into relatively small stars for long periods of time. Or the formation could have been jerky, with bursts of star formation producing incredibly large stars that disrupt the galaxy and make it cease its activity for a while or even forever. Each scenario is linked to different processes, such as galaxy mergers or the influence of supermassive black holes, and they have an effect on when and how the carbon or oxygen, that are essential for our life, formed."

Using the gravitational lensing power of some of the Universe's most massive galaxy clusters, combined with the exceptional GTC data from a project entitled the Survey for high-z Red and Dead Sources (SHARDS), the astronomers searched for nearby analogs of the very first galaxies formed in the Universe, so that they could be studied in much more detail.

Dr Alex Griffiths from the University of Nottingham, one of the lead UK researchers on the study, explains: "Until we have the new James Webb Space Telescope, we cannot observe the first galaxies ever formed -- they are just too faint. So we looked for similar beasts in the nearby Universe and dissected them with the most powerful telescopes we currently have."

The researchers combined the power of the most advanced telescopes, such as HST and GTC, with the aid of "natural telescopes." Professor Chris Conselice, from the University of Manchester, a co-author on the study, said: "Some galaxies live in large groups, what we call clusters, which contain huge amounts of mass in the form of stars, but also gas and dark matter. Their mass is so large that they bend space-time and act as natural telescopes. We call them gravitational lenses, and they allow us to see faint and distant galaxies with enhanced brightness and at a higher spatial resolution."

Observations of some of these massive clusters acting as gravitational telescopes are the basis of the Frontier Fields survey. The study showed that galaxy formation was likely stop-start, with bursts of activity followed by lulls. Dr Griffiths from the University of Nottingham said: "Our main result is that the start of galaxy formation is fitful, like a jerky car engine, with periods of enhanced star formation followed by sleepy intervals. It is unlikely that galaxy mergers played a substantial role in triggering these bursts of star formation; they are more likely due to alternative causes that enhance gas accretion, and we need to search for those alternatives.

"We were able to find these objects due to the high quality SHARDS data coupled with imaging data from the Hubble Space Telescope to detect hot gas heated by newly formed stars in very small galaxies. This hot gas emits in certain wavelengths, what we call emission lines, just as a neon light. Analysing these emission lines can provide an insight into the formation and evolution of a galaxy."

Read more at Science Daily

Some of the world’s oldest rubies linked to early life

While analyzing some of the world's oldest coloured gemstones, researchers from the University of Waterloo discovered carbon residue that was once ancient life, encased in a 2.5 billion-year-old ruby.

The research team, led by Chris Yakymchuk, professor of Earth and Environmental Sciences at Waterloo, set out to study the geology of rubies to better understand the conditions necessary for ruby formation. During this research in Greenland, which contains the oldest known deposits of rubies in the world, the team found a ruby sample that contained graphite, a mineral made of pure carbon. Analysis of this carbon indicates that it is a remnant of early life.

"The graphite inside this ruby is really unique. It's the first time we've seen evidence of ancient life in ruby-bearing rocks," says Yakymchuk. "The presence of graphite also gives us more clues to determine how rubies formed at this location, something that is impossible to do directly based on a ruby's colour and chemical composition."

The presence of the graphite allowed the researchers to analyze a property called isotopic composition of the carbon atoms, which measures the relative amounts of different carbon atoms. More than 98 per cent of all carbon atoms have a mass of 12 atomic mass units, but a few carbon atoms are heavier, with a mass of 13 or 14 atomic mass units.

"Living matter preferentially consists of the lighter carbon atoms because they take less energy to incorporate into cells," said Yakymchuk. "Based on the increased amount of carbon-12 in this graphite, we concluded that the carbon atoms were once ancient life, most likely dead microorganisms such as cyanobacteria."

The graphite is found in rocks older than 2.5 billion years, from a time when oxygen was not abundant in the atmosphere and life existed only as microorganisms and algae films.

During this study, Yakymchuk's team discovered that this graphite not only links the gemstone to ancient life but was also likely necessary for this ruby to exist at all. The graphite changed the chemistry of the surrounding rocks to create favourable conditions for ruby growth. Without it, the team's models showed that it would not have been possible to form rubies in this location.

From Science Daily

Europeans in the Americas 1000 years ago

Columbus was not the first European to reach the Americas. The Vikings got there centuries beforehand, although exactly when has remained unclear. Here, an international team of scientists show that Europeans were already active in the Americas in 1021 AD.

The Vikings sailed great distances in their iconic longships. To the west, they established settlements in Iceland, Greenland and eventually a base at L'Anse aux Meadows, Newfoundland, Canada. However, it has remained unclear when this first transatlantic activity took place. Here, scientists show that Europeans were present in the Americas in 1021 AD -- precisely 1000 years ago this year. This date also marks the earliest known point by which the Atlantic had been crossed, and migration by humankind had finally encircled the entire planet.

A Solar Storm Solution

In this study, the chopping of wood by Vikings at L'Anse aux Meadows was dated to exactly the year 1021 AD. The three pieces of wood studied, from three different trees, all came from contexts archaeologically attributable to the Vikings. Each one also displayed clear evidence of cutting and slicing by blades made of metal -- a material not produced by the indigenous population. The exact year could be determined because a massive solar storm occurred in 992 AD that produced a distinct radiocarbon signal in tree rings from the following year.

"The distinct uplift in radiocarbon production that occurred between 992 and 993 AD has been detected in tree-ring archives from all over the world" says Associate Professor Michael Dee (University of Groningen), director of the research. Each of the three wooden objects exhibited this signal 29 growth rings (years) before the bark edge. "Finding the signal from the solar storm 29 growth rings in from the bark allowed us to conclude that the cutting activity took place in the year 1021 AD" says Dr Margot Kuitems (University of Groningen), first author of the paper.

How Far, How Often?

The number of Viking expeditions to the Americas, and the duration of their stay across the Atlantic, remain unknown. All current data suggest that the whole endeavour was somewhat short-lived, and the cultural and ecological legacy of this first European activity in the Americas is likely to have been small. Nonetheless, botanical evidence from L'Anse aux Meadows has confirmed that the Vikings did explore lands further south than Newfoundland.

The Sagas

1021 AD is the earliest year in which European presence in the Americas can be scientifically proven. Previous dates for the Viking presence in the Americas have relied heavily on the Icelandic Sagas. However, these began as oral histories and were only written down centuries after the events they describe. Whilst contradictory and at times fantastical, the Sagas also suggest encounters occurred, both violent and amiable, between the Europeans and the indigenous people of the region. However, little archaeological evidence has been uncovered to support such exchanges. Other medieval accounts also exist, which imply prominent figures on the European mainland were made aware the Vikings had made landfall across the Atlantic.

Read more at Science Daily

Brain activation in sleeping toddlers shows memory for words

Very young children learn words at a tremendous rate. Now researchers at the Center for Mind and Brain at the University of California, Davis, have for the first time seen how specific brain regions activate as two-year-olds remember newly learned words -- while the children were sleeping. The work is published Oct. 19 in Current Biology.

"We can now leverage sleep to look at basic mechanisms of learning new words," said Simona Ghetti, professor at the Center for Mind and Brain and UC Davis Department of Psychology.

At two to three years old, children enter a unique age in memory development, Ghetti said. But young children are challenging to study, and they especially dislike being in a functional MRI scanner.

"The scariest things to small children are darkness and loud noises, and that's what it's like during an MRI scan," Ghetti said.

Ghetti's team had previously found that if children fell asleep in a scanner while it wasn't working, they could later start the scan and see brain activation in response to songs the children had heard earlier.

In the new study, they looked at how toddlers retained memories of words.

Graduate student Elliott Johnson and Ghetti created a series of made-up, but realistic sounding words as names for a series of objects and puppets. In the first session, two-year-olds were introduced to two objects and two puppets, then tested on their memory of the names after a few minutes. A week later, they returned and were tested on whether they remembered the names of the objects and puppets. Soon after the second test, they slept overnight in an MRI scanner. The researchers played back the words the children had learned, as well as other words, as they slept.

Activation of the hippocampus in learning

The researchers found activation of the hippocampus and the anterior medial temporal lobe when the sleeping children were played words they had previously learned. This activation correlated with how well they had performed when they initially learned the words a week earlier.

"This suggests that the hippocampus is particularly important for laying down the initial memory for words," Ghetti said. "This compares quite well with findings from older children and adults, where the hippocampus is associated with learning and with recalling recent memories" Johnson added.

Although young children are rapidly forming memories of new words, they are also losing a lot of memories. When we form a memory, it includes the context: where, when, what else was going on. But if we just learned the name of an object, we don't need to remember the context to use the word again. That extra detail can go.

Read more at Science Daily

Hit the sleep ‘sweet spot’ to keep brain sharp

Like so many other good things in life, sleep is best in moderation. A multiyear study of older adults found that both short and long sleepers experienced greater cognitive decline than people who slept a moderate amount, even when the effects of early Alzheimer's disease were taken into account. The study was led by researchers at Washington University School of Medicine in St. Louis.

Poor sleep and Alzheimer's disease are both associated with cognitive decline, and separating out the effects of each has proven challenging. By tracking cognitive function in a large group of older adults over several years and analyzing it against levels of Alzheimer's-related proteins and measures of brain activity during sleep, the researchers generated crucial data that help untangle the complicated relationship among sleep, Alzheimer's and cognitive function. The findings could aid efforts to help keep people's minds sharp as they age.

The findings are published Oct. 20 in the journal Brain.

"It's been challenging to determine how sleep and different stages of Alzheimer's disease are related, but that's what you need to know to start designing interventions," said first author Brendan Lucey, MD, an associate professor of neurology and director of the Washington University Sleep Medicine Center. "Our study suggests that there is a middle range, or 'sweet spot,' for total sleep time where cognitive performance was stable over time. Short and long sleep times were associated with worse cognitive performance, perhaps due to insufficient sleep or poor sleep quality. An unanswered question is if we can intervene to improve sleep, such as increasing sleep time for short sleepers by an hour or so, would that have a positive effect on their cognitive performance so they no longer decline? We need more longitudinal data to answer this question."

Alzheimer's is the main cause of cognitive decline in older adults, contributing to about 70% of dementia cases. Poor sleep is a common symptom of the disease and a driving force that can accelerate the disease's progression. Studies have shown that self-reported short and long sleepers are both more likely to perform poorly on cognitive tests, but such sleep studies typically do not include assessments of Alzheimer's disease.

To tease apart the separate effects of sleep and Alzheimer's disease on cognition, Lucey and colleagues turned to volunteers who participate in Alzheimer's studies through the university's Charles F. and Joanne Knight Alzheimer Disease Research Center. Such volunteers undergo annual clinical and cognitive assessments, and provide a blood sample to be tested for the high-risk Alzheimer's genetic variant APOE4. For this study, the participants also provided samples of cerebrospinal fluid to measure levels of Alzheimer's proteins, and each slept with a tiny electroencephalogram (EEG) monitor strapped to their foreheads for four to six nights to measure brain activity during sleep.

In total, the researchers obtained sleep and Alzheimer's data on 100 participants whose cognitive function had been monitored for an average of 4 1/2 years. Most (88) had no cognitive impairments, 11 were very mildly impaired, and one had mild cognitive impairment. The average age was 75 at the time of the sleep study.

The researchers found a U-shaped relationship between sleep and cognitive decline. Overall, cognitive scores declined for the groups that slept less than 4.5 or more than 6.5 hours per night -- as measured by EEG -- while scores stayed stable for those in the middle of the range. EEG tends to yield estimates of sleep time that are about an hour shorter than self-reported sleep time, so the findings correspond to 5.5 to 7.5 hours of self-reported sleep, Lucey said.

The U-shaped relationship held true for measures of specific sleep phases, including rapid-eye movement (REM), or dreaming, sleep; and non-REM sleep. Moreover, the relationship held even after adjusting for factors that can affect both sleep and cognition, such as age, sex, levels of Alzheimer's proteins, and the presence of APOE4.
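
As a sketch of how a U-shaped relationship like this can be modeled (synthetic data and a deliberately simple regression, not the study's actual analysis; covariates such as age, sex and Alzheimer's markers would enter as extra columns), one can regress cognitive change on sleep time plus its square and read the "sweet spot" off the parabola's vertex:

```python
import numpy as np
import statsmodels.api as sm

# Synthetic illustration: cognitive slope peaks at 5.5 h of EEG-measured sleep.
rng = np.random.default_rng(1)
sleep_h = rng.uniform(3.5, 8.0, 100)                              # total sleep time (hours)
decline = -0.1 * (sleep_h - 5.5) ** 2 + rng.normal(0, 0.05, 100)  # cognitive change (a.u.)

# Quadratic regression: decline ~ b0 + b1*sleep + b2*sleep^2
X = sm.add_constant(np.column_stack([sleep_h, sleep_h ** 2]))
fit = sm.OLS(decline, X).fit()
b0, b1, b2 = fit.params

# A negative b2 gives an inverted U; its vertex is the estimated 'sweet spot'.
print(f"Estimated sweet spot: {-b1 / (2 * b2):.1f} h")
```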

"It was particularly interesting to see that not only those with short amounts of sleep but also those with long amounts of sleep had more cognitive decline," said co-senior author David Holtzman, MD, a professor of neurology. "It suggests that sleep quality may be key, as opposed to simply total sleep."

Each person's sleep needs are unique, and people who wake up feeling rested on short or long sleep schedules should not feel compelled to change their habits, Lucey said. But those who are not sleeping well should be aware that sleep problems often can be treated.

Read more at Science Daily

Oct 20, 2021

Amount of information in visible universe quantified

Researchers have long suspected a connection between information and the physical universe, with various paradoxes and thought experiments used to explore how or why information could be encoded in physical matter. The digital age propelled this field of study, suggesting that solving these research questions could have tangible applications across multiple branches of physics and computing.

In AIP Advances, from AIP Publishing, a University of Portsmouth researcher attempts to shed light on exactly how much of this information is out there and presents a numerical estimate for the amount of encoded information in all the visible matter in the universe -- approximately 6 × 10^80 bits. While not the first estimate of its kind, this study's approach relies on information theory.

"The information capacity of the universe has been a topic of debate for over half a century," said author Melvin M. Vopson. "There have been various attempts to estimate the information content of the universe, but in this paper, I describe a unique approach that additionally postulates how much information could be compressed into a single elementary particle."

To produce the estimate, the author used Shannon's information theory to quantify the amount of information encoded in each elementary particle in the observable universe as 1.509 bits of information. Mathematician Claude Shannon, called the Father of the Digital Age because of his work in information theory, defined this method for quantifying information in 1948.
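
The two headline figures combine straightforwardly. The back-of-the-envelope sketch below uses only the numbers quoted above and makes the implied particle count explicit; the derivation of the 1.509 bits itself is in the paper and is not reproduced here:

```python
import math

BITS_PER_PARTICLE = 1.509  # the study's Shannon-entropy estimate per elementary particle
TOTAL_BITS = 6e80          # the study's headline total for visible matter

# Implied number of particles in the visible universe under those two figures.
implied_particles = TOTAL_BITS / BITS_PER_PARTICLE
print(f"Implied particle count: {implied_particles:.1e}")  # ~4.0e+80

# For intuition, Shannon entropy H = -sum(p * log2(p)) of a uniform three-state
# distribution is log2(3) ~= 1.585 bits -- close to, though not equal to, 1.509.
print(f"log2(3) = {math.log2(3):.3f} bits")
```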

"It is the first time this approach has been taken in measuring the information content of the universe, and it provides a clear numerical prediction," said Vopson. "Even if not entirely accurate, the numerical prediction offers a potential avenue toward experimental testing."

Recent research sheds light on the ways information and physics interact, such as how information exits a black hole. However, the precise physical significance of information remains elusive; multiple radical theories contend that information is physical and can be measured.

In previous studies, Vopson postulated that information is a fifth state of matter alongside solid, liquid, gas, and plasma, and that elusive dark matter could be information. His study also included the derivation of a formula that accurately reproduces the well-known Eddington number, the total number of protons in the observable universe.

Read more at Science Daily

How quickly does the climate recover?

Climate change is causing temperatures to rise and is also increasing the likelihood of storms, heavy rain, and flooding -- the recent flood disaster in the Ahr valley in Germany is just one such example. What we need to ask ourselves in this connection is how quickly the climate can recover from the warming caused by an increase in carbon dioxide in the atmosphere.

Professor Philip Pogge von Strandmann of Johannes Gutenberg University Mainz (JGU), Germany, set out to investigate this question by considering the significant rise in global temperatures of five to eight degrees Celsius that took place 56 million years ago -- the fastest natural period of global warming to have impacted our climate, known as the Paleocene-Eocene Thermal Maximum (PETM). It was most likely triggered by a volcanic eruption that released huge amounts of carbon dioxide (CO2) into the atmosphere. We know that the higher the temperature, the faster rock weathers; in addition, if there is a lot of CO2 in the atmosphere, some of it reacts with water to form carbonic acid -- the very acid that promotes and accelerates the weathering process. Through weathering, this atmospheric carbon eventually finds its way into the seas via rivers, where it is bound as carbonate, forming a persistent ocean-based reservoir of carbon dioxide. "Our theory was that if rock weathers faster due to the increased temperatures, it also helps convert a lot of carbon dioxide from the atmosphere into insoluble carbonate in seawater -- meaning that, over the long term, CO2 levels would end up falling again and the climate would ultimately recover," explained Pogge von Strandmann. This effect could have helped to keep the Earth's climate fairly stable over billions of years, and it could even have prevented the total extinction of all life on the planet.
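
The chain described above can be summarised with the textbook silicate-weathering reactions -- illustrative of the general mechanism rather than equations taken from the paper. Dissolved CO2 forms carbonic acid, the acid attacks silicate rock, and the dissolved products precipitate as carbonate in the ocean:

```latex
\begin{aligned}
\mathrm{CO_2 + H_2O} &\longrightarrow \mathrm{H_2CO_3} \\
\mathrm{CaSiO_3 + 2\,H_2CO_3} &\longrightarrow \mathrm{Ca^{2+} + 2\,HCO_3^- + SiO_2 + H_2O} \\
\mathrm{Ca^{2+} + 2\,HCO_3^-} &\longrightarrow \mathrm{CaCO_3 + CO_2 + H_2O}
\end{aligned}
```

The net effect is one molecule of CO2 locked into solid carbonate for every silicate unit weathered -- the persistent ocean-based reservoir the paragraph describes.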

Weathering of rocks contributes to climate stabilization

In order to test this theory, Professor Philip Pogge von Strandmann and his team decided to analyze the weathering processes that occurred during the warming event 56 million years ago. Their findings indicate that the theory may well be correct. "Rock weathering during that time increased by 50 percent as a result of global warming; erosion -- the physical part of weathering -- actually tripled. Another consequence of the rise in temperature was that evaporation, rainfall, and storms also increased, which then led to even more erosion. As a result of this increased rock weathering, the climate stabilized, but it took between 20,000 and 50,000 years for this to happen," said Pogge von Strandmann, summarizing the team's findings.

But how did the researchers come to these conclusions? After all, these weathering processes took place 56 million years ago. The answer lies in the rocks themselves. When rocks dissolve, they release lithium -- the isotopes lithium-6 and lithium-7 to be precise -- which escapes into any surrounding water. The proportion of the isotopes lithium-6 and lithium-7 present in water is determined by the type of weathering, in other words, the amount of erosion produced by weathering. Clay, which is found at the bottom of the sea, mainly stores lithium-6, while lithium-7 remains in the water. The research team conducted two types of scientific investigation: They examined marine carbonates that were formed 56 million years ago -- a type of rock that absorbs chemical components from water. They also investigated clay minerals from Denmark and Svalbard, which also formed during this period, looking at the relative proportions of lithium isotopes in these two different kinds of minerals. The researchers were able to use the data obtained to draw conclusions about weathering and climate 56 million years ago. Their results have been published in the journal Science Advances.
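
For reference, the lithium isotope ratios mentioned above are conventionally reported in delta notation relative to the L-SVEC standard -- the standard geochemical convention, not a formula quoted from the paper. Because clays preferentially take up lithium-6, more clay formation drives the residual water to higher values:

```latex
\delta^{7}\mathrm{Li} =
\left( \frac{\left(^{7}\mathrm{Li}/^{6}\mathrm{Li}\right)_{\mathrm{sample}}}
            {\left(^{7}\mathrm{Li}/^{6}\mathrm{Li}\right)_{\mathrm{L\text{-}SVEC}}} - 1 \right)
\times 1000 \quad \text{(per mil, ‰)}
```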

Read more at Science Daily

More than 99.9% of studies agree: Humans caused climate change

More than 99.9% of peer-reviewed scientific papers agree that climate change is mainly caused by humans, according to a new survey of 88,125 climate-related studies.

The research updates a similar 2013 paper revealing that 97% of studies published between 1991 and 2012 supported the idea that human activities are altering Earth's climate. The current survey examines the literature published from 2012 to November 2020 to explore whether the consensus has changed.

"We are virtually certain that the consensus is well over 99% now and that it's pretty much case closed for any meaningful public conversation about the reality of human-caused climate change," said Mark Lynas, a visiting fellow at the Alliance for Science at Cornell University and the paper's first author.

"It's critical to acknowledge the principal role of greenhouse gas emissions so that we can rapidly mobilize new solutions, since we are already witnessing in real time the devastating impacts of climate related disasters on businesses, people and the economy," said Benjamin Houlton, Dean of the College of Agriculture and Life Sciences at Cornell and a co-author of the study, "Greater than 99% Consensus on Human Caused Climate Change in the Peer-Reviewed Scientific Literature," which published Oct. 19 in the journal Environmental Research Letters.

In spite of such results, public opinion polls as well as opinions of politicians and public representatives point to false beliefs and claims that a significant debate still exists among scientists over the true cause of climate change. In 2016, the Pew Research Center found that only 27% of U.S. adults believe that "almost all" scientists agreed that climate change is due to human activity, according to the paper. A 2021 Gallup poll pointed to a deepening partisan divide in American politics on whether Earth's rising observed temperatures since the Industrial Revolution were primarily caused by humans.

"To understand where a consensus exists, you have to be able to quantify it," Lynas said. "That means surveying the literature in a coherent and non-arbitrary way in order to avoid trading cherry-picked papers, which is often how these arguments are carried out in the public sphere."

In the study, the researchers began by examining a random sample of 3,000 studies from the dataset of 88,125 English-language climate papers published between 2012 and 2020. They found only four out of the 3,000 papers were skeptical of human-caused climate change. "We knew that [climate skeptical papers] were vanishingly small in terms of their occurrence, but we thought there still must be more in the 88,000," Lynas said.

Co-author Simon Perry, a United Kingdom-based software engineer and volunteer at the Alliance for Science, created an algorithm that searched out keywords from papers the team knew were skeptical, such as "solar," "cosmic rays" and "natural cycles." The algorithm was applied to all 88,000-plus papers and ranked them so that the skeptical ones appeared first. They found many of these dissenting papers near the top, as expected, with diminishing returns further down the list. Overall, the search yielded 28 papers that were implicitly or explicitly skeptical, all published in minor journals.
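
A minimal sketch of that ranking step follows. The three keywords are the ones quoted above; everything else (the scoring rule, the paper records) is a placeholder, since the team's actual algorithm, term list and weighting are not described in this summary.

```python
# Score each paper by occurrences of skepticism-associated keywords and sort
# so that the most skeptical-looking papers surface first for manual review.
SKEPTIC_TERMS = ["solar", "cosmic rays", "natural cycles"]  # terms quoted in the article

def skepticism_score(abstract: str) -> int:
    """Count case-insensitive occurrences of skepticism-associated keywords."""
    text = abstract.lower()
    return sum(text.count(term) for term in SKEPTIC_TERMS)

# Placeholder records standing in for the 88,000-plus paper abstracts.
papers = [
    {"id": 1, "abstract": "Warming driven by anthropogenic greenhouse gas emissions."},
    {"id": 2, "abstract": "We attribute recent warming to solar activity and natural cycles."},
    {"id": 3, "abstract": "Cosmic rays modulate cloud cover, explaining temperature trends."},
]
ranked = sorted(papers, key=lambda p: skepticism_score(p["abstract"]), reverse=True)
for paper in ranked:
    print(paper["id"], skepticism_score(paper["abstract"]))
```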

If the 97% result from the 2013 study still left some doubt on scientific consensus on the human influence on climate, the current findings go even further to allay any uncertainty, Lynas said. "This pretty much should be the last word," he said.

Read more at Science Daily

DNA tangles can help predict evolution of mutations

Tangles in unwound DNA can create mutational hotspots in the genomes of bacteria, according to a new study by the Milner Centre for Evolution at the University of Bath. The study authors say these findings will help us in the future to predict the evolution of bacteria and viruses over time, which could aid vaccine design and better understanding of antibiotic resistance.

While most evolution is shaped by natural selection, where only those individuals who are adapted for their environment are able to survive and pass on their genes, a new study published in Nature Communications shows that evolution is also influenced by tangles in the DNA strands.

A team of scientists, led by the University of Bath in collaboration with the University of Birmingham, looked at the evolution of two strains of the soil bacteria Pseudomonas fluorescens (SBW25 and Pf0-1).

When the scientists removed a gene that enables the bacteria to swim, both strains of the bacteria quickly evolved the ability to swim again, but using quite different routes.

One of the strains (called SBW25) always mutated the same part of a particular gene to regain mobility.

However, the other strain (called Pf0-1) mutated different places in different genes each time the scientists repeated the experiment.

To understand why one strain evolved predictably and the other was unpredictable, they compared the DNA sequences of the two strains. They found that in the SBW25 strain, which mutated in a predictable way, there was a region where the DNA strand looped back on itself forming a hairpin-shaped tangle.

These tangles can disrupt the cell machinery, called DNA polymerase, which copies the gene during cell division, and so makes mutations more likely to happen.

When the team removed the hairpin structure with six silent mutations (changes that leave the sequence of the encoded protein intact), the mutational hotspot was abolished and the bacteria began evolving in a much wider variety of ways to regain their swimming ability.
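
As an illustration of what such a site looks like computationally (toy parameters, not the study's analysis pipeline): a hairpin requires an inverted repeat -- a short sequence followed, after a small loop, by its own reverse complement -- so that a single strand can fold back and pair with itself.

```python
# Flag potential hairpin-forming sites by scanning for inverted repeats.
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    return seq.translate(COMPLEMENT)[::-1]

def find_inverted_repeats(seq: str, stem: int = 6, max_loop: int = 8):
    """Yield (start, loop_length) wherever a stem-length arm is followed,
    after a loop of 3..max_loop bases, by its reverse complement."""
    for i in range(len(seq) - 2 * stem):
        arm = seq[i : i + stem]
        for loop in range(3, max_loop + 1):
            j = i + stem + loop
            if j + stem <= len(seq) and seq[j : j + stem] == reverse_complement(arm):
                yield i, loop

demo = "ATGCGGATCCGTTTTCGGATCCGCATAAA"  # made-up sequence containing an inverted repeat
print(list(find_inverted_repeats(demo)))
```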

Dr Tiffany Taylor, from the Milner Centre for Evolution, said: "DNA normally forms a double helix structure, but when the DNA is copied, the strands are briefly separated.

"We've found there are hotspots in the DNA where the sequence causes the separated strands of DNA to get twisted back on themselves -- a bit like when you pull apart the strands of a rope -- this results in a tangle.

"When the DNA polymerase enzyme runs along the strand to copy the gene, it bumps into the tangle and can skip, causing a mutation.

"Our experiments show that we were able to create or remove mutational hotspots in the genome by altering the sequence to cause or prevent the hairpin tangle.

"This shows that while natural selection is still the most important factor in evolution, there are other factors at play too.

"If we knew where the potential mutational hotspots in bacteria or viruses were, it might help us to predict how these microbes could mutate under selective pressure."

Mutational hotspots have already been found in cancer cells, and the researchers plan to search for them across a range of bacterial species, including important pathogens.

This information can help scientists better understand how bacteria and viruses evolve, which can help in developing vaccines against new variants of diseases. It can also make it easier to predict how microbes might develop resistance to antibiotics.

Dr James Horton, who has recently completed his PhD at the Milner Centre for Evolution, said: "Like many exciting discoveries, this was found by accident. The mutations we were looking at were so-called silent because they don't change the resulting protein sequence, so initially we didn't think they were particularly important."

Read more at Science Daily

Cat bacteria treats mouse skin infection, may help you and your pets as well

Researchers at University of California San Diego School of Medicine used bacteria found on healthy cats to successfully treat a skin infection on mice. These bacteria may serve as the basis for new therapeutics against severe skin infections in humans, dogs and cats.

The study, published in eLife on October 19, 2021, was led by Richard L. Gallo, MD, PhD, Distinguished Professor and chair of the Department of Dermatology at UC San Diego School of Medicine, whose team specializes in using bacteria and their products to treat illnesses -- an approach known as "bacteriotherapy."

Skin is colonized by hundreds of bacterial species that play important roles in skin health, immunity and fighting infection. All species need to maintain a diverse balance of healthy skin bacteria to fight potential pathogens.

"Our health absolutely depends on these 'good' bacteria," said Gallo. "They rely on our healthy skin to live, and in return some of them protect us from 'bad' bacteria. But if we get sick, 'bad' bacteria can take advantage of our weakened defenses and cause infection."

This is the case with methicillin-resistant Staphylococcus pseudintermedius (MRSP), a bacterium commonly found on domesticated animals that becomes infectious when the animals are sick or injured. MRSP is an emerging pathogen that can jump between species and cause severe atopic dermatitis, or eczema. These infections are common in dogs and cats, and can also occur in humans, though rates of human infection vary around the world. As its name suggests, MRSP is resistant to common antibiotics and has been difficult to treat in clinical and veterinary settings.

To address this, researchers first screened a library of bacteria that normally live on dogs and cats and grew them in the presence of MRSP. From this, they identified a strain of cat bacteria called Staphylococcus felis (S. felis) that was especially good at inhibiting MRSP growth. They found that this special strain of S. felis naturally produces multiple antibiotics that kill MRSP by disrupting its cell wall and increasing the production of toxic free radicals.

"The potency of this species is extreme," said Gallo. "It is strongly capable of killing pathogens, in part because it attacks them from many sides -- a strategy known as 'polypharmacy.' This makes it particularly attractive as a therapeutic."

Bacteria can easily develop resistance to a single antibiotic. To get around this, S. felis has four genes that code for four distinct antimicrobial peptides. Each of these antibiotics is capable of killing MRSP on its own, but by working together they make it more difficult for the bacteria to fight back.

Having established how S. felis kills MRSP, the next step was to see whether it could work as a therapy on a live animal. The team exposed mice to the most common form of the pathogen and then added either S. felis bacteria or bacterial extract to the same site. The skin showed a reduction in scaling and redness after either treatment, compared with animals that received no treatment. There were also fewer viable MRSP bacteria left on the skin after treatment with S. felis.

Next steps include plans for a clinical trial to confirm whether S. felis can be used to treat MRSP infections in dogs. Bacteriotherapies like this one can be delivered via topical sprays, creams or gels that contain either live bacteria or purified extract of the antimicrobial peptides.

While these products are in development, what should pet owners do in the meantime?

"Don't stop washing your pets to keep these 'good' bacteria on them," said Gallo. "Skin has evolved to protect the 'good' bacteria, so soap and detergents don't usually wash the good guys off."

Read more at Science Daily

Oct 19, 2021

NASA, ULA launch Lucy Mission to ‘fossils’ of planet formation

NASA's Lucy mission, the agency's first to Jupiter's Trojan asteroids, launched at 5:34 a.m. EDT Saturday on a United Launch Alliance (ULA) Atlas V rocket from Space Launch Complex 41 at Cape Canaveral Space Force Station in Florida.

Over the next 12 years, Lucy will fly by one main-belt asteroid and seven Trojan asteroids, making it the agency's first single spacecraft mission in history to explore so many different asteroids. Lucy will investigate these "fossils" of planetary formation up close during its journey.

"Lucy embodies NASA's enduring quest to push out into the cosmos for the sake of exploration and science, to better understand the universe and our place within it," said NASA Administrator Bill Nelson. "I can't wait to see what mysteries the mission uncovers!"

About an hour after launch, Lucy separated from the second stage of the ULA Atlas V 401 rocket. Its two massive solar arrays, each nearly 24 feet (7.3 meters) wide, successfully unfurled about 30 minutes later and began charging the spacecraft's batteries to power its subsystems.

"Today's launch marks a genuine full-circle moment for me as Lucy was the first mission I approved in 2017, just a few months after joining NASA," said Thomas Zurbuchen, associate administrator for the Science Mission Directorate at the agency's Headquarters in Washington. "A true mission of discovery, Lucy is rich with opportunity to learn more about these mysterious Trojan asteroids and better understand the formation and evolution of the early solar system."

Lucy sent its first signal to Earth from its own antenna to NASA's Deep Space Network at 6:40 a.m. The spacecraft is now traveling at roughly 67,000 mph (108,000 kph) on a trajectory that will orbit the Sun and bring it back toward Earth in October 2022 for a gravity assist.

Named for the fossilized skeleton of one of our earliest known hominin ancestors, the Lucy mission will allow scientists to explore two swarms of Trojan asteroids that share an orbit around the Sun with Jupiter. Scientific evidence indicates that Trojan asteroids are remnants of the material that formed giant planets. Studying them can reveal previously unknown information about their formation and our solar system's evolution in the same way the fossilized skeleton of Lucy revolutionized our understanding of human evolution.

"We started working on the Lucy mission concept early in 2014, so this launch has been long in the making," said Hal Levison, Lucy principal investigator, based out of the Boulder, Colorado, branch of Southwest Research Institute (SwRI), which is headquartered in San Antonio. "It will still be several years before we get to the first Trojan asteroid, but these objects are worth the wait and all the effort because of their immense scientific value. They are like diamonds in the sky."

Lucy's Trojan destinations are trapped near Jupiter's Lagrange points -- gravitationally stable locations in space associated with a planet's orbit where smaller masses can be trapped. One swarm of Trojans is ahead of the gas giant planet, and another is behind it. The asteroids in Jupiter's Trojan swarms are as far away from Jupiter as they are from the Sun.
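
The claim that the Trojans are as far from Jupiter as from the Sun follows from the geometry of the L4 and L5 points, which form equilateral triangles with the Sun and Jupiter. A quick sketch, using an approximate 5.2 AU Sun-Jupiter distance:

```python
import math

# Sketch: the L4 point forms an equilateral triangle with the Sun and Jupiter,
# so a Trojan near L4 is as far from Jupiter as it is from the Sun.
# Jupiter's mean orbital radius of ~5.2 AU is an approximation.
r_jup = 5.2  # AU, Sun-Jupiter distance

# Place the Sun at the origin and Jupiter on the x-axis; L4 leads by 60 degrees.
jupiter = (r_jup, 0.0)
l4 = (r_jup * math.cos(math.radians(60)), r_jup * math.sin(math.radians(60)))

dist_sun = math.hypot(*l4)
dist_jup = math.hypot(l4[0] - jupiter[0], l4[1] - jupiter[1])
print(f"L4-Sun: {dist_sun:.2f} AU, L4-Jupiter: {dist_jup:.2f} AU")  # both ~5.20
```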

The spacecraft's first Earth gravity assist in 2022 will accelerate and direct Lucy's trajectory beyond the orbit of Mars. The spacecraft will then swing back toward Earth for another gravity assist in 2024, which will propel Lucy toward the Donaldjohanson asteroid -- located within the solar system's main asteroid belt -- in 2025.

Lucy will then journey toward its first Trojan asteroid encounter in the swarm ahead of Jupiter for a 2027 arrival. After completing its first four targeted flybys, the spacecraft will travel back to Earth for a third gravity boost in 2031, which will catapult it to the trailing swarm of Trojans for a 2033 encounter.

"Today we celebrate this incredible milestone and look forward to the new discoveries that Lucy will uncover," said Donya Douglas-Bradshaw, Lucy project manager at NASA's Goddard Space Flight Center in Greenbelt, Maryland.

Read more at Science Daily

People love the billionaire, but hate the billionaires’ club

Americans may respect and admire how individual billionaires -- think Oprah Winfrey or Bill Gates -- made their billions, even as they rage against the "top 1%" as a group, new research finds.

In eight related studies, people tended to have fewer problems with hearing about the extreme wealth of a particular wealthy person, even as they thought it was unfair that billionaires in general controlled so much wealth.

"When there's this group of people at the top, we think that's unfair and wonder how luck or the economic system may have played a role in how they made all the money," said Jesse Walker, co-author of the study and assistant professor of marketing at The Ohio State University's Fisher College of Business.

"But when we look at one person at the top, we tend to think that person is talented and hard-working and they're more deserving of all the money they made."

And this difference may have real-life implications: People are more likely to support wealth taxes on the super-rich when they think about a group like the top 1%, but less likely to support these taxes when they think about a specific rich person.

Walker conducted the study with Thomas Gilovich, professor of psychology, and Stephanie Tepper, a PhD student in psychology, both at Cornell University. Their findings were published today (Oct. 18, 2021) in the Proceedings of the National Academy of Sciences.

In one study, 201 survey respondents gave very different opinions about how much more a CEO should earn relative to the average employee, depending on how the information was presented.

One group of participants read that the salaries of the CEOs of the largest 350 companies in America had grown from 48 times the average worker's salary in 1995 to 372 times today.

The other group of participants read about one specific company in the top 350, called Avnet, and how its CEO, Robert Eisen, had seen his salary grow from 48 times the average worker's salary in 1995 to 372 times today.

Participants in the study read that observers attributed the growth of all 350 companies, or the growth of Avnet, to their CEOs.

Those who were told about Avnet's CEO thought that the ratio of his salary to the average employee's should be significantly higher than did those who were told about the whole group of CEOs.

"We appear to be a bit more tolerant of lavish compensation when it is an individual CEO being compensated, rather than CEOs as a group," Walker said.

The way the wealthy are portrayed and praised in society and the media may play a large role in how accepting people are of economic inequality, he said.

In one study, participants were shown a Forbes magazine cover. Half saw a cover adapted from an issue that highlighted the wealthiest people in the world. The cover was edited to remove five billionaires that most people were familiar with, such as Gates and Winfrey, in order to eliminate any positive or negative biases people might have toward them. It included only the seven billionaires that most people would either know nothing about or not feel strongly about.

The other half were shown a cover with only one of the seven billionaires.

After reading a brief description of the person or persons on the cover, participants were asked to write a few sentences conveying how they felt about the person or persons, and rate how much the person or persons deserved their wealth and how they thought they earned those riches.

The findings were striking, Walker said.

The comments of those who wrote about the individual were less angry than those who wrote about the group, and more likely to reflect the belief that the individual billionaire's success was due to talent and hard work.

"People in our study were clearly more upset by the wealth of the seven individuals pictured on a single cover than they were by any one of them pictured alone," Walker said.

And there was more. People who saw the seven billionaires pictured together were more in favor of an inheritance tax to close the gap between the wealthy and poor than were those who saw only one billionaire.

"How we think of the wealthiest people -- as a group or as individuals -- seems to affect even our policy preferences," he said.

The issue of how we think about policy regarding inequality is important, Walker said. Economic inequality has grown substantially over the past decades, particularly during the COVID-19 pandemic. One analysis suggests that U.S. billionaires saw their wealth surge $1.8 trillion (62%) during the pandemic.
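
A quick back-of-the-envelope check of that figure: if $1.8 trillion corresponds to a 62% surge, the implied pre-pandemic baseline is about $2.9 trillion.

```python
# Back-of-the-envelope check on the cited figures (illustrative arithmetic).
surge = 1.8e12       # reported pandemic wealth gain, in dollars
pct_increase = 0.62  # reported as a 62% increase

baseline = surge / pct_increase
print(f"Implied pre-pandemic wealth: ${baseline / 1e12:.1f} trillion")          # ~2.9
print(f"Implied post-surge wealth: ${(baseline + surge) / 1e12:.1f} trillion")  # ~4.7
```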

Research has shown that countries with greater economic inequality tend to have higher homicide rates, greater infant mortality, lower well-being and lower commitment to democratic institutions.

"How we express and communicate information about inequality is important. Talking about "the 1%" is going to get a different reaction than personalizing it by talking about one person in that exclusive club," Walker said.

Read more at Science Daily

Ancient driftwood tracks 500 years of Arctic warming and sea ice

A new study reconstructs the path of frozen trees as they made their way across the Arctic Ocean over 500 years, giving scientists a unique look into changes in sea ice and currents over the last half millennium.

By dating and tracing pieces of driftwood on beaches in Svalbard, Norway's archipelago in the Arctic Circle, scientists have determined where these fallen trees floated. Retracing the driftwood's journey let the researchers reconstruct, for the first time, both the level of sea ice over time and the currents that propelled the driftwood-laden ice.

Borne by rivers to the ocean, fallen trees from the north's expansive boreal forests can be frozen into sea ice and float far, but the new research shows fewer trees are making the long journey as the sea ice that carries them shrinks away.

The new study found a distinct drop in new driftwood arrivals over the last 30 years, reflecting the steep decline in sea ice coverage in a warming Arctic, and it provides a higher-resolution picture of past Arctic Ocean conditions than other methods allow. The study is published in the Journal of Geophysical Research: Oceans.

Sea ice is sensitive to climate change and is an important part of Arctic ecosystems, so understanding how ice, ocean temperatures and currents have varied together over time is necessary for predicting coming changes in the Arctic. But doing so can be elusive: Ice melts, after all. The oldest sea ice is only about four years old (and getting younger), so scientists need to turn to other records.

"This is the first time driftwood has been used to look at large-scale changes in Arctic sea ice dynamics and circulation patterns," said geoscientist Georgia Hole at the University of Oxford, who led the study.

"They're taking the analysis one step further to connect changes in driftwood to changes in sea ice, and that's where we want to go. It's really exciting," said Hans Linderholm, a paleoclimatologist at the University of Gothenburg in Sweden who was not involved in the research.

Important ice cubes

The Arctic Ocean collects trees that naturally fall into high-latitude rivers in North America and Eurasia. When it was cold enough, some of the trees were frozen into the sea ice. The ice then floated across the ocean, swept along by ocean currents and winds, until beaching on Svalbard's shores. There they sat, some for hundreds of years, until researchers like Hole and Linderholm came along.

Researchers have used driftwood for climate-change studies before, but the new study is the first to test how useful Arctic driftwood is for peering into past currents and ice coverage. To check their work, the study directly compared driftwood-inferred sea ice coverage to the observational record of sea ice.

"This is a fantastic resource to say something about ocean currents and sea ice conditions," said Linderholm. "I think they do have a case for matching [tree] provenance changes to changes in sea ice conditions, which is what we're looking for: to have sea ice information prior to observations."

Tracing trees

In the summer months of 2016 and 2018, Hole and her collaborators combed several beaches in northern Svalbard for driftwood. Back in the lab, they analyzed the tree rings in each sample to determine what kind of tree it came from and compared the ring patterns of each driftwood slice to a database of measured rings from trees across the boreal forests. Hole could then trace trees to individual countries, watersheds and even rivers, and see how driftwood sources varied over time.
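
Tree-ring matching of this kind is typically done by cross-dating: sliding a sample's ring-width series along reference chronologies and keeping the best-correlated alignment. A minimal sketch, with synthetic data standing in for the real chronologies:

```python
import numpy as np

# Minimal cross-dating sketch (illustrative): slide a driftwood sample's
# ring-width series along a longer reference chronology and keep the offset
# with the highest correlation. The series here are synthetic stand-ins.

def best_match(sample, reference):
    """Return (best_offset, best_r) for sample against a longer reference."""
    n = len(sample)
    best = (None, -1.0)
    for offset in range(len(reference) - n + 1):
        window = reference[offset:offset + n]
        r = np.corrcoef(sample, window)[0, 1]
        if r > best[1]:
            best = (offset, r)
    return best

rng = np.random.default_rng(0)
reference = rng.normal(1.0, 0.3, 300)                  # stand-in 300-year chronology
sample = reference[180:230] + rng.normal(0, 0.05, 50)  # a 50-ring sample with noise

offset, r = best_match(sample, reference)
print(f"Best alignment at year offset {offset} (r = {r:.2f})")
```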

Hole paired her driftwood data with early sea ice observations, from 1600 to 1850, thanks to records from Icelandic fishers, seal hunters and passing ships. More recent sea ice data came from airplane and satellite imagery. Finally, she compared driftwood-tracking data with sea ice conditions and currents to see how well they correlated.

Her data revealed a slow and steady northward migration of the lowest-latitude sea ice, reflecting warming, along with swings in driftwood arrivals between North America and Eurasia.

"We also saw an increase in variability in the driftwood record from 1700 to 1850, which we interpret as increased variability in sea ice," said Hole. Colder conditions tend to have more sea ice, so earlier driftwood reflected a wider range of sources. As the Arctic warmed up and sea ice melted, less driftwood could make the long journey.

Read more at Science Daily

New model points to solution to global blood shortage

Blood transfusions save lives, yet the precious fluid is in desperately short supply, not just in the U.S. but around the globe. But what if transfusions don't always require blood?

A new mathematical model of the body's interacting physiological and biochemical processes -- including blood vessel expansion, blood thickening and flow-rate changes in response to the transfusion of red blood cells -- shows that patients with anemia, a condition in which the blood cannot carry enough oxygen, can be effectively treated with transfusions of blood substitutes that are more readily available.

The research, co-authored by scientists at Stanford University and the University of California, San Diego (UCSD), was published on Oct. 14 in the Journal of Applied Physiology.

Using a different fluid could also eliminate a harmful consequence of blood transfusion: because of adverse side effects, blood use has been observed to lower lifespan by 6 percent per unit transfused per decade.

"Instead of real blood, we can use a substitute that can lower the costs and eliminate blood transfusion's negative effects," said lead study author Weiyu Li, a PhD student in energy resources engineering at Stanford's School of Earth, Energy & Environmental Sciences (Stanford Earth).

Transfusion is a common procedure for transferring blood components directly into anemic patients' circulation. Red blood cells are uniquely equipped to carry oxygen, which is why they are used in transfusions for patients experiencing anemia. But the process of obtaining, storing and delivering the correct, sanitary blood type for each patient is resource-intensive and costly. Moreover, the available supply of blood falls far short of demand: the global deficit across all countries without enough supply totals about 100 million units of blood per year.

"You could deliver more goods, in this case, oxygen, with less -- that's actually the basic idea of sustainability," said senior study author Daniel Tartakovsky, a professor of energy resources engineering at Stanford Earth. "It's all about how to do more with less."

Transfusion of red blood cells is done to improve the likelihood that oxygen vital to organ and tissue function will be delivered. However, the process also thickens the blood, and that increased viscosity can be a problem, according to the research. The new model shows that during transfusion, some patients' blood vessels do not dilate and, since their blood has been thickened by additional red blood cells, it is more viscous and does not circulate as easily to deliver oxygen. For these patients, treating anemia with a 2-unit transfusion -- currently, the most frequently used transfusion quantity -- would reduce blood flow, regardless of the state of anemia, according to the model.
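
The trade-off can be illustrated with a toy calculation, not the authors' actual model: hold the pressure drop fixed (no dilation), let viscosity rise steeply with hematocrit, and take oxygen delivery as flow times hematocrit. Delivery then peaks at an intermediate hematocrit, so adding red cells past that point is counterproductive.

```python
import numpy as np

# Illustrative sketch (not the authors' model): with a fixed pressure drop and
# no vessel dilation, flow scales as 1/viscosity (Poiseuille), viscosity rises
# steeply with hematocrit, and oxygen delivery ~ flow * hematocrit.
# The exponential viscosity law below is an assumed stand-in, not from the paper.

hct = np.linspace(0.15, 0.60, 10)  # hematocrit (red cell volume fraction)
viscosity = np.exp(4.0 * hct)      # assumed steep rise with hematocrit
flow = 1.0 / viscosity             # relative flow at constant pressure
delivery = flow * hct              # relative oxygen delivery

best = hct[np.argmax(delivery)]
print(f"Delivery peaks near hematocrit {best:.2f}; "
      "raising hematocrit past this point reduces oxygen delivery.")
```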

However, for many people, transfusion causes blood vessels to dilate, thereby increasing circulation and delivering more oxygen to the body. The findings reveal an advantage for anemic patients whose blood vessels dilate during transfusion. The model suggests that either abstaining from transfusion or transfusing alternative fluids known as plasma expanders, which prompt blood vessels to dilate, may be a more effective way to increase oxygen delivery. Plasma expanders consist of solutions of high-molecular-weight starch dissolved in normal saline; they have been in use in transfusion medicine for several decades and have proven to be effective in experimental studies.

"At present, blood transfusion is determined by addressing the wrong target, namely restoring oxygen-carrying capacity," said co-author Marcos Intaglietta, a professor and founder of the bioengineering discipline at UCSD. "But the logical target of a blood transfusion is restoring oxygen-delivery capacity."

Projections of the team's results show that safe and low-cost blood substitutes could cut the overall cost of blood transfusion by a factor of 10, while significantly reducing the negative aspects of the process. Their model of the body's circulatory processes was derived from previously published experiments on how mammals react to transfusion.

"Our mathematical model identifies natural physiological processes that explain the conclusion of multiple observational studies: People can get the benefit of blood transfusion without using blood," Tartakovsky said. "But nothing really comes out of modeling alone -- it has to be grounded in observations, investigational studies and experience."

The co-authors hope their findings will lead to clinical trials that test the capacity for non-blood alternatives to increase oxygen delivery. To date, there have not been consistent results from rigorous medical trials that support the notion that small amounts of blood are more effective than just adding human plasma, according to the study authors.

Read more at Science Daily

Oct 18, 2021

Titan’s river maps may advise Dragonfly’s 'sedimental' journey

With future space exploration in mind, a Cornell-led team of astronomers has published the final maps of Titan's liquid methane rivers and tributaries -- as seen by NASA's late Cassini mission -- which may help provide context for Dragonfly's upcoming 2030s expedition.

The fluvial maps and details of their accuracy were published in the Planetary Science Journal (August 2021). In addition to the maps, the work examined what could be learned by analyzing Earth's rivers with degraded radar data -- similar to what Cassini saw.

Like water on Earth, liquid methane and ethane fill Titan's lakes, rivers and streams. But understanding those channels -- including their twists and branch-like turns -- is key to knowing how that moon's sediment transport system works and the underlying geology.

"The channel systems are the heart of Titan's sediment transport pathways," said Alex Hayes, associate professor of astronomy in the College of Arts and Sciences. "They tell you how organic material is routed around Titan's surface, and identifies locations where the material might be concentrated near tectonic or perhaps even cryovolcanic features.

"Further, those materials either can be sent down into Titan's liquid water interior ocean, or alternatively, mixed with liquid water that gets transported up to the surface," Hayes said.

Larger than the planet Mercury and fully shrouded in a dense nitrogen and methane atmosphere, Titan is the only other place in the solar system with an active hydrologic system, which includes rain, channels, lakes and seas.

"Unlike Mars, it's not 3.6 billion years ago when you would have seen lakes and channels on Titan. It's today," Hayes said. "Examining Titan's hydrologic system represents an extreme example comparable to Earth's hydrologic system -- and it's the only instance where we can actively see how a planetary landscape evolves in the absence of vegetation."

Julia Miller '20 led the detailed work of examining Cassini's Synthetic Aperture Radar (SAR) images of Titan's surface, looking for fluvial features and then comparing those images to radar images of rivers on Earth.

On Earth, fluvial geomorphology is typically studied with topographic data and high-resolution visible images, but such data are not available for Titan. Instead, Miller used Earth-based radar images and degraded them to match the Cassini radar images of Titan.

This way, Miller could understand the limits of the Cassini dataset and know which results are robust for analysis using low-resolution data of roughly 1 kilometer per pixel.
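
As an illustration of that degradation step, one can coarsen a high-resolution scene to a roughly Cassini-like pixel scale and add the multiplicative speckle typical of SAR imagery; the downsampling factor and noise level below are assumptions, not the study's actual processing.

```python
import numpy as np

# Sketch of "degrading" a high-resolution radar image to Cassini-like quality
# (illustrative): block-average down to a coarser pixel scale, then add
# multiplicative speckle noise, as in multi-look SAR imagery.

def degrade(img, factor=10, looks=4, seed=0):
    h, w = (s - s % factor for s in img.shape)
    img = img[:h, :w]
    # Block-average: coarsen resolution by `factor` in each dimension
    coarse = img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    # Gamma-distributed multiplicative speckle with unit mean
    rng = np.random.default_rng(seed)
    speckle = rng.gamma(shape=looks, scale=1.0 / looks, size=coarse.shape)
    return coarse * speckle

earth_img = np.random.default_rng(1).random((1000, 1000))  # stand-in 100 m/px scene
titan_like = degrade(earth_img)                            # ~1 km/px with speckle
print(titan_like.shape)  # (100, 100)
```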

"Although the quality and quantity of Cassini SAR images put significant limits on their utility for investigating river networks," Miller said, "they can still be used to understand Titan's landscape at a fundamental level."

River shapes say a lot. "You can use what the river looks like to try to say something about the type of material it's flowing through, how steep the surface is, or just what went on in that region," Miller said. "This is using the rivers as a starting point, to then, ideally, learn more about the planet."

The Dragonfly mission to Titan is slated to launch in 2027 and is scheduled to arrive at Titan in 2034.

Read more at Science Daily

Lakes are changing worldwide: Human activities to blame

International research led by Luke Grant, Inne Vanderkelen and Prof Wim Thiery of the VUB research group BCLIMATE shows that global changes in lake temperature and ice cover are not due to natural climate variability and can only be explained by massive greenhouse gas emissions since the Industrial Revolution. The influence of human-induced climate change is evident in rising lake temperatures and in the fact that the ice cover forms later and melts sooner.

"These physical properties are fundamental to lake ecosystems," says Grant, a researcher at VUB and lead author of the study. "As impacts continue to increase in the future, we risk severely damaging lake ecosystems, including water quality and populations of native fish species. This would be disastrous for the many ways in which local communities depend on lakes, ranging from drinking water supply to fishing."

The team also predicted future development under different warming scenarios. In a low-emission scenario, the average warming of lakes in the future is estimated to stabilise at +1.5°C above pre-industrial levels and the duration of ice cover to be 14 days shorter. In a high-emission world, these changes could lead to an increase of +4.0°C and 46 fewer days of ice.

At the beginning of the project, the authors observed changes in lakes around the world: temperatures are rising and seasonal ice cover is shorter. However, the role of climate change in these trends had not yet been demonstrated.

"In other words, we had to rule out the possibility that these changes were caused by the natural variability of the climate system," says fellow VUB researcher and study co-author Vanderkelen.

The team therefore ran global-scale simulations with lake models, driven by a series of climate models. Once the team had built up this database, they applied a methodology described by the Intergovernmental Panel on Climate Change (IPCC). After determining the historical impact of climate change on lakes, they also analysed various future climate scenarios.
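
The detection-and-attribution logic behind that methodology can be sketched in a few lines: compare the observed trend with the spread of trends that natural variability alone produces in control simulations. The numbers below are invented for illustration, not taken from the study.

```python
import numpy as np

# Minimal detection-and-attribution sketch (illustrative, not the authors'
# implementation): compare an observed lake-warming trend with the spread of
# trends produced by simulations of natural variability alone.

rng = np.random.default_rng(0)
natural_trends = rng.normal(0.0, 0.05, 1000)  # deg C/decade, control runs (assumed)
observed_trend = 0.35                          # deg C/decade (assumed value)

# Fraction of natural-only simulations with a trend at least this large
p = np.mean(natural_trends >= observed_trend)
print(f"P(trend >= observed | natural variability only) = {p:.4f}")
# A vanishingly small p makes "natural variability alone" highly unlikely.
```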

The results show that it is highly unlikely that the trends in lake temperatures and ice cover in recent decades can be explained solely by natural climate variability. Moreover, the researchers found clear similarities between the observed changes in lakes and model simulations of lakes in a climate influenced by greenhouse gas emissions.

"This is very convincing evidence that climate change caused by humans has already impacted lakes," says Grant. Projections of lake temperatures and ice cover loss unanimously indicate increasing trends for the future. For every 1°C increase in global air temperature, lakes are estimated to warm by 0.9°C and lose 9.7 days of ice cover. In addition, the analysis revealed significant differences in the impact on lakes at the end of the century, depending on the measures taken by humans to combat climate change.

Read more at Science Daily

Climate change and human pressure mean migration may be 'no longer worth it'

Animals that migrate north to breed are being put at risk by ongoing climate change and increasing human pressure, losing earlier advantages for migration, declining in numbers and faring much worse than their resident counterparts, according to scientists writing in Trends in Ecology & Evolution.

Many animals, including mammals, birds and insects migrate long distances north to breed, taking advantage of the seasonally plentiful food, fewer parasites and diseases, and the relative safety from predators.

However, the international research team, including scientists from the University of Bath, found changes in climate and increasing human pressure have eroded these benefits and in many cases led to lower reproductive success and higher mortality in migrating species.

The researchers warn that reduced advantages for long-distance migration have potentially serious consequences for the structure and function of ecosystems.

They highlighted 25 recent studies, describing how migration is becoming less profitable for various terrestrial animals, including caribou, shorebirds and monarch butterflies, which migrate over 1,000 km during the summer to north temperate and Arctic regions to breed, returning south in the winter.

Travelling such long distances is very costly in terms of energy, but plentiful food and the relative scarcity of diseases and predators meant the benefits outweighed the costs. The researchers say this is no longer the case for many populations.

Whilst some animals might shift their breeding ranges slightly further north to compensate for the change in environmental conditions, migratory animals are hardwired to continue the dangerous trip each year to breed, despite the lack of benefit.

Dr Vojtěch Kubelka, lead author and former Visiting Researcher at the University of Bath's Milner Centre for Evolution, said: "These findings are alarming. We have lived with the notion that northern breeding grounds represent safe harbours for migratory animals.

"On the contrary, numerous Arctic and North temperate sites may now represent ecological traps or even worse degraded environments for diverse migratory animals, including shorebirds, caribou or butterflies."

The timing of food availability in the North may be climatically mismatched with the reproduction of migratory animals, leading to higher offspring mortality, as has been described for many migratory birds.

In addition, new parasites and pathogens are emerging in the Arctic, creating new pressures, and top predators are increasingly preying on nests, eating eggs and chicks before they get a chance to fledge.

Dr Kubelka said: "Lemmings and voles used to be the main food source for predators such as foxes in the Arctic, however the milder winters can cause rain to fall on snow and then re-freeze, preventing the lemmings from reaching their food."

"With fewer lemmings and voles to feed on, foxes eat the eggs and chicks of migratory birds instead.

"We've seen that rates of nest predation of Arctic migratory shorebirds has tripled over the last 70 years, in large part due to climate change."

The authors suggest that Arctic and northern temperate breeding grounds need substantial conservation attention, in addition to well-recognised problems at stopover sites and wintering areas of migratory species.

Alongside concrete conservation measures, the authors propose a simple framework for mapping the stressors migratory animals face across space and time, helping to distinguish suitable, naturally improved or protected habitats on the one hand from ecological traps or degraded environments, where the benefits of migratory behaviour are reduced or gone, on the other.

Dr Kubelka said: "The recognition of emerging threats and the proposed framework of migration profitability classification will help to identify the most endangered populations and regions, enabling the implementation of suitable conservation measures."

Professor Tamás Székely, Royal Society Wolfson Research Merit Award holder at the University of Bath's Milner Centre for Evolution, said: "Animal migration from equatorial regions to the north temperate zone and the Arctic is one of the largest movements of biomass in the world. But with reduced profitability of migration behaviour and a smaller number of offspring joining the population, the negative trend will continue and fewer and fewer individuals will be returning to the North.

"The Earth is a complex ecosystem -- changes in migration profitability affect populations of migrating animals which precipitate in alterations of species composition, trophic food webs as well as the whole ecosystem functioning.

"These patterns are particularly threatening for migratory animals as large numbers of those species are already negatively affected outside the breeding period, at their stopover sites and wintering grounds -- and many have formerly relied on the northern latitudes to provide relative safe breeding grounds."

Read more at Science Daily

So-called junk DNA plays critical role in mammalian development

Nearly half of our DNA has been written off as junk, the discards of evolution: sidelined or broken genes, viruses that got stuck in our genome and were dismembered or silenced, none of it relevant to the human organism or human evolution.

But research over the last decade has shown that some of this genetic "dark matter" does have a function, primarily in regulating the expression of host genes -- a mere 2% of our total genome -- that code for proteins. Biologists continue to debate, however, whether these regulatory sequences of DNA play essential or detrimental roles in the body or are merely incidental, an accident that the organism can live without.

A new study led by researchers at University of California, Berkeley, and Washington University explored the function of one component of this junk DNA, transposons, which are selfish DNA sequences able to invade their host genome. The study shows that at least one family of transposons -- ancient viruses that have invaded our genome by the millions -- plays a critical role in viability in the mouse, and perhaps in all mammals. When the researchers knocked out a specific transposon in mice, half their mouse pups died before birth.

This is the first example of a piece of "junk DNA" being critical to survival in mammals.

In mice, this transposon regulates the proliferation of cells in the early fertilized embryo and the timing of implantation in the mother's uterus. The researchers looked in seven other mammalian species, including humans, and also found virus-derived regulatory elements linked to cell proliferation and timing of embryo implantation, suggesting that ancient viral DNA has been domesticated independently to play a crucial role in early embryonic development in all mammals.

According to senior author Lin He, UC Berkeley professor of molecular and cell biology, the findings highlight an oft-ignored driver of evolution: viruses that integrate into our genome and get repurposed as regulators of host genes, opening up evolutionary options not available before.

"The mouse and humans share 99% of their protein coding genes in their genomes -- we are very similar with each other," He said. "So, what constitutes the differences between mice and humans? One of the major differences is gene regulation -- mice and humans have the same genes, but they can be regulated differently. Transposons have the capacity to generate a lot of gene regulatory diversity and could help us to understand species-specific differences in the world."

Colleague and co-senior author Ting Wang, the Sanford and Karen Loewentheil Distinguished Professor of Medicine in the Department of Genetics at the Washington University School of Medicine in St. Louis, Missouri, agrees.

"The real significance of this story is it tells us how evolution works in the most unexpected manner possible," Wang said. "Transposons were long considered useless genetic material, but they make up such a big portion of the mammalian genome. A lot of interesting studies illustrate that transposons are a driving force of human genome evolution. Yet, this is the first example that I know of where deletion of a piece of junk DNA leads to a lethal phenotype, demonstrating that the function of specific transposons can be essential."

The finding could have implications for human infertility. According to first author Andrew Modzelewski, a UC Berkeley postdoctoral fellow, nearly half of all miscarriages in humans are undiagnosed or don't have a clear genetic component. Could transposons like this be involved?

"If 50% of our genome is non-coding or repetitive -- this dark matter -- it is very tempting to ask the question whether or not human reproduction and the causes of human infertility can be explained by junk DNA sequences," he said.

Embryo implantation

He, the Thomas and Stacey Siebel Distinguished Chair Professor at UC Berkeley, studies the 98% or more of our genome that does not code for proteins. For most of He's career, she has focused on microRNAs and longer pieces of non-coding RNAs, both of which are potent gene regulators. Five years ago, however, her team accidentally discovered a microRNA regulator for a transposon family called MERVL (mouse endogenous retroviral elements) that was involved in cell fate determination of early mouse embryos. The unexpected abundance of transposon transcription in mouse embryos led He's team to investigate the developmental functions of transposons, which have taken up residence in the genomes of nearly every organism on Earth.

In a paper appearing this week in the journal Cell, He and her team identify the key regulatory DNA involved: a piece of a transposon -- a viral promoter -- that has been repurposed as a promoter for a mouse gene that produces a protein involved in cell proliferation in the developing embryo and in the timing of implantation of the embryo. A promoter is a short DNA sequence that is needed upstream of a gene in order for the gene to be transcribed and expressed.

Wild mice use this transposon promoter, called MT2B2, to initiate transcription of the gene Cdk2ap1 specifically in early embryos, producing a short protein "isoform" that increases cell proliferation in the fertilized embryo and speeds its implantation in the uterus. Using CRISPR-EZ, a simple and inexpensive technique that Modzelewski and He developed several years ago, they disabled the MT2B2 promoter and found that mice instead expressed the Cdk2ap1 gene from its default promoter as a longer isoform of the protein, which had the opposite effect: decreased cell proliferation and delayed implantation.

The result of this knockout was the death at birth of about half the pups.

Modzelewski said that the short form of the protein appears to make the many embryos of the mouse implant with a regular spacing within the uterus, preventing crowding. When the promoter is knocked out so that the long form is present only, the embryos implant seemingly randomly, some of them over the cervix, which blocks exit of the fully developed fetus and sometimes kills the mother during the birthing process.

They found that within a 24-hour period prior to embryo implantation, the MT2B2 promoter ramps up expression of the Cdk2ap1 gene so much that the short form of the protein makes up 95% of the two isoforms present in embryos. The long isoform is normally produced later in gestation, when the default promoter upstream of the Cdk2ap1 gene becomes active.

Working with Wanqing Shao, co-first author of the study and a postdoctoral fellow in Wang's group at Washington University, the team searched through published data on preimplantation embryos for eight mammalian species -- human, rhesus monkey, marmoset, mouse, goat, cow, pig and opossum -- to see whether transposons are turned on briefly before implantation in other species. These online data came from a technique called single cell RNA sequencing, or scRNA-seq, which records the levels of messenger RNA in single cells, an indication of which genes are turned on and transcribed. In all cases, they had to retrieve the data on non-coding DNA because it is typically removed before analysis, with the presumption that it's unimportant.
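
As a toy illustration of the kind of quantification such a reanalysis yields, one can estimate the short-isoform fraction from transcript counts assigned to each promoter; the counts below are invented to match the reported 95% figure, not drawn from the study's data.

```python
# Illustrative sketch: estimating the short-isoform fraction of Cdk2ap1 from
# transcript counts in preimplantation-embryo scRNA-seq data. The counts and
# the two-promoter breakdown here are hypothetical.

counts = {
    "MT2B2_promoter_short_isoform": 1900,  # transposon-driven transcripts
    "default_promoter_long_isoform": 100,  # host-promoter transcripts
}

total = sum(counts.values())
short_fraction = counts["MT2B2_promoter_short_isoform"] / total
print(f"Short isoform: {short_fraction:.0%} of Cdk2ap1 transcripts")  # 95%
```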

While transposons are generally specific to individual species -- humans and mice, for example, have largely different sets -- the researchers found that different species-specific transposon families were turned on briefly before implantation in all eight mammals, including the opossum, the only mammal in the group that does not employ a placenta to implant embryos in the uterus.

"What's amazing is that different species have largely different transposons that are expressed in preimplantation embryos, but the global expression profiles of these transposons are nearly identical among all the mammalian species," He said.

Colleague and co-senior author Davide Risso, a former UC Berkeley postdoctoral fellow and now associate professor of statistics at the University of Padua in Italy, developed a method for linking specific transposons to preimplantation genes so as to weed out the thousands of copies of related transposons that exist in the genome. This method is crucial to identifying individual transposon elements with important gene regulatory activity.

"It's interesting to note that the data that we used were mostly based on the previous sequencing technology, called SMART-seq, which covers the full sequence of the RNA molecules. The current popular technique, 10x genomics technology, would not have shown us the different levels of protein isoforms. They're blind to them," Risso said.

Viruses are an evolutionary reservoir

The researchers found that in nearly all of the eight mammalian species, both short and long Cdk2ap1 isoforms occur, but are switched on at different times and in different proportions that correlate with whether embryos implant early, as in mice, or late, as in cows and pigs. Thus, at the protein level, both the short and long isoforms appear conserved, but their expression patterns are species-specific.

"If you have a lot of the short Cdk2ap1 isoform, like mice, you implant very early, while in species like the cow and pig, which have none to very little of the short isoform, it's up to two weeks or longer for implantation," Modzelewski said.

Wang suspects that the promoter that generates the long form of the protein could be the mouse's original promoter, but that a virus that integrated into the genome long ago was later adapted as a regulatory element to produce the shorter form and the opposite effect.

"So, what happened here is a rodent-specific virus came in, and then somehow the host decided, 'OK, I'm going to use you as my promoter to express this shorter Cdk2ap1 isoform.' We see the redundancy that's built into the system, where we can take advantage of whatever nature throws at us and make it useful," he said. "And then, this new promoter happened to be stronger than the old promoter. I think this fundamentally changed the phenotype of rodents; maybe that's what makes them grow faster -- a gift of having a shorter pre-implantation time. So, they probably gained some fitness benefit from this virus."

"Whatever you look at in biology, you're going to see transposons being used, simply because there are just so many sequences," Wang added. "They essentially provide an evolutionary reservoir for selection to act upon."

Read more at Science Daily