Feb 25, 2021

Scientists link star-shredding event to origins of universe's highest-energy particles

 A team of scientists has detected the presence of a high-energy neutrino -- a particularly elusive particle -- in the wake of a star's destruction as it was consumed by a black hole. This discovery, reported in the journal Nature Astronomy, sheds new light on the origins of Ultrahigh Energy Cosmic Rays -- the highest energy particles in the Universe.

The work, which involved researchers from more than two dozen institutions, including New York University and Germany's DESY research center, focused on neutrinos -- subatomic particles that are produced on Earth only in powerful accelerators.

Neutrinos -- as well as the process of their creation -- are hard to detect, making their discovery, along with that of Ultrahigh Energy Cosmic Rays (UHECRs), noteworthy.

"The origin of cosmic high-energy neutrinos is unknown, primarily because they are notoriously hard to pin down," explains Sjoert van Velzen, one of the paper's lead authors and a postdoctoral fellow in NYU's Department of Physics at the time of the discovery. "This result would be only the second time high-energy neutrinos have been traced back to their source."

Previous research by van Velzen, now at the Netherlands' Leiden University, and NYU physicist Glennys Farrar, a co-author of the new Nature Astronomy paper, found some of the earliest evidence of black holes destroying stars in what are now known as Tidal Disruption Events (TDEs). These findings set the stage for determining if TDEs could be responsible for producing UHECRs.

The research reported in Nature Astronomy offered support for this conclusion.

Previously, the IceCube Neutrino Observatory, a National Science Foundation-backed detector located at the South Pole, reported the detection of a neutrino, whose path was later traced by the Zwicky Transient Facility at Caltech's Palomar Observatory.

Specifically, its measurements showed a spatial coincidence of a high-energy neutrino and light emitted after a TDE -- a star consumed by a black hole.

"This suggests these star shredding events are powerful enough to accelerate high-energy particles," van Velzen explains.

"Discovering neutrinos associated with TDEs is a breakthrough in understanding the origin of the high-energy astrophysical neutrinos identified by the IceCube detector at the South Pole whose sources have so far been elusive," adds Farrar, who proposed in a 2009 paper that UHECRs could be accelerated in TDEs. "The neutrino-TDE coincidence also sheds light on a decades old problem: the origin of Ultrahigh Energy Cosmic Rays."

Read more at Science Daily

From melody to language

 In the first few months of their lives, babies cry, babble, gurgle and make a variety of other peculiar sounds. It can be difficult to imagine that they are actually laying the foundations for later speech with these utterances. However, there is a determining element that proves that even their cries can be assigned to a particular language: the speech melody -- or, more accurately, prosody.

"Every language is characterised by specific musical elements, which we call prosody," says Kathleen Wermke. Prosody, in simple terms, is the combination of intonation (melody) and rhythm. Earlier studies have shown that even newborns are able to distinguish different languages, like German or French, using prosodic cues, particularly melody. With the help of these musical elements, infants recognise the respective language long before they are able to perceive its special features such as consonants, vowels or syllables.

Study with more than 67,000 baby sounds

Kathleen Wermke is a professor in the Department of Orthodontics at Würzburg University Hospital and head of the Center for Pre-speech Development and Developmental Disorders. Together with scientists from the USA and New Zealand, she has now examined the vocalisations of a total of 277 infants over the first six months of life in more detail. In total, the team analysed more than 67,500 vocalisations: cries -- so-called hunger crying -- as well as cooing and babbling sounds.

"We have found a clear developmental pattern towards more complexity," Wermke summarises the result of the study, which has now been published in the journal Scientific Reports. According to the study, this increasing degree of complexity is an important building block on the way to language development. According to the research team, these findings do not only significantly improves our understanding of the early preparatory processes for language acquisition, it also makes it possible to identify potential signs of a language development disorder.

Complexity increases over the course of the first six months

In their study, the team distinguished between two types of vocalisations in babies, known technically as cry and non-cry vocalisations. Or, to put it another way: on the one hand, the "communicative" crying uttered in the presence of the mother, which results from discomfort such as hunger or from a desire for contact; on the other, the sounds a baby makes when it feels comfortable and interacts vocally. "The aim of the study was to conduct an objective developmental analysis of prosodic antecedents in the form of melodies in healthy infants from birth to 6 months of age in all their vocalisations," says Wermke. Her hypothesis was that both types of vocalisation show a characteristic developmental increase in complex melodies.

In fact, the evaluation showed that the melodies of spontaneous cries become increasingly complex during the first 180 days of life: simple, single-arc melodies are increasingly replaced by multiple-arc melodies. In other words, the foundation for the rich variety of later intonation patterns in speech is already laid during crying. The development was comparable for utterances in the category of "comfort vocalisations": their degree of complexity also increased, but with a temporary decline at around 140 days of age.

Rapid brain growth is the foundation

"Already at the end of the first month of life, the cry repertoire of the babies studied shows a complex melody in more than half of the cases," says Wermke. From single-arc to multiple-arc melodies in 30 days: this developmental programme is based on an early maturity of the neurophysiological mechanisms underlying melody production. In fact, the brains of newborns also grow tremendously fast during this time, and newborns show amazing coordination between breathing and phonation. Furthermore, the scientists believe that the early occurrence of complex cry melodies indicates that infants have already undergone a kind of training, a "preparatory" intrauterine development before birth, in order to start with the development of melodies immediately after or at the starting signal "birth."

Wermke and her co-authors have an explanation for the slight decrease in complexity between the ages of four and five months: "During this time, infants expand their repertoire of vocalic utterances to include new components that interact with the overall melodic contour, namely vowel- and consonant-like elements," says Kathleen Wermke. At the same time, the larynx and vocal tract are changing, which entails a series of adaptation processes in sound production. In addition, infants also begin to produce their first syllable combinations in babbling during this phase. "This new developmental period evidently causes a temporary 'regression' in melody development to establish vocal development on a higher hierarchical level. Thereafter, the infant begins to intentionally imitate intonation patterns of the surrounding language(s) in consonant-vowel syllable sequences in babbling."

Read more at Science Daily

Like wine, environmental conditions impact flavor of whiskey, study finds

 Flavor differences in whiskey can be discerned based solely on the environment in which the barley used to make the whiskey is grown, a new study co-authored by an Oregon State University researcher found.

This is the first scientific study to find that the environmental conditions, or terroir, in which the barley is grown impact the flavor of whiskey, said Dustin Herb, an author of the study and a courtesy faculty member in the Department of Crop and Soil Science at Oregon State University.

"Terroir is increasingly being used to differentiate and market agricultural products, most commonly wine, as consumers grow more interested in the origins of their food," Herb said. "Understanding terroir is something that involves a lot of research, a lot of time and a lot of dedication. Our research shows that environmental conditions in which the barley is grown have a significant impact."

Herb, who is originally from Lebanon, Oregon, and earned his undergraduate and doctoral degrees from Oregon State, is the only American author of the study, which was published in the journal Foods. The other authors are all from Ireland, where the study was conducted.

Herb's doctoral research at Oregon State with Pat Hayes, a barley breeder in the College of Agricultural Sciences, focused on the contributions of barley to beer flavor. Their research found notable differences in the taste of beers brewed from barley varieties reputed to have distinct flavor qualities.

That research caught the attention of Waterford Distillery. The Irish distillery reached out to Herb, flew him to Ireland and asked him if he could design a study that would attempt to answer the question of whether terroir exists in whiskey. They dubbed it The Whisky Terroir Project. (Whiskey can be spelled with and without an "e.")

Herb designed a study that involved planting two common commercial varieties of barley in Ireland -- Olympus and Laureate -- in two distinct environments, Athy, Co. Kildare, and Bunclody, Co. Wexford, in 2017 and 2018. Athy is an inland site and Bunclody is a coastal site. They were selected in part because they have different soil types and different temperature ranges and rainfall levels during the barley growing season.

The crops of each barley variety at each site in each year were harvested, stored, malted and distilled in a standardized way. Once distilled, the product is called "new make spirit." (It isn't called whiskey until it is matured in a wooden cask for at least three years.)

The researchers used gas chromatography mass spectrometry and the noses of a six-person trained sensory panel to determine which compounds in the barley most contributed to the aroma of the new make spirit.

That analysis, along with further mathematical and statistical analysis, found that the environment in which the barley was grown had a greater contribution to the aroma of the whiskey than the variety of the barley -- a clear indication of the impact terroir has on the new make spirit.

Furthermore, the sensory analysis found distinct differences in the aroma characteristics of the new make spirit from the barley grown in each location. The spirit from Athy was more associated with sweet, cereal/grainy, feinty/earthy, oily finish, soapy, sour, stale and mouldy sensory attributes, while the spirit from Bunclody was more associated with dried fruit and solventy attributes.

"What this does is actually make the farmer and the producer come to the forefront of the product," Herb said. "It gets to the point where we might have more choices and it might provide an opportunity for a smaller brewer or a smaller distiller or a smaller baker to capitalize on their terroir, like we see in the wine industry with a Napa Valley wine, or Willamette Valley wine or a French Bordeaux."

The sensory analysis also found differences in the aromatic profiles between the 2017 and 2018 seasons that were studied.

"This makes us think there might be a vintage aspect to the whiskey like wine, where you buy a 2019 or a 2020 or a 2016," Herb said. "Could the whiskey industry operate in a similar way, where someone is going to seek out a certain vintage of a certain year?"

To answer that question, more research needs to be done, Herb said. That is a project the Whisky Terroir Project plans to tackle: examining flavor changes in the spirits as they mature in casks and to see what happens with the terroir impact.

The team is also scaling up the research to study terroir in commercial-scale barley fields over a five-year period.

In addition to Herb, who also works full-time as a plant breeder at Albany, Oregon-based OreGro, which develops turf and forage products, other authors of the paper are: Maria Kyraleou and Kieran Kilcawley of the Teagasc Food Research Park; Grace O'Reilly and Neil Conway of Waterford Distillery; and Tom Bryan of Boormalt.

Read more at Science Daily

New experiences enhance learning by resetting key brain circuit

 A study of spatial learning in mice shows that exposure to new experiences dampens established representations in the brain's hippocampus and prefrontal cortex, allowing the mice to learn new navigation strategies. The study, published in Nature, was supported by the National Institutes of Health.

"The ability to flexibly learn in new situations makes it possible to adapt to an ever-changing world," noted Joshua A. Gordon, M.D., Ph.D., a senior author on the study and director of the National Institute of Mental Health, part of NIH. "Understanding the neural basis of this flexible learning in animals gives us insight into how this type of learning may become disrupted in humans."

Dr. Gordon co-supervised the research project with Joseph A. Gogos, M.D., Ph.D., and Alexander Z. Harris, M.D., Ph.D., both of Columbia University, New York City.

Whenever we encounter new information, that information must be consolidated into a stable, lasting memory for us to recall it later. A key mechanism in this memory consolidation process is long-term potentiation, which is a persistent strengthening of neural connections based on recent patterns of activity. Although this strengthening of neural connections may be persistent, it can't be permanent, or we wouldn't be able to update memory representations to accommodate new information. In other words, our ability to remember new experiences and learn from them depends on information encoding that is both enduring and flexible.

To understand the specific neural mechanisms that make this plasticity possible, the research team, led by Alan J. Park, Ph.D., of Columbia, examined spatial learning in mice.

Spatial learning depends on a key circuit between the ventral hippocampus (a structure located in the middle of the brain) and the medial prefrontal cortex (located just behind the forehead). Connectivity between these brain structures strengthens over the course of spatial learning. If the connectivity remains at maximum strength, however, it impairs later adaptation to new tasks and rules. The researchers hypothesized that exposure to a new experience may serve as an environmental trigger that dampens established hippocampal-prefrontal connectivity, enabling flexible spatial learning.

In the first task, the researchers trained mice to navigate a maze in a certain way to receive a reward. Some of the mice were then allowed to explore a space they hadn't seen before, while others explored a familiar space. The mice then engaged in a second spatial task, which required that they switch to a new navigation strategy to get a reward.

As expected, all of the mice favored their original navigation strategy at first. But the mice that had explored a new space gradually overcame this bias and successfully learned the new navigation strategy about halfway through the 40-trial training session. When the researchers tested a subset of the mice on the first task again, they found that the novelty-exposed mice were able to switch back to the original strategy, indicating that they updated and chose their strategy according to the task demands.

Additional findings showed that the effects of novelty extended beyond new spaces: Encountering new mice before the second task also enhanced learning of the new reward strategy.

Changes in brain activity throughout training revealed the neuronal mechanisms that drive this novelty-enhanced learning. In rodents, there is a well-defined firing pattern in the hippocampus known as the theta wave, which is thought to play a central role in learning and memory. When Park and coauthors examined recordings from the ventral hippocampus, they found that the theta wave became stronger during exploration of the novel arena and the hour that followed; the theta wave decreased as the mice became familiar with the arena over the next two days. The researchers found that novelty exposure also disrupted encoding of the original navigation strategy, reorganizing the firing pattern of individual neurons in the ventral hippocampus to bring them in sync with the theta wave.

At the same time, neurons in the medial prefrontal cortex showed decreased theta wave synchrony, and correlations between hippocampal activity and prefrontal activity weakened. These and other findings suggest that novelty exposure dampened the synaptic connections between the ventral hippocampus and medial prefrontal cortex, resetting the circuit to allow for subsequent strengthening of connectivity associated with learning.

By triggering this reset, novelty appears to facilitate strategy updating in response to the task's specific reward structure. Machine learning analyses indicated that, following novelty exposure, ventral hippocampal neurons switched encoding from a strategy that predicted reward on the first task to one that predicted reward on the second task. The task-specific information was then relayed to the medial prefrontal neurons, which updated encoding accordingly.

On a chemical level, the neurotransmitter dopamine acts as a key mediator of this plasticity. Several experiments showed that activating dopamine D1-receptors in the ventral hippocampus led to novelty-like effects, including dampened hippocampal-prefrontal connectivity and enhanced learning. Blocking D1-receptors prevented these novelty-induced effects.

Read more at Science Daily

People with SARS-CoV-2 antibodies may have low risk of future infection, study finds

 People who have had evidence of a prior infection with SARS-CoV-2, the virus that causes COVID-19, appear to be well protected against being reinfected with the virus, at least for a few months, according to a newly published study from the National Cancer Institute (NCI). This finding may explain why reinfection appears to be relatively rare, and it could have important public health implications, including decisions about returning to physical workplaces, school attendance, the prioritization of vaccine distribution, and other activities.

For the study, researchers at NCI, part of the National Institutes of Health, collaborated with two health care data analytics companies (HealthVerity and Aetion, Inc.) and five commercial laboratories. The findings were published on Feb. 24 in JAMA Internal Medicine.

"While cancer research and cancer care remain?the?primary?focus of NCI's work, we were eager to lend our expertise in serological sciences to help address the global COVID-19 pandemic, at the request of Congress," said NCI Director Norman E. "Ned" Sharpless, M.D., who was one of the coauthors on the study. "We hope that these results, in combination with those of other studies, will inform future public health efforts and help in setting policy."

"The data from this study suggest that people who have a positive result from a commercial antibody test appear to have substantial immunity to SARS-CoV-2, which means they may be at lower risk for future infection," said Lynne Penberthy, M.D., M.P.H., associate director of NCI's Surveillance Research Program, who led the study. "Additional research is needed to understand how long this protection lasts, who may have limited protection, and how patient characteristics, such as comorbid conditions, may impact protection. We are nevertheless encouraged by this early finding."

Antibody tests -- also known as serology tests -- detect serum antibodies, which are immune system proteins made in response to a specific foreign substance or infectious agent, such as SARS-CoV-2.

This study was launched in an effort to better understand whether, and to what degree, detectable antibodies against SARS-CoV-2 protect people from reinfection with the virus. Working with HealthVerity and Aetion, NCI aggregated and analyzed patient information collected from multiple sources, including five commercial labs (including Quest Diagnostics and Labcorp), electronic medical records, and private insurers. This was done in a way that protects the privacy of an individual's health information and is compliant with relevant patient privacy laws.

The researchers ultimately obtained antibody test results for more than 3 million people who had a SARS-CoV-2 antibody test between Jan. 1 and Aug. 23, 2020. This represented more than 50% of the commercial SARS-CoV-2 antibody tests conducted in the United States during that time. Nearly 12% of these tests were antibody positive; most of the remaining tests were negative, and less than 1% were inconclusive.

About 11% of the seropositive individuals and 9.5% of the seronegative individuals later received a nucleic acid amplification test (NAAT) -- sometimes referred to as a PCR test -- for SARS-CoV-2. The research team looked at what fraction of individuals in each group subsequently had a positive NAAT result, which may indicate a new infection. The study team reviewed NAAT results at several intervals -- 0-30 days, 31-60 days, 61-90 days, and >90 days -- because some people who have recovered from a SARS-CoV-2 infection can still shed viral material (RNA) for up to three months (although they likely do not remain infectious during that entire period).

The team found that, during each interval, between 3% and 4% of the seronegative individuals had a positive NAAT test. But among those who had originally been seropositive, the NAAT test positivity rate declined over time. When the researchers looked at test results 90 or more days after the initial antibody test (when any coronavirus detected by NAAT is likely to reflect a new infection rather than continued virus shedding from the original infection), only about 0.3% of those who had been seropositive had a positive NAAT result -- about one-tenth the rate in those who had been seronegative.
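As a rough back-of-envelope check of that "one-tenth" figure, the sketch below recomputes the relative risk in Python from the approximate rates quoted above (illustrative values only; the study's exact counts are in the paper):

    # Approximate NAAT positivity rates at >90 days, as quoted above
    seronegative_rate = 0.030   # ~3% of antibody-negative individuals later tested NAAT-positive
    seropositive_rate = 0.003   # ~0.3% of antibody-positive individuals later tested NAAT-positive

    # Ratio of the two rates: the relative risk of a later positive NAAT result
    relative_risk = seropositive_rate / seronegative_rate
    print(f"Relative risk: {relative_risk:.2f}")  # prints ~0.10, i.e. about one-tenth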

Although these results support the idea that having antibodies against SARS-CoV-2 is associated with protection from future infection, the authors note important limitations to this study. In particular, the findings come from a scientific interpretation of real-world data, which are subject to biases that may be better controlled for in a clinical trial. For example, it is not known why people who had tested antibody positive went on to have a PCR test. In addition, the duration of protection is unknown; studies with longer follow-up time are needed to determine if protection wanes over time.

To continue to comprehensively address this important research question, NCI is supporting clinical studies that monitor infection rates in large populations of people whose antibody status is known. These are known as "seroprotection" studies. NCI is also sponsoring ongoing studies using real-world data to assess the longer-term effect of antibody positivity on subsequent infection rates.

Read more at Science Daily

Feb 24, 2021

'Jumping genes' repeatedly form new genes over evolution

 In the same way that Lego pieces can be arranged in new ways to build a variety of structures, genetic elements can be mixed and matched to create new genes, according to new research.

A long-proposed mechanism for creating genes, called exon shuffling, works by shuffling functional blocks of DNA sequences into new genes that express proteins.

A study, "Recurrent Evolution of Vertebrate Transcription Factors by Transposase Capture," published Feb. 19 in Science, investigates how genetic elements called transposons, or "jumping genes," are added into the mix during evolution to assemble new genes through exon shuffling.

Transposons, first discovered in the 1940s by Cornell alum and Nobel Prize-winner Barbara McClintock '23, M.A. '25, Ph.D. '27, are abundant components of genomes -- they make up half of human DNA -- and have the ability to hop and replicate selfishly in the genome. Some transposons contain their own genes that code for enzymes called transposase proteins, which cut and paste genetic material from one chromosomal location to another.

The study, which focused on tetrapods (four-limbed vertebrates), is important because it shows that transposons represent an important force in the creation of new genes during evolution. The work also explains how genes critical for human development were born.

"We think it's very likely this mechanism may extend beyond vertebrates and could be more of a fundamental mechanism that occurs in non-vertebrates as well," said first author Rachel Cosby, Ph.D. '19, a postdoctoral researcher at the National Institutes of Health. Cosby is a former graduate student in the lab of senior author Cedric Feschotte, professor in the Department of Molecular Biology and Genetics in the College of Agriculture and Life Sciences.

"You are putting the bricks in in a different way and you construct a whole new thing," Feschotte said. "We are looking at the question of how genes are born. The originality is that we are looking at the role of transposons in creating proteins with novel function in evolution."

In the study, the researchers first mined existing databases for genomes of tetrapods, because genomes for more than 500 species have been fully sequenced. Cosby and colleagues searched for combinations of DNA sequences known to be characteristic of transposons fused to host sequences to find good candidates for study. They then chose genes that evolved relatively recently -- within the past tens of millions of years -- so they could trace the history of each gene's development through the vertebrate tree of life.

Though genes fused with these transposases are relatively rare, the researchers found them all over the vertebrate tree of life. The researchers identified more than 100 distinct genes fused with transposases born in the past 350 million years along different species lineages, including genes in birds, reptiles, frogs, bats and koalas, and a total of 44 genes born this way in the human genome.

Cosby and colleagues selected four recently evolved genes and performed a wide range of experiments in cell culture to understand their functions. They found the proteins derived from these genes are able to bind to specific DNA sequences and turn off gene expression. Such genes are known as transcription factors and act as master regulator genes for development and basic physiology. One such gene, PAX6, is well studied, plays a key role as a master regulator in the formation of eyes in all animals and is highly conserved throughout evolution.

"If you put a PAX6 gene from a mouse into a Drosophila [fruit fly], it works," Feschotte said. Though others have proposed before that PAX6 is derived from a transposase fusion, the researchers in this study further validated the hypothesis.

Cosby and colleagues isolated one of these recently evolved genes in bats, called KRABINER, and then used CRISPR gene-editing technology to delete it from the bat genome and see what genes were affected, before adding it back in. The experiment revealed that when KRABINER was removed, hundreds of genes were dysregulated, and when they restored it, normal functioning returned. The protein expressed by the KRABINER gene bound to other related transposons in the bat genome, Cosby said.

"The experiment revealed that it controls a large network of other genes wired through the past dispersion of related transposons throughout the bat genome -- creating not just a gene but what is known as a gene regulatory network," Feschotte said.

Read more at Science Daily

New study suggests supermassive black holes could form from dark matter

 A new theoretical study has proposed a novel mechanism for the creation of supermassive black holes from dark matter. The international team find that rather than the conventional formation scenarios involving 'normal' matter, supermassive black holes could instead form directly from dark matter in high density regions in the centres of galaxies. The result has key implications for cosmology in the early Universe, and is published in Monthly Notices of the Royal Astronomical Society.

Exactly how supermassive black holes initially formed is one of the biggest problems in the study of galaxy evolution today. Supermassive black holes have been observed as early as 800 million years after the Big Bang, and how they could grow so quickly remains unexplained.

Standard formation models involve normal baryonic matter -- the atoms and elements that make up stars, planets, and all visible objects -- collapsing under gravity to form black holes, which then grow over time. However, the new work investigates the potential existence of stable galactic cores made of dark matter, and surrounded by a diluted dark matter halo, finding that the centres of these structures could become so concentrated that they could also collapse into supermassive black holes once a critical threshold is reached.

According to the model this could have happened much more quickly than other proposed formation mechanisms, and would have allowed supermassive black holes in the early Universe to form before the galaxies they inhabit, contrary to current understanding.

Carlos R. Argüelles, the researcher at Universidad Nacional de La Plata and ICRANet who led the investigation, comments: "This new formation scenario may offer a natural explanation for how supermassive black holes formed in the early Universe, without requiring prior star formation or needing to invoke seed black holes with unrealistic accretion rates."

Another intriguing consequence of the new model is that the critical mass for collapse into a black hole might not be reached for smaller dark matter halos, for example those surrounding some dwarf galaxies. The authors suggest that this then might leave smaller dwarf galaxies with a central dark matter nucleus rather than the expected black hole. Such a dark matter core could still mimic the gravitational signatures of a conventional central black hole, whilst the dark matter outer halo could also explain the observed galaxy rotation curves.

"This model shows how dark matter haloes could harbour dense concentrations at their centres, which may play a crucial role in helping to understand the formation of supermassive black holes," added Carlos.

"Here we've proven for the first time that such core-halo dark matter distributions can indeed form in a cosmological framework, and remain stable for the lifetime of the Universe."

Read more at Science Daily

How did dogs get to the Americas? An ancient bone fragment holds clues

 The history of dogs has been intertwined, since ancient times, with that of the humans who domesticated them.

But how far back does that history go in the Americas, and which route did dogs use to enter this part of the world?

A new study led by the University at Buffalo provides insight into these questions. The research reports that a bone fragment found in Southeast Alaska belongs to a dog that lived in the region about 10,150 years ago. Scientists say the remains -- a piece of a femur -- represent the oldest confirmed remains of a domestic dog in the Americas.

DNA from the bone fragment holds clues about early canine history in this part of the world.

Researchers analyzed the dog's mitochondrial genome, and concluded that the animal belonged to a lineage of dogs whose evolutionary history diverged from that of Siberian dogs as early as 16,700 years ago. The timing of that split coincides with a period when humans may have been migrating into North America along a coastal route that included Southeast Alaska.

The research will be published on Feb. 24 in the Proceedings of the Royal Society B. Charlotte Lindqvist, an evolutionary biologist from UB, was senior author of the study, which included scientists from UB and the University of South Dakota. The findings add to a growing body of knowledge about the migration of dogs into the Americas.

"We now have genetic evidence from an ancient dog found along the Alaskan coast. Because dogs are a proxy for human occupation, our data help provide not only a timing but also a location for the entry of dogs and people into the Americas. Our study supports the theory that this migration occurred just as coastal glaciers retreated during the last Ice Age," says Lindqvist, PhD, associate professor of biological sciences in the UB College of Arts and Sciences. "There have been multiple waves of dogs migrating into the Americas, but one question has been, when did the first dogs arrive? And did they follow an interior ice-free corridor between the massive ice sheets that covered the North American continent, or was their first migration along the coast?"

"The fossil record of ancient dogs in the Americas is incomplete, so any new remains that are found provide important clues," says Flavio Augusto da Silva Coelho, a UB PhD student in biological sciences, and one of the paper's first authors. "Before our study, the earliest ancient American dog bones that had their DNA sequenced were found in the U.S. Midwest."

A surprise finding from a large collection of bones

Lindqvist's team did not set out to study dogs. The scientists came across the femur fragment while sequencing DNA from a collection of hundreds of bones excavated years before in Southeast Alaska by researchers including Timothy Heaton, PhD, professor of earth sciences at the University of South Dakota.

"This all started out with our interest in how Ice Age climatic changes impacted animals' survival and movements in this region," Lindqvist says. "Southeast Alaska might have served as an ice-free stopping point of sorts, and now -- with our dog -- we think that early human migration through the region might be much more important than some previously suspected."

The bone fragment, originally thought to come from a bear, was quite small, but when the DNA was studied, the team realized it was from a dog, Lindqvist says.

After this surprise discovery, the scientists compared the bone's mitochondrial genome to those of other ancient and modern dogs. This analysis showed that the Southeast Alaskan dog shared a common ancestor about 16,000 years ago with American canines that lived before the arrival of European colonizers, Lindqvist says. (Mitochondrial DNA, inherited from the mother, represents a small fraction of an organism's complete DNA, so sequencing a complete nuclear genome could provide further details if that material can be extracted.)

Of interest, carbon isotope analysis on the bone fragment indicates that the ancient Southeast Alaskan dog likely had a marine diet, which may have consisted of foods such as fish and scraps from seals and whales.

The research adds depth to the layered history of how dogs came to populate the Americas. As Lindqvist notes, canines did not arrive all at once. For example, some Arctic dogs arrived later from East Asia with the Thule culture, while Siberian huskies were imported to Alaska during the Gold Rush. Other dogs were brought to the Americas by European colonizers.

The new study sharpens the debate on dog and human migration into the Americas.

"Our early dog from Southeast Alaska supports the hypothesis that the first dog and human migration occurred through the Northwest Pacific coastal route instead of the central continental corridor, which is thought to have become viable only about 13,000 years ago," Coelho notes.

Read more at Science Daily

'Walking' molecule superstructures could help create neurons for regenerative medicine

 Imagine if surgeons could transplant healthy neurons into patients living with neurodegenerative diseases or brain and spinal cord injuries. And imagine if they could "grow" these neurons in the laboratory from a patient's own cells using a synthetic, highly bioactive material that is suitable for 3D printing.

By discovering a new printable biomaterial that can mimic properties of brain tissue, Northwestern University researchers are now closer to developing a platform capable of treating these conditions using regenerative medicine.

A key ingredient to the discovery is the ability to control the self-assembly processes of molecules within the material, enabling the researchers to modify the structure and functions of the systems from the nanoscale to the scale of visible features. The laboratory of Samuel I. Stupp published a 2018 paper in the journal Science which showed that materials can be designed with highly dynamic molecules programmed to migrate over long distances and self-organize to form larger, "superstructured" bundles of nanofibers.

Now, a research group led by Stupp has demonstrated that these superstructures can enhance neuron growth, an important finding that could have implications for cell transplantation strategies for neurodegenerative diseases such as Parkinson's and Alzheimer's disease, as well as spinal cord injury.

"This is the first example where we've been able to take the phenomenon of molecular reshuffling we reported in 2018 and harness it for an application in regenerative medicine," said Stupp, the lead author on the study and the director of Northwestern's Simpson Querrey Institute. "We can also use constructs of the new biomaterial to help discover therapies and understand pathologies."

A pioneer of supramolecular self-assembly, Stupp is also the Board of Trustees Professor of Materials Science and Engineering, Chemistry, Medicine and Biomedical Engineering and holds appointments in the Weinberg College of Arts and Sciences, the McCormick School of Engineering and the Feinberg School of Medicine.

The paper was published today (Feb. 22) in the journal Advanced Science.

Walking molecules and 3D printing

The new material is created by mixing two liquids that quickly become rigid. This rigidity arises from interactions known in chemistry as host-guest complexes, which mimic key-lock interactions among proteins, and from the concentration of these interactions in micron-scale regions through the long-range migration of "walking molecules."

The agile molecules cover a distance thousands of times larger than themselves in order to band together into large superstructures. At the microscopic scale, this migration causes a transformation in structure from what looks like an uncooked chunk of ramen noodles into ropelike bundles.

"Typical biomaterials used in medicine like polymer hydrogels don't have the capabilities to allow molecules to self-assemble and move around within these assemblies," said Tristan Clemons, a research associate in the Stupp lab and co-first author of the paper with Alexandra Edelbrock, a former graduate student in the group. "This phenomenon is unique to the systems we have developed here."

Furthermore, as the dynamic molecules move to form superstructures, large pores open that allow cells to penetrate and interact with bioactive signals that can be integrated into the biomaterials.

Interestingly, the mechanical forces of 3D printing disrupt the host-guest interactions in the superstructures and cause the material to flow, but it can rapidly solidify into any macroscopic shape because the interactions are restored spontaneously by self-assembly. This also enables the 3D printing of structures with distinct layers that harbor different types of neural cells in order to study their interactions.

Signaling neuronal growth

The superstructure and bioactive properties of the material could have vast implications for tissue regeneration. Neurons are stimulated by a protein in the central nervous system known as brain-derived neurotrophic factor (BDNF), which helps neurons survive by promoting synaptic connections and allowing neurons to be more plastic. BDNF could be a valuable therapy for patients with neurodegenerative diseases and spinal cord injuries, but these proteins degrade quickly in the body and are expensive to produce.

One of the molecules in the new material integrates a mimic of this protein that activates its receptor, known as TrkB, and the team found that neurons actively penetrate the large pores and populate the new biomaterial when the mimetic signal is present. This could also create an environment in which neurons differentiated from patient-derived stem cells mature before transplantation.

Now that the team has demonstrated a proof of concept with neurons, Stupp believes the approach could extend into other areas of regenerative medicine by applying different chemical sequences to the material. Simple chemical changes in the biomaterials would allow them to provide signals for a wide range of tissues.

"Cartilage and heart tissue are very difficult to regenerate after injury or heart attacks, and the platform could be used to prepare these tissues in vitro from patient-derived cells," Stupp said. "These tissues could then be transplanted to help restore lost functions. Beyond these interventions, the materials could be used to build organoids to discover therapies or even directly implanted into tissues for regeneration since they are biodegradable."

Read more at Science Daily

A memory without a brain

 Having a memory of past events enables us to make smarter decisions about the future. Researchers at the Max Planck Institute for Dynamics and Self-Organization (MPI-DS) and the Technical University of Munich (TUM) have now identified how the slime mold Physarum polycephalum saves memories -- although it has no nervous system.

The ability to store and recover information gives an organism a clear advantage when searching for food or avoiding harmful environments. Traditionally it has been attributed to organisms that have a nervous system.

A new study authored by Mirna Kramar (MPI-DS) and Prof. Karen Alim (TUM and MPI-DS) challenges this view by uncovering the surprising abilities of a highly dynamic, single-celled organism to store and retrieve information about its environment.

Window into the past

The slime mold Physarum polycephalum has been puzzling researchers for many decades. Existing at the crossroads between the kingdoms of animals, plants and fungi, this unique organism provides insight into the early evolutionary history of eukaryotes -- to which humans also belong.

Its body is a giant single cell made up of interconnected tubes that form intricate networks. This single amoeba-like cell may stretch several centimeters or even meters, and is listed in the Guinness Book of World Records as the largest cell on Earth.

Decision making on the most basic levels of life

The striking abilities of the slime mold to solve complex problems, such as finding the shortest path through a maze, earned it the attribute "intelligent." It intrigued the research community and kindled questions about decision making on the most basic levels of life.

The decision-making ability of Physarum is especially fascinating given that its tubular network constantly undergoes fast reorganization -- growing and disintegrating its tubes -- while completely lacking an organizing center.

The researchers discovered that the organism weaves memories of food encounters directly into the architecture of the network-like body and uses the stored information when making future decisions.

The network architecture as a memory of the past

"It is very exciting when a project develops from a simple experimental observation," says Karen Alim, head of the Biological Physics and Morphogenesis group at the MPI-DS and professor on Theory of Biological Networks at the Technical University of Munich.

The researchers followed the migration and feeding process of the organism and observed a distinct imprint of a food source on the pattern of thicker and thinner tubes of the network long after feeding.

"Given P. polycephalum's highly dynamic network reorganization, the persistence of this imprint sparked the idea that the network architecture itself could serve as memory of the past," says Karen Alim. However, they first needed to explain the mechanism behind the imprint formation.

Decisions are guided by memories

For this purpose, the researchers combined microscopic observations of the adaptation of the tubular network with theoretical modeling. An encounter with food triggers the release of a chemical that travels from the location where food was found throughout the organism and softens the tubes in the network, making the whole organism reorient its migration towards the food.

"The gradual softening is where the existing imprints of previous food sources come into play and where information is stored and retrieved," says first author Mirna Kramar. "Past feeding events are embedded in the hierarchy of tube diameters, specifically in the arrangement of thick and thin tubes in the network."

"For the softening chemical that is now transported, the thick tubes in the network act as highways in traffic networks, enabling quick transport across the whole organism," adds Mirna Kramar. "Previous encounters imprinted in the network architecture thus weigh into the decision about the future direction of migration."

Design based on universal principles

"Given the simplicity of this living network, the ability of Physarum to form memories is intriguing. It is remarkable that the organism relies on such a simple mechanism and yet controls it in such a fine-tuned manner," says Karen Alim.

Read more at Science Daily

Feb 23, 2021

Scientists repair injured spinal cord using patients' own stem cells

 Intravenous injection of bone marrow-derived stem cells (MSCs) in patients with spinal cord injuries led to significant improvement in motor functions, researchers from Yale University and Japan report Feb. 18 in the Journal of Clinical Neurology and Neurosurgery.

For more than half of the patients, substantial improvements in key functions -- such as ability to walk, or to use their hands -- were observed within weeks of stem cell injection, the researchers report. No substantial side effects were reported.

The patients had sustained, non-penetrating spinal cord injuries, in many cases from falls or minor trauma, several weeks prior to implantation of the stem cells. Their symptoms involved loss of motor function and coordination, sensory loss, as well as bowel and bladder dysfunction. The stem cells were prepared from the patients' own bone marrow, via a culture protocol that took a few weeks in a specialized cell processing center. The cells were injected intravenously in this series, with each patient serving as their own control. Results were not blinded and there were no placebo controls.

Yale scientists Jeffery D. Kocsis, professor of neurology and neuroscience, and Stephen G. Waxman, professor of neurology, neuroscience and pharmacology, were senior authors of the study, which was carried out with investigators at Sapporo Medical University in Japan. Key investigators of the Sapporo team, Osamu Honmou and Masanori Sasaki, both hold adjunct professor positions in neurology at Yale.

Kocsis and Waxman stress that additional studies will be needed to confirm the results of this preliminary, unblinded trial. They also stress that this could take years. Despite the challenges, they remain optimistic.

"Similar results with stem cells in patients with stroke increases our confidence that this approach may be clinically useful," noted Kocsis. "This clinical study is the culmination of extensive preclinical laboratory work using MSCs between Yale and Sapporo colleagues over many years."

"The idea that we may be able to restore function after injury to the brain and spinal cord using the patient's own stem cells has intrigued us for years," Waxman said. "Now we have a hint, in humans, that it may be possible."

From Science Daily

Researchers learn that pregnant women pass along protective COVID antibodies to their babies

 Antibodies that guard against COVID-19 can transfer from mothers to babies while in the womb, according to a new study from Weill Cornell Medicine and NewYork-Presbyterian researchers published in the American Journal of Obstetrics and Gynecology.

This discovery, published Jan. 22, adds to growing evidence that suggests that pregnant women who generate protective antibodies after contracting the coronavirus often convey some of that natural immunity to their fetuses. The findings also lend support to the idea that vaccinating mothers-to-be may also have benefits for their newborns.

"Since we can now say that the antibodies pregnant women make against COVID-19 have been shown to be passed down to their babies, we suspect that there's a good chance they could pass down the antibodies the body makes after being vaccinated as well," said Dr. Yawei Jenny Yang, an assistant professor of pathology and laboratory medicine at Weill Cornell Medicine and the study's senior author.

Dr. Yang and her team analyzed blood samples from 88 women who gave birth at NewYork-Presbyterian/Weill Cornell Medical Center between March and May 2020, a time when New York City was the global epicenter of the pandemic. All of the women had COVID-19 antibodies in their blood, indicating that they had contracted the virus at some point even though 58 percent of those women had no symptoms. Furthermore, while antibodies were detected in both symptomatic and asymptomatic women, the researchers observed that the concentration of antibodies was significantly higher in symptomatic women. They also found that the general pattern of antibody response was similar to the response seen in other patients, confirming that pregnant women have the same kind of immune response to the virus as the larger patient population -- something that hadn't previously been known for sure, since a woman's immune system changes throughout pregnancy.

In addition, the vast majority of the babies born to these women -- 78 percent -- had detectable antibodies in their umbilical cord blood. There was no evidence that any of the infants had been directly infected with the virus and all were COVID negative at the time of birth, further indicating that the antibodies had crossed the placenta -- the organ that provides oxygen and nutrients to a growing baby during pregnancy -- into the fetal bloodstream. Newborns with symptomatic mothers also had higher antibody levels than those whose mothers had no COVID symptoms.

This data implies that pregnant women could pass along vaccine-generated antibodies in the same way, potentially shielding both mother and child from future infection. However, it is not yet known exactly how protective these antibodies might be, or how long that protection might last. Dr. Laura Riley, chair of the Department of Obstetrics and Gynecology at Weill Cornell Medicine, obstetrician and gynecologist-in-chief at NewYork-Presbyterian/Weill Cornell and one of the study's co-authors, is still advising pregnant patients who decide to get vaccinated to continue to follow current safety guidelines to prevent the spread of the disease. Dr. Riley, Dr. Yang and their colleagues are leading follow-up investigations that are currently enrolling pregnant women who receive the vaccine, as well as vaccinated mothers who are breastfeeding, to assess the antibody response in those groups after vaccination. That information could help guide maternal vaccination strategies moving forward.

Read more at Science Daily

A speed limit also applies in the quantum world

 Even in the world of the smallest particles with their own special rules, things cannot proceed infinitely fast. Physicists at the University of Bonn have now shown what the speed limit is for complex quantum operations. The study also involved scientists from MIT, the universities of Hamburg, Cologne and Padua, and the Jülich Research Center. The results are important for the realization of quantum computers, among other things.

Suppose you observe a waiter (the lockdown is already history) who on New Year's Eve has to serve an entire tray of champagne glasses just a few minutes before midnight. He rushes from guest to guest at top speed. Thanks to his technique, perfected over many years of work, he nevertheless manages not to spill even a single drop of the precious liquid.

A little trick helps him to do this: While the waiter accelerates his steps, he tilts the tray a bit so that the champagne does not spill out of the glasses. Halfway to the table, he tilts it in the opposite direction and slows down. Only when he has come to a complete stop does he hold it upright again.

Atoms are in some ways similar to champagne. They can be described as waves of matter, which behave not like a billiard ball but more like a liquid. Anyone who wants to transport atoms from one place to another as quickly as possible must therefore be as skillful as the waiter on New Year's Eve. "And even then, there is a speed limit that this transport cannot exceed," explains Dr. Andrea Alberti, who led this study at the Institute of Applied Physics of the University of Bonn.

Cesium atom as a champagne substitute

In their study, the researchers experimentally investigated exactly where this limit lies. They used a cesium atom as a champagne substitute and two laser beams perfectly superimposed but directed against each other as a tray. This superposition, called interference by physicists, creates a standing wave of light: a sequence of mountains and valleys that initially do not move. "We loaded the atom into one of these valleys, and then set the standing wave in motion -- this displaced the position of the valley itself," says Alberti. "Our goal was to get the atom to the target location in the shortest possible time without it spilling out of the valley, so to speak."

The fact that there is a speed limit in the microcosm was already demonstrated theoretically by two Soviet physicists, Leonid Mandelstam and Igor Tamm, more than 60 years ago. They showed that the maximum speed of a quantum process depends on the energy uncertainty, i.e., how "free" the manipulated particle is with respect to its possible energy states: the more energetic freedom it has, the faster it is. In the case of the transport of an atom, for example, the deeper the valley in which the cesium atom is trapped, the more spread out the energies of the quantum states in the valley are, and ultimately the faster the atom can be transported. Something similar can be seen in the example of the waiter: if he only fills the glasses half full (to the chagrin of the guests), he runs less risk that the champagne spills over as he accelerates and decelerates. However, the energetic freedom of a particle cannot be increased arbitrarily. "We can't make our valley infinitely deep -- it would cost us too much energy," stresses Alberti.
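The article does not spell the bound out, but in its standard textbook form the Mandelstam-Tamm limit states that the shortest time in which a quantum system can evolve into a distinguishable (orthogonal) state is, in LaTeX notation,

    \tau_{\min} = \frac{\pi \hbar}{2\, \Delta E}

where \Delta E is the energy uncertainty. This captures exactly the trade-off described above: a deeper trapping valley means a larger \Delta E and hence a shorter minimum transport time, but deepening the valley costs energy.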

Beam me up, Scotty!

The speed limit of Mandelstam and Tamm is a fundamental limit. However, one can only reach it under certain circumstances, namely in systems with only two quantum states. "In our case, for example, this happens when the point of origin and destination are very close to each other," the physicist explains. "Then the matter waves of the atom at both locations overlap, and the atom could be transported directly to its destination in one go, that is, without any stops in between -- almost like the teleportation in the Starship Enterprise of Star Trek."

However, the situation is different when the distance grows to several dozens of matter wave widths as in the Bonn experiment. For these distances, direct teleportation is impossible. Instead, the particle must go through several intermediate states to reach its final destination: The two-level system becomes a multi-level system. The study shows that a lower speed limit applies to such processes than that predicted by the two Soviet physicists: It is determined not only by the energy uncertainty, but also by the number of intermediate states. In this way, the work improves the theoretical understanding of complex quantum processes and their constraints.

The physicists' findings are important not least for quantum computing. The computations that are possible with quantum computers are mostly based on the manipulation of multi-level systems. Quantum states are very fragile, though. They last only for a short span of time, which physicists call the coherence time. It is therefore important to pack as many computational operations as possible into this time. "Our study reveals the maximum number of operations we can perform in the coherence time," Alberti explains. "This makes it possible to make optimal use of it."

Read more at Science Daily

The Milky Way may be swarming with planets with oceans and continents like here on Earth

 Astronomers have long been looking into the vast universe in hopes of discovering alien civilisations. But for a planet to have life, liquid water must be present. The chances of that scenario have seemed impossible to calculate because of the long-standing assumption that planets like Earth got their water by chance, when a large ice asteroid happened to hit the planet.

Now, researchers from the GLOBE Institute at the University of Copenhagen have published an eye-opening study, indicating that water may be present from the very formation of a planet. According to the study's calculations, this is true for Earth, Venus and Mars alike.

"All our data suggest that water was part of Earth's building blocks, right from the beginning. And because the water molecule is frequently occurring, there is a reasonable probability that it applies to all planets in the Milky Way. The decisive point for whether liquid water is present is the distance of the planet from its star," says Professor Anders Johansen from the Centre for Star and Planet Formation who has led the study that is published in the journal Science Advances.

Using a computer model, Anders Johansen and his team have calculated how quickly planets are formed, and from which building blocks. The study indicates that it was millimetre-sized dust particles of ice and carbon -- which are known to orbit around all young stars in the Milky Way -- that 4.5 billion years ago accreted in the formation of what would later become Earth.

"Up to the point where Earth had grown to one percent of its current mass, our planet grew by capturing masses of pebbles filled with ice and carbon. Earth then grew faster and faster until, after five million years, it became as large as we know it today. Along the way, the temperature on the surface rose sharply, causing the ice in the pebbles to evaporate on the way down to the surface so that, today, only 0.1 percent of the planet is made up of water, even though 70 percent of Earth's surface is covered by water," says Anders Johansen, who together with his research team in Lund ten years ago put forward the theory that the new study now confirms.

The theory, called 'pebble accretion', is that planets are formed from pebbles that clump together, with the planet then growing larger and larger.
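To make the timescales above concrete, here is a deliberately crude toy model -- not the study's simulation -- that integrates a commonly quoted pebble-accretion scaling, dM/dt proportional to M^(2/3), with the rate constant tuned (an assumption) so that growth from 1% of an Earth mass to a full Earth mass takes about five million years, as described in the article:

    # Toy pebble-accretion growth curve -- an illustrative sketch only.
    # The rate law dM/dt = k * M**(2/3) is a commonly quoted pebble-accretion
    # scaling; k is tuned here (an assumption) so that growth from 1% of an
    # Earth mass to one Earth mass takes ~5 million years.
    M_earth = 1.0                        # masses in units of Earth masses
    M = 0.01 * M_earth                   # start at 1% of Earth's final mass
    t, dt = 0.0, 1_000.0                 # time in years, 1,000-year steps
    k = 3 * (1 - 0.01 ** (1 / 3)) / 5e6  # analytic tuning for a 5 Myr growth time

    while M < M_earth:
        M += k * M ** (2 / 3) * dt       # growth accelerates as M increases
        t += dt

    print(f"Reached one Earth mass after ~{t / 1e6:.1f} million years")

The M^(2/3) exponent is what makes the growth "faster and faster": each accreted pebble enlarges the region from which the protoplanet can capture more pebbles.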

Anders Johansen explains that the water molecule H2O is found everywhere in our galaxy, and that the theory therefore opens up the possibility that other planets may have been formed in the same way as Earth, Mars and Venus.

"All planets in the Milky Way may be formed by the same building blocks, meaning that planets with the same amount of water and carbon as Earth -- and thus potential places where life may be present -- occur frequently around other stars in our galaxy, provided the temperature is right," he says.

If planets in our galaxy have the same building blocks and the same temperature conditions as Earth, there is also a good chance that they have about the same amount of water and continents as our planet.

Professor Martin Bizzarro, co-author of the study, says: "With our model, all planets get the same amount of water, and this suggests that other planets may have not just the same amount of water and oceans, but also the same amount of continents as here on Earth. It provides good opportunities for the emergence of life."

If, on the other hand, it was random how much water was present on planets, the planets might look vastly different. Some planets would be too dry to develop life, while others would be completely covered by water.

"A planet covered by water would of course be good for maritime beings, but would offer less than ideal conditions for the formation of civilisations that can observe the universe," says Anders Johansen.

Anders Johansen and his research team are looking forward to the next generation of space telescopes, which will offer far better opportunities to observe exoplanets orbiting a star other than the Sun.

Read more at Science Daily

NASA's Mars Perseverance rover provides front-row seat to landing, first audio recording of Red Planet

 New video from NASA's Mars 2020 Perseverance rover chronicles major milestones during the final minutes of its entry, descent, and landing (EDL) on the Red Planet on Feb. 18 as the spacecraft plummeted, parachuted, and rocketed toward the surface of Mars. A microphone on the rover also has provided the first audio recording of sounds from Mars.

From the moment of parachute inflation, the camera system covers the entirety of the descent process, showing some of the rover's intense ride to Mars' Jezero Crater. The footage from high-definition cameras aboard the spacecraft starts 7 miles (11 kilometers) above the surface, showing the supersonic deployment of the most massive parachute ever sent to another world, and ends with the rover's touchdown in the crater.

A microphone attached to the rover did not collect usable data during the descent, but the commercial off-the-shelf device survived the highly dynamic descent to the surface and obtained sounds from Jezero Crater on Feb. 20. About 10 seconds into the 60-second recording, a Martian breeze is audible for a few seconds, as are mechanical sounds of the rover operating on the surface.

"For those who wonder how you land on Mars -- or why it is so difficult -- or how cool it would be to do so -- you need look no further," said acting NASA Administrator Steve Jurczyk. "Perseverance is just getting started, and already has provided some of the most iconic visuals in space exploration history. It reinforces the remarkable level of engineering and precision that is required to build and fly a vehicle to the Red Planet."

Also released Monday was the mission's first panorama of the rover's landing location, taken by the two Navigation Cameras located on its mast. The six-wheeled robotic astrobiologist, the fifth rover the agency has landed on Mars, currently is undergoing an extensive checkout of all its systems and instruments.

"This video of Perseverance's descent is the closest you can get to landing on Mars without putting on a pressure suit," said Thomas Zurbuchen, NASA associate administrator for science. "It should become mandatory viewing for young women and men who not only want to explore other worlds and build the spacecraft that will take them there, but also want to be part of the diverse teams achieving all the audacious goals in our future."

The world's most intimate view of a Mars landing begins about 230 seconds after the spacecraft entered the Red Planet's upper atmosphere at 12,500 mph (20,100 kph). The video opens in black, with the camera lens still covered within the parachute compartment. Within less than a second, the spacecraft's parachute deploys and transforms from a compressed 18-by-26 inch (46-by-66 centimeter) cylinder of nylon, Technora, and Kevlar into a fully inflated 70.5-foot-wide (21.5-meter-wide) canopy -- the largest ever sent to Mars. The tens of thousands of pounds of force that the parachute generates in such a short period place enormous stress on both the parachute and the vehicle.

"Now we finally have a front-row view to what we call 'the seven minutes of terror' while landing on Mars," said Michael Watkins, director of NASA's Jet Propulsion Laboratory in Southern California, which manages the mission for the agency. "From the explosive opening of the parachute to the landing rockets' plume sending dust and debris flying at touchdown, it's absolutely awe-inspiring."

The video also captures the heat shield dropping away after protecting Perseverance from scorching temperatures during its entry into the Martian atmosphere. The downward view from the rover sways gently like a pendulum as the descent stage, with Perseverance attached, hangs from the back shell and parachute. The Martian landscape quickly pitches as the descent stage -- the rover's free-flying "jetpack," which decelerates using rocket engines and then lowers the rover on cables to the surface -- breaks free, its eight thrusters engaging to put distance between it and the now-discarded back shell and the parachute.

Then, 80 seconds and 7,000 feet (2,130 meters) later, the cameras capture the descent stage performing the sky crane maneuver over the landing site -- the plume of its rocket engines kicking up dust and small rocks that have likely been in place for billions of years.

"We put the EDL camera system onto the spacecraft not only for the opportunity to gain a better understanding of our spacecraft's performance during entry, descent, and landing, but also because we wanted to take the public along for the ride of a lifetime -- landing on the surface of Mars," said Dave Gruel, lead engineer for Mars 2020 Perseverance's EDL camera and microphone subsystem at JPL. "We know the public is fascinated with Mars exploration, so we added the EDL Cam microphone to the vehicle because we hoped it could enhance the experience, especially for visually-impaired space fans, and engage and inspire people around the world."

The footage ends with Perseverance's aluminum wheels making contact with the surface at 1.61 mph (2.6 kph), and then pyrotechnically fired blades sever the cables connecting it to the still-hovering descent stage. The descent stage then climbs and accelerates away in the preplanned flyaway maneuver.

"If this were an old Western movie, I'd say the descent stage was our hero riding slowly into the setting Sun, but the heroes are actually back here on Earth," said Matt Wallace, Mars 2020 Perseverance deputy project manager at JPL. "I've been waiting 25 years for the opportunity to see a spacecraft land on Mars. It was worth the wait. Being able to share this with the world is a great moment for our team."

Five commercial off-the-shelf cameras located on three different spacecraft components collected the imagery. Two cameras on the back shell, which encapsulated the rover on its journey, took pictures of the parachute inflating. A camera on the descent stage provided a downward view -- including the top of the rover -- while two on the rover chassis offered both upward and downward perspectives.

The rover team continues its initial inspection of Perseverance's systems and its immediate surroundings. Monday, the team will check out five of the rover's seven instruments and take the first weather observations with the Mars Environmental Dynamics Analyzer instrument. In the coming days, a 360-degree panorama of Jezero by the Mastcam-Z should be transmitted down, providing the highest resolution look at the road ahead.

More About the Mission

A key objective of Perseverance's mission on Mars is astrobiology, including the search for signs of ancient microbial life. The rover will characterize the planet's geology and past climate, pave the way for human exploration of the Red Planet, and be the first mission to collect and cache Martian rock and regolith.

Subsequent NASA missions, in cooperation with ESA (European Space Agency), would send spacecraft to Mars to collect these sealed samples from the surface and return them to Earth for in-depth analysis.

The Mars 2020 Perseverance mission is part of NASA's Moon to Mars exploration approach, which includes Artemis missions to the Moon that will help prepare for human exploration of the Red Planet.

Read more at Science Daily

Feb 22, 2021

First black hole ever detected is more massive than we thought

 New observations of the first black hole ever detected have led astronomers to question what they know about the Universe's most mysterious objects.

Published today in the journal Science, the research shows the system known as Cygnus X-1 contains the most massive stellar-mass black hole ever detected without the use of gravitational waves.

Cygnus X-1 is one of the closest black holes to Earth. It was discovered in 1964 when a pair of Geiger counters were carried on board a sub-orbital rocket launched from New Mexico.

The object was the focus of a famous scientific wager between physicists Stephen Hawking and Kip Thorne, with Hawking betting in 1974 that it was not a black hole. Hawking conceded the bet in 1990.

In this latest work, an international team of astronomers used the Very Long Baseline Array -- a continent-sized radio telescope made up of 10 dishes spread across the United States -- together with a clever technique to measure distances in space.

"If we can view the same object from different locations, we can calculate its distance away from us by measuring how far the object appears to move relative to the background," said lead researcher, Professor James Miller-Jones from Curtin University and the International Centre for Radio Astronomy Research (ICRAR).

"If you hold your finger out in front of your eyes and view it with one eye at a time, you'll notice your finger appears to jump from one spot to another. It's exactly the same principle."

"Over six days we observed a full orbit of the black hole and used observations taken of the same system with the same telescope array in 2011," Professor Miller-Jones said. "This method and our new measurements show the system is further away than previously thought, with a black hole that's significantly more massive."

Co-author Professor Ilya Mandel from Monash University and the ARC Centre of Excellence in Gravitational Wave Discovery (OzGrav) said the black hole is so massive it's actually challenging how astronomers thought they formed.

"Stars lose mass to their surrounding environment through stellar winds that blow away from their surface. But to make a black hole this heavy, we need to dial down the amount of mass that bright stars lose during their lifetimes" he said.

"The black hole in the Cygnus X-1 system began life as a star approximately 60 times the mass of the Sun and collapsed tens of thousands of years ago," he said. "Incredibly, it's orbiting its companion star -- a supergiant -- every five and a half days at just one-fifth of the distance between the Earth and the Sun.

"These new observations tell us the black hole is more than 20 times the mass of our Sun -- a 50 per cent increase on previous estimates."

Xueshan Zhao is a co-author on the paper and a PhD candidate studying at the National Astronomical Observatories -- part of the Chinese Academy of Sciences (NAOC) in Beijing.

"Using the updated measurements for the black hole's mass and its distance away from Earth, I was able to confirm that Cygnus X-1 is spinning incredibly quickly -- very close to the speed of light and faster than any other black hole found to date," she said.

"I'm at the beginning of my research career, so being a part of an international team and helping to refine the properties of the first black hole ever discovered has been a great opportunity."

Next year, the world's biggest radio telescope -- the Square Kilometre Array (SKA) -- will begin construction in Australia and South Africa.

"Studying black holes is like shining a light on the Universe's best kept secret -- it's a challenging but exciting area of research," Professor Miller-Jones said.

"As the next generation of telescopes comes online, their improved sensitivity reveals the Universe in increasingly more detail, leveraging decades of effort invested by scientists and research teams around the world to better understand the cosmos and the exotic and extreme objects that exist.

Read more at Science Daily

Psychological 'signature' for the extremist mind uncovered

 Researchers have mapped an underlying "psychological signature" for people who are predisposed to holding extreme social, political or religious attitudes, and support violence in the name of ideology.

A new study suggests that a particular mix of personality traits and unconscious cognition -- the ways our brains take in basic information -- is a strong predictor for extremist views across a range of beliefs, including nationalism and religious fervour.

These mental characteristics include poorer working memory and slower "perceptual strategies" -- the unconscious processing of changing stimuli, such as shape and colour -- as well as tendencies towards impulsivity and sensation seeking.

This combination of cognitive and emotional attributes predicts the endorsement of violence in support of a person's ideological "group," according to findings published today in Philosophical Transactions of the Royal Society B.

The study also maps the psychological signatures that underpin fierce political conservatism, as well as "dogmatism": people who have a fixed worldview and are resistant to evidence.

Psychologists found that conservatism is linked to cognitive "caution": slow-and-accurate unconscious decision-making, compared to the fast-and-imprecise "perceptual strategies" found in more liberal minds.

Brains of more dogmatic people are slower to process perceptual evidence, but they are more impulsive personality-wise. The mental signature for extremism across the board is a blend of conservative and dogmatic psychologies.

Researchers from the University of Cambridge say that, while still in early stages, this research could help to better identify and support people most vulnerable to radicalisation across the political and religious spectrum.

Approaches to radicalisation policy mainly rely on basic demographic information such as age, race and gender. By adding cognitive and personality assessments, the psychologists created a statistical model that is between four and fifteen times more powerful at predicting ideological worldviews than demographics alone.
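As a rough illustration of what "predictive power" means here, the sketch below fits two linear models on synthetic data -- demographics alone versus demographics plus psychological scores -- and compares their explained variance. The data, weights and column choices are all invented for illustration; only the structure of the comparison mirrors the study:

    # Synthetic illustration of comparing predictive power (explained
    # variance) of demographics alone vs. demographics plus a
    # cognitive/personality "signature". Not the study's data or model.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n = 334                                  # follow-up sample size in the study
    demo = rng.normal(size=(n, 3))           # stand-ins for age, gender, education
    psych = rng.normal(size=(n, 12))         # stand-ins for 12 psychological factors
    # Synthetic ideology score: weakly tied to demographics, strongly to psychology
    ideology = 0.2 * demo[:, 0] + 0.5 * (psych @ rng.normal(size=12)) + rng.normal(size=n)

    both = np.hstack([demo, psych])
    r2_demo = LinearRegression().fit(demo, ideology).score(demo, ideology)
    r2_both = LinearRegression().fit(both, ideology).score(both, ideology)
    print(f"R^2 demographics only: {r2_demo:.2f}; with psychological signature: {r2_both:.2f}")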

"I'm interested in the role that hidden cognitive functions play in sculpting ideological thinking," said Dr Leor Zmigrod, lead author from Cambridge's Department of Psychology.

"Many people will know those in their communities who have become radicalised or adopted increasingly extreme political views, whether on the left or right. We want to know why particular individuals are more susceptible."

"By examining 'hot' emotional cognition alongside the 'cold' unconscious cognition of basic information processing we can see a psychological signature for those at risk of engaging with an ideology in an extreme way," Zmigrod said.

"Subtle difficulties with complex mental processing may subconsciously push people towards extreme doctrines that provide clearer, more defined explanations of the world, making them susceptible to toxic forms of dogmatic and authoritarian ideologies."

The research is published as part of a special issue of the Royal Society journal dedicated to "the political brain" compiled and co-edited by Zmigrod.

It is the latest in a series of studies by Zmigrod investigating the relationship between ideology and cognition. She has previously published findings on links between cognitive "inflexibility" and religious extremism, willingness to self-sacrifice for a cause, and a vote for Brexit.

A 2019 study by Zmigrod showed that this cognitive inflexibility is found in those with extreme attitudes on both the far right and far left of the political divide.

The latest research builds on work from Stanford University in which hundreds of study participants performed 37 different cognitive tasks and took 22 different personality surveys in 2016 and 2017.

Zmigrod and colleagues, including Cambridge psychologist Professor Trevor Robbins, conducted a series of follow-up tests in 2018 on 334 of the original participants, using a further 16 surveys to determine attitudes and strength of feeling towards various ideologies.

Political conservatism and nationalism were related to "caution" in unconscious decision-making, as well as to "temporal discounting" -- when rewards are seen to lose value if delayed -- and slightly reduced strategic information processing in the cognitive domain.

Personality traits for conservatism and nationalism included greater goal-directedness, impulsivity and reward sensitivity, and reduced social risk-taking. Demographics alone had a predictive power of less than 8% for these ideologies, but adding the psychological signature boosted it to 32.5%.

Dogmatism was linked to reduced speed of perceptual "evidence accumulation," and reduced social risk-taking and agreeableness but heightened impulsivity and ethical risk-taking in the personality domain. Religiosity was cognitively similar to conservatism, but with higher levels of agreeableness and "risk perception."

Adding the psychological signatures to demographics increased the predictive power for dogmatism from 1.53% to 23.6%, and religiosity from 2.9% to 23.4%.

Across all ideologies investigated by the researchers, people who endorsed "extreme pro-group action," including ideologically-motivated violence against others, had a surprisingly consistent psychological profile.

The extremist mind -- a mixture of conservative and dogmatic psychological signatures -- is cognitively cautious, slower at perceptual processing and has a weaker working memory. This is combined with impulsive personality traits that seek sensation and risky experiences.

Added Zmigrod: "There appear to be hidden similarities in the minds of those most willing to take extreme measures to support their ideological doctrines. Understanding this could help us to support those individuals vulnerable to extremism, and foster social understanding across ideological divides."

Study participants were all from the United States, 49.4% were female, and ages ranged from 22 to 63.

Part of the study used tests of "executive functions" that help us to plan, organise and execute tasks -- e.g. restacking coloured disks to match guidelines, and keeping a series of categorised words in mind as new ones are added.

Additionally, results from various rapid decision-making tests -- switching between visual stimuli based on evolving instructions, for example -- were fed into computational models, allowing analyses of small differences in perceptual processing.

Researchers took the results of the in-depth, self-reported personality tests and boiled them down to 12 key factors ranging from goal-directedness and emotional control to financial risk-taking.

The examination of social and political attitudes took in a host of ideological positions including patriotism, religiosity and levels of authoritarianism on the left and right.

Read more at Science Daily

Dogs synchronize their behavior with children, but not as much as with adults, study finds

 Dogs synchronize their behavior with the children in their family, but not as much as they do with adults, a new study from Oregon State University researchers found.

The findings are important because there is a growing body of evidence that dogs can help children in many ways, including with social development, increasing physical activity, managing anxiety or as a source of attachment in the face of changing family structures, the researchers said. Yet, very little research has focused on how dogs perceive and socially engage with children.

"The great news is that this study suggests dogs are paying a lot of attention to the kids that they live with," said Oregon State animal behaviorist Monique Udell, the lead author of the study. "They are responsive to them and, in many cases, behaving in synchrony with them, indicators of positive affiliation and a foundation for building strong bonds.

"One interesting thing we have observed is that dogs are matching their child's behavior less frequently than what we have seen between dogs and adult caretakers, which suggests that while they may view children as social companions, there are also some differences that we need to understand better."

The paper was recently published in the journal Animal Cognition. Co-authors were Shelby Wanser, a faculty research assistant in Udell's lab, and Megan MacDonald, an associate professor in Oregon State's College of Public Health and Human Sciences, who studies how motor skills and physically active lifestyles improve the lives of children with and without disabilities.

The researchers recruited 30 youths between the ages of 8 and 17 -- 83% of whom had a developmental disability -- to take part in the study with their family dog. The experiments took place in a large empty room. Color-coded taped lines were placed on the floor, and the children were given instructions on how to walk the lines in a standardized way with their off-leash dog.

The researchers videotaped the experiments and analyzed behavior based on three things: (1) activity synchrony, which means how much time the dog and child were moving or stationary at the same time; (2) proximity, or how much time the dog and child were within 1 meter of each other; and (3) orientation, how much time the dog was oriented in the same direction as the child.
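A minimal sketch of how such measures could be computed from per-second video annotations (the function, the variable names and the 45-degree orientation threshold are illustrative assumptions, not the authors' actual coding scheme):

    # Sketch: behavioral synchrony metrics from per-second annotations.
    # All inputs are NumPy arrays of equal length, one entry per second.
    import numpy as np

    def synchrony_metrics(child_moving, dog_moving, distance_m, angle_diff_deg):
        active_sync = np.mean(child_moving == dog_moving)  # moving/stationary together
        proximity   = np.mean(distance_m <= 1.0)           # within 1 meter of each other
        orientation = np.mean(angle_diff_deg <= 45.0)      # same direction (threshold assumed)
        return active_sync, proximity, orientation

    # Example with 300 seconds of made-up annotations:
    rng = np.random.default_rng(1)
    child = rng.random(300) < 0.6                          # True = child moving
    dog   = rng.random(300) < 0.6                          # True = dog moving
    dist  = rng.random(300) * 3                            # metres apart
    angle = rng.random(300) * 180                          # heading difference in degrees
    print(synchrony_metrics(child, dog, dist, angle))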

They found that dogs exhibited behavioral synchronization with the children at a higher rate than would be expected by chance for all three variables. During their assessments, they found:
 

  • Active synchrony for an average of 60.2% of the time. Broken down further, the dogs were moving an average of 73.1% of the time that the children were moving and were stationary an average of 41.2% of the time the children were stationary.
  • Proximity within 1 meter of each other for an average of 27.1% of the time.
  • Orientation in the same direction for an average of 33.5% of the time.


While child-dog synchrony occurred more often than would be expected by chance, those percentages are all lower than what other researchers have found when studying interactions between dogs and adults in their household. Those studies found "active synchrony" 81.8% of the time with owned dogs and 49.1% with shelter dogs, and "proximity" 72.9% of the time with owned dogs and 39.7% with shelter dogs. No studies on dog-human behavioral synchronization had previously assessed body orientation.

The Oregon State researchers are conducting more research to better understand factors that contribute to differences in levels of synchrony and other aspects of bond quality between dogs and children compared with dogs and adults, including participation in animal-assisted interventions and increasing the child's responsibility for the dog's care.

While research has found dogs can have a lot of positive impacts on a child's life, there are also risks associated with the dog-child relationship, the researchers said. For example, other studies have found dogs are more apt to bite children than adults.

"We still have a lot to learn about the dog-child relationship" Udell said. "We're hoping this research can inform the best ways to shape positive outcomes and mitigate risks by helping children interact with dogs in a manner that improves the relationship and ultimately the welfare of both individuals."

Read more at Science Daily

Advancing understanding of hop genome to aid brewers, medical researchers

 Oregon State University and U.S. Department of Agriculture researchers have significantly expanded the understanding of the hop genome, a development with important implications for the brewing industry and scientists who study the potential medical benefits of hops.

"This research has the unique ability to impact several different fields," said David Hendrix, an associate professor in the Department of Biochemistry and Biophysics and the School of Electrical Engineering and Computer Science at Oregon State. "If you're talking to beer drinkers, they will be excited about the brewing side. If you are talking to the medical field, they are going to be excited about the pharmaceutical potential."

The findings are outlined in a paper just published in the journal The Plant Genome. Hendrix and John Henning, a hop geneticist with the U.S. Department of Agriculture who has an appointment in the Oregon State College of Agricultural Sciences, are co-corresponding authors of the paper.

Demand for hops has surged in recent years as the craft beer industry has grown, fueled by beers, such as India pale ales, that are brewed with a lot of hops. This has led brewers to seek out new varieties of hops. With a better understanding of the hop genome, scientists will have an easier time developing new varieties, which may have qualities such as different flavor profiles or resistance to diseases that infect and damage hop plants.

"This really opens the door wide for breeding hops at the molecular level," Henning said. "We now have a much better understanding of how traits are being controlled and what genes are involved."

Compounds found in hops are also increasingly of interest to medical researchers. For example, scientists at Oregon State have shown that xanthohumol, a natural flavonoid found in hops, may aid in combating cancer and metabolic syndrome. Knowing more about hop genes and how they are regulated creates potential for better understanding how compounds are produced and finding other hop compounds that could improve people's health.

Hops are part of the Cannabaceae family of plants, which also includes hemp and marijuana. In the just-published paper, the Oregon State researchers found gene structures in the hop genome that were similar to cannabidiolic acid synthase, or CBDAS, which produces the precursor of CBD, the compound in cannabis plants that has surged in popularity in recent years because of its potential health benefits.

The Oregon State researchers stressed that their finding doesn't necessarily mean that hops produce CBDA, but it raises questions about the potential to identify new genes involved in the production of different compounds associated with flavoring or therapeutic benefit, and the potential to uncover new compounds in hops.

The researchers sequenced the genome of Cascade, a hop cultivar developed by USDA Agricultural Research Service in the 1960s and credited with helping to launch the craft beer movement. It is the second most widely grown hop variety in the United States today.

The United States is the top hop producing country in the world and Washington, Oregon and Idaho account for nearly all the hop acreage in the United States. In 2019, production of hops in the United States was worth more than $600 million.

Other scientists have attempted to sequence the hop genome, but they have had limited success because it is large -- similar in size to the human genome -- and complex, Hendrix said. The current research was made possible in part by new genome sequencing and assembly technology developed by Pacific Biosciences of California.

"The previous genomes were basically broken up," Hendrix said. "They were sequencing a lot of the genes, but they were isolated islands of the genome and they were not really getting the full context of what was going on in those islands. We were able to reveal a more complete and continuous genomic sequence."

In addition to Hendrix and Henning, the lead author of the paper was Lillian K. Padgitt-Cobb, and co-authors are Jackson Wells, Brent Kronmiller, Justin Elser and Pankaj Jaiswal, all of Oregon State; Daniel Moore of USDA Agricultural Research Service; and Sarah B. Kingan, Gregory Concepcion, Paul Peluso and David Rank, all of Pacific Biosciences of California.

Read more at Science Daily

Sewage study shows which countries like to party hard

 Despite deaths and hospitalisations linked to many new psychoactive substances (NPS), an international wastewater study led by the University of South Australia shows just how prevalent 'party pills' and 'bath salts' are in different parts of the world.

In a new paper published in Water Research, the world's most comprehensive wastewater analysis of NPS shows the pattern of designer drug use in the 2019/2020 New Year in 14 sites across Australia, New Zealand, China, The Netherlands, Spain, Italy, Norway and the United States.

UniSA analytical chemist Dr Richard Bade says samples were collected over the New Year in each country and shipped to South Australia for analysis.

More than 200 synthetic drugs across all countries were monitored and 16 substances found.
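Cross-country comparisons like these rest on the standard wastewater-based epidemiology calculation, which converts a measured concentration into a population-normalised load. The article does not spell out the study's exact formula, so the sketch below shows the textbook version with hypothetical inputs:

    # Standard wastewater-based epidemiology normalisation -- a sketch of
    # the usual calculation, not necessarily the study's exact formula.
    def population_normalised_load(conc_ng_per_l: float,
                                   flow_l_per_day: float,
                                   population: int) -> float:
        """Drug load in mg/day per 1,000 inhabitants."""
        mg_per_day = conc_ng_per_l * flow_l_per_day / 1e6   # ng -> mg
        return mg_per_day / population * 1000

    # Hypothetical example: 50 ng/L in 200 million litres/day for 500,000 people
    print(population_normalised_load(50, 200e6, 500_000))   # -> 20.0 mg/day/1000 people

Normalising by flow and population is what lets samples from cities of very different sizes be compared on one scale.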

"Of the eight countries studied, only Norway showed no traces of NPS," he says.

New psychoactive substances (NPS) are a range of drugs that have been designed to mimic established illicit drugs, such as cannabis, cocaine, MDMA and LSD.

"The Netherlands recorded the highest usage, followed by Australia, New Zealand and the United States. Spain, Italy and China had the lowest incidence of designer drug use in cities participating in the study."

N-ethylpentylone, which is known to cause fatalities, was seen in Australia, New Zealand and the US. It has previously been detected in forensic samples and at music festivals in Australia and New Zealand.

Another designer drug called mephedrone (often referred to as drone, M-CAT, White Magic and meow meow), was found only in Australia and New Zealand, with the latter country recording a 20-fold spike in usage on New Year's Eve.

"It is a very powerful drug that produces effects similar to those of cocaine and MDMA and is popular among ecstasy and simulant users in Australia and NZ," Dr Bade says.

The Netherlands recorded traces of six of 10 quantifiable drugs. Seven additional recreational drugs were also identified in the samples after screening.

Of these, ketamine (a human and veterinary anaesthetic) and its metabolite, norketamine, were found in every country.

A newer drug on the market -- eutylone -- was seen in Australia, New Zealand, the US and The Netherlands. Warnings were issued in 2020 that this designer drug was being incorrectly marketed as MDMA in New Zealand due to its visual similarity to the latter. High doses of it have been linked to intense and particularly dangerous side effects.

Traces of mitragynine, a drug involved in almost half of NPS-related deaths in 2019, were found only in the United States.

Another synthetic opioid, acetyl fentanyl, was also restricted to US wastewater samples. It, too, is linked to a high number of fatalities in that country.

Of all the drugs, methcathinone was detected in seven countries, followed by N-ethylpentylone and 3-MMC (in three countries each).

"What makes the NPS so dangerous is that they were originally sold as legal alternatives to conventional illicit drugs such as ecstasy and cannabis, suggesting they were safe when, in fact, there was very little information about their toxicity," Dr Bade says.

"Governments soon intervened after hospitalisations and fatalities were linked to these class of drugs with some countries enforcing blanket bans. However, despite these bans, NPS are still synthesised, transported and consumed across the world, often with fatal consequences."

Read more at Science Daily

Feb 21, 2021

Friends fur life help build skills for life

 A new UBC Okanagan study finds children not only reap the benefits of working with therapy dogs -- they enjoy it too.

"Dog lovers often have an assumption that canine-assisted interventions are going to be effective because other people are going to love dogs," says Nicole Harris, who conducted this research while a master's student in the School of Education. "While we do frequently see children improve in therapy dog programs, we didn't have data to support that they enjoyed the time as well."

Harris was the lead researcher in the study that explored how children reacted while participating in a social skill-training program with therapy dogs.

The research saw 22 children from the Okanagan Boys and Girls Club take part in a series of sessions to help them build their social skills. Over six weeks, the children were accompanied by therapy dogs from UBC Okanagan's Building Academic Retention through K9s (BARK) program as they completed lessons.

Each week the children were taught a new skill, such as introducing themselves or giving directions to others. The children would first practice with their assigned therapy dog before running through the exercise with the rest of the group. In the final phase, the children -- accompanied by their new furry friend and volunteer handler -- would practice their new skills with university students located in the building.

"Therapy dogs are often able to reach children and facilitate their growth in surprising ways. We saw evidence of this in the social skills of children when they were paired with a therapy dog," says Dr. John-Tyler Binfet, associate professor in the School of Education and director of BARK. "The dogs helped create a non-threatening climate while the children were learning these new skills. We saw the children practice and hone their social skills with and alongside the dogs."

While the children were learning and practising their new skills, the research team collected data.

"Findings from our observations suggested that canine-assisted social and emotional learning initiatives can provide unique advantages," says Harris. "Our team saw that by interacting with the therapy dogs, the children's moods improved and their engagement in their lessons increased."

In fact, 87 per cent of the research team rated the children as very or extremely engaged during the sessions.

At the end of the six weeks, Harris interviewed eight children, aged 5 to 11 years old, who regularly attended the sessions. Each child indicated the social skill-training program was an enjoyable and positive experience and the dogs were a meaningful and essential part of the program.

One participant noticed that the children behaved better at the sessions than at their regular after-school care program, and they thought it was because the children liked being around the dogs.

Half of the children mentioned ways that they felt the dogs helped with their emotional well-being, with one participant crediting a dog with helping him "become more responsible and control his silliness."

As a full-time elementary school teacher, Harris notes that schools have become increasingly important in helping students develop social and emotional skills, and this research could contribute to the development of future school-based or after-school programs.

"Dogs have the ability to provide many stress-reducing and confidence-boosting benefits to children," says Harris. "It was really heartwarming to see the impact the program had on the kids."

Read more at Science Daily

Boys who play video games have lower depression risk

 Boys who regularly play video games at age 11 are less likely to develop depressive symptoms three years later, finds a new study led by a UCL researcher.

The study, published in Psychological Medicine, also found that girls who spend more time on social media appear to develop more depressive symptoms.

Taken together, the findings demonstrate how different types of screen time can positively or negatively influence young people's mental health, and may also impact boys and girls differently.

Lead author, PhD student Aaron Kandola (UCL Psychiatry) said: "Screens allow us to engage in a wide range of activities. Guidelines and recommendations about screen time should be based on our understanding of how these different activities might influence mental health and whether that influence is meaningful.

"While we cannot confirm whether playing video games actually improves mental health, it didn't appear harmful in our study and may have some benefits. Particularly during the pandemic, video games have been an important social platform for young people.

"We need to reduce how much time children -- and adults -- spend sitting down, for their physical and mental health, but that doesn't mean that screen use is inherently harmful."

Kandola has previously led studies finding that sedentary behaviour (sitting still) appeared to increase the risk of depression and anxiety in adolescents. To gain more insight into what drives that relationship, he and colleagues chose to investigate screen time as it is responsible for much of sedentary behaviour in adolescents. Other studies have found mixed results, and many did not differentiate between different types of screen time, compare between genders, or follow such a large group of young people over multiple years.

The research team from UCL, Karolinska Institutet (Sweden) and the Baker Heart and Diabetes Institute (Australia) reviewed data from 11,341 adolescents who are part of the Millennium Cohort Study, a nationally representative sample of young people who have been involved in research since they were born in the UK in 2000-2002.

The study participants had all answered questions about their time spent on social media, playing video games, or using the internet, at age 11, and also answered questions about depressive symptoms, such as low mood, loss of pleasure and poor concentration, at age 14. The clinical questionnaire measures depressive symptoms and their severity on a spectrum, rather than providing a clinical diagnosis.

In the analysis, the research team accounted for other factors that might have explained the results, such as socioeconomic status, physical activity levels, reports of bullying, and prior emotional symptoms.

The researchers found that boys who played video games most days had 24% fewer depressive symptoms, three years later, than boys who played video games less than once a month, although this effect was only significant among boys with low physical activity levels, and was not found among girls. The researchers say this might suggest that less active boys could derive more enjoyment and social interaction from video games.
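Schematically, an analysis of this kind regresses the age-14 symptom score on the screen-time exposure while adjusting for the confounders listed above. The sketch below uses synthetic data and hypothetical column names purely to show that structure; it is not the study's model or dataset:

    # Schematic of an adjusted regression for a longitudinal cohort.
    # Synthetic data and invented column names -- illustration only.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(42)
    n = 11341                                   # cohort size reviewed in the study
    df = pd.DataFrame({
        "games_daily": rng.integers(0, 2, n),   # played video games most days at age 11
        "ses": rng.normal(size=n),              # socioeconomic status
        "activity": rng.normal(size=n),         # physical activity level
        "bullying": rng.integers(0, 2, n),      # reports of bullying
        "dep11": rng.normal(size=n),            # prior emotional symptoms
    })
    # Synthetic outcome at age 14, built with a small negative gaming effect
    df["dep14"] = (0.5 * df["dep11"] + 0.3 * df["bullying"]
                   - 0.2 * df["games_daily"] + rng.normal(size=n))

    model = smf.ols("dep14 ~ games_daily + ses + activity + bullying + dep11", df).fit()
    print(model.params["games_daily"])          # adjusted association (negative by construction)

Including the age-11 symptom score among the covariates is what lets the comparison speak to symptoms that developed over the three years, rather than differences already present at baseline.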

While their study cannot confirm if the relationship is causal, the researchers say there are some positive aspects of video games which could support mental health, such as problem-solving, and social, cooperative and engaging elements.

There may also be other explanations for the link between video games and depression, such as differences in social contact or parenting styles, which the researchers did not have data for. They also did not have data on hours of screen time per day, so they cannot confirm whether multiple hours of screen time each day could impact depression risks.

The researchers found that girls (but not boys) who used social media most days at age 11 had 13% more depressive symptoms three years later than those who used social media less than once a month, although they did not find an association for more moderate use of social media. Other studies have previously found similar trends, and researchers have suggested that frequent social media use could increase feelings of social isolation.

Screen use patterns between boys and girls may have influenced the findings, as boys in the study played video games more often than girls and used social media less frequently.

The researchers did not find clear associations between general internet use and depressive symptoms in either gender.

Senior author Dr Mats Hallgren (Karolinska Institutet) has conducted other studies in adults finding that mentally-active types of screen time, such as playing video games or working at a computer, might not affect depression risk in the way that more passive forms of screen time appear to do.

Read more at Science Daily