Mar 27, 2021

Ancient genomes trace the origin and decline of the Scythians

Because of their interactions and conflicts with the major contemporaneous civilizations of Eurasia, the Scythians enjoy a legendary status in historiography and popular culture. The Scythians had major influences on the cultures of their powerful neighbors, spreading new technologies such as saddles and other improvements for horse riding. The ancient Greek, Roman, Persian and Chinese empires all left a multitude of sources describing, from their perspectives, the customs and practices of the feared horse warriors that came from the interior lands of Eurasia.

Still, despite evidence from external sources, little is known about Scythian history. Because they left no written language or direct sources of their own, the language or languages they spoke, where they came from, and the extent to which the various cultures spread across such a huge area were in fact related to one another all remain unclear.

The Iron Age transition and the formation of the genetic profile of the Scythians

A new study published in Science Advances by an international team of geneticists, anthropologists and archeologists led by scientists from the Archaeogenetics Department of the Max Planck Institute for the Science of Human History in Jena, Germany, helps illuminate the history of the Scythians with 111 ancient genomes from key Scythian and non-Scythian archaeological cultures of the Central Asian steppe. The results of this study reveal that substantial genetic turnovers were associated with the decline of the long-lasting Bronze Age sedentary groups and the rise of Scythian nomad cultures in the Iron Age. Their findings show that, following the relatively homogeneous ancestry of the late Bronze Age herders, at the turn of the first millennium BCE, influxes from the east, west and south into the steppe formed new admixed gene pools.

The diverse peoples of the Central Asian Steppe

The study goes even further, identifying at least two main sources of origin for the nomadic Iron Age groups. An eastern source likely originated from populations in the Altai Mountains that, during the course of the Iron Age, spread west and south, admixing as they moved. These genetic results match the timing and locations found in the archeological record and suggest an expansion of populations from the Altai area, where the earliest Scythian burials are found, connecting different renowned cultures such as the Saka, the Tasmola and the Pazyryk, found in southern, central and eastern Kazakhstan respectively. Surprisingly, the groups located in the western Ural Mountains descend from a second, separate but simultaneous source. Contrary to the eastern case, this western gene pool, characteristic of the early Sauromatian-Sarmatian cultures, remained largely consistent through the westward spread of the Sarmatian cultures from the Urals into the Pontic-Caspian steppe.

The decline of the Scythian cultures associated with new genetic turnovers

The study also covers the transition period after the Iron Age, revealing new genetic turnovers and admixture events. These events intensified at the turn of the first millennium CE, concurrent with the decline and then disappearance of the Scythian cultures in the Central Steppe. In this case, the new far eastern Eurasian influx is plausibly associated with the spread of the nomad empires of the Eastern steppe in the first centuries CE, such as the Xiongnu and Xianbei confederations, as well as minor influxes from Iranian sources likely linked to the expansion of Persian-related civilization from the south.

Although many of the open questions on the history of the Scythians cannot be solved by ancient DNA alone, this study demonstrates how much the populations of Eurasia have changed and intermixed through time. Future studies should continue to explore the dynamics of these trans-Eurasian connections by covering different periods and geographic regions, revealing the history of connections between west, central and east Eurasia in the remote past and their genetic legacy in present day Eurasian populations.

From Science Daily

Measurable changes in brain activity during first few months of studying a new language

A study with first-time learners of Japanese has measured how brain activity changes after just a few months of studying a new language. The results show that acquiring a new language initially boosts brain activity, which then reduces as language skills improve.

"In the first few months, you can quantitatively measure language-skill improvement by tracking brain activations," said Professor Kuniyoshi L. Sakai, a neuroscientist at the University of Tokyo and first author of the research recently published in Frontiers in Behavioral Neuroscience.

Researchers followed 15 volunteers as they moved to Tokyo and completed introductory Japanese classes for at least three hours each day. All volunteers were native speakers of European languages in their 20s who had previously studied English as children or teenagers, but had no prior experience studying Japanese or traveling to Japan.

Volunteers took multiple choice reading and listening tests after at least eight weeks of lessons and again six to fourteen weeks later. Researchers chose to assess only the "passive" language skills of reading and listening because those can be more objectively scored than the "active" skills of writing and speaking. Volunteers were inside a magnetic resonance imaging (MRI) scanner while taking the tests so that researchers could measure local blood flow around their brain regions, an indicator of neuronal activity.

"In simple terms, there are four brain regions specialized for language . Even in a native, second or third language, the same regions are responsible," said Sakai.

Those four regions are the grammar center and comprehension area in the left frontal lobe, as well as the auditory processing and vocabulary areas in the temporo-parietal lobe. Additionally, the memory areas of the hippocampus and the vision areas of the brain, the occipital lobes, also become active to support the four language-related regions during the tests.

During the initial reading and listening tests, those areas of volunteers' brains showed significant increases in blood flow, revealing that the volunteers were thinking hard to recognize the characters and sounds of the unfamiliar language. Volunteers scored about 45% accuracy on the reading tests and 75% accuracy on the listening tests (random guessing on the multiple choice tests would produce 25% accuracy).

Researchers were able to distinguish between two subregions of the hippocampus during the listening tests. The observed activation pattern fits previously described roles for the anterior hippocampus in encoding new memories and for the posterior hippocampus in recalling stored information.

At the second test several weeks later, volunteers' reading test scores improved to an average of 55%. Their accuracy on the listening tests was unchanged, but they were faster to choose an answer, which researchers interpret as improved comprehension.

Comparing results from the first tests to the second tests, after additional weeks of study, researchers found decreased brain activation in the grammar center and comprehension area during listening tests, as well as in the visual areas of the occipital lobes during the reading tests.

"We expect that brain activation goes down after successfully learning a language because it doesn't require so much energy to understand," said Sakai.

Notably, during the second listening test, volunteers had slightly increased activation of the auditory processing area of their temporal lobes, likely due to an improved "mind's voice" while hearing.

"Beginners have not mastered the sound patterns of the new language, so cannot hold in memory and imagine them well. They are still expending a lot of energy to recognize the speech in contrast to letters or grammar rules," said Sakai.

This pattern of brain activation changes -- a dramatic initial rise during the learning phase and a decline as the new language is successfully acquired and consolidated -- can give experts in the neurobiology of language a biometric tool to assess curricula for language learners or potentially for people regaining language skills lost after a stroke or other brain injury.

"In the future, we can measure brain activations to objectively compare different methods to learn a language and select a more effective technique," said Sakai.

Until an ideal method can be identified, researchers at UTokyo recommend acquiring a language in an immersion-style natural environment like studying abroad, or any way that simultaneously activates the brain's four language regions.

This pattern of brain activation over time in individual volunteers' brains mirrors results from previous research where Sakai and his collaborators worked with native Japanese-speaking 13- and 19-year-olds who learned English in standard Tokyo public school lessons. Six years of study seemed to allow the 19-year-olds to understand the second language well enough that brain activation levels reduced to levels similar to those of their native language.

The recent study confirmed this same pattern of brain activation changes over just a few months, not years, potentially providing encouragement for anyone looking to learn a new language as an adult.

Read more at Science Daily

Mar 26, 2021

Warm water has overlooked importance for cold-water fish, like salmon and trout

Warm river habitats appear to play a larger than expected role in supporting the survival of cold-water fish, such as salmon and trout, a new Oregon State University-led study published today found.

The research has important implications for fish conservation strategies. A common goal among scientists and policymakers is to identify and prioritize habitat for cold-water fish that remains suitably cool during the summer, especially as the climate warms.

This implicitly devalues areas that are seasonally warm, even if they are suitable for fish most of the year, said Jonny Armstrong, lead author of the paper and an ecologist at Oregon State. He called this a "potentially severe blind spot for climate change adaptation."

"Coldwater fish like trout and salmon are the polar bears of river ecosystems -- iconic species that are among the most vulnerable to climate change," Armstrong said. "A huge challenge for conservation is to figure out how to help these fish survive a warmer future. The conclusion is that we should not waste money on warm habitats and instead focus on saving the coldest places, such as high mountain streams, which are already the most pristine parts of basins. Most people agree we should give up on places that are warm in summer, but forget that these places are actually optimal for much of the year."

In the new paper, published in Nature Climate Change, Armstrong and collaborators at Oregon State and several federal agencies, show that warm river habitats, typically lower in basins, provide pulses of growth potential during the spring and fall, so-called shoulder seasons, when the rivers are not at peak summer temperatures. Foraging in these warm habitats can provide fish the needed energy to travel to cooler parts of the river during the summer and to reproduce.

"The synergy between cold water and warm water is really important," said Armstrong, an assistant professor in the Department of Fisheries and Wildlife in the College of Agricultural Sciences. "We're not saying cold water is not important. We're saying that warm portions of basins are also important because they grow fish during the shoulder seasons. Conserving this habitat is critical for unlocking the full potential of rivers to support fisheries.

"In a warmer future, many fish will need fish to take a summer vacation and move to cold places to survive the hottest months of the year. Their ability to do that could often depend on how much energy they can get in the spring and how well they can feed in the fall to bounce back. The places that are stressfully warm in summer are just right in spring and fall, and there is growing evidence that they can fuel fisheries"

For the study, the researchers used data from another team of scientists that used remote sensing technology to obtain river water temperature data across entire landscapes throughout the year. That team compiled data for 14 river basins in Oregon, Washington and Idaho.

The OSU-led team plugged these temperature data into a "bioenergetics model" that predicts fish growth potential based on equations derived from lab studies. This provided new insights into how growth opportunities shift across river basins throughout the year, and how a large fraction of total growth potential can accrue during the spring and autumn in places that are too hot during summer.

To explore how these warm habitats could contribute to fisheries, the team created a simulation model in which virtual rainbow trout were given simple behavior rules and allowed to forage throughout the year in a basin with cold tributaries and a warm, productive main-stem river. Their simulations showed the majority of fish moved into cooler waters in the summer and exhibited meager growth rates. However, outside summer, the simulation showed the fish resided primarily in seasonally warm downstream habitats, which fueled the vast majority of their growth.
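
As a rough illustration of how these two pieces -- a temperature-driven growth model and simple movement rules -- fit together, here is a minimal sketch. The growth curve, temperature regimes and one-move-per-month rule are all invented stand-ins, not the study's actual bioenergetics model or simulation:

```python
import math

def growth_potential(temp_c):
    """Toy stand-in for a bioenergetics model: growth potential rises toward
    an optimum temperature (16 C here) and falls off on either side. The
    curve shape and parameters are illustrative, not the published model."""
    optimum, width = 16.0, 5.0
    return math.exp(-((temp_c - optimum) / width) ** 2)

def monthly_temps(base, amplitude):
    """Sinusoidal annual water-temperature cycle peaking in July (month 6)."""
    return [base + amplitude * math.cos((m - 6) / 12 * 2 * math.pi)
            for m in range(12)]

main_stem = monthly_temps(base=14.0, amplitude=10.0)  # warm, productive: 4-24 C
tributary = monthly_temps(base=9.0, amplitude=5.0)    # cold refuge: 4-14 C

# Simple behaviour rule: each month, the virtual trout occupies whichever
# habitat currently offers the higher growth potential.
growth_by_habitat = {"main stem": 0.0, "tributary": 0.0}
for month in range(12):
    options = {"main stem": growth_potential(main_stem[month]),
               "tributary": growth_potential(tributary[month])}
    choice = max(options, key=options.get)
    growth_by_habitat[choice] += options[choice]

print(growth_by_habitat)
# Most annual growth accrues in the seasonally warm main stem (spring and
# autumn), even though the fish retreats to the cold tributary in summer.
```

Even with these toy numbers, the virtual fish earns the majority of its annual growth in the seasonally warm main stem during the shoulder seasons while sheltering in the cold tributary at midsummer, which is the pattern the study's simulations reported.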

"In conservation, we often judge streams by their summer conditions; this is when we traditionally do field work, and this is the season we focus on when planning for climate change," Armstrong said. "We place value on places that hold fish during summer and devalue those that don't. Our simulation showed why this can be a problem -- the portions of rivers that contribute most to growth may not be the places where fish are found during summer, so they get written off."

The simulations reveal a synergy between seasonally warm and perennially cool habitats: fish that lived in both types of habitat grew much more than fish that were restricted to either habitat alone, Armstrong said.

"We think of things in this binary way -- it's either warm-water habitat or its cold-water habitat," Armstrong said. "And we have definitions for fish -- it's either a warm-water fish or a cold-water fish. But the places we think of as warm are, in fact, cold way more than they are warm."

He then mentioned an example using rivers in Oregon, including the Willamette, a tributary of the Columbia River that runs nearly 200 miles from Eugene to Portland.

"When it's warm enough for humans to swim, it's bad for cold-water fish. But there's only like six weeks of the year where it is comfortable to go swimming in Oregon," Armstrong said. "That speaks to the fact that we write off places because they get too hot through the lens of August. They're actually pretty nice for most of the year if you're a cold-water fish. And fish don't necessarily have to live there in August, just like you don't have to go swimming in the Willamette in December."

This research is continuing in the field at Upper Klamath Lake in Southern Oregon, where Armstrong and a team of researchers are tracking the movement and feeding behavior of redband trout as water temperature changes.

Read more at Science Daily

Dogs (not) gone wild: DNA tests show most 'wild dogs' in Australia are pure dingoes

Almost all wild canines in Australia are genetically more than half dingo, a new study led by UNSW Sydney shows -- suggesting that lethal measures to control 'wild dog' populations are primarily targeting dingoes.

The study, published today in Australian Mammalogy, collates the results from over 5000 DNA samples of wild canines across the country, making it the largest and most comprehensive dingo data set to date.

The team found that 99 per cent of wild canines tested were pure dingoes or dingo-dominant hybrids (that is, a hybrid canine with more than 50 per cent dingo genes).

Of the remaining one per cent, roughly half were dog-dominant hybrids and the other half feral dogs.

"We don't have a feral dog problem in Australia," says Dr Kylie Cairns, a conservation biologist from UNSW Science and lead author of the study. "They just aren't established in the wild.

"There are rare times when a dog might go bush, but it isn't contributing significantly to the dingo population."

The study builds on a 2019 paper by the team that found most wild canines in NSW are pure dingoes or dingo-dominant hybrids. The newer paper looked at DNA samples from past studies across Australia, including more than 600 previously unpublished data samples.

Pure dingoes -- dingoes with no detectable dog ancestry -- made up 64 per cent of the wild canines tested, while an additional 20 per cent were at least three-quarters dingo.

The findings challenge the view that pure dingoes are virtually extinct in the wild -- and call into question the widespread use of the term 'wild dog'.

"'Wild dog' isn't a scientific term -- it's a euphemism," says Dr Cairns.

"Dingoes are a native Australian animal, and many people don't like the idea of using lethal control on native animals.

"The term 'wild dog' is often used in government legislation when talking about lethal control of dingo populations."

The terminology used to refer to a species can influence our underlying attitudes toward it, especially when it comes to native and culturally significant animals.

This language can contribute to other misunderstandings about dingoes, such as the idea that a dingo's ancestry can be judged by the colour of its coat -- which can naturally be sandy, black, white, brindle, tan, patchy, or black and tan.

"There is an urgent need to stop using the term 'wild dog' and go back to calling them dingoes," says Mr Brad Nesbitt, an Adjunct Research Fellow at the University of New England and a co-author on the study.

"Only then can we have an open public discussion about finding a balance between dingo control and dingo conservation in the Australian bush."

Tracing the cause of hybridisation

While the study found dingo-dog hybridisation isn't widespread in Australia, it also identified areas across the country with higher traces of dog DNA than the national average.

Most hybridisation is taking place in southeast Australia -- and particularly in areas that use long-term lethal control, like aerial baiting. This landscape-wide form of lethal control involves dropping meat baits filled with the pesticide sodium fluoroacetate (commonly known as 1080) into forests via helicopter or airplane.

"The pattern of hybridisation is really stark now that we have the whole country to look at," says Dr Cairns.

"Dingo populations are more stable and intact in areas that use less lethal control, like western and northern Australia. In fact, 98 per cent of the animals tested here are pure dingoes.

"But areas of the country that used long-term lethal control, like NSW, Victoria and southern Queensland, have higher rates of dog ancestry."

The researchers suggest that higher human densities (and in turn, higher domestic dog populations) in southeast Australia are likely playing a key part in this hybridisation.

But the contributing role of aerial baiting -- which fractures the dingo pack structure and allows dogs to integrate into the breeding packs -- is something that can be addressed.

"If we're going to aerial bait the dingo population, we should be thinking more carefully about where and when we use this lethal control," she says.

"Avoiding baiting in national parks, and during dingoes' annual breeding season, will help protect the population from future hybridisation."

Protecting the ecosystem

Professor Mike Letnic, senior author of the study and professor of conservation biology, has been researching dingoes and their interaction with the ecosystem for 25 years.

He says they play an important role in maintaining the biodiversity and health of the ecosystem.

"As apex predators, dingoes play a fundamental role in shaping ecosystems by keeping number of herbivores and smaller predators in check," says Prof. Letnic.

"Apex predators' effects can trickle all the way through ecosystems and even extend to plants and soils."

Prof. Letnic's previous research has shown that suppressing dingo populations can lead to a growth in kangaroo numbers, which has repercussions for the rest of the ecosystem.

For example, high kangaroo populations can lead to overgrazing, which in turn damages the soil, changes the face of the landscape and can jeopardise land conservation.

A study published last month found the long-term impacts of these changes are so pronounced they are visible from space.

But despite the valuable role they play in the ecosystem, dingoes are not being conserved across Australia -- unlike many other native species.

"Dingoes are a listed threatened species in Victoria, so they're protected in national parks," says Dr Cairns. "They're not protected in NSW and many other states."

The need for consultation

Dr Cairns, who is also a scientific advisor to the Australian Dingo Foundation, says the timing of this paper is important.

"There is a large amount of funding currently going towards aerial baiting inside national parks," she says. "This funding is to aid bushfire recovery, but aerial wild dog baiting doesn't target invasive animals or 'wild dogs' -- it targets dingoes.

"We need to have a discussion about whether killing a native animal -- which has been shown to have benefits for the ecosystem -- is the best way to go about ecosystem recovery."

Dingoes are known to negatively impact farming by preying on livestock, especially sheep.

The researchers say it's important that these impacts are minimised, but how we manage these issues is deserving of wider consultation -- including discussing non-lethal methods to protect livestock.

"There needs to be a public consultation about how we balance dingo management and conservation," says Dr Cairns. "The first step in having these clear and meaningful conversations is to start calling dingoes what they are.

Read more at Science Daily

The very first structures in the Universe

The very first moments of the Universe can be reconstructed mathematically even though they cannot be observed directly. Physicists from the Universities of Göttingen and Auckland (New Zealand) have greatly improved the ability of complex computer simulations to describe this early epoch. They discovered that a complex network of structures can form in the first trillionth of a second after the Big Bang. The behaviour of these objects mimics the distribution of galaxies in today's Universe. In contrast to today, however, these primordial structures are microscopically small. Typical clumps have masses of only a few grams and fit into volumes much smaller than present-day elementary particles. The results of the study have been published in the journal Physical Review D.

The researchers were able to observe the development of regions of higher density that are held together by their own gravity. "The physical space represented by our simulation would fit into a single proton a million times over," says Professor Jens Niemeyer, head of the Astrophysical Cosmology Group at the University of Göttingen. "It is probably the largest simulation of the smallest area of the Universe that has been carried out so far." These simulations make it possible to calculate more precise predictions for the properties of these vestiges from the very beginnings of the Universe.

Although the computer-simulated structures would be very short-lived and eventually "vaporise" into standard elementary particles, traces of this extreme early phase may be detectable in future experiments. "The formation of such structures, as well as their movements and interactions, must have generated a background noise of gravitational waves," says Benedikt Eggemeier, a PhD student in Niemeyer's group and first author of the study. "With the help of our simulations, we can calculate the strength of this gravitational wave signal, which might be measurable in the future."

It is also conceivable that tiny black holes could form if these structures undergo runaway collapse. If this happens, they could have observable consequences today, or form part of the mysterious dark matter in the Universe. "On the other hand," says co-author Professor Richard Easther from the University of Auckland, "If the simulations predict black holes form, and we don't see them, then we will have found a new way to test models of the infant Universe."

From Science Daily

Controlled scar formation in the brain

When the brain suffers injury or infection, glial cells surrounding the affected site act to preserve the brain's sensitive nerve cells and prevent excessive damage. A team of researchers from Charité -- Universitätsmedizin Berlin have been able to demonstrate the important role played by the reorganization of the structural and membrane elements of glial cells. The researchers' findings, which have been published in Nature Communications, shed light on a new neuroprotective mechanism which the brain could use to actively control damage following neurological injury or disease.

The nervous system lacks the ability to regenerate nerve cells and is therefore particularly vulnerable to injury. Following brain injury or infection, various cells have to work together in a coordinated manner in order to limit damage and enable recovery. 'Astrocytes', the most common type of glial cell found in the central nervous system, play a key role in the protection of surrounding tissues. They form part of a defense mechanism known as 'reactive astrogliosis', which facilitates scar formation, thereby helping to contain inflammation and control tissue damage. Astrocytes can also ensure the survival of nerve cells located immediately adjacent to a site of tissue injury, thereby preserving the function of neuronal networks. The researchers were able to elucidate a new mechanism which explains what processes happen inside the astrocytes and how these are coordinated.

"We were able to show for the first time that the protein 'drebrin' controls astrogliosis," says study lead Prof. Dr. Britta Eickholt, Director of Charité's Institute of Biochemistry and Molecular Biology. "Astrocytes need drebrin in order to form scars and protect the surrounding tissue." By switching off the production of drebrin inside astrocytes, the researchers were able to study its role in brain injury in an animal model. They used electron microscopy and high-resolution light microscopy to investigate cellular changes in the brain, in addition to undertaking real-time investigations using isolated astrocytes in cell culture. "Loss of drebrin results in the suppression of normal astrocyte activation," explains Prof. Eickholt. She adds: "Instead of engaging in defensive reactions, these astrocytes suffer complete loss of function and abandon their cellular identity." Without protective scar formation, normally harmless injuries will spread, and more and more nerve cells will die.

To enable scar formation, drebrin controls the reorganization of the actin cytoskeleton, an internal scaffold responsible for maintaining astrocyte mechanical stability. By doing so, drebrin also induces the formation of long cylindrical membrane structures known as tubular endosomes, which are used in the uptake, sorting and redistribution of surface receptors and are needed for the defensive measures of astrocytes. Summing up the researchers' findings, Prof. Eickholt says: "Our findings also show how drebrin uses the dynamic and versatile cytoskeleton as well as membrane structures to control astrocyte functions which are fundamental to the defense mechanism against injury." She continues: "In particular, the membrane tubules which are formed during this process have not previously been described in this manner, neither in cultured astrocytes nor in the brain."

Read more at Science Daily

Mar 25, 2021

New images reveal magnetic structures near supermassive black hole

A new view of the region closest to the supermassive black hole at the center of the galaxy Messier 87 (M87) has shown important details of the magnetic fields close to the black hole and hints about how powerful jets of material can originate in that region.

A worldwide team of astronomers using the Event Horizon Telescope, a collection of eight telescopes, including the Atacama Large Millimeter/submillimeter Array (ALMA) in Chile, measured a signature of magnetic fields -- called polarization -- around the black hole. Polarization is the orientation of the electric fields in light and radio waves and it can indicate the presence and alignment of magnetic fields.

"We are now seeing the next crucial piece of evidence to understand how magnetic fields behave around black holes, and how activity in this very compact region of space can drive powerful jets," said Monika Mo?cibrodzka, Coordinator of the EHT Polarimetry Working Group and Assistant Professor at Radboud University in the Netherlands.

New images with the EHT and ALMA allowed scientists to map magnetic field lines near the edge of M87's black hole. That same black hole is the first ever to be imaged -- by the EHT in 2019. That image revealed a bright ring-like structure with a dark central region -- the black hole's shadow. The newest images are a key to explaining how M87, 50 million light-years from Earth, can launch energetic jets from its core.

The black hole at M87's center is more than 6 billion times more massive than the Sun. Material drawn inward forms a rotating disk -- called an accretion disk -- closely orbiting the black hole. Most of the material in the disk falls into the black hole, but some surrounding particles escape and are ejected far out into space in jets moving at nearly the speed of light.

"The newly published polarized images are key to understanding how the magnetic field allows the black hole to 'eat' matter and launch powerful jets," said Andrew Chael, a NASA Hubble Fellow at the Princeton Center for Theoretical Science and the Princeton Gravity Initiative in the U.S.

The scientists compared the new images that showed the magnetic field structure just outside the black hole with computer simulations based on different theoretical models. They found that only models featuring strongly magnetized gas can explain what they are seeing at the event horizon.

"The observations suggest that the magnetic fields at the black hole's edge are strong enough to push back on the hot gas and help it resist gravity's pull. Only the gas that slips through the field can spiral inwards to the event horizon," explained Jason Dexter, Assistant Professor at the University of Colorado Boulder and Coordinator of the EHT Theory Working Group.

To make the new observations, the scientists linked eight telescopes around the world -- including ALMA -- to create a virtual Earth-sized telescope, the EHT. The impressive resolution obtained with the EHT is equivalent to that needed to measure the length of a credit card on the surface of the Moon.
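
That comparison is easy to sanity-check with small-angle arithmetic; the numbers below are approximate and purely illustrative:

```python
import math

card_length_m = 0.0856        # long edge of a standard credit card
moon_distance_m = 3.844e8     # mean Earth-Moon distance

def to_uas(rad):
    """Convert radians to microarcseconds."""
    return math.degrees(rad) * 3600 * 1e6

# Small-angle approximation: angle (radians) ~= size / distance.
angle_rad = card_length_m / moon_distance_m
print(f"credit card on the Moon: {to_uas(angle_rad):.0f} microarcseconds")

# Diffraction limit of an Earth-sized dish observing at the EHT's 1.3 mm
# wavelength: roughly 1.22 * wavelength / aperture diameter.
eht_limit_rad = 1.22 * 1.3e-3 / 1.274e7
print(f"Earth-sized dish at 1.3 mm: {to_uas(eht_limit_rad):.0f} microarcseconds")
# Both come out at a few tens of microarcseconds, so the comparison holds.
```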

This resolution allowed the team to directly observe the black hole shadow and the ring of light around it, with the new image clearly showing that the ring is magnetized. The results are published in two papers in the Astrophysical Journal Letters by the EHT collaboration. The research involved more than 300 researchers from multiple organizations and universities worldwide.

A third paper was also published in the same volume of the Astrophysical Journal Letters, based on data from ALMA, led by Ciriaco Goddi, a scientist at Radboud University and Leiden Observatory, the Netherlands.

"The combined information from the EHT and ALMA allowed scientists to investigate the role of magnetic fields from the vicinity of the event horizon to far beyond the core of the galaxy, along its powerful jets extending thousands of light-years," Goddi said.

The National Radio Astronomy Observatory is a facility of the National Science Foundation, operated under cooperative agreement by Associated Universities, Inc.

The EHT collaboration involves more than 300 researchers from Africa, Asia, Europe, North and South America. The international collaboration is working to capture the most detailed black hole images ever obtained by creating a virtual Earth-sized telescope. Supported by considerable international investment, the EHT links existing telescopes using novel systems -- creating a fundamentally new instrument with the highest angular resolving power that has yet been achieved.

The individual telescopes involved are: ALMA, APEX, the Institut de Radioastronomie Millimétrique (IRAM) 30-meter Telescope, the IRAM NOEMA Observatory, the James Clerk Maxwell Telescope (JCMT), the Large Millimeter Telescope (LMT), the Submillimeter Array (SMA), the Submillimeter Telescope (SMT), the South Pole Telescope (SPT), the Kitt Peak Telescope, and the Greenland Telescope (GLT).

The EHT consortium consists of 13 stakeholder institutes: the Academia Sinica Institute of Astronomy and Astrophysics, the University of Arizona, the University of Chicago, the East Asian Observatory, Goethe-Universitaet Frankfurt, Institut de Radioastronomie Millimétrique, Large Millimeter Telescope, Max Planck Institute for Radio Astronomy, MIT Haystack Observatory, National Astronomical Observatory of Japan, Perimeter Institute for Theoretical Physics, Radboud University and the Smithsonian Astrophysical Observatory.

Read more at Science Daily

New study finds false memories can be reversed

Rich false memories of autobiographical events can be planted -- and then reversed, a new paper has found.

The study highlights -- for the first time -- techniques that can correct false recollections without damaging true memories. It is published by researchers from the University of Portsmouth, UK, and the Universities of Hagen and Mainz, Germany.

There is plenty of psychological research which shows that memories are often reconstructed and therefore fallible and malleable. However, this is the first time research has shown that false memories of autobiographical events can be undone.

Studying how memories are created, identified and reversed could be a game changer in police and legal settings, where false memories given as evidence in a courtroom can lead to wrongful convictions.

According to Dr Hartmut Blank, co-author of the research from the University of Portsmouth's Department of Psychology, "believing, or even remembering something that never happened may have severe consequences. In police interrogations or legal proceedings, for instance, it may lead to false confessions or false allegations, and it would be highly desirable, therefore, to reduce the risk of false memories in such settings.

"In this study, we made an important step in this direction by identifying interview techniques that can empower people to retract their false memories."

The researchers recruited 52 participants for a study on 'childhood memories' and, with the help of the participants' parents, implanted two false negative memories that definitely didn't happen but were plausible -- for example, getting lost, running away or being involved in a car accident.

Along with two true events, which had actually happened, participants were persuaded by their parents that all four events were part of their autobiographical memory.

The participants were then asked to recall each event in multiple interview sessions. By the third session, most believed the false events had happened and -- similar to previous research -- about 40 per cent had developed actual false memories of them.

The researchers then attempted to undo the false memories by using two strategies.

The first involved reminding participants that memories may not always be based on people's own experience, but also on other sources such as a photograph or a family member's narrative. They were then asked about the source of each of the four events.

The second strategy involved explaining to them that being asked to repeatedly recall something can elicit false memories. They were asked to revisit their event memories with this in mind.

The result, according to Dr Blank, was that "by raising participants' awareness of the possibility of false memories, urging them to critically reflect on their recollections and strengthening their trust in their own perspective, we were able to significantly reduce their false memories. Moreover, and importantly, this did not affect their ability to remember true events."

Read more at Science Daily

Penguin hemoglobin evolved to meet oxygen demands of diving

Call it the evolutionary march of the penguins.

More than 50 million years ago, the lovable tuxedoed birds began leaving their avian relatives at the shoreline by waddling to the water's edge and taking a dive in the pursuit of seafood.

Webbed feet, flipper-like wings and unique feathers all helped penguins adapt to their underwater excursions. But new research from the University of Nebraska-Lincoln has shown that the evolution of diving is also in their blood, which optimized its capture and release of oxygen to ensure that penguins wouldn't waste their breath while holding it.

Relative to land-dwelling birds, penguin blood is known to contain more hemoglobin: the protein that picks up oxygen from the lungs and transports it through the bloodstream before dropping it off at various tissues. That abundance could partly explain the underwater proficiency of, say, the emperor penguin, which dives deeper than any bird and has been documented holding its breath for more than 30 minutes while preying on krill, fish and squid.

Still, the particulars of their hemoglobin -- and how much it actually evolved to help penguins become fish-gobbling torpedoes that spend up to half of their lives underwater -- remained open questions. So Nebraska biologists Jay Storz and Anthony Signore, who often study the hemoglobin of birds that survive miles above sea level, decided to investigate the birds most adept at diving beneath it.

"There just wasn't a lot of comparative work on blood-oxygen transport as it relates to diving physiology in penguins and their non-diving relatives," said Signore, a postdoctoral researcher in Storz's lab.

Answering those questions meant sketching in the genetic blueprints of two ancient hemoglobins. One belonged to the common ancestor of all penguin species, which began branching from that ancestor about 20 million years ago. The other, dating back roughly 60 million years, resided in the common ancestor of penguins and their closest non-diving relatives -- albatrosses, shearwaters and other flying seabirds. The thinking was simple: Because one hemoglobin originated before the emergence of diving in the lineage, and the other after, any major differences between the two would implicate them as important to the evolution of diving in penguins.

Actually comparing the two was less simple. To start, the researchers literally resurrected both proteins by relying on models that factored in the gene sequences of modern hemoglobins to estimate the sequences of their two ancient counterparts. Signore spliced those resulting sequences into E. coli bacteria, which churned out the two ancient proteins. The researchers then ran experiments to evaluate the performance of each.
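
The study's reconstruction used likelihood models over full phylogenies, but the core idea -- inferring an ancestral sequence from its modern descendants -- can be shown with a much simpler method. The four-taxon tree, the toy alignment and the use of Fitch parsimony below are all illustrative assumptions:

```python
def fitch(column):
    """Fitch parsimony for one alignment column on a fixed four-taxon tree
    ((penguin1, penguin2), (albatross, shearwater)). Returns the set of
    states inferred for the root, i.e. the 'resurrected' residue(s)."""
    p1, p2, alb, shw = (frozenset(aa) for aa in column)
    left = p1 & p2 or p1 | p2          # ancestor of the two penguins
    right = alb & shw or alb | shw     # ancestor of the two flying seabirds
    return left & right or left | right

# A short stretch of an invented 'hemoglobin' alignment, one row per taxon.
alignment = ["VHLS",   # penguin 1
             "VHLT",   # penguin 2
             "AHLT",   # albatross
             "AHLT"]   # shearwater

ancestor = "".join(
    min(states) if len(states) == 1 else "{" + "/".join(sorted(states)) + "}"
    for states in map(fitch, zip(*alignment)))
print(ancestor)  # '{A/V}HLT' -- braces mark sites this toy method leaves ambiguous
```

Real ancestral sequence reconstruction scores every candidate residue under a probabilistic model of substitution across the whole tree, but the output is the same in kind: an inferred ancient sequence that can then be synthesized and expressed, as Signore did in E. coli.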

They found that the hemoglobin from the common ancestor of penguins captured oxygen more readily than did the version present in the blood of the older, non-diving ancestor. That stronger affinity for oxygen would mean less chance of leaving oxygen behind in the lungs, an especially vital issue among semi-aquatic birds needing to make the most of a single breath while hunting or traveling underwater.

Unfortunately, the very strength of that affinity can present difficulties when hemoglobin arrives at tissues starved for the oxygen it's carrying.

"Having a greater hemoglobin-oxygen affinity sort of acts like a stronger magnet to pull more oxygen from the lungs," Signore said. "It's great in that context. But then you're at a loss when it's time to let go."

Any breath-holding benefits gained by picking up extra oxygen, in other words, can be undone if the hemoglobin struggles to relax its iron-clad grip and release its prized cargo. The probability that it will is dictated in part by acidity and carbon dioxide in the blood. Higher levels of either make hemoglobins more likely to loosen up.

As Storz and Signore expected, the hemoglobin of the recent penguin ancestor was more sensitive to its surrounding pH, with its biochemical grip on oxygen loosening more in response to elevated acidity. And that, Signore said, made the hemoglobin more biochemically attuned to the exertion and oxygen needs of the tissues it served.

"It really is a beautiful system, because tissues that are working hard are becoming acidic," he said. "They need more oxygen, and hemoglobin's oxygen affinity is able to shift in response to that acidity to provide more oxygen.

"If pH drops by, say, 0.2 units, the oxygen affinity of penguin hemoglobin is going to decrease by more than would the hemoglobin of their non-diving relatives."

Together, the findings indicate that as penguins took to the seas, their hemoglobin evolved to maximize both the pick-up and drop-off of available oxygen -- especially when it was last inhaled five, or 10, or even 20 minutes earlier. They also illustrate the value of resurrecting proteins that last existed 20, or 40, or even 60 million years ago.

"These results demonstrate how the experimental analysis of ancestral proteins can reveal the mechanisms of biochemical adaptation," Storz said, "and also shed light on how organismal physiology evolved in response to new environmental challenges."

Read more at Science Daily

Older than expected: Teeth reveal the origin of the tiger shark

With a total length of up to 5.5m, the tiger shark is one of the largest predatory sharks known today. This shark is a cosmopolitan species occurring in all oceans worldwide. It is characterized by a striped pattern on its back, which is well marked in juveniles but usually fades in adults.

An international team of researchers led by Julia Türtscher from the University of Vienna examined the fossil record of these apex predators and found that modern tiger sharks are older than previously thought and that several tiger shark species existed in the past, compared to the single species living today. The results of this study are published in the journal Paleobiology.

The fossil history of modern sharks reaches back to the Permian, about 295 million years ago. Complete fossil shark skeletons are very rare -- the skeleton, which consists almost entirely of cartilage, is only preserved under very special circumstances during the fossilization processes. Due to their lifelong, continuous tooth replacement, most extinct sharks are therefore known only from their well-mineralized teeth, which, nonetheless, can provide deep insights into their evolutionary history.

The teeth of the modern tiger shark are unique: they have a broad, double-serrated cutting edge which even allows them to cut through sea turtle shells with ease. Tiger shark teeth have been known in the fossil record for about 56 million years. Based on these fossil teeth, over 22 extinct tiger shark species have been described.

Türtscher's team has now examined the fossil history of the tiger shark and its extinct relatives. With the help of geometric morphometrics, the scientists were able to show that only 5 of the 22 described fossil tiger shark species actually represent valid species. Nevertheless, tiger sharks were more diverse in the past, and only a single species survives today.
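
Geometric morphometrics, the method used here, typically places corresponding landmarks on each tooth and strips out position, size and orientation (Procrustes superimposition) so that only shape differences remain for comparison. A minimal sketch, with invented landmark coordinates:

```python
import numpy as np

def procrustes_distance(shape, reference):
    """Shape difference between two 2-D landmark configurations after
    removing translation, scale and rotation (ordinary Procrustes)."""
    A = shape - shape.mean(axis=0)
    B = reference - reference.mean(axis=0)
    A, B = A / np.linalg.norm(A), B / np.linalg.norm(B)
    U, _, Vt = np.linalg.svd(A.T @ B)   # optimal rotation via SVD
    return np.linalg.norm(A @ (U @ Vt) - B)

# Five invented landmarks per tooth (e.g. crown apex, distal notch, ...).
tooth_a = np.array([[0, 0], [2, 0.5], [3, 2], [1.5, 3], [0.5, 1.5]], float)
rot = np.array([[0.94, -0.34], [0.34, 0.94]])      # ~20-degree rotation
tooth_b = 1.8 * tooth_a @ rot + 5.0                # same shape, new size/pose
tooth_c = tooth_a + [[0, 0], [0.4, 0], [0, -0.5], [0.3, 0.2], [0, 0]]

print(round(float(procrustes_distance(tooth_b, tooth_a)), 3))  # ~0.0
print(round(float(procrustes_distance(tooth_c, tooth_a)), 3))  # clearly > 0
# Near-zero distance: effectively the same tooth shape; larger distances are
# the raw material for deciding which nominal species are truly distinct.
```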

Read more at Science Daily

Mar 24, 2021

Scaled, armored or naked: How does the skin of fish evolve?

Usually scaled, the skin of fish can also be naked or made up of bony plates that form an armour, sometimes even covered with teeth. But how has this skin evolved over the ages? To answer this question, researchers at the University of Geneva (UNIGE), Switzerland, have reconstructed the evolution of the protective skin structures in fish, going back to the common ancestor of ray-finned fish, more than 420 million years ago. They found that only fish that had lost their scales were able to develop a bony armour, and that the protective state of their skin influenced their choice of open water or sea floor habitats. This study, published in the journal Evolution Letters, provides a new explanation for the incredible diversity of this lineage of fish, which includes more than 25,000 species.

Ray-finned fish, such as catfish or goldfish, constitute the most diverse lineage of vertebrates on Earth, with no less than 25,000 species, i.e. half of the planet's vertebrates. "Far from being limited to scales, these fish species can also have completely naked skin or a bony armour, sometimes covered with teeth, as is the case with certain catfish," notes Juan Montoya, a researcher in the Department of Genetics and Evolution at the UNIGE Faculty of Science. But how did the protective structure of the skin evolve in these fish?

A family tree that goes back 420 million years

The researchers used an evolutionary tree of fish that lists 11,600 species. "In order to reconstruct the ancestral characteristics of the species, we worked in parallel with a second tree of 304 species, which precisely establishes the links of relationship," explains Alexandre Lemopoulos, a researcher in the Department of Genetics and Evolution at the Faculty of Science of the UNIGE. They asked themselves two questions: What type of protection do the fish have on their skin? And do they live in the open water or on the seabed?

Using mathematical models, they reconstructed the most likely ancestral state and, as they went up the family tree, they reconstructed the transitions between the three skin types and observed whether these had conditioned their habitat. "We were able to go back to the first ancestor of ray-finned fish, more than 420 million years ago, who had scales," enthuses Juan Montoya.
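
The logic of such an ancestral-state reconstruction can be illustrated with a small parsimony calculation. Everything below is invented for illustration -- the real analysis fitted likelihood models to hundreds of species -- but the infinite costs encode the one-way transitions the study reports: scales can be lost but not regained, and armour arises only on naked skin:

```python
import math

STATES = ("scaled", "naked", "armoured")

# Transition costs along a branch; an infinite cost forbids a change outright.
COST = {("scaled", "scaled"): 0, ("scaled", "naked"): 1,
        ("scaled", "armoured"): math.inf,
        ("naked", "naked"): 0, ("naked", "armoured"): 1,
        ("naked", "scaled"): math.inf,
        ("armoured", "armoured"): 0, ("armoured", "naked"): math.inf,
        ("armoured", "scaled"): math.inf}

def sankoff(node):
    """Sankoff (weighted) parsimony over a nested-tuple tree whose leaves
    are observed tip states. Returns, for each possible state of this node,
    the minimum total transition cost in the subtree below it."""
    if isinstance(node, str):                    # a tip with an observed state
        return {s: (0 if s == node else math.inf) for s in STATES}
    children = [sankoff(child) for child in node]
    return {s: sum(min(COST[s, t] + child[t] for t in STATES)
                   for child in children)
            for s in STATES}

# Invented four-tip tree: one scaled lineage, plus naked and armoured lineages.
tree = (("scaled", ("naked", "armoured")), ("naked", "armoured"))
root = sankoff(tree)
print(min(root, key=root.get))   # -> 'scaled', matching the reconstructed
                                 #    scaled ancestor of ray-finned fish
```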

Only naked fish can develop armour

By analysing the transitional stages, the Geneva researchers found several lineages of fish that lost their scales, but at different positions in the tree. "There is therefore no temporal coincidence in this evolution," emphasises Alexandre Lemopoulos. Moreover, once a lineage of fish has lost its scales, it cannot regain them. "On the other hand, some of these naked fish subsequently developed bony plates covering part or all of their body, forming a solid armour," points out Juan Montoya. "We now need to discover the underlying genetic mechanism, which probably no longer allows a return to the scale stage, but makes it possible to build a compensatory external skeleton." Thus, only naked fish were able to build up this armour. "It does not seem possible to go directly from a scaly skin to a cuirassed skin, nor to have a mixture of these two structures," he says.

Skin type determines where fish live

The researchers also observed that the change in skin type influenced where fish settled. "Several species of fish that have lost their scales have left the open waters in which they lived for the seabed, most likely finding an advantage in this new environment," explains Alexandre Lemopoulos. This is a pre-adaptation: the fish lose their scales, change environment and find advantages. As this sequence was repeated independently in several groups of fish, the researchers deduce that a skin without scales offers a real advantage for living on the bottom. "It should be noted that once a lineage of fish establishes itself on the seabed, it no longer returns to the open water, even if it subsequently develops a bony armour," he continues.

Two hypotheses seem to explain this 'move': respiration and immune defence. "Fish breathe through their gills, but also through their skin. Bare skin improves gas exchange in poorly oxygenated water by increasing the respiratory surface," suggests Alexandre Lemopoulos. Furthermore, recent studies have shown that immune defence against viruses and bacteria, which are abundant on the seabed, is more effective when the skin has no scales.

Read more at Science Daily

Photosynthesis could be as old as life itself

Researchers find that the earliest bacteria had the tools to perform a crucial step in photosynthesis, changing how we think life evolved on Earth.

The finding also challenges expectations for how life might have evolved on other planets. The evolution of photosynthesis that produces oxygen is thought to be the key factor in the eventual emergence of complex life. This was thought to take several billion years to evolve, but if in fact the earliest life could do it, then other planets may have evolved complex life much earlier than previously thought.

The research team, led by scientists from Imperial College London, traced the evolution of key proteins needed for photosynthesis back to possibly the origin of bacterial life on Earth. Their results are published and freely accessible in BBA -- Bioenergetics.

Lead researcher Dr Tanai Cardona, from the Department of Life Sciences at Imperial, said: "We had previously shown that the biological system for performing oxygen-production, known as Photosystem II, was extremely old, but until now we hadn't been able to place it on the timeline of life's history. Now, we know that Photosystem II shows patterns of evolution that are usually only attributed to the oldest known enzymes, which were crucial for life itself to evolve."

Photosynthesis, which converts sunlight into energy, can come in two forms: one that produces oxygen, and one that doesn't. The oxygen-producing form is usually assumed to have evolved later, particularly with the emergence of cyanobacteria, or blue-green algae, around 2.5 billion years ago.

While some research has suggested pockets of oxygen-producing (oxygenic) photosynthesis may have been around before this, it was still considered to be an innovation that took at least a couple of billion years to evolve on Earth.

The new research finds that enzymes capable of performing the key process in oxygenic photosynthesis -- splitting water into hydrogen and oxygen -- could actually have been present in some of the earliest bacteria. The earliest evidence for life on Earth is over 3.4 billion years old and some studies have suggested that the earliest life could well be older than 4.0 billion years old.

Like the evolution of the eye, the first version of oxygenic photosynthesis may have been very simple and inefficient; as the earliest eyes sensed only light, the earliest photosynthesis may have been very inefficient and slow.

On Earth, it took more than a billion years for bacteria to perfect the process leading to the evolution of cyanobacteria, and two billion years more for animals and plants to conquer the land. However, the fact that oxygen production was present at all so early on means that in other environments, such as on other planets, the transition to complex life could have taken much less time.

The team made their discovery by tracing the 'molecular clock' of key photosynthesis proteins responsible for splitting water. This method estimates the rate of evolution of proteins by looking at the time between known evolutionary moments, such as the emergence of different groups of cyanobacteria or land plants, which carry a version of these proteins today. The calculated rate of evolution is then extended back in time, to see when the proteins first evolved.
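
In miniature, the molecular-clock procedure amounts to fitting a rate through dated calibration points and extending the line back in time. A sketch, with invented calibration numbers rather than the study's data:

```python
import numpy as np

# (age of a dated evolutionary event in billions of years, protein divergence
#  accumulated since then in substitutions per site) -- invented numbers.
calibrations = np.array([
    [0.45, 0.12],    # e.g. a land-plant lineage split
    [1.00, 0.27],    # e.g. a split between cyanobacterial groups
    [2.50, 0.70],    # e.g. crown cyanobacteria
])

ages, divergences = calibrations[:, 0], calibrations[:, 1]
rate, intercept = np.polyfit(ages, divergences, 1)   # subs/site per Gyr

# Extrapolate: at what age does the fitted line reach the divergence that
# separates the reconstructed ancestral protein from its modern descendants?
total_divergence = 1.05                               # invented
origin_gyr = (total_divergence - intercept) / rate
print(f"rate {rate:.2f} subs/site/Gyr -> origin about {origin_gyr:.1f} Gyr ago")
```

The real analysis works with many proteins, relaxed (rate-varying) clocks and uncertainty intervals, but the underlying reasoning is this calibrate-then-extrapolate step.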

They compared the evolution rate of these photosynthesis proteins to that of other key proteins in the evolution of life, including those that form energy storage molecules in the body and those that translate DNA sequences into RNA, which is thought to have originated before the ancestor of all cellular life on Earth. They also compared the rate to events known to have occurred more recently, when life was already varied and cyanobacteria had appeared.

The photosynthesis proteins showed nearly identical patterns of evolution to the oldest enzymes, stretching far back in time, suggesting they evolved in a similar way.

First author of the study Thomas Oliver, from the Department of Life Sciences at Imperial, said: "We have used a technique called Ancestral Sequence Reconstruction to predict the protein sequences of ancestral photosynthetic proteins. These sequences give us information on how the ancestral Photosystem II would have worked and we were able to show that many of the key components required for oxygen evolution in Photosystem II can be traced to the earliest stages in the evolution of the enzyme."

Knowing how these key photosynthesis proteins evolve is not only relevant for the search for life on other planets, but could also help researchers find strategies to use photosynthesis in new ways through synthetic biology.

Read more at Science Daily

Effective Field Theories and the nature of the universe

Effective Field Theories were introduced to simplify the mathematics involved in unifying interactions into the Standard Model of particle physics. An article in EPJ H presents Nobel Laureate Steven Weinberg's recent lecture on the development of these theories.

What is the world made of? This question, which goes back millennia, was revisited by theoretical physicist Steven Weinberg from the University of Texas in Austin, TX, USA in the first of an international seminar series, 'All Things EFT'. Weinberg's seminar has now been published as an article in the journal EPJ H.

And Weinberg is well placed to discuss both Effective Field Theories (EFTs) and the nature of the Universe, as he shared the 1979 Nobel Prize for Physics for developing a theory to unify the weak and electromagnetic interactions between elementary particles. This fed into the development of the widely used Standard Model of particle physics that unifies these two forces with the strong interaction.

The introduction to the article describes Weinberg as the 'pioneer' of EFTs. In his wide-ranging talk, Weinberg sets out the early history of EFTs from a personal perspective and describes some implications for future research.

Briefly, an EFT is a type of theory or approximation that describes a physical phenomenon at given length or energy scales, while averaging over shorter length or higher energy scales. Weinberg describes how the unifying Standard Model came to be seen as a valid approximation to a more fundamental theory that will likely take over at the highest energies, such as string theory.
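
In schematic form, an EFT Lagrangian is often written as the renormalizable terms plus a tower of local operators whose coefficients absorb the effects of physics above a heavy scale Λ. The expression below is a generic textbook illustration, not a formula from Weinberg's lecture:

```latex
% Renormalizable (dimension <= 4) terms plus higher-dimensional operators
% O_i, suppressed by powers of the heavy scale Lambda:
\mathcal{L}_{\mathrm{EFT}} \;=\; \mathcal{L}_{d \le 4}
    \;+\; \sum_{d_i > 4} \frac{c_i}{\Lambda^{\,d_i - 4}}\,\mathcal{O}_i
```

At energies E well below Λ, an operator of dimension d_i contributes at order (E/Λ)^(d_i - 4), so only finitely many operators matter at any given precision -- which is what makes the Standard Model a useful approximation to whatever deeper theory lies above it.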

He remembers how physicists of the 1950s and 1960s had difficulty linking quantum field theory to the strong interaction. Eventually, he and others produced a standardised methodology that could fit observed data at least as well as the rather cumbersome mathematics that was being used. These ideas can be generalised; eventually, he states, 'all [physicists'] theories will survive as approximations to a future theory'.

Read more at Science Daily

How humans develop larger brains than other apes

A new study is the first to identify how human brains grow much larger, with three times as many neurons, compared with chimpanzee and gorilla brains. The study, led by researchers at the Medical Research Council (MRC) Laboratory of Molecular Biology in Cambridge, UK, identified a key molecular switch that can make ape brain organoids grow more like human organoids, and vice versa.

The study, published in the journal Cell, compared 'brain organoids' -- 3D tissues grown from stem cells which model early brain development -- that were grown from human, gorilla and chimpanzee stem cells.

Similar to actual brains, the human brain organoids grew a lot larger than the organoids from other apes.

Dr Madeline Lancaster, from the MRC Laboratory of Molecular Biology, who led the study, said: "This provides some of the first insight into what is different about the developing human brain that sets us apart from our closest living relatives, the other great apes. The most striking difference between us and other apes is just how incredibly big our brains are."

During the early stages of brain development, neurons are made by stem cells called neural progenitors. These progenitor cells initially have a cylindrical shape that makes it easy for them to split into identical daughter cells with the same shape.

The more times the neural progenitor cells multiply at this stage, the more neurons there will be later.

As the cells mature and slow their multiplication, they elongate, forming a shape like a stretched ice-cream cone.

Previously, research in mice had shown that their neural progenitor cells mature into a conical shape and slow their multiplication within hours.

Now, brain organoids have allowed researchers to uncover how this development happens in humans, gorillas and chimpanzees.

They found that in gorillas and chimpanzees this transition takes a long time, occurring over approximately five days.

Human progenitors were even more delayed in this transition, taking around seven days. The human progenitor cells maintained their cylinder-like shape for longer than those of other apes, and during this time they split more frequently, producing more cells.

This difference in the speed of transition from neural progenitors to neurons means that the human cells have more time to multiply. This could be largely responsible for the approximately three-fold greater number of neurons in human brains compared with gorilla or chimpanzee brains.
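The arithmetic here is simple exponential growth: every extra round of symmetric division doubles the progenitor pool, so a couple of extra days of multiplication compound into a several-fold difference in neuron numbers. Below is a toy calculation, assuming a fixed, invented division interval rather than any rate measured in the study:

```python
# Toy model (illustrative only): exponential expansion of neural progenitors
# before the cylinder-to-cone shape transition. The 30-hour division interval
# is an assumed parameter, not a value measured in the study.

def progenitor_pool(transition_day: float, hours_per_division: float = 30.0,
                    initial_cells: int = 1000) -> float:
    """Progenitor count at the shape transition, assuming symmetric divisions
    at a fixed interval until the transition day."""
    divisions = transition_day * 24.0 / hours_per_division
    return initial_cells * 2 ** divisions

ape = progenitor_pool(transition_day=5)    # gorilla/chimp organoids: ~5 days
human = progenitor_pool(transition_day=7)  # human organoids: ~7 days
print(f"human/ape progenitor ratio: {human / ape:.1f}x")
```

With the assumed 30-hour cycle, the two extra days give a ratio of about 3, in line with the three-fold difference reported, though the real dynamics are of course more complicated.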

Dr Lancaster said: "We have found that a delayed change in the shape of cells in the early brain is enough to change the course of development, helping determine the numbers of neurons that are made.

"It's remarkable that a relatively simple evolutionary change in cell shape could have major consequences in brain evolution. I feel like we've really learnt something fundamental about the questions I've been interested in for as long as I can remember -- what makes us human."

To uncover the genetic mechanism driving these differences, the researchers compared gene expression -- which genes are turned on and off -- in the human brain organoids versus the other apes.

They identified differences in a gene called 'ZEB2', which was turned on sooner in gorilla brain organoids than in the human organoids.

To test the gene's effects, they delayed the onset of ZEB2 in gorilla progenitor cells. This slowed the maturation of the progenitor cells, making the gorilla brain organoids develop more like human ones -- slower and larger.

Conversely, turning on the ZEB2 gene sooner in human progenitor cells promoted premature transition in human organoids, so that they developed more like ape organoids.

The researchers note that organoids are a model and, like all models, do not fully replicate real brains, especially mature brain function. But for fundamental questions about our evolution, these brain tissues in a dish provide an unprecedented view into key stages of brain development that would be impossible to study otherwise.

Read more at Science Daily

BMI1, a promising gene to protect against Alzheimer's disease

Another step towards understanding Alzheimer's disease has been taken at the Maisonneuve-Rosemont Hospital Research Centre. Gilbert Bernier, a molecular biologist and professor of neurosciences at Université de Montréal, has discovered a new function for the BMI1 gene, which is known to inhibit brain aging. The results of his work have just been published in Nature Communications.

In his laboratory, Bernier was able to establish that BMI1 is required to prevent the DNA of neurons from disorganizing into a particular form called G4 structures. This phenomenon occurs in the brains of people with Alzheimer's disease, but not in healthy elderly people. BMI1 thus appears to protect against Alzheimer's by preventing, among other things, the excessive formation of G4s, which disrupt the functioning of neurons.

"This discovery adds to our knowledge of the fundamental mechanisms leading to Alzheimer's," said Bernier. "There is still no cure for this disease, which now affects nearly one million Canadians. Any advance in the field brings hope to all these people and their families."

In previous articles published in the journals Cell Reports and Scientific Reports, Bernier demonstrated that the expression of the BMI1 gene is specifically reduced in the brains of people with Alzheimer's disease. He also showed that inactivation of BMI1 in cultured human neurons or in mice was sufficient to recapitulate all the pathological markers associated with Alzheimer's disease.

From Science Daily

Mar 23, 2021

Last Ice Age: Precipitation caused maximum advance of Alpine Glaciers

 The last glacial period, which lasted about 100,000 years, reached its peak about 20,000 to 25,000 years ago: Huge ice sheets covered large parts of northern Europe, North America and northern Asia, some of them kilometres thick, and the sea level was about 125 metres below today's level. The Earth looked very different during this so-called Last Glacial Maximum than it does today. This relatively recent period of the last maximum ice extent has long been of interest to researchers and subject to intensive research.

What actually led to this extreme glacier growth, however, has remained unclear until now. Based on findings from special cave deposits in the Obir Caves in Bad Eisenkappel, in the Austrian state of Carinthia, Christoph Spötl, head of the Quaternary Research Group at the Department of Geology at the University of Innsbruck, together with his colleague Gabriella Koltai, made an interesting observation about an interval within the Last Glacial Maximum that lasted about 3,100 years. During this period, the ice volume in the Alps reached its maximum. The data are based on small, inconspicuous crystals, so-called cryogenic cave carbonates (CCC): "These calcite crystals formed when the Obir Caves were ice caves with temperatures just below zero. CCC are reliable indicators of thawing permafrost. These findings mean that, paradoxically, during one of the coldest periods of the last glacial period, the permafrost above these caves slowly warmed up," says Christoph Spötl. Since climate warming can be ruled out at this time, there is only one way for the geologists to explain this phenomenon: "There must have been a major increase in solid precipitation in the Alps between 26,500 and 23,500 years ago. There is no permafrost in places with a stable, thick snow cover."

Föhn wind caused large amounts of snow

Cold periods are typically also dry, but in the Alpine region this was not the case during this roughly 3,100-year interval. "The largest advance of Alpine glaciers in the entire last glacial period took place during this time interval. Precipitation was the key source for the growth of the ice giants -- and there must have been a lot of it, especially in autumn and early winter, as the CCC show," says Spötl. "A snow cover of about half a metre already has a strong insulating effect: it shields the ground below from the very cold winter air and thus leads to an increased temperature in the subsurface. The permafrost above the Obir Caves gradually thawed at that time. This thermal phenomenon, triggered by the shift from an Arctic-dry to a significantly wetter climate, remained preserved underground in the form of the CCC until today." Since the North Atlantic -- today a major source of precipitation -- was ice-covered in winter at the time, the team assumes a strong southerly flow from the Mediterranean brought the moisture to the Alps, driven by pronounced southerly föhn conditions. "We consider massive snowfall due to this strong southerly flow as the cause of the growth of glaciers in the Alpine region at the peak of the Last Glacial Maximum. And our data allow us to even pin down the season: autumn and early winter," concludes Christoph Spötl.
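The insulating effect of snow mentioned above can be made concrete with a back-of-the-envelope conduction estimate. The sketch below is our illustration, not a calculation from the study; the heat flux and conductivity values are assumptions chosen only to show the order of magnitude:

```python
# Steady-state heat conduction across a snow layer: the temperature difference
# it sustains is delta_T = q * d / k. All parameter values are assumptions
# for illustration, not measurements from the study.

def ground_warming(snow_depth_m: float,
                   k_snow: float = 0.15,          # W/(m K), assumed for snow
                   ground_heat_flux: float = 2.0  # W/m^2, assumed upward flux
                   ) -> float:
    """Temperature difference (K) across a snow layer of the given depth."""
    return ground_heat_flux * snow_depth_m / k_snow

for depth in (0.0, 0.25, 0.5):
    print(f"{depth:4.2f} m of snow -> ground ~{ground_warming(depth):.1f} K warmer than the air above")
```

Even with these rough numbers, half a metre of snow keeps the ground several degrees warmer than the winter air, illustrating how a stable snow cover decouples the subsurface from very cold air.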

Read more at Science Daily

Algorithms inspired by social networks reveal lifecycle of substorms, a key element of space weather

 Space weather often manifests as substorms, in which a beautiful auroral display such as the Northern Lights is accompanied by an electrical current system in space whose effects at Earth can interfere with and damage power distribution and electrical systems. Now, the lifecycle of these auroral substorms has been revealed using social media-inspired mathematical tools to analyse space weather observations across the Earth's surface.

Analysis by researchers led by the University of Warwick has revealed that these substorms manifest as global-scale electrical current systems associated with the spectacular aurora, reaching across over a third of the globe at high latitudes.

The new research, which involves the University of Warwick, the Johns Hopkins University Applied Physics Laboratory, the University of Bergen and Cranfield University, and is published today (23 March) in the journal Nature Communications, processes data on disturbances in the Earth's magnetic field from over a hundred magnetometers in the Northern hemisphere, using a new technique that lets the stations find 'like-minded friends'.

Magnetometers register changes in the Earth's magnetic field. When charged particles from our Sun bombard the Earth's magnetic field, it stores up energy like a battery. Eventually, this energy is released leading to large-scale electrical currents in the ionosphere which generate disturbances of magnetic fields on the ground. At extremes, this can disrupt power lines, electronic and communications systems and technologies such as GPS.

Using historical data from the SuperMAG collaboration of magnetometers, the researchers applied algorithms from network science to find correlations between magnetometer signals during 41 known substorms that occurred between 1997 and 2001. These algorithms use the same principles that allow a social networking site to recommend new friends, or to push relevant advertisements to you as you browse the internet.

Magnetometers detecting coherent signals were linked into communities, regardless of where they were located on the globe. As time progressed, they saw each substorm develop from many smaller communities into a single large correlated system or community at its peak. This led the authors to conclude that substorms are one coherent current system which extends over most of the nightside high latitude globe, rather than a number of individual small and disjointed current systems.
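In outline, the method builds a graph whose nodes are magnetometer stations, links stations whose signals correlate strongly, and then runs community detection on the result. Here is a minimal sketch with synthetic data; the threshold, signals and community algorithm are our illustrative choices, not necessarily those of the paper:

```python
# Sketch of correlation-network community detection on synthetic
# "magnetometer" signals. Stations 0-11 share one coherent disturbance,
# stations 12-19 another; community detection should recover the two groups.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(0)
n_stations, n_samples = 20, 500
signal_a = rng.standard_normal(n_samples)   # disturbance seen by one group
signal_b = rng.standard_normal(n_samples)   # an independent disturbance
signals = np.array([
    (signal_a if i < 12 else signal_b) + 0.5 * rng.standard_normal(n_samples)
    for i in range(n_stations)
])

corr = np.corrcoef(signals)                 # pairwise signal correlations
G = nx.Graph()
G.add_nodes_from(range(n_stations))
for i in range(n_stations):
    for j in range(i + 1, n_stations):
        if corr[i, j] > 0.5:                # assumed correlation threshold
            G.add_edge(i, j, weight=corr[i, j])

# Coherent stations cluster into communities, regardless of location
for community in greedy_modularity_communities(G):
    print(sorted(community))
```

Repeating such an analysis over sliding time windows would show many small communities merging into one large one, which is essentially the lifecycle the authors report.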

Dr Lauren Orr, who led the research as part of her PhD at the University of Warwick Department of Physics and is now based at Lancaster University, said: "We used a well-established method within network science called community detection and applied it to a space weather problem. The idea is that if you have lots of little subgroups within a big group, it can pick out the subgroups.

"We applied this to space weather to pick out groups within magnetometer stations on the Earth. From that, we were trying to find out whether there was one large current system or lots of separate individual current systems.

"This is a good way of letting the data tell us what's going on, instead of trying to fit observations to what we think is occurring."

Some recent work has suggested that auroral substorms are composed of a number of smaller electrical current systems and remain so throughout their lifecycle. This new research demonstrates that while the substorm begins as lots of smaller disturbances, it quite rapidly becomes a large system over the course of around ten minutes. The lack of correlation in its early stages may also suggest that there is no single mechanism at play in how these substorms evolve.

The results have implications for models designed to predict space weather. Space weather was included in the UK National Risk Register in 2012 and updated in 2017 with a recommendation for more investment in forecasting.

Read more at Science Daily

Babies prefer baby talk, whether they're learning one language or two

 It can be hard to resist lapsing into an exaggerated, singsong tone when you talk to a cute baby. And that's with good reason. Babies will pay more attention to baby talk than regular speech, regardless of which languages they're used to hearing, according to a study by UCLA's Language Acquisition Lab and 16 other labs around the globe.

The study found that babies who were exposed to two languages had a greater interest in infant-directed speech -- that is, an adult speaking baby talk -- than adult-directed speech. Research has already shown that monolingual babies prefer baby talk.

Some parents worry that teaching two languages could mean an infant won't learn to speak on time, but the new study shows bilingual babies are developmentally right on track. The peer-reviewed study, published today by Advances in Methods and Practices in Psychological Science, found bilingual babies became interested in baby talk at the same age as those learning one language.

"Crucially for parents, we found that development of learning and attention is similar in infants, whether they're learning one or two languages," said Megha Sundara, a UCLA linguistics professor and director of the Language Acquisition Lab. "And, of course, learning a language earlier helps you learn it better, so bilingualism is a win-win."

In the study, which took place at 17 labs on four continents, researchers observed 333 bilingual babies and 384 monolingual babies, ranging in age from 6 to 9 months and 12 to 15 months. UCLA's lab was the only one to provide data on bilingual babies who grew up hearing both English and Spanish. Sundara and Victoria Mateu, a UCLA assistant professor of Spanish and Portuguese, observed babies who were 12 to 15 months old.

Each baby would sit on a parent's lap while recordings of an English-speaking mother, using either infant-directed speech or adult-directed speech, played from speakers on the left or the right. Computer tracking measured how long each baby looked in the direction of each sound.

"The longer they looked, the stronger their preference," Mateu said. "Babies tend to pay more attention to the exaggerated sounds of infant-directed speech."

Infants' interest in English baby talk was very fine-tuned, the study noted. Bilingual parents indicated the percentage of time English, as opposed to Spanish, was spoken at home. The more English the bilingual babies had been exposed to, the stronger their preference for infant-directed speech over adult-directed speech. However, even babies with no exposure to English preferred the English baby talk to the grown-up talk, Mateu said.

Baby talk is found across most languages and cultures, but English has one of the most exaggerated forms, Sundara said.

"Baby talk has a slower rate of speech across all languages, with more variable pitch, and it's more animated and happy," she said. "It varies mainly in how exaggerated it is."

Led by Krista Byers-Heinlein, a psychology professor at Concordia University in Montreal, the study involved labs in the United States, Canada, Europe, Australia and Singapore. The study's global reach strengthened the results, Sundara said.

"When you do language research, you want to know that the results aren't just some quirk of the language you're studying," she said.

According to the study, 6- to 9-month-old babies who had mothers with higher levels of education preferred baby talk more than babies whose mothers had less education.

"We suspect that perhaps the mothers with higher education levels spoke more to the babies and used infant-directed speech more often," Mateu said.

This study is one of the first published by the ManyBabies Consortium, a multi-lab group of researchers. Byers-Heinlein believes the unusual international, multilingual collaboration creates a model for future studies that include a similar breadth of languages and cultures.

"We can really make progress in understanding bilingualism, and especially the variability of bilingualism, thanks to our access to all these different communities," she said.

Read more at Science Daily

'Zombie' genes? Research shows some genes come to life in the brain after death

 In the hours after we die, certain cells in the human brain are still active. Some cells even increase their activity and grow to gargantuan proportions, according to new research from the University of Illinois Chicago.

In a newly published study in the journal Scientific Reports, the UIC researchers analyzed gene expression in fresh brain tissue -- which was collected during routine brain surgery -- at multiple times after removal to simulate the post-mortem interval and death. They found that gene expression in some cells actually increased after death.

These 'zombie genes' -- those that increased expression after the post-mortem interval -- were specific to one type of cell: inflammatory cells called glial cells. The researchers observed that glial cells grow and sprout long arm-like appendages for many hours after death.

"That glial cells enlarge after death isn't too surprising given that they are inflammatory and their job is to clean things up after brain injuries like oxygen deprivation or stroke," said Dr. Jeffrey Loeb, the John S. Garvin Professor and head of neurology and rehabilitation at the UIC College of Medicine and corresponding author on the paper.

What's significant, Loeb said, are the implications of this discovery: most research studies that use postmortem human brain tissues to find treatments and potential cures for disorders such as autism, schizophrenia and Alzheimer's disease do not account for post-mortem gene expression or cell activity.

"Most studies assume that everything in the brain stops when the heart stops beating, but this is not so," Loeb said. "Our findings will be needed to interpret research on human brain tissues. We just haven't quantified these changes until now."

Loeb and his team noticed that the global pattern of gene expression in fresh human brain tissue didn't match any of the published reports of postmortem brain gene expression from people without neurological disorders or from people with a wide variety of neurological disorders, ranging from autism to Alzheimer's.

"We decided to run a simulated death experiment by looking at the expression of all human genes, at time points from 0 to 24 hours, from a large block of recently collected brain tissues, which were allowed to sit at room temperature to replicate the postmortem interval," Loeb said.

Loeb and colleagues are at a particular advantage when it comes to studying brain tissue. Loeb is director of the UI NeuroRepository, a bank of human brain tissues from patients with neurological disorders who have consented to having tissue collected and stored for research either after they die, or during standard of care surgery to treat disorders such as epilepsy. For example, during certain surgeries to treat epilepsy, epileptic brain tissue is removed to help eliminate seizures. Not all of the tissue is needed for pathological diagnosis, so some can be used for research. This is the tissue that Loeb and colleagues analyzed in their research.

They found that about 80% of the genes analyzed remained relatively stable for 24 hours -- their expression didn't change much. These included genes often referred to as housekeeping genes that provide basic cellular functions and are commonly used in research studies to show the quality of the tissue. Another group of genes, known to be present in neurons and shown to be intricately involved in human brain activity such as memory, thinking and seizure activity, rapidly degraded in the hours after death. These genes are important to researchers studying disorders like schizophrenia and Alzheimer's disease, Loeb said.

A third group of genes -- the 'zombie genes' -- increased their activity at the same time the neuronal genes were ramping down. The pattern of post-mortem changes peaked at about 12 hours.
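In analysis terms, this amounts to classifying each gene by the shape of its expression trajectory across the simulated post-mortem interval. The sketch below is a toy illustration with invented numbers and thresholds, not the study's actual pipeline:

```python
# Classify genes as stable, degrading, or increasing ("zombie") from their
# expression across a simulated post-mortem interval. Data and the 20%
# stability threshold are invented for illustration.
import numpy as np

hours = np.array([0, 2, 6, 12, 24])  # simulated post-mortem time points

def classify(expression: np.ndarray, tol: float = 0.2) -> str:
    """Label a gene by its relative change from the fresh (0 h) baseline."""
    change = (expression - expression[0]) / expression[0]
    if np.max(np.abs(change)) < tol:
        return "stable (housekeeping-like)"
    return "increasing (zombie-like)" if change[-1] > 0 else "degrading (neuronal-like)"

examples = {
    "housekeeping": np.array([100, 98, 103, 101, 97]),
    "neuronal":     np.array([100, 80, 55, 30, 15]),
    "glial":        np.array([100, 110, 160, 220, 180]),  # peaks around 12 h
}
for name, expr in examples.items():
    print(f"{name}: {classify(expr)}")
```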

"Our findings don't mean that we should throw away human tissue research programs, it just means that researchers need to take into account these genetic and cellular changes, and reduce the post-mortem interval as much as possible to reduce the magnitude of these changes," Loeb said. "The good news from our findings is that we now know which genes and cell types are stable, which degrade, and which increase over time so that results from postmortem brain studies can be better understood."

Read more at Science Daily

Rare genetic variant puts some younger men at risk of severe COVID-19

 A study of young men with COVID-19 has revealed a genetic variant linked to disease severity.

The discovery, published recently in eLife, means that men with severe disease could be genetically screened to identify who has the variant and may benefit from interferon treatment.

For most people, COVID-19, the disease caused by the virus SARS-CoV-2, causes only mild or no symptoms. However, severe cases can rapidly progress towards respiratory distress syndrome.

"Although older age and the presence of long-term conditions such as cardiovascular disease or diabetes are known risk factors, they alone do not fully explain differences in severity," explains first author Chiara Fallerini, Research Fellow in Medical Genetics at the Department of Medical Biotechnologies, University of Siena, Italy. "Some younger men without pre-existing medical conditions are more likely to be hospitalised, admitted to intensive care and to die of COVID-19, which suggests that some factors must cause a deficiency in their immune system."

Recent research has suggested that genes controlling interferon are important in regulating the immune response to COVID-19. Interferon is produced by immune cells during viral infection. It works alongside molecules on the surface of immune cells called Toll-like receptors (TLR) which detect viruses and kickstart the immune response. "When a recent study identified rare mutations in a TLR gene, TLR7, in young men with severe COVID-19, we wanted to investigate whether this was an ultra-rare situation or just the tip of the iceberg," says co-senior author Mario Mondelli, Professor of Infectious Diseases at the Division of Clinical Immunology and Infectious Diseases, Fondazione IRCCS Policlinico San Matteo and University of Pavia, Italy.

The team studied a subset of 156 male COVID-19 patients younger than 60 years old, selected from a large multicentre study in Italy, called GEN-COVID, which started its activity on March 16, 2020. GEN-COVID is a network of more than 40 Italian hospitals coordinated by co-senior author Alessandra Renieri, Full Professor of Medical Genetics at the University of Siena, and Director of Medical Genetics at Azienda Ospedaliero-Universitaria Senese, Siena, Italy.

The team first analysed all the genes on the X chromosome of men with both mild and severe cases of COVID-19, and identified the TLR7 gene as one of the most important genes linked to disease severity. They then searched the entire GEN-COVID database, and selected for younger men (less than 60 years old). This identified rare TLR7 missense mutations in five of 79 patients (6.3%) with life-threatening COVID-19 and no similar mutations in the 77 men who had few symptoms. They also found the same mutation in three men aged over 60: two who had severe COVID-19 and one who had few symptoms -- although the mutation found in the man with few symptoms had little effect on TLR function.
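Counts like these, five carriers among 79 severe patients versus none among 77 mildly affected men, form a 2x2 table that a standard enrichment test can assess. The sketch below shows one such test; the paper's own statistical analysis may differ:

```python
# One-sided Fisher's exact test on the reported carrier counts.
from scipy.stats import fisher_exact

table = [[5, 74],   # severe: TLR7-variant carriers, non-carriers
         [0, 77]]   # mild:   carriers, non-carriers
odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"one-sided Fisher exact p = {p_value:.3f}")
```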

To link these mutations to the immune cell response, they treated white blood cells from recovered patients with a drug that switches TLR7 genes on. They found that the TLR7 genes were dampened down in immune cells from patients with mutations, compared to the TLR7 activity seen in normal immune cells. They also found lower levels of interferon in the cells containing the mutation compared to normal white blood cells. This confirmed that the mutations identified directly affect the control of interferon as part of the innate immune response.

To confirm the impact of the mutations on COVID-19 response, the team studied two brothers, one with a mutation in an interferon gene and one without. The levels of interferon gene activity were much lower in the man with the missense mutation, compared with his brother. Moreover, the brother with the mutation had severe COVID-19, while his brother with normal interferon genes was asymptomatic.

"Our results show that young men with severe COVID-19 who have lost function in their interferon-regulating genes represent a small but important subset of more vulnerable COVID-19 patients," says co-senior author Elisa Frullanti, Researcher of Medical Genetics at the University of Siena.

Read more at Science Daily

Mar 22, 2021

Researchers create map of potential undiscovered life

 Less than a decade after unveiling the "Map of Life," a global database that marks the distribution of known species across the planet, Yale researchers have launched an even more ambitious and perhaps important project -- creating a map of where life has yet to be discovered.

For Walter Jetz, a professor of ecology and evolutionary biology at Yale who spearheaded the Map of Life project, the new effort is a moral imperative that can help support biodiversity discovery and preservation around the world.

"At the current pace of global environmental change, there is no doubt that many species will go extinct before we have ever learned about their existence and had the chance to consider their fate," Jetz said. "I feel such ignorance is inexcusable, and we owe it to future generations to rapidly close these knowledge gaps."

The new map of undiscovered species was published March 22 in the journal Nature Ecology & Evolution.

Lead author Mario Moura, a former Yale postdoctoral associate in Jetz's lab and now professor at Federal University of Paraiba, said the new study shifts the focus from questions like "How many undiscovered species exist?" to more applied ones such as "Where and what?"

"Known species are the 'working units' in many conservation approaches, thus unknown species are usually left out of conservation planning, management, and decision-making," Moura said. "Finding the missing pieces of the Earth's biodiversity puzzle is therefore crucial to improve biodiversity conservation worldwide."

According to conservative scientific estimates, only some 10 to 20 percent of species on Earth have been formally described. In an effort to help find some of these missing species, Moura and Jetz compiled exhaustive data that included the location, geographical range, historical discovery dates, and other environmental and biological characteristics of about 32,000 known terrestrial vertebrates. Their analysis allowed them to extrapolate where, and what kinds of, unknown species of the four main vertebrate groups are most likely still to be identified.

They looked at 11 key factors that allowed the team to better predict locations where undiscovered species might be found. For instance, large animals with wide geographical ranges in populated areas are more likely to have already been discovered, and new discoveries of such species are likely to be rare in the future. However, smaller animals with limited ranges that live in more inaccessible regions are more likely to have avoided detection so far.
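One way to picture the modelling idea is as a discovery-probability model: combine species traits into the probability that a species has already been described, then flag the taxa and places where that probability is low. The sketch below uses synthetic data and invented feature names; the study's 11 factors and methods are considerably more elaborate:

```python
# Schematic discovery-probability model on synthetic species traits.
# Traits, coefficients and data are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 2000
body_size = rng.normal(0, 1, n)    # larger species tend to be found earlier
range_size = rng.normal(0, 1, n)   # wider ranges tend to be found earlier
remoteness = rng.normal(0, 1, n)   # remote habitats delay discovery

logit = 1.5 * body_size + 1.0 * range_size - 1.2 * remoteness
discovered = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([body_size, range_size, remoteness])
model = LogisticRegression().fit(X, discovered)

# A small, narrow-ranged species in a remote region: low probability of having
# been discovered already, i.e. a promising target for taxonomists.
candidate = np.array([[-1.5, -1.0, 2.0]])
print(f"P(already discovered) = {model.predict_proba(candidate)[0, 1]:.2f}")
```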

"The chances of being discovered and described early are not equal among species," Moura said. For instance, the emu, a large bird in Australia, was discovered in 1790 soon after taxonomic descriptions of species began. However, the small, elusive frog species Brachycephalus guarani wasn't discovered in Brazil until 2012, suggesting more such amphibians remain to be found.

Moura and Jetz show that the chances of new species discovery vary widely across the globe. Their analysis suggests Brazil, Indonesia, Madagascar, and Colombia hold the greatest opportunities for identifying new species overall, accounting for a quarter of all potential discoveries. Unidentified species of amphibians and reptiles are most likely to turn up in neotropical regions and Indo-Malayan forests.

Moura and Jetz also focused on another key variable in uncovering missing species -- the number of taxonomists who are looking for them.

"We tend to discover the 'obvious' first and the 'obscure' later," Moura said. "We need more funding for taxonomists to find the remaining undiscovered species."

But the global distribution of taxonomists is highly uneven, and a map of undiscovered life can help focus new efforts, Jetz noted. That work will become increasingly important as nations worldwide gather to negotiate a new Global Biodiversity Framework under the Convention on Biological Diversity later this year and make commitments to halting biodiversity loss.

"A more even distribution of taxonomic resources can accelerate species discoveries and limit the number of 'forever unknown' extinctions," Jetz said.

With partners worldwide, Jetz and colleagues plan to expand their map of undiscovered life to plant, marine, and invertebrate species in the coming years. Such information will help governments and science institutions decide where to concentrate efforts on documenting and preserving biodiversity, Jetz said.

Read more at Science Daily

Researchers tackle the 'spiders' from Mars

 Researchers at Trinity College Dublin have been shedding light on the enigmatic "spiders from Mars," providing the first physical evidence that these unique features on the planet's surface can be formed by the sublimation of CO2 ice.

Spiders, more formally referred to as araneiforms, are strange-looking systems of negative topography: radial networks of dendritic troughs whose patterns resemble the branches of a tree or fork lightning. These features, which are not found on Earth, are believed to be carved into the Martian surface by dry ice changing directly from solid to gas (sublimating) in the spring. Unlike Earth's, Mars' atmosphere consists mainly of CO2, and as temperatures decrease in winter the gas deposits onto the surface as CO2 frost and ice.

The Trinity team, along with colleagues at Durham University and the Open University, conducted a series of experiments funded by the Irish Research Council and Europlanet at the Open University Mars Simulation Chamber, under Martian atmospheric pressure, in order to investigate whether patterns similar to Martian spiders could form by dry ice sublimation.

Its findings are detailed in a paper recently published in the Nature journal Scientific Reports.

Dr Lauren McKeown, who led this work during her PhD at Trinity and is now at the Open University, said:

"This research presents the first set of empirical evidence for a surface process that is thought to modify the polar landscape on Mars. Kieffer's hypothesis [explained below] has been well accepted for over a decade, but until now, it has been framed in a purely theoretical context.

"The experiments show directly that the spider patterns we observe on Mars from orbit can be carved by the direct conversion of dry ice from solid to gas. It is exciting because we are beginning to understand more about how the surface of Mars is changing seasonally today."

The research team drilled holes in the centres of CO2 ice blocks and suspended them, with a claw similar to those found in arcade machines, above granular beds of different grain sizes. They lowered the pressure inside a vacuum chamber to Martian atmospheric pressure (6 mbar) and then used a lever system to place each CO2 ice block on the surface.

They made use of an effect known as the Leidenfrost Effect, whereby if a substance comes in contact with a surface much hotter than its sublimation point, it will form a gaseous layer around itself. When the block reached the sandy surface, CO2 turned directly from solid to gas and material was seen escaping through the central hole in the form of a plume.

In each case, once the block was lifted, a spider pattern had been eroded by the escaping gas. The spider patterns were more branched when finer grain sizes were used and less branched when coarser grain sizes were used. This is the first set of empirical evidence for this extant surface process.

Dr Mary Bourke, of Trinity's Department of Geography, who supervised the PhD research, said:

"This innovative work supports the emergent theme that the current climate and weather on Mars has an important influence not only on dynamic surface processes, but also for any future robotic and/or human exploration of the planet."

The main hypothesis proposed for spider formation (Kieffer's hypothesis) suggests that in spring, sunlight penetrates the translucent CO2 ice and heats the terrain beneath it. The ice sublimates from its base, causing pressure to build up until the ice ruptures, allowing pressurised gas to escape through a crack. The paths of the escaping gas leave behind the dendritic patterns observed on Mars today, and the sandy or dusty material is deposited on top of the ice in the form of a plume.

However, until now, it has not been known if such a theoretical process is possible and this process has never been directly observed on Mars.

Additionally, the researchers observed that when CO2 blocks were released and allowed to sublimate within the sand bed, sublimation was much more vigorous than expected and material was thrown all over the chamber.

This observation will be useful in understanding models of other CO2 sublimation-related processes on Mars, such as the formation of lateral Recurring Diffusive Flows surrounding linear dune gullies on Mars.

Read more at Science Daily

Health declining in Gen X and Gen Y, US study shows

 Recent generations show a worrying decline in health compared to their parents and grandparents when they were the same age, a new national study reveals.

Researchers found that, compared to previous generations, members of Generation X and Generation Y showed poorer physical health, higher levels of unhealthy behaviors such as alcohol use and smoking, and more depression and anxiety.

The results suggest the likelihood of higher levels of diseases and more deaths in younger generations than we have seen in the past, said Hui Zheng, lead author of the study and professor of sociology at The Ohio State University.

"The worsening health profiles we found in Gen X and Gen Y is alarming," Zheng said.

"If we don't find a way to slow this trend, we are potentially going to see an expansion of morbidity and mortality rates in the United States as these generations get older."

Zheng conducted the study with Paola Echave, a graduate student in sociology at Ohio State. The results were published yesterday (March 18, 2021) in the American Journal of Epidemiology.

The researchers used data from the National Health and Nutrition Examination Survey 1988-2016 (62,833 respondents) and the National Health Interview Survey 1997-2018 (625,221 respondents), both conducted by the National Center for Health Statistics.

To measure physical health, the researchers used eight markers of a condition called metabolic syndrome, a constellation of risk factors for heart disease, stroke, kidney disease and diabetes. Some of the markers include waist circumference, blood pressure, cholesterol level and body mass index (BMI). They also used one marker of chronic inflammation, low urinary albumin, and one additional marker of renal function, creatinine clearance.
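For readers unfamiliar with the construct, the widely used clinical (ATP III) definition scores metabolic syndrome as three or more of five risk factors. The sketch below illustrates that general approach; the study's own eight-marker operationalization differs, so treat the thresholds as illustrative:

```python
# Common clinical (ATP III) metabolic syndrome score: 3+ of 5 criteria.
# Thresholds follow the standard clinical definition; this is not the
# study's eight-marker scheme.

def metabolic_syndrome(waist_cm, triglycerides_mgdl, hdl_mgdl,
                       systolic_bp, diastolic_bp, fasting_glucose_mgdl,
                       male: bool = True) -> bool:
    criteria = [
        waist_cm > (102 if male else 88),
        triglycerides_mgdl >= 150,
        hdl_mgdl < (40 if male else 50),
        systolic_bp >= 130 or diastolic_bp >= 85,
        fasting_glucose_mgdl >= 100,
    ]
    return sum(criteria) >= 3  # three or more risk factors

print(metabolic_syndrome(waist_cm=108, triglycerides_mgdl=170, hdl_mgdl=38,
                         systolic_bp=128, diastolic_bp=80,
                         fasting_glucose_mgdl=95))  # True: 3 of 5 criteria met
```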

The researchers found that the measures of physical health have worsened from the Baby Boomer generation through Gen X (born 1965-80) and Gen Y (born 1981-99). For whites, increases in metabolic syndrome were the main culprit, while increases in chronic inflammation were seen most in Black Americans, particularly men.

"The declining health trends in recent generations is a shocking finding," Zheng said. "It suggests we may have a challenging health prospect in the United State in coming years."

Zheng said it is beyond the scope of the study to comprehensively explain the reasons behind the health decline. But the researchers did check two factors. They found smoking couldn't explain the decline. Obesity could help explain the increase in metabolic syndrome, but not the increases seen in chronic inflammation.

It wasn't just the overall health markers that were concerning for some members of the younger generations, Zheng said.

Results showed that levels of anxiety and depression have increased for each generation of whites from the War Babies generation (born 1943-45) through Gen Y.

While levels of these two mental health indicators did increase for Blacks up through the early Baby Boomers, the rate has been generally flat since then.

Health behaviors also show worrying trends.

The probability of heavy drinking has continuously increased across generations for whites and Black males, especially after late-Gen X (born 1973-80).

For whites and Blacks, the probability of using street drugs peaked at late-Boomers (born 1956-64), decreased afterward, then rose again for late-Gen X. For Hispanics, it has continuously increased since early-Baby Boomers.

Surprisingly, results suggest the probability of having ever smoked has continuously increased across generations for all groups.

How can this be true with other research showing a decline in overall cigarette consumption since the 1970s?

"One possibility is that people in older generations are quitting smoking in larger numbers while younger generations are more likely to start smoking," Zheng said. "But we need further research to see if that is correct."

Zheng said these results may be just an early warning of what is to come.

"People in Gen X and Gen Y are still relatively young, so we may be underestimating their health problems," he said. "When they get older and chronic diseases become more prevalent, we'll have a better view of their health status."

Zheng noted that the United States has already seen recent decreases in life expectancy and increases in disability and morbidity.

Read more at Science Daily

Bacteria may aid anti-cancer immune response

 Cancer immunotherapy may get a boost from an unexpected direction: bacteria residing within tumor cells. In a new study published in Nature, researchers at the Weizmann Institute of Science and their collaborators have discovered that the immune system "sees" these bacteria and have shown that they can be harnessed to provoke an immune reaction against the tumor. The study may also help clarify the connection between immunotherapy and the gut microbiome, explaining the findings of previous research showing that the microbiome affects the success of immunotherapy.

Immunotherapy treatments of the past decade or so have dramatically improved recovery rates from certain cancers, particularly malignant melanoma; but in melanoma, they still work in only about 40% of the cases. Prof. Yardena Samuels of Weizmann's Molecular Cell Biology Department studies molecular "signposts" -- protein fragments, or peptides, on the cell surface -- that mark cancer cells as foreign and may therefore serve as potential added targets for immunotherapy. In the new study, she and colleagues extended their search for new cancer signposts to those bacteria known to colonize tumors.

Using methods developed by departmental colleague Dr. Ravid Straussman, who was one of the first to reveal the nature of the bacterial "guests" in cancer cells, Samuels and her team, led by Dr. Shelly Kalaora and Adi Nagler (joint co-first authors), analyzed tissue samples from 17 metastatic melanoma tumors derived from nine patients. They obtained bacterial genomic profiles of these tumors and then applied an approach known as HLA-peptidomics to identify tumor peptides that can be recognized by the immune system.

The research was conducted in collaboration with Dr. Jennifer A. Wargo of the University of Texas MD Anderson Cancer Center, Houston, Texas; Prof Scott N. Peterson of Sanford Burnham Prebys Medical Discovery Institute, La Jolla, California; Prof Eytan Ruppin of the National Cancer Institute, USA; Prof Arie Admon of the Technion -- Israel Institute of Technology and other scientists.

The HLA peptidomics analysis revealed nearly 300 peptides from 41 different bacteria on the surface of the melanoma cells. The crucial new finding was that the peptides were displayed on the cancer cell surfaces by HLA protein complexes -- complexes that are present on the membranes of all cells in our body and play a role in regulating the immune response. One of the HLA's jobs is to sound an alarm about anything that's foreign by "presenting" foreign peptides to the immune system so that immune T cells can "see" them. "Using HLA peptidomics, we were able to reveal the HLA-presented peptides of the tumor in an unbiased manner," Kalaora says. "This method has already enabled us in the past to identify tumor antigens that have shown promising results in clinical trials."
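At its core, the matching step asks whether each peptide eluted from the tumour's HLA complexes occurs in the proteomes of the bacteria detected in that tumour. Here is a toy sketch with invented sequences; real pipelines use mass-spectrometry search engines against full proteome databases:

```python
# Toy peptide-to-proteome matching. Protein and peptide sequences below are
# invented placeholders, not data from the study.

bacterial_proteome = {
    "protein_A": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",
    "protein_B": "MSDNGPQNQRNAPRITFGGPSDSTGSNQNGERS",
}
hla_eluted_peptides = ["SHFSRQLEE", "APRITFGGP", "LLDVTAAV"]

for peptide in hla_eluted_peptides:
    sources = [name for name, seq in bacterial_proteome.items() if peptide in seq]
    origin = ", ".join(sources) if sources else "no bacterial match (host-derived?)"
    print(f"{peptide}: {origin}")
```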

It's unclear why cancer cells should perform a seemingly suicidal act of this sort: presenting bacterial peptides to the immune system, which can respond by destroying these cells. But whatever the reason, the fact that malignant cells do display these peptides in such a manner reveals an entirely new type of interaction between the immune system and the tumor.

This revelation supplies a potential explanation for how the gut microbiome affects immunotherapy. Some of the bacteria the team identified were known gut microbes. The presentation of the bacterial peptides on the surface of tumor cells is likely to play a role in the immune response, and future studies may establish which bacterial peptides enhance that immune response, enabling physicians to predict the success of immunotherapy and to tailor a personalized treatment accordingly.

Moreover, the fact that bacterial peptides on tumor cells are visible to the immune system can be exploited for enhancing immunotherapy. "Many of these peptides were shared by different metastases from the same patient or by tumors from different patients, which suggests that they have a therapeutic potential and a potent ability to produce immune activation," Nagler says.

In a series of continuing experiments, Samuels and colleagues incubated T cells from melanoma patients in a laboratory dish together with bacterial peptides derived from tumor cells of the same patient. The result: T cells were activated specifically toward the bacterial peptides.

Read more at Science Daily