Mar 3, 2021

Neanderthal and early modern human stone tool culture co-existed for over 100,000 years

 Research from the University of Kent's School of Anthropology and Conservation has discovered that one of the earliest stone tool cultures, known as the Acheulean, likely persisted for tens of thousands of years longer than previously thought.

The Acheulean was estimated to have died out around 200,000 years ago, but the new findings suggest it may have persisted for much longer, creating over 100,000 years of overlap with more advanced technologies produced by Neanderthals and early modern humans.

The research team, led by Dr Alastair Key (Kent) alongside Dr David Roberts (Kent) and Dr Ivan Jaric (Biology Centre of the Czech Academy of Sciences), made the discovery whilst studying stone tool records from different regions across the world. Using statistical techniques new to archaeological science, the archaeologists and conservation experts were able to reconstruct the end of the Acheulean period and re-map the archaeological record.

Previously, researchers assumed a relatively rapid shift between the earlier Acheulean stone tool designs, often associated with Homo heidelbergensis -- the common ancestor of modern humans and Neanderthals -- and the more advanced 'Levallois' technologies created by early modern humans and Neanderthals. However, the study has shed new light on this transition, suggesting substantial overlap between the two technologies.

Acheulean stone tool technologies are the longest-lived cultural tradition practiced by early humans. Originating in East Africa 1.75 million years ago, handaxes and cleavers -- the stone tool types which characterise the period -- went on to be used across Africa, Europe and Asia by several different species of early human. Prior to this discovery, it was widely assumed that the Acheulean period ended between 300,000 and 150,000 years ago. However, the record lacked specific dates, and the timing of its demise has been heavily debated. The Kent and Czech team discovered that the tradition likely ended at different times around the world, varying from as early as 170,000 years ago in Sub-Saharan Africa through to as late as 57,000 years ago in Asia.

To understand when the Acheulean ended, the team collected information on different archaeological sites from around the world to find the latest known stone tool assemblages. A statistical technique known as optimal linear estimation -- commonly used in conservation studies to estimate species extinctions -- was used to predict how much longer the stone tool tradition continued after the most recent known sites. In effect, the technique was able to model the portion of the archaeological record yet to be discovered.
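To make the method concrete, here is a minimal sketch of optimal linear estimation in the spirit of Roberts and Solow's approach, applied to "youngest known assemblage" dates rather than species sightings. The site ages are hypothetical placeholders, not data from the study, and the real analysis involves further model checks.

```python
# Sketch of optimal linear estimation (OLE): extrapolate past the k youngest
# known dates using Weibull extreme-value theory (after Roberts & Solow, 2003).
# Site ages below are hypothetical, in years before present (BP).
import numpy as np
from scipy.special import gammaln

def ole_end_date(ages_years_bp):
    # Flip to a timeline where larger = more recent, sorted most recent first.
    t = np.sort(-np.asarray(ages_years_bp, dtype=float))[::-1]
    k = len(t)
    # Shape parameter of the joint distribution of the k latest records.
    v = sum(np.log((t[0] - t[-1]) / (t[0] - t[i])) for i in range(1, k - 1)) / (k - 1)
    # Lambda matrix from the extreme-value model (log-gamma form avoids overflow).
    lam = np.zeros((k, k))
    for i in range(1, k + 1):
        for j in range(1, i + 1):
            lam[i - 1, j - 1] = lam[j - 1, i - 1] = np.exp(
                gammaln(2 * v + i) + gammaln(v + j) - gammaln(v + i) - gammaln(j)
            )
    e = np.ones(k)
    w = np.linalg.solve(lam, e)
    w /= e @ w                      # optimal weights, summing to 1
    return -(w @ t)                 # weighted extrapolation, back to years BP

# Hypothetical youngest Acheulean assemblages from one region:
print(f"estimated end: {ole_end_date([190e3, 201e3, 215e3, 224e3, 238e3]):.0f} years ago")
```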

Dr Alastair Key, a Palaeolithic Archaeologist and the lead author of the study, said: "The earliest archaeological record will always be an incomplete picture of early human behaviour, so we know that the youngest known Acheulean sites are unlikely to actually represent the final instances of these technologies being produced. By allowing us to reconstruct these missing portions of the archaeological record, this technique not only gives us a more accurate understanding of when the tradition ended, but it gives us an indication of where we can expect to find new archaeological discoveries in the future."

Dr Roberts added: "This technique was originally developed by myself and a colleague to date extinctions, as the last sighting of a species is unlikely to be the date when it actually became extinct. It is exciting to see it applied in a new context."

Read more at Science Daily

Cutting-edge analysis of prehistoric teeth sheds new light on the diets of lizards and snakes

 New research has revealed that the diets of early lizards and snakes, which lived alongside dinosaurs around 100 million years ago, were more varied and advanced than previously thought.

The study, led by the University of Bristol and published in Royal Society Open Science, showed lizards, snakes, and mosasaurs in the Cretaceous period already had the full spectrum of diet types, including flesh-eating and plant-based, which they have today.

There are currently some 10,000 species of lizards and snakes, known collectively as squamates. It was previously thought that their great diversity arose only after the extinction of the dinosaurs, but the findings demonstrate for the first time that squamates had modern levels of dietary specialisation 100 million years ago.

Fossils of lizards and snakes are quite rare in the Mesozoic, the age of dinosaurs and reptiles. This could simply be because their skeletons are small and delicate, and therefore hard to preserve, or it could show that lizards and snakes were in fact quite rare in the first half of their history.

The researchers studied 220 Mesozoic squamates, comprising lizards, snakes and mosasaurs, a group of extinct large marine reptiles. They measured their jaws and teeth and allocated them to dietary classes by comparison with modern forms. Some have long peg-like teeth and feed on insects; others have flat teeth used for chopping plant food. Predators have sharp, pointed teeth, and snakes have hooked teeth to grasp their prey.

All the fossil forms were allocated to one of eight feeding categories, and then their diversity through time was assessed. To the researchers' surprise, it turned out that the rather sparse Cretaceous squamates included examples of all modern feeding strategies.
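As a toy illustration of that allocation step, the sketch below maps tooth form to a diet class by analogy with living squamates. The four rules shown are simplified stand-ins for the study's eight quantitative feeding categories, and the specimen names are invented.

```python
# Toy version of the diet-allocation step: assign a fossil squamate to a
# feeding category from its tooth form, by comparison with modern forms.
# These four rules are simplified stand-ins for the study's eight categories.
TOOTH_TO_DIET = {
    "long_peg_like": "insectivore",      # peg-like teeth: insect feeders
    "flat_chopping": "herbivore",        # flat teeth: chopping plant food
    "sharp_pointed": "carnivore",        # predators' sharp, pointed teeth
    "hooked": "carnivore (grasping)",    # snakes' hooked, prey-grasping teeth
}

def allocate(tooth_form):
    return TOOTH_TO_DIET.get(tooth_form, "unclassified")

for specimen, teeth in {"fossil_A": "flat_chopping", "fossil_B": "hooked"}.items():
    print(specimen, "->", allocate(teeth))
```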

"We don't know for sure how diverse squamates were in the Cretaceous," said lead author Dr Jorge Herrera-Flores, who is now a Research Fellow at the National Autonomous University of Mexico.

"But we do know they had already achieved the full modern-type diversity of feeding modes by 100 million years ago, in the middle of the Cretaceous. Before that, squamates had already existed for more than 100 million years, but they seemed to be mainly insect-eaters up to that point."

"Studying teeth and jaws provides excellent insights into dietary and ecological variety," said co-author Dr Tom Stubbs, Senior Research Associate at the University of Bristol School of Earth Sciences. "Fossil teeth and jaws give us the best insight into squamate evolution in the past. It would be easy to read the fossil record wrongly because of incomplete preservation. However, more fossil finds could only increase the number of feeding modes we identify in the Cretaceous, not reduce them."

The explanation for this early rush of dietary experimentation may be related to diversification in other areas. For instance, at this point in the Cretaceous, flowering plants had just begun to flourish and were already transforming ecosystems on land, while squamates also prevailed in the oceans.

"The Cretaceous Terrestrial Revolution made forests more complex," said co-author Professor Michael Benton, Professor of Vertebrate Palaeontology at the School of Earth Sciences. "The new flowering plants provided opportunities for insects and other creepy crawlies to feed on leaves, pollen and nectar, and to hide in the canopy. It's likely this burst of diversity gave a stimulus to mammals, birds and squamates, all of which diversified about this time, probably feeding on the insects, spiders and other bugs, as well as on the new plant food."

Read more at Science Daily

Human instinct can be as useful as algorithms in detecting online 'deception'

 Travellers looking to book a hotel should trust their gut instinct when it comes to online reviews rather than relying on computer algorithms to weed out the fake ones, a new study suggests.

Research, led by the University of York in collaboration with Nanyang Technological University, Singapore, shows the challenges of online 'fake' reviews for both users and computer algorithms. It suggests that a greater awareness of the linguistic characteristics of 'fake' reviews can allow online users to spot the 'real' from the 'fake' for themselves.

Dr Snehasish Banerjee, Lecturer in Marketing from the University of York's Management School, said: "Reading and writing online reviews of hotels, restaurants, venues and so on, is a popular activity for online users, but alongside this, 'fake' reviews have also increased.

"Companies can now use computer algorithms to distinguish the 'fake' from the 'real' with a good level of accuracy, but the extent to which company websites use these algorithms is unclear and so some 'fake' reviews slip through the net.

"We wanted to understand whether human analysis was capable of filling this gap and whether more could be done to educate online users on how to approach these reviews."

The researchers asked 380 people to respond to questions about three hotel reviews -- some authentic, others fake -- based on their perception of the reviews. The users could rely on the same cues that computer algorithms use to discern 'fake' reviews, which include the number of superlatives in the review, the level of detail, whether it was easy to read, and whether it appeared noncommittal.
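For illustration, here is a minimal sketch of how such linguistic cues might be extracted from a review. The word lists and proxies are illustrative assumptions, not the study's actual feature set.

```python
# Sketch of the linguistic cues described above: superlative counts, detail,
# readability and noncommittal wording. Word lists and proxies below are
# illustrative assumptions, not the features used in the study.
import re

SUPERLATIVES = {"best", "worst", "amazing", "perfect", "incredible"}
HEDGES = {"maybe", "perhaps", "possibly", "might", "somewhat"}  # noncommittal

def review_cues(text):
    words = re.findall(r"[a-z']+", text.lower())
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    return {
        "superlatives": sum(w in SUPERLATIVES for w in words),
        "noncommittal": sum(w in HEDGES for w in words),
        "detail_proxy": len(set(words)),              # vocabulary richness
        "readability_proxy": len(words) / sentences,  # mean sentence length
    }

print(review_cues("Best hotel ever! Perhaps the most amazing staff imaginable."))
```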

For those who were already sceptical of online reviews this was a relatively straightforward task, but most could not identify factors such as 'easy to read' and 'non-committal' the way a computer algorithm could. In the absence of this skill, the participants relied on 'gut instinct'.

Dr Banerjee said: "The outcomes were surprisingly effective. We often assume that the human brain is no match for a computer, but in actual fact there are certain things we can do to train the mind in approaching some aspects of life differently.

"Following this study, we are recommending that people need to curb their instincts on truth and deception bias -- the tendency to either approach online content with the assumption that it is all true or all fake respectively -- as neither method works in the online environment.

"Online users often fail to detect fake reviews because they do not proactively look for deception cues. There is a need to change this default review reading habit, and if reading habit is practised long enough, they will eventually be able to rely on their gut instinct for fake review detection."

Read more at Science Daily

New study gives the most detailed look yet at the neuroscience of placebo effects

 A large proportion of the benefit that a person gets from taking a real drug or receiving a treatment to alleviate pain is due to an individual's mindset, not to the drug itself. Understanding the neural mechanisms driving this placebo effect has been a longstanding question. A meta-analysis published in Nature Communications finds that placebo treatments to reduce pain, known as placebo analgesia, reduce pain-related activity in multiple areas of the brain.

Previous studies of this kind have been small in scale, so until now, researchers did not know if the neural mechanisms underlying placebo effects observed to date would hold up across larger samples. This study represents the first large-scale mega-analysis, which looks at individual participants' whole-brain images. It enabled researchers to look at parts of the brain that they did not have sufficient resolution to examine in the past. The analysis comprised 20 neuroimaging studies with 600 healthy participants. The results provide new insight into the size, localization, significance and heterogeneity of placebo effects on pain-related brain activity.

The research reflects an international collaborative effort by the Placebo Neuroimaging Consortium, led by Tor Wager, the Diana L. Taylor Distinguished Professor in Neuroscience at Dartmouth, and Ulrike Bingel, a professor at the Center for Translational Neuro- and Behavioral Sciences in the department of neurology at University Hospital Essen, with Matthias Zunhammer and Tamás Spisák of University Hospital Essen serving as co-authors. The meta-analysis is the second with this sample and builds on the team's earlier research using an established pain marker developed by Wager's lab.

"Our findings demonstrate that the participants who showed the most pain reduction with the placebo also showed the largest reductions in brain areas associated with pain construction," explains co-author Wager, who is also the principal investigator of the Cognitive and Affective Neuroscience Lab at Dartmouth. "We are still learning how the brain constructs pain experiences, but we know it's a mix of brain areas that process input from the body and those involved in motivation and decision-making. Placebo treatment reduced activity in areas involved in early pain signaling from the body, as well as motivational circuits not tied specifically to pain."

Across the studies in the meta-analysis, participants had indicated that they felt less pain; however, the team wanted to find out if the brain responded to the placebo in a meaningful way. Is the placebo changing the way a person constructs the experience of pain or is it changing the way a person thinks about it after the fact? Is the person really feeling less pain?

With the large sample, the researchers were able to confidently localize placebo effects to specific zones of the brain, including the thalamus and the basal ganglia. The thalamus serves as a gateway for sights and sounds and all kinds of sensory motor input. It has lots of different nuclei, which act like processing stations for different kinds of sensory input. The results showed that parts of the thalamus that are most important for pain sensation were most strongly affected by the placebo. In addition, parts of the somatosensory cortex that are integral to the early processing of painful experiences were also affected. The placebo effect also impacted the basal ganglia, which are important for motivation and connecting pain and other experiences to action. "The placebo can affect what you do with the pain and how it motivates you, which could be a larger part of what's happening here," says Wager. "It's changing the circuitry that's important for motivation."

The findings revealed that placebo treatments reduce activity in the posterior insula, one of the areas involved in early construction of the pain experience. It is the only site in the cortex that can be stimulated to evoke the sense of pain. The major ascending pain pathway runs from parts of the thalamus to the posterior insula. The results provide evidence that the placebo affects the pathway by which pain is constructed.

Prior research has illustrated that with placebo effects, the prefrontal cortex is activated in anticipation of pain. The prefrontal cortex helps keep track of the context of the pain and maintain the belief that it exists. When the prefrontal cortex is activated, there are pathways that trigger opioid release in the midbrain that can block pain and pathways that can modify pain signaling and construction.

The team found that activation of the prefrontal cortex is heterogeneous across studies, meaning that no particular areas in this region were activated consistently or strongly across the studies. These differences across studies are similar to what is found in other areas of self-regulation, where different types of thoughts and mindsets can have different effects. For example, other work in Wager's laboratory has found that rethinking pain by using imagery and storytelling typically activates the prefrontal cortex, but mindful acceptance does not. Placebo effects likely involve a mix of these types of processes, depending on the specifics of how it is given and people's predispositions.

"Our results suggest that placebo effects are not restricted solely to either sensory/nociceptive or cognitive/affective processes, but likely involves a combination of mechanisms that may differ depending on the placebo paradigm and other individual factors," explains Bingel. "The study's findings will also contribute to future research in the development of brain biomarkers that predict an individual's responsiveness to placebo and help distinguish placebo from analgesic drug responses, which is a key goal of the new collaborative research center, Treatment Expectation ."

Read more at Science Daily

Vaccine shows signs of protection against dozen-plus flu strains

 Ask Eric Weaver about pandemics, and he's quick to remind you of a fact that illustrates the fleeting nature of human memory and the proximal nature of human attention: The first pandemic of the 21st century struck not in 2019, but 2009.

That's when the H1N1/09 swine flu emerged, eventually infecting upwards of 1.4 billion people -- nearly one of every five on the planet at the time. True to the name, swine flus jump to humans from pigs. It's a phenomenon that has been documented more than 400 times since the mid-2000s in the United States alone.

"They're considered the great mixing vessel," said Weaver, associate professor of biological sciences at the University of Nebraska-Lincoln. "They're susceptible to their own circulating influenzas, as well as many of the avian and human influenzas.

"If you put an avian, a swine and a human virus into the same cell, they can swap genome segments. When you mix those viruses in the swine, what pops out could be all swine, or a little human and swine, or a little avian and swine, or a little of all three. And you never know: You might get the perfect combination of parts that makes for a very high-fitness virus that is highly transmissible and new to humans, meaning that people don't have immunity to it."

All of it helps explain why Weaver has spent years researching how to develop a vaccine that protects against as many strains of influenza as possible, including those that have yet to emerge. In a new study, Weaver, doctoral candidate Brianna Bullard and colleagues have debuted the results of an approach that demonstrates promising signs of protection against more than a dozen swine flu strains -- and more than a leading, commercially available vaccine.

"This is the best data I've ever seen in the (research) literature," Weaver said of the team's findings, recently published in the journal Nature Communications.

The "H" and "N" in H1N1 refer to two crucial proteins, hemagglutinin and neuraminidase, that reside on the surface of influenza viruses and allow them to enter and exit cells. But it's the H3 subtype of influenza -- H3N2, specifically -- that has accounted for more than 90% of swine-to-human infections in the United States since 2010, making it the target of Weaver's most recent research.

In his efforts to combat multiple strains of swine H3N2, Weaver employed a computational program, Epigraph, that was co-developed by Bette Korber of Los Alamos National Laboratory. The "epi" is short for epitope: the bit of a viral protein, such as hemagglutinin, that draws the attention of an immune system. Any one epitope, if administered as a vaccine, will stimulate an immune response against only a limited number of closely related viral strains.

So Weaver put Epigraph to work analyzing data on every known and available mutational variant of hemagglutinin, which it then used to predict which collection of epitopes would grant immunity against the broadest, most diverse range of strains. These hemagglutinin proteins are usually composed of around 560 amino acids, whose type and sequence determine the structure and function of the epitopes. Starting at the beginning of an amino acid string, Epigraph analyzed the sequence of amino acids No. 1 through No. 9 before sliding down to analyze Nos. 2-10, then 3-11, and so on. After doing the same for every variant, the program determined the most common nine-amino-acid sequences from the entire batch -- the entire catalogue of known H3N2 strains in pigs.
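The sliding-window tally at the heart of that step can be sketched in a few lines. This shows only the counting stage: the real Epigraph tool goes on to optimize which epitopes to stitch into the vaccine immunogens, and the short sequences below are placeholders for ~560-residue hemagglutinin variants.

```python
# Sketch of the sliding-window tally: count every 9-residue subsequence
# (candidate epitope) across hemagglutinin variants and rank by frequency.
# Toy sequences stand in for real ~560-residue proteins.
from collections import Counter

def count_kmers(sequences, k=9):
    counts = Counter()
    for seq in sequences:
        for i in range(len(seq) - k + 1):   # window 1-9, then 2-10, 3-11, ...
            counts[seq[i:i + k]] += 1
    return counts

variants = ["MKTIIALSYIFCLALG", "MKTIIALSYIFCQALG", "MKTIVALSYIFCLALG"]
for epitope, n in count_kmers(variants).most_common(3):
    print(f"{epitope}: {n} occurrences")
```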

"So what you end up with are the most common epitopes that exist in nature linked together, then the second-most common, and then the third-most common," Weaver said. "When you look at it from an evolutionary standpoint, the first resembles what most of the viruses look like. The second starts to look a bit different, and the third looks even more different.

"But all three of these make a contribution to the vaccine itself, and they work through slightly different mechanisms."

When testing the resulting three-epitope cocktail in mice and pigs, the team found that it yielded immune response signatures and physiological protection against a much wider variety of strains than did FluSure, a commercial swine vaccine.

In mice, the team tested its vaccine against 20 strains of swine-derived H3 flu. The vaccine generated clinically relevant concentrations of antibodies -- the molecules that neutralize a virus before it enters a cell -- against 14 of those 20 strains. FluSure managed the same feat against just four of the 20. A separate experiment presented the mice with four strains that represented a cross-section of H3 diversity. In all four cases, Epigraph-vaccinated mice produced notable levels of T-cells, which, among other responsibilities, instruct infected cells to die for the sake of avoiding further viral transmission. FluSure-vaccinated mice, by contrast, showed little T-cell response to any of the four strains.

Those cellular-level responses appeared to scale up, too. When challenged with flu viruses, Epigraph-vaccinated mice generally lost less weight, and exhibited fewer viral particles in the lungs, than did their FluSure-vaccinated counterparts. And when mice were challenged with a lethal H3 strain derived from humans, only the Epigraph vaccine protected all of the specimens that received it.

That performance carried over to pigs. Cells taken from swine injected with just one dose of the Epigraph vaccine produced substantial antibodies in response to 13 of 20 H3 strains, including 15 of 16 that originated in North America or were derived from humans. A single dose of FluSure, meanwhile, generated significant antibodies against none of the 20. Though a second dose of FluSure did elevate those antibody concentrations, they remained about four times lower, on average, than the Epigraph-induced responses. T-cell responses, too, remained higher in Epigraph-vaccinated pigs.

More, and more-generalizable, experiments will be needed to verify the Epigraph vaccine's performance, Weaver said. For one, the team is looking to test whether the vaccine candidate generates actual immunity in living pigs, beyond the promising immune responses from their cells in a lab. There's also the matter of determining how long any immunity might last.

But Weaver has already developed a human equivalent of the swine flu vaccine cocktail that he's likewise preparing to test. Considering the similarities between flu infections in humans and pigs -- susceptibilities to subtypes, clinical symptoms, even viral receptors in respiratory tracts -- he said the recent findings bode well for those future, human-centric efforts. Success on that front could eventually mean pivoting away from the current approach to flu vaccinations, whereby virologists are forced to predict which strains will dominate a flu season -- and, despite their best efforts, sometimes miss the mark.

"This study is equivalent to a bench-to-bedside study, where the positive results in the preclinical mouse study are confirmed by positive results in a clinical pig study," Weaver said. "This gives us confidence that when the concept is applied to human influenza virus, we'll see the same translation from preclinical studies to clinical studies in humans."

Read more at Science Daily

Mar 2, 2021

Origin of life: The chicken-and-the-egg problem

 A team at Ludwig-Maximilians-Universitaet (LMU) in Munich has shown that slight alterations in transfer-RNA molecules (tRNAs) allow them to self-assemble into a functional unit that can replicate information exponentially. tRNAs are key elements in the evolution of early life-forms.

Life as we know it is based on a complex network of interactions, which take place at microscopic scales in biological cells, and involve thousands of distinct molecular species. In our bodies, one fundamental process is repeated countless times every day. In an operation known as replication, proteins duplicate the genetic information encoded in the DNA molecules stored in the cell nucleus -- before distributing them equally to the two daughter cells during cell division. The information is then selectively copied ('transcribed') into what are called messenger RNA molecules (mRNAs), which direct the synthesis of the many different proteins required by the cell type concerned. A second type of RNA -- transfer RNA (tRNA) -- plays a central role in the 'translation' of mRNAs into proteins. Transfer RNAs act as intermediaries between mRNAs and proteins: they ensure that the amino-acid subunits of which each particular protein consists are put together in the sequence specified by the corresponding mRNA.

How could such a complex interplay between DNA replication and the translation of mRNAs into proteins have arisen when living systems first evolved on the early Earth? We have here a classical example of the chicken-and-the-egg problem: Proteins are required for transcription of the genetic information, but their synthesis itself depends on transcription.

LMU physicists led by Professor Dieter Braun have now demonstrated how this conundrum could have been resolved. They have shown that minor modifications in the structures of modern tRNA molecules permit them to autonomously interact to form a kind of replication module, which is capable of exponentially replicating information. This finding implies that tRNAs -- the key intermediaries between transcription and translation in modern cells -- could also have been the crucial link between replication and translation in the earliest living systems. It could therefore provide a neat solution to the question of which came first -- genetic information or proteins?

Strikingly, in terms of their sequences and overall structure, tRNAs are highly conserved in all three domains of life, i.e. the unicellular Archaea and Bacteria (which lack a cell nucleus) and the Eukaryota (organisms whose cells contain a true nucleus). This fact in itself suggests that tRNAs are among the most ancient molecules in the biosphere.

Like the later steps in the evolution of life, the evolution of replication and translation -- and the complex relationship between them -- was not the result of a sudden single step. It is better understood as the culmination of an evolutionary journey. "Fundamental phenomena such as self-replication, autocatalysis, self-organization and compartmentalization are likely to have played important roles in these developments," says Dieter Braun. "And on a more general note, such physical and chemical processes are wholly dependent on the availability of environments that provide non-equilibrium conditions."

In their experiments, Braun and his colleagues used a set of reciprocally complementary DNA strands modeled on the characteristic form of modern tRNAs. Each was made up of two 'hairpins' (so called because each strand could partially pair with itself and form an elongated loop structure), separated by an informational sequence in the middle. Eight such strands can interact via complementary base-pairing to form a complex. Depending on the pairing patterns dictated by the central informational regions, this complex was able to encode a 4-digit binary code.

Each experiment began with a template -- an informational structure made up of two types of the central informational sequences that define a binary sequence. This sequence dictated the form of the complementary molecule with which it could interact in the pool of available strands. The researchers went on to demonstrate that the templated binary structure can be repeatedly copied, i.e. amplified, by applying a repeating sequence of temperature fluctuations between warm and cold. "It is therefore conceivable that such a replication mechanism could have taken place in a hydrothermal microsystem on the early Earth," says Braun. In particular, aqueous solutions trapped in porous rocks on the seafloor would have provided a favorable environment for such reaction cycles, since natural temperature oscillations, generated by convection currents, are known to occur in such settings.

During the copying process, complementary strands (drawn from the pool of molecules) pair up with the informational segment of the template strands. In the course of time, the adjacent hairpins of these strands also pair up to form a stable backbone, and temperature oscillations continue to drive the amplification process. If the temperature is increased for a brief period, the template strands are separated from the newly formed replicate, and both can then serve as template strands in the next round of replication.
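An idealized way to see why this yields exponential growth: if every warm "melt" step frees both the template and its fresh copy to seed the next round, the template count doubles each cycle until the free strands run out. The pool size and cycle count in this sketch are arbitrary assumptions, and real cycles suffer errors and incomplete copying.

```python
# Idealized amplification under temperature cycling: each warm step separates
# template and copy, and both act as templates in the next round, so counts
# double per cycle until the pool of free strands is consumed.
def amplify(templates=1, strand_pool=10_000, cycles=10):
    for cycle in range(1, cycles + 1):
        new_copies = min(templates, strand_pool)  # copying consumes free strands
        strand_pool -= new_copies
        templates += new_copies                   # copies join the template pool
        print(f"cycle {cycle:2d}: {templates:5d} templates, {strand_pool:5d} strands left")

amplify()
```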

The team was able to show that the system is capable of exponential replication. This is an important finding, as it shows that the replication mechanism is particularly resistant to collapse owing to the accumulation of errors. The fact that the structure of the replicator complex itself resembles that of modern tRNAs suggests that early forms of tRNA could have participated in molecular replication processes, before tRNA molecules assumed their modern role in the translation of messenger RNA sequences into proteins. "This link between replication and translation in an early evolutionary scenario could provide a solution to the chicken-and-the-egg problem," says Alexandra Kühnlein. It could also account for the characteristic form of proto-tRNAs, and elucidate the role of tRNAs before they were co-opted for use in translation.

Read more at Science Daily

Atmospheric rivers increase snow mass in West Antarctica

 A new study published today in the journal Geophysical Research Letters used NASA's ice-measuring laser satellite to identify atmospheric river storms as a key driver of increased snowfall in West Antarctica during the 2019 austral winter.

These findings from scientists at Scripps Institution of Oceanography at the University of California San Diego and colleagues will help improve overall understanding of the processes driving change in Antarctica, and lead to better predictions of sea-level rise. The study was funded by NASA, with additional support from the Rhodium Group's Climate Impact Lab, a consortium of leading research institutions examining the risks of climate change.

Atmospheric rivers are phenomena that transport large amounts of water vapor in long, narrow "rivers" in the sky. They are known to be the main driver of precipitation along the West Coast of the United States, accounting for 25-50 percent of annual precipitation in key parts of the West. A growing body of research on atmospheric rivers finds that they predominantly affect the western coasts of most continents, as evaporating oceans and storms load high levels of moisture into the atmosphere.

NASA's Ice, Cloud, and land Elevation Satellite-2 (ICESat-2), launched into orbit in September 2018, is providing a detailed look at the height of ice and snow on the vast, frozen continent. The satellite sends 10,000 laser pulses per second toward Earth's surface and measures the height of ice sheets, glaciers, and more by calculating the time it takes a handful of those pulses to return. Each photon of light has a time tag, and these tags combine with the GPS location to pinpoint its exact place and height on the ground. The satellite measures a detailed set of tracks over the Antarctic ice sheet every three months.
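The ranging arithmetic behind this is simple in outline. The sketch below uses an illustrative orbit altitude and timing value rather than actual ICESat-2 parameters, and it ignores the atmospheric, tidal and geolocation corrections a real retrieval requires.

```python
# Back-of-the-envelope photon ranging: surface height follows from the laser
# round-trip time. Altitude and timing here are illustrative, not ICESat-2 specs.
C = 299_792_458.0  # speed of light, m/s

def surface_height_m(round_trip_s, orbit_altitude_m=500_000.0):
    one_way_range_m = C * round_trip_s / 2.0   # distance satellite -> surface
    return orbit_altitude_m - one_way_range_m  # height above the reference level

# Example: a photon returning ~3.33538 ms after emission.
print(f"surface at {surface_height_m(3.33538e-3):.1f} m above the reference")
```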

"ICESat-2 is the first satellite to be able to measure snowfall over the Antarctic continent in such a precise way," said Helen Amanda Fricker, a glaciologist at Scripps Oceanography and co-author of the study. "In winter, weather conditions prohibit having a field team there making observations on the ground. ICESat-2 is filling in this lack of data over the vast ice sheets, and giving us a greater understanding of snow mass gain and loss on a seasonal scale."

Looking at ICESat-2 data, scientists found increases in height over the Antarctic Ice Sheet between April 2019 and June 2020 due to increased snowfall. Using a computational model of the atmosphere and snow, they found that 41 percent of height increases over West Antarctica during the 2019 winter occurred because intermittent extreme precipitation events delivered large quantities of snow during short periods of time. Of those events, 63 percent were identified as landfalling atmospheric rivers. These systems were distinguished from other storms by the much higher moisture levels measured in the lower portions of the atmosphere.

The atmospheric rivers making landfall in Antarctica originate in the subtropical and middle latitudes of the Southern Hemisphere. They travel long distances with no continents to stop them, eventually making landfall in West Antarctica.

"We know the frequency of atmospheric rivers is expected to increase, so it's important that scientists are able to measure how much they are contributing to snow mass increase or surface melting," said Susheel Adusumilli, lead author and PhD candidate at Scripps Oceanography. "Knowing how much snow is being accumulated across the continent helps us better understand how mass is changing as a whole, and informs our understanding of sea-level rise potential from the Antarctic Ice Sheet."

More than one hundred gigatons of ice are being lost to the ocean from Antarctica each year, contributing to ongoing sea-level rise. Most of this ice loss is driven by increased ice flow into the ocean from the melting of the floating ice shelves that surround Antarctica. Understanding the balance of mass gains from snowfall in the interior of Antarctica and mass loss from ocean warming is key to improving projections of sea-level rise.

While this study tracked ice mass in the short term, atmospheric rivers in Antarctica can also drive large amounts of snowmelt. In fact, this study found that around 90 percent of summer atmospheric rivers and 10 percent of winter atmospheric rivers coincided with potential surface melt over the West Antarctic Ice Sheet. Atmospheric river-driven melting is due to the low clouds from these systems, which can absorb and re-emit heat back to the surface. Scientists will need further study to understand whether these events will be snow makers or melters, looking at factors such as seasonality, moisture level and cloud coverage, and whether the outcome depends on the individual storm.

Read more at Science Daily

Hurricane resembling those in lower atmosphere observed over Earth's polar ionosphere

 The first observations of a space hurricane have been revealed in Earth's upper atmosphere, confirming their existence and shedding new light on the relationship between planets and space.

Hurricanes in the Earth's low atmosphere are known, but they had never before been detected in the upper atmosphere.

An international team of scientists led by Shandong University in China analysed observations made by satellites in 2014 to reveal a long-lasting hurricane, resembling those in the lower atmosphere, in the polar ionosphere and magnetosphere with surprisingly large energy and momentum deposition despite otherwise extremely quiet geomagnetic conditions.

The analysis allowed a 3D image to be created of the 1,000km-wide swirling mass of plasma several hundred kilometres above the North Pole, raining electrons instead of water.

Professor Qing-He Zhang, lead author of the research at Shandong University, said: "These features also indicate that the space hurricane leads to large and rapid deposition of energy and flux into the polar ionosphere during an otherwise extremely quiet geomagnetic condition, suggesting that current geomagnetic activity indicators do not properly represent the dramatic activity within space hurricanes, which are located further poleward than geomagnetic index observatories."

Professor Mike Lockwood, space scientist at the University of Reading, said: "Until now, it was uncertain that space plasma hurricanes even existed, so to prove this with such a striking observation is incredible."

"Tropical storms are associated with huge amounts of energy, and these space hurricanes must be created by unusually large and rapid transfer of solar wind energy and charged particles into the Earth's upper atmosphere.

"Plasma and magnetic fields in the atmosphere of planets exist throughout the universe, so the findings suggest space hurricanes should be a widespread phenomena."

Hurricanes often cause loss of life and property through high winds and flooding resulting from the coastal storm surge of the ocean and the torrential rains. They are characterised by a low-pressure centre (hurricane eye), strong winds and flow shears, and a spiral arrangement of towering clouds with heavy rains.

In space, astronomers have spotted hurricanes on Mars, Saturn and Jupiter, which are similar to terrestrial hurricanes in the low atmosphere. There are also solar gases swirling in monstrous formations deep within the sun's atmosphere, called solar tornadoes. However, hurricanes had not been reported in the upper atmosphere of the planets in our heliosphere.

The space hurricane analysed by the team in Earth's ionosphere was spinning in an anticlockwise direction, had multiple spiral arms, and lasted almost eight hours before gradually breaking down.

The team of scientists from China, the USA, Norway and the UK used observations made by four DMSP (Defense Meteorological Satellite Program) satellites and 3D magnetosphere modelling to produce the image. Their findings were published in Nature Communications.

Professor Zhang added: "This study suggests that there can still be local, intense geomagnetic disturbances and energy depositions comparable to those during superstorms. This will update our understanding of the solar wind-magnetosphere-ionosphere coupling process under extremely quiet geomagnetic conditions."

Read more at Science Daily

Astrophysicist's 2004 theory confirmed: Why the Sun's composition varies

 About 17 years ago, J. Martin Laming, an astrophysicist at the U.S. Naval Research Laboratory, theorized why the chemical composition of the Sun's tenuous outermost layer differs from that lower down. His theory has recently been validated by combined observations of the Sun's magnetic waves from the Earth and from space.

His most recent scientific journal article describes how these magnetic waves modify chemical composition in a process completely new to solar physics or astrophysics, but already known in optical sciences, having been the subject of Nobel Prizes awarded to Steven Chu in 1997 and Arthur Ashkin in 2018.

Laming began exploring these phenomena in the mid-1990s, and first published the theory in 2004.

"It's satisfying to learn that the new observations demonstrate what happens "under the hood" in the theory, and that it actually happens for real on the Sun," he said.

The Sun is made up of many layers. Astronomers call its outermost layer the solar corona, which is only visible from Earth during a total solar eclipse. All solar activity in the corona is driven by the solar magnetic field. This activity consists of solar flares, coronal mass ejections, high-speed solar wind, and solar energetic particles. These various manifestations of solar activity are all propagated or triggered by oscillations or waves on the magnetic field lines.

"The very same waves, when they hit the lower solar regions, cause the change in chemical composition, which we see in the corona as this material moves upwards," Laming said. "In this way, the coronal chemical composition offers a new way to understand waves in the solar atmosphere, and new insights into the origins of solar activity."

Christoph Englert, head of the U.S. Naval Research Laboratory's Space Science Division, points out the benefits for predicting the Sun's weather and how Laming's theory could help predict changes in our ability to communicate on Earth.

"We estimate that the Sun is 91 percent hydrogen but the small fraction accounted for by minor ions like iron, silicon, or magnesium dominates the radiative output in ultraviolet and X-rays from the corona," he said. "If the abundance of these ions is changing, the radiative output changes."

"What happens on the Sun has significant effects on the Earth's upper atmosphere, which is important for communication and radar technologies that rely on over-the-horizon or ground-to-space radio frequency propagation," Englert said.

It also has an impact on objects in orbit. The radiation is absorbed in the Earth's upper atmospheric layers, causing the upper atmosphere to form a plasma -- the ionosphere -- and to expand and contract, influencing the atmospheric drag on satellites and orbital debris.

"The Sun also releases high energy particles," Laming said. "They can cause damage to satellites and other space objects. The high energy particles themselves are microscopic, but it's their speed that causes them to be dangerous to electronics, solar panels, and navigation equipment in space."

Englert said that reliably forecasting solar activity is a long-term goal, which requires us to understand the inner workings of our star. This latest achievement is a step in this direction.

Read more at Science Daily

Covering metal catalyst surfaces with thin two-dimensional oxide materials can enhance chemical reactions

 Physically confined spaces can make for more efficient chemical reactions, according to recent studies led by scientists from the U.S. Department of Energy's (DOE) Brookhaven National Laboratory. They found that partially covering metal surfaces acting as catalysts, or materials that speed up reactions, with thin films of silica can impact the energies and rates of these reactions. The thin silica forms a two-dimensional (2-D) array of hexagonal-prism-shaped "cages" containing silicon and oxygen atoms.

"These porous silica frameworks are the thickness of only three atoms," explained Samuel Tenney, a chemist in the Interface Science and Catalysis Group of Brookhaven Lab's Center for Functional Nanomaterials (CFN). "If the pores were too tall, certain branches of molecules wouldn't be able to reach the interface. There's a particular geometry in which molecules can come in and bind, sort of like the way an enzyme and a substrate lock together. Molecules with the appropriate size can slip through the pores and interact with the catalytically active metal surface."

"The bilayer silica is not actually anchored to the metal surface," added Calley Eads, a research associate in the same group. "There are weak forces in between. This weak interaction allows molecules not only to penetrate the pores but also to explore the catalytic surface and find the most reactive sites and optimized reaction geometry by moving horizontally in the confined space in between the bilayer and metal. If it was anchored, the bilayer would only have one pore site for each molecule to interact with the metal."

The scientists are discovering that the confined spaces modify different types of reactions, and they are working to understand why.

Tenney and Eads are co-corresponding authors on recently published research in Angewandte Chemie demonstrating this confinement effect for an industrially important reaction: carbon monoxide oxidation. Carbon monoxide is a toxic component of engine exhaust from vehicles and thus must be removed. With the help of an appropriate precious metal catalyst such as palladium, platinum, or rhodium, catalytic converters in vehicles combine carbon monoxide with oxygen to form carbon dioxide.

Tenney, Eads, and colleagues at the CFN and Brookhaven's National Synchrotron Light Source II (NSLS-II) showed that covering palladium with silica boosts the amount of carbon dioxide produced by 20 percent, as compared to the reaction on bare palladium.

To achieve this performance enhancement, the scientists first had to get a full bilayer structure across the palladium surface. To do so, they heated a calibrated amount of silicon to sublimation temperatures in a high-pressure oxygen environment. In sublimation, a solid directly transforms into a gas. As the thin film of silica was being created, they probed its structure with low-energy electron diffraction. In this technique, electrons striking a material diffract in a pattern characteristic of the material's crystalline structure.

"We continue heating until we get highly crystalline structures with well-defined pore sizes that we can use to explore the chemistry we're interested in," said Eads.

Here, the team tracked reactants and products and the chemical bonding environment in the 2-D confined space during oxidation of carbon monoxide, incrementally increasing the temperature. To track this information, they simultaneously conducted ambient-pressure x-ray photoelectron spectroscopy (AP-XPS) and mass spectrometry (MS) at the NSLS-II and infrared reflection-absorption spectroscopy (IRRAS) at the CFN.

"AP-XPS tells us what elements are present, whether they're on the surface or in the gas phase," said Tenney. "It can also give us information about the chemical oxidation state or binding geometry of the atoms -- whether a carbon is bound to one or two oxygen atoms, for example. MS helps us identify the gas-phase molecules we're seeing evolve in our system on the basis of their weight and charge. IRRAS is a fingerprint of the type of chemical bonds present between atoms and shows the conformation and orientation of carbon monoxide molecules adsorbed on the surface."

According to co-author Dario Stacchiola, leader of the CFN Interface Science and Catalysis Group, one of the team's unique capabilities is the ability to use complementary surface characterization tools to analyze the same sample without exposing it to air, which could cause contamination.

"Reproducibility is often a problem in catalysis," said Stacchiola. "But we have a setup that allows us to prepare a sample in very pristine ultrahigh-vacuum conditions and expose the same sample to industrially relevant pressures of gases."

The experimental results showed a sharp rise in the amount of carbon dioxide above a critical temperature. Below this temperature, carbon monoxide "poisons" the surface, preventing the reaction from proceeding. However, once the temperature threshold is met, molecular oxygen begins to split into two individual oxygen atoms on the palladium surface and form a surface oxide. These oxygen atoms combine with carbon monoxide to form carbon dioxide, thereby preventing poisoning.

"The confined space is changing the energetics and kinetics of the reaction to produce more carbon dioxide," said Eads, who led the recent implementation of this new multimodal surface analysis approach for studying nanoporous films under operational conditions.

"By applying thin films on top of a traditional catalyst that has been studied for decades, we've introduced a "knob" to tailor the chemistry for certain reactions," said Tenney. "Even a one-percent improvement in catalyst efficiency can translate into economic savings in large-scale production."

"We found that a very thin layer of an inexpensive oxide can significantly boost catalytic activity without increasing the amount of the expensive precious metal used as the catalyst," added Stacchiola.

Previously, the team studied the dynamics of the furfuryl alcohol reaction on a palladium surface covered by bilayer silica. Furfuryl alcohol is a biomass-derived molecule that can be converted into biofuel. Compared to carbon monoxide oxidation, which only makes a single product, reactions with larger and more complex biomolecules such as furfuryl alcohol can generate many undesired byproducts. Their preliminary data showed the potential for tuning the selectivity of the furfuryl alcohol reaction with the bilayer silica cover.

"Changing catalytic activity is great -- that's what we see in the carbon monoxide oxidation study," said Stacchiola. "The next step is to prove that we can use the oxide covers to tune the selectivity for particular reactions. We think our approach can be applied broadly in catalysis."

Last year, other members of Stacchiola's group -- along with colleagues from the CFN Theory and Computation Group, Stony Brook University (SBU), and University of Wisconsin-Milwaukee -- published a related study in ACS Catalysis, a journal of the American Chemical Society (ACS). Combining experiment and theory, they discovered why the water formation reaction catalyzed by ruthenium metal is accelerated under confinement with bilayer silica.

"Chemistry in confined spaces is quite a new area of research," said co-corresponding author Deyu Lu, a physicist in the CFN Theory and Computation Group. "In the last decade, there have been many reports that confinement impacts the chemistry, but a mechanistic understanding on the atomic scale has been largely lacking."

In the ACS Catalysis study, the CFN team demonstrated that confinement can change the pathway by which the reaction occurs. Water formation can proceed by two possible reaction pathways: direct hydrogenation and disproportionation. The main difference is how the first hydroxyl group -- oxygen bonded to hydrogen -- is made. According to calculations by Lu and first author Mengen Wang, an SBU graduate student, this reaction step costs the most energy.

In the direct pathway, hydrogen molecules dissociate on the surface into two hydrogen atoms, which combine with chemically adsorbed oxygen on the surface to form hydroxyl groups. These hydroxyl groups combine with another hydrogen atom to make water. For the disproportionation pathway, water -- which may still initially come from the direct pathway -- first needs to be stabilized on the surface. Then, water can combine with a surface oxygen to make two hydroxyl groups on the surface. These hydroxyl groups can join with two hydrogen atoms to form two water molecules. These water molecules can then make more hydroxyl groups, forming a loop in the disproportionation pathway.
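Written out as surface reactions (with * marking species adsorbed on the ruthenium surface), the two pathways look roughly as follows. These step equations are a simplified reading of the description above, not the paper's full mechanism.

```latex
% Direct hydrogenation (* = species adsorbed on the ruthenium surface):
\[ \mathrm{H_2 \rightarrow 2\,H^{*}}, \qquad
   \mathrm{H^{*} + O^{*} \rightarrow OH^{*}}, \qquad
   \mathrm{OH^{*} + H^{*} \rightarrow H_2O} \]

% Disproportionation: trapped water regenerates hydroxyls, closing a loop:
\[ \mathrm{H_2O^{*} + O^{*} \rightarrow 2\,OH^{*}}, \qquad
   \mathrm{2\,OH^{*} + 2\,H^{*} \rightarrow 2\,H_2O^{*}} \]
```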

In lab-based AP-XPS experiments at the CFN, the team found that the temperature needed to activate the water formation reaction was much lower when silica was covering ruthenium, as compared to the metal by itself.

"The fact that the reaction takes place at lower temperatures in confinement is partially related to its lower activation energy," explained co-corresponding author Anibal Boscoboinik, a chemist in the CFN Interface Science and Catalysis Group. "From the AP-XPS data on surface oxygen, we can indirectly derive the energy required to activate the reaction. We see that this activation energy is much lower when silica is on top of ruthenium."

Applying a popular computational method called density functional theory, the team used supercomputers to study the energetics of the reaction. Initially, the experimentalists hypothesized that the lowered activation energy for the rate-limiting step of the reaction (making the first hydroxyl group) was due to silica pressing down on the reaction complex. However, the calculations showed that the presence of silica didn't change this energy significantly. Rather, it changed the reaction pathway. On the bare ruthenium surface, the direct pathway was favored; in the presence of silica, water molecules stabilized on the surface, activating the disproportionation pathway.

"Without the silica cover, the water molecules desorb, and the reaction follows the direct pathway," said Lu. "Under the silica cover, water needs to cross several kinetic energy barriers in order to leave the surface. These kinetic barriers trap water molecules on the metal surface and activate the disproportionation pathway, enabling the hydroxyl groups to be made at a much lower energy barrier, as compared to the case without the confinement effects."

Though water formation isn't industrially relevant, the scientists say studying this model reaction can help them understand how to leverage the confinement effects to favor certain reaction pathways for more relevant reactions. In other words, the same fundamental principle can be applied to other systems. For example, silica could be coated onto electrodes to evoke particular pathways at liquid-solid interfaces in electrochemical cells. In that case, the reaction would be the opposite -- water would be dissociated into oxygen and hydrogen, a clean fuel.

"Understanding this reaction helps us to understand the reverse reaction," said Boscoboinik, who recently published a summary of initial studies on confinement effects with 2-D porous thin films. "If we were guided by experiment alone, we would have attributed the wrong explanation. Theory proved that our initial hypothesis was incorrect and played a key role in revealing the correct reaction mechanism at the microscopic level."

Yet, the scientists have seen other examples where silica has a pressure-related effect. In 2019, they found that bilayer silica presses down on the noble gas xenon at the interface between bilayer silica and ruthenium, inducing stronger bonding between xenon and ruthenium.

Read more at Science Daily

Mar 1, 2021

Bottling the world's coldest plasma

 Rice University physicists have discovered a way to trap the world's coldest plasma in a magnetic bottle, a technological achievement that could advance research into clean energy, space weather and astrophysics.

"To understand how the solar wind interacts with the Earth, or to generate clean energy from nuclear fusion, one has to understand how plasma -- a soup of electrons and ions -- behaves in a magnetic field," said Rice Dean of Natural Sciences Tom Killian, the corresponding author of a published study about the work in Physical Review Letters.

Using laser-cooled strontium, Killian and graduate students Grant Gorman and MacKenzie Warrens made a plasma about 1 degree above absolute zero, or approximately -272 degrees Celsius, and trapped it briefly with forces from surrounding magnets. It is the first time an ultracold plasma has been magnetically confined, and Killian, who's studied ultracold plasmas for more than two decades, said it opens the door for studying plasmas in many settings.

"This provides a clean and controllable testbed for studying neutral plasmas in far more complex locations, like the sun's atmosphere or white dwarf stars," said Killian, a professor of physics and astronomy. "It's really helpful to have the plasma so cold and to have these very clean laboratory systems. Starting off with a simple, small, well-controlled, well-understood system allows you to strip away some of the clutter and really isolate the phenomenon you want to see."

That's important for study co-author Stephen Bradshaw, a Rice astrophysicist who specializes in studying plasma phenomena on the sun.

"Throughout the sun's atomosphere, the (strong) magnetic field has the effect of altering everything relative to what you would expect without a magnetic field, but in very subtle and complicated ways that can really trip you up if you don't have a really good understanding of it," said Bradshaw, an associate professor of physics and astronomy.

Solar physicists rarely get a clear observation of specific features in the sun's atmosphere because part of the atmosphere lies between the camera and those features, and unrelated phenomena in the intervening atmosphere obscure what they'd like to observe.

"Unfortunately, because of this line-of-sight problem, observational measurements of plasma properties are associated with quite a lot of uncertainty," Bradshaw said. "But as we improve our understanding of the phenomena, and crucially, use the laboratory results to test and calibrate our numerical models, then hopefully we can reduce the uncertainty in these measurements."

Plasma is one of four fundamental states of matter, but unlike solids, liquids and gases, plasmas aren't generally part of everyday life because they tend to occur in very hot places like the sun, a lightning bolt or candle flame. Like those hot plasmas, Killian's plasmas are soups of electrons and ions, but they're made cold by laser-cooling, a technique developed a quarter century ago to trap and slow matter with light.

Killian said the quadrupole magnetic setup that was used to trap the plasma is a standard part of the ultracold setup that his lab and others use to make ultracold plasmas. But finding out how to trap plasma with the magnets was a thorny problem because the magnetic field plays havoc with the optical system that physicists use to look at ultracold plasmas.

"Our diagnostic is laser-induced fluorescence, where we shine a laser beam onto the ions in our plasma, and if the frequency of the beam is just right, the ions will scatter photons very effectively," he said. "You can take a picture of them and see where the ions are, and you can even measure their velocity by looking at the Doppler shift, just like using a radar gun to see how fast a car is moving. But the magnetic fields actually shift around the resonant frequencies, and we have to disentangle the shifts in the spectrum that are coming from the magnetic field from the Doppler shifts we're interested in observing."

That complicates experiments significantly, and to make matters even more complicated, the magnetic fields change dramatically throughout the plasma.

"So we have to deal with not just a magnetic field, but a magnetic field that's varying in space, in a reasonably complicated way, in order to understand the data and figure out what's happening in the plasma," Killian said. "We spent a year just trying to figure out what we were seeing once we got the data."

The plasma behavior in the experiments is also made more complex by the magnetic field, which is precisely why the trapping technique could be so useful.

"There is a lot of complexity as our plasma expands across these field lines and starts to feel the forces and get trapped," Killian said. "This is a really common phenomenon, but it's very complicated and something we really need to understand."

One example from nature is the solar wind, streams of high-energy plasma from the sun that cause the aurora borealis, or northern lights. When plasma from the solar wind strikes Earth, it interacts with our planet's magnetic field, and the details of those interactions are still unclear. Another example is fusion energy research, where physicists and engineers hope to recreate the conditions inside the sun to create a vast supply of clean energy.

Killian said the quadrupole magnetic setup that he, Gorman and Warrens used to bottle their ultracold plasmas is similar to designs that fusion energy researchers developed in the 1960s. The plasma for fusion needs to be about 150 million degrees Celsius, and magnetically containing it is a challenge, Bradshaw said, in part because of unanswered questions about how the plasma and magnetic fields interact and influence one another.

"One of the major problems is keeping the magnetic field stable enough for long enough to actually contain the reaction," Bradshaw said. "As soon as there's a small sort of perturbation in the magnetic field, it grows and 'pfft,' the nuclear reaction is ruined.

Read more at Science Daily

Neanderthals had the capacity to perceive and produce human speech

 Neandertals -- the closest relatives of modern humans -- possessed the ability to perceive and produce human speech, according to a new study published by an international multidisciplinary team of researchers including Binghamton University anthropology professor Rolf Quam and graduate student Alex Velez.

"This is one of the most important studies I have been involved in during my career," says Quam. "The results are solid and clearly show the Neandertals had the capacity to perceive and produce human speech. This is one of the very few current, ongoing research lines relying on fossil evidence to study the evolution of language, a notoriously tricky subject in anthropology."

The evolution of language, and the linguistic capacities in Neandertals in particular, is a long-standing question in human evolution.

"For decades, one of the central questions in human evolutionary studies has been whether the human form of communication, spoken language, was also present in any other species of human ancestor, especially the Neandertals," says coauthor Juan Luis Arsuaga, Professor of Paleontology at the Universidad Complutense de Madrid and co-director of the excavations and research at the Atapuerca sites. The latest study has reconstructed how Neandertals heard to draw some inferences about how they may have communicated.

The study relied on high resolution CT scans to create virtual 3D models of the ear structures in Homo sapiens and Neandertals as well as earlier fossils from the site of Atapuerca that represent ancestors of the Neandertals. Data collected on the 3D models were entered into a software-based model, developed in the field of auditory bioengineering, to estimate the hearing abilities up to 5 kHz, which encompasses most of the frequency range of modern human speech sounds. Compared with the Atapuerca fossils, the Neandertals showed slightly better hearing between 4-5 kHz, resembling modern humans more closely.

In addition, the researchers were able to calculate the frequency range of maximum sensitivity, technically known as the occupied bandwidth, in each species. The occupied bandwidth is related to the communication system, such that a wider bandwidth allows for a larger number of easily distinguishable acoustic signals to be used in the oral communication of a species. This, in turn, improves the efficiency of communication, the ability to deliver a clear message in the shortest amount of time. The Neandertals show a wider bandwidth compared with their ancestors from Atapuerca, more closely resembling modern humans in this feature.
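
As a rough illustration of the occupied-bandwidth idea, the sketch below builds a toy sensitivity curve and measures the width of the band where sensitivity stays near its peak; the curve shape and the 50% threshold are assumptions for illustration, not the published auditory model.

    import numpy as np

    # Toy sensitivity curve over the 0.1-5 kHz range analysed in the study.
    freqs = np.linspace(0.1, 5.0, 500)                 # frequency, kHz
    sensitivity = np.exp(-((freqs - 2.5) / 1.5) ** 2)  # assumed bell shape

    # "Occupied bandwidth" here: width of the band within 50% of peak.
    threshold = 0.5 * sensitivity.max()
    in_band = freqs[sensitivity >= threshold]
    print(f"occupied bandwidth: {in_band.max() - in_band.min():.2f} kHz")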

"This really is the key," says Mercedes Conde-Valverde, professor at the Universidad de Alcalá in Spain and lead author of the study. "The presence of similar hearing abilities, particularly the bandwidth, demonstrates that the Neandertals possessed a communication system that was as complex and efficient as modern human speech."

"One of the other interesting results from the study was the suggestion that Neandertal speech likely included an increased use of consonants," said Quam. "Most previous studies of Neandertal speech capacities focused on their ability to produce the main vowels in English spoken language. However, we feel this emphasis is misplaced, since the use of consonants is a way to include more information in the vocal signal and it also separates human speech and language from the communication patterns in nearly all other primates. The fact that our study picked up on this is a really interesting aspect of the research and is a novel suggestion regarding the linguistic capacities in our fossil ancestors."

Thus, Neandertals had a capacity similar to ours to produce the sounds of human speech, and their ear was "tuned" to perceive these frequencies. This change in the auditory capacities in Neandertals, compared with their ancestors from Atapuerca, parallels archaeological evidence for increasingly complex behavioral patterns, including changes in stone tool technology, domestication of fire and possible symbolic practices. The study thus provides strong evidence in favor of the coevolution of increasingly complex behaviors and increasing efficiency in vocal communication throughout the course of human evolution.

The team behind the new study has been developing this research line for nearly two decades, and has ongoing collaborations to extend the analyses to additional fossil species. For the moment, however, the new results are exciting.

Read more at Science Daily

Could our immune system be why COVID-19 is so deadly?

 Respiratory viruses such as SARS-CoV-2 (the virus that causes COVID-19) can often catalyse an overactive immune response that leads to a life-threatening cycle, known as a cytokine storm. Analysing cytokine responses from patients infected with SARS-CoV-2 has revealed striking differences in how the virus affects cytokines compared with other common respiratory viruses.

The comprehensive data resource aims to help specialists identify better treatments and diagnose the underlying causes of the deadly cytokine storm.

Scientists at the Earlham Institute (EI) and the Quadram Institute study how the immune system responds to infection with SARS-CoV-2 and other similar respiratory viruses, looking in particular for unique features in severely ill COVID-19 patients.

Members of the Korcsmáros Group, working alongside clinical virologist Claire Shannon-Lowe at the University of Birmingham, focused their attention on how SARS-CoV-2 and other respiratory viruses cause the so-called 'cytokine storm' -- a hyper-activation of our own immune system -- one of the main reasons for the high death rate in a subgroup of COVID-19 patients.

To identify the similarities and differences in the cytokine storm, the researchers collected and analysed thousands of COVID-19 research papers. They looked for patterns of cytokine changes in patients who had been infected by respiratory viruses that cause cytokine release syndrome.

By systematically analysing over 5,000 scientific studies to find those containing immune response data from patients, the researchers showed that SARS-CoV-2 has a unique tendency to halt the rise of specific cytokines in certain patients, when compared with other similar viruses. This is important in understanding the causes of the potentially fatal cytokine release syndrome, more commonly known as a cytokine storm.
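
The screening logic can be pictured with a toy tally like the one below; the entries and their structure are assumptions for illustration, not the authors' pipeline, but they show how per-study reports of cytokine changes add up to virus-specific patterns.

    from collections import defaultdict

    # Each curated study reports a direction of change for one cytokine.
    reports = [  # (virus, cytokine, direction) -- illustrative entries only
        ("SARS-CoV-2", "IL-6", "up"),
        ("SARS-CoV-2", "IL-10", "unchanged"),
        ("SARS-CoV", "IL-6", "up"),
        ("H5N1", "IL-10", "up"),
    ]

    # Tally directions per (virus, cytokine) pair across all studies.
    tally = defaultdict(lambda: defaultdict(int))
    for virus, cytokine, direction in reports:
        tally[(virus, cytokine)][direction] += 1

    for (virus, cytokine), counts in sorted(tally.items()):
        print(virus, cytokine, dict(counts))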

"As the onset of the cytokine storm is one of the key factors behind the mortality rates we're seeing in a particular group of COVID-19 patients, it is critical to understand why it is happening" said project lead PhD Student Marton Olbei in the Korcsmáros Group.

"Cytokine storms are not unique to SARS-CoV-2 infection; they can be found in most of the critical human coronaviruses and influenza. A subtype outbreaks of the past two decades."

Cytokines are small proteins that tightly regulate our immune system and how our body reacts to internal or external stress, such as cancer, inflammation, or infection. Cytokines act as conductors, orchestrating our immune response when infected with viruses. One of their roles is to cause inflammation, which is part of the healing process of many infections and injuries.

Respiratory viruses all activate antiviral responses in the body but there are differences in how each virus attempts to evade the attention of the immune system. The most common strategy is to confuse, or specifically attack, crucial immune response mechanisms -- such as the release of cytokines.

A cytokine storm happens only in certain patients, when the immune system overreacts to a virus. A feedback loop causes the continual activation of cytokines responsible for inflammation, resulting in organ failure or even death.
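
A deliberately simple toy model, not taken from the study, captures why such a loop is dangerous: when the feedback gain on cytokine production outruns clearance, the level runs away instead of settling.

    # Toy feedback model with assumed rates; a storm is the runaway regime.
    def simulate(gain, clearance=1.0, c0=0.1, dt=0.01, steps=10_000):
        c = c0
        for _ in range(steps):
            c += (gain * c - clearance * c) * dt  # production vs. clearance
            if c > 1e6:                           # diverged: "storm"
                return float("inf")
        return c

    print("gain 0.8:", simulate(0.8))  # settles back toward zero
    print("gain 1.2:", simulate(1.2))  # runs away -> cytokine storm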

While SARS-CoV-2 cases have distinct similarities to both influenza patients and those who were infected in previous coronavirus outbreaks (SARS-CoV, MERS-CoV), the researchers' analysis found specific immune mechanisms that make SARS-CoV-2 uniquely dangerous.

"We examined the changing cytokine levels upon infection with similar viral pathogens (SARS-CoV, MERS-CoV, H5N1, H7N9) to highlight the protective and unique cytokine responses caused by these viruses," said Marton.

By comparing the COVID-19 patients' immune response data, the researchers found similarities in the responses mounted against these pathogens -- discriminating between influenza A subtypes and coronaviruses -- as well as the unusual aspects of the currently circulating SARS-CoV-2 virus.

SARS-CoV-2 is similar to other respiratory viruses, but by targeting specific regulators of the cytokine response -- with just small-scale differences -- it can lead to more severe disease, driven not by the virus itself but by the patient's immune system response.

"For a subgroup of patients, when infected by these viruses, a real danger is posed by the immune system overreacting. We're drawing out which specific parts of our immune system react in a potentially harmful way to these viruses, said Marton.

"We wanted to take a step back and summarise what is actually being reported in the scientific literature, specifically focusing on cytokine-mediated immune responses, to put into context and differentiate SARS-CoV-2 from these other viruses. Building up a data repository such as this will also be vital for the future; if other similar viruses arose, you could quickly find their profile and compare."

Read more at Science Daily

A new theory for how memories are stored in the brain

 Research from the University of Kent has led to the development of the MeshCODE theory, a revolutionary new theory for understanding brain and memory function. This discovery may be the beginning of a new understanding of brain function and of how to treat brain diseases such as Alzheimer's.

In a paper published by Frontiers in Molecular Neuroscience, Dr Ben Goult from Kent's School of Biosciences describes how his new theory views the brain as an organic supercomputer running a complex binary code with neuronal cells working as a mechanical computer. He explains how a vast network of information-storing memory molecules operating as switches is built into each and every synapse of the brain, representing a complex binary code. This identifies a physical location for data storage in the brain and suggests memories are written in the shape of molecules in the synaptic scaffolds.

The theory is based on the discovery of protein molecules, known as talin, containing "switch-like" domains that change shape in response to mechanical forces generated by the cell. These switches have two stable states, 0 and 1, and this pattern of binary information stored in each molecule is dependent on previous input, similar to the Save History function in a computer. The information stored in this binary format can be updated by small changes in force generated by the cell's cytoskeleton.
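
A toy sketch conveys the idea of such force-written, history-dependent switches; the thresholds and forces below are invented for illustration and are not measured talin mechanics.

    # A talin-like molecule as an array of two-state switch domains: each
    # unfolds (1) above a force threshold and refolds (0) below a lower one,
    # so the stored pattern depends on the history of applied forces.
    class SwitchDomain:
        def __init__(self, unfold_at, refold_below):
            self.unfold_at = unfold_at        # force (pN) flipping 0 -> 1
            self.refold_below = refold_below  # force (pN) flipping 1 -> 0
            self.state = 0

        def apply_force(self, force_pn):
            if self.state == 0 and force_pn >= self.unfold_at:
                self.state = 1
            elif self.state == 1 and force_pn < self.refold_below:
                self.state = 0

    molecule = [SwitchDomain(u, u - 3) for u in (5, 10, 15, 20)]
    for force in (12, 7, 2):  # a history of cytoskeletal forces
        for domain in molecule:
            domain.apply_force(force)
        print(f"force {force:>2} pN ->", [d.state for d in molecule])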

In the brain, electrochemical signalling between trillions of neurons occurs between synapses, each of which contains a scaffold of the talin molecules. While this scaffold was once assumed to be purely structural, this research suggests that the meshwork of talin proteins actually represents an array of binary switches with the potential to store information and encode memory.

This mechanical coding would run continuously in every neuron and extend into all cells, ultimately amounting to a machine code coordinating the entire organism. From birth, the life experiences and environmental conditions of an animal could be written into this code, creating a constantly updated, mathematical representation of its unique life.

Read more at Science Daily

Deciphering the genetics behind eating disorders

 Anorexia nervosa, bulimia nervosa and binge-eating disorder are the three main eating disorders that 4 out of 10 individuals living in Western Europe will experience at some point in their lives. In recent years, studies on the genetic basis of anorexia nervosa have highlighted the existence of predisposing genetic markers, which are shared with other psychiatric disorders. By analysing the genome of tens of thousands of British people, a team from the University of Geneva (UNIGE), the University Hospitals of Geneva (HUG), King's College London, University College London, the University of North Carolina (UNC) and the Icahn School of Medicine at Mount Sinai has built on these initial results by discovering similarities between the genetic bases of these various eating disorders, and those of other psychiatric disorders. Eating disorders differ in their genetic association with anthropometric traits, like weight, waist circumference or body mass index. Thus, genetic predisposition to certain weight traits may be a distinctive feature of anorexia nervosa, bulimia nervosa or binge-eating disorder. The study is published in the International Journal of Eating Disorders.

"Previous studies, which highlighted a genetic association between a high risk of anorexia nervosa and a low risk of obesity, have begun to lift the veil on certain aspects of how eating disorders develop that had been mostly neglected until then," explains Nadia Micali, Professor at the Department of Psychiatry at UNIGE Faculty of Medicine and Head of the Division of child and adolescent psychiatry at the HUG, who directed this work. She continues, "However, the same work has not been done for the two other major eating disorders: bulimia nervosa and binge-eating disorder. The goal of our study was to understand similiarities and differences amongst all eating disorders in the role of genes governing body weight."

The genome of more than 20,000 people examined

To understand the similarities and differences between the genetic patterns of anorexia nervosa, bulimia nervosa and binge-eating disorder, the research team analysed the genomes of more than 20,000 people. These were taken from two large population-based studies conducted in the UK: the UK Biobank and the Avon Longitudinal Study of Parents and Children.

First author, Dr Christopher Hübel, from King's College London said: "We were able to access volunteers' DNA, their basic health data (weight, age, etc.) and responses to health questionnaires, including possible psychiatric disorders and their eating disorder history. We are grateful for this access as we were able to conduct multifactorial analyses and calculate more than 250 polygenic scores for each person. Each polygenic score sums the risk genes involved in a specific trait, such as depression, for example. We calculated polygenic scores for psychiatric disorders, such as schizophrenia and obsessive-compulsive disorder, and metabolic and physical traits, including insulin sensitivity, obesity and high BMI." Thus, the higher the score, the greater the genetic risk, whether for blue eyes or for the development of a given disease.
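
In general terms, a polygenic score is just a weighted sum of a person's risk-allele counts. The minimal sketch below shows the arithmetic with made-up variants and weights, not the study's actual summary statistics.

    # Hypothetical variant weights (betas) from a genome-wide study.
    variants = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.30}

    def polygenic_score(genotype):
        """genotype maps variant id -> risk-allele count (0, 1 or 2)."""
        return sum(beta * genotype.get(rsid, 0)
                   for rsid, beta in variants.items())

    person = {"rs0001": 2, "rs0002": 1, "rs0003": 0}
    print(f"polygenic score: {polygenic_score(person):+.2f}")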

The research team then examined the associations between the polygenic scores of these volunteers (representing genetic liability to psychiatric disorders, metabolic and physical traits) and eating disorders.

A combination of psychiatric and body weight regulation genetic risk

The study shows that while there are great genetic similarities between anorexia nervosa, bulimia nervosa and binge-eating disorder, there are also notable differences.

Nadia Micali details these results: "The similarities lie in the association with psychiatric risks: anorexia nervosa, bulimia nervosa and binge-eating disorder share genetic risk with certain psychiatric disorders, in particular for schizophrenia and depression, thus confirming the strong psychiatric component of these diseases. However, the big difference concerns the associated genetics of body weight regulation, which are opposite between anorexia on the one hand, and bulimia nervosa and binge-eating disorder on the other, the latter being linked to a high genetic risk of obesity, and high BMI."

A genetic predisposition to a heavy weight versus a light weight may constitute a determining factor that pushes individuals with similar psychiatric genetic risk to different eating disorders.

"The metabolic and physical component would therefore direct the individual either towards anorexia nervosa or towards bulimia nervosa or binge-eating disorder," analyses Nadia Micali. "Moreover, this study confirms a clear genetic relationship between binge-eating disorder and attention deficit hyperactivity disorder (ADHD), that was already clinically observed, which might be linked to greater impulsivity, which is shared by these disorders." The role of genetic patterns in body weight regulation identified in this study provides a better understanding of the genetic basis of eating disorders, and of how they differ in their genetic marking despite their similarities. This work could lead to better understand the development of eating disorders.

Read more at Science Daily

Feb 28, 2021

Largest cluster of galaxies known in the early universe

 A study, led by researchers at the Instituto de Astrofísica de Canarias (IAC) and carried out with OSIRIS, an instrument on the Gran Telescopio Canarias (GTC), has found the most densely populated galaxy cluster in formation in the primitive universe. The researchers predict that this structure, which is at a distance of 12.5 billion light years from us, will have evolved into a cluster similar to that of Virgo, a neighbour of the Local Group of galaxies to which the Milky Way belongs. The study is published in the specialized journal Monthly Notices of the Royal Astronomical Society (MNRAS).

Clusters of galaxies are groups of galaxies which remain together because of the action of gravity. To understand the evolution of these "cities of galaxies" scientists look for structures in formation, the so-called galaxy protoclusters, in the early universe.

In 2012 an international team of astronomers made an accurate determination of the distance of the galaxy HDF850.1, known as one of the galaxies with the highest rate of star formation in the observable universe. To their surprise, the scientists also discovered that this galaxy, which lies in one of the most studied regions of the sky, known as the Hubble Deep Field/GOODS-North, is part of a group of around a dozen protogalaxies which had formed during the first thousand million years of cosmic history. Before its discovery only one other similar primordial group was known.

Now, thanks to a new piece of research with the OSIRIS instrument on the Gran Telescopio Canarias (GTC, or GRANTECAN), the team has shown that it is one of the most densely populated regions of galaxies known in the primitive Universe, and has for the first time carried out a detailed study of the physical properties of this system. "Surprisingly, we have discovered that all the members of the cluster studied up to now, around two dozen, are galaxies with normal star formation, and that the central galaxy appears to dominate the production of stars in this structure," explains Rosa Calvi, formerly a postdoctoral researcher at the IAC and first author of the article.

Witnesses to the infancy of the local Universe

This recent study shows that this cluster of galaxies in formation is made up of various components, or "zones," with differences in their evolution. The astronomers predict that this structure will change gradually until it becomes a galaxy cluster similar to Virgo, the central region of the supercluster of the same name, which is home to the Local Group of galaxies to which the Milky Way belongs. "We see this city in construction just as it was 12,500 million years ago, when the Universe had less than 10% of its present age, so we are seeing the childhood of a cluster of galaxies like those which are typical in the local Universe," notes Helmut Dannerbauer, an IAC researcher who is co-author of this article.
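
Readers who want to check the arithmetic can do so with the astropy package, assuming a redshift of about z = 5.2, roughly the value reported for HDF850.1 in the 2012 study (treated here as an assumption):

    from astropy.cosmology import Planck18

    z = 5.2                               # assumed protocluster redshift
    lookback = Planck18.lookback_time(z)  # how long the light travelled
    age_then = Planck18.age(z)            # age of the Universe at emission
    fraction = (age_then / Planck18.age(0)).value

    print(f"lookback time: {lookback:.1f}")                # ~12.6 Gyr
    print(f"age then: {age_then:.2f} ({fraction:.0%} of today's age)")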

The distance measured to these studied sources agrees perfectly with the predictions based on photometric observations taken previously on GRANTECAN by Pablo Arrabal Haro, formerly a doctoral student at the IAC, supervised by José Miguel Rodríguez Espinosa, an IAC researcher and Assistant General Secretary of the International Astronomical Union (IAU), and Casiana Muñoz-Tuñón, a researcher and Deputy Director of the IAC, all of them co-authors of the present article. Arrabal developed a method for selecting galaxies with normal star formation rates, based on the photometric survey SHARDS (Survey for High-z Absorption Red and Dead Sources), a Large Programme of the European Southern Observatory (ESO) carried out on the GTC. "I am very happy to see that the method developed during my doctoral thesis works so well in finding and confirming a region highly populated with galaxies in the distant Universe," states Arrabal.

The SHARDS programme has been led by Pablo Pérez-González, researcher at the Centro de Astrobiología (CAB, CSIC-INTA) and also author of the paper. As Pérez-González explains, "measuring exactly how these structures are forming, especially at the beginning of the Universe, is not easy, and we need exceptional data such as those we are taking with the GTC telescope as part of the SHARDS and SHARDS Frontier Fields projects, which allow us to determine distances to galaxies and between galaxies at the edge of the Universe with a precision never achieved before."

In addition, Stefan Geier, GTC support astronomer and co-author of the paper, points out that "this highly surprising result would not have been possible without the extraordinary capacity of OSIRIS together with the large collecting area of the GRANTECAN, the largest optical and infrared telescope in the world."

Read more at Science Daily

Two new genes linked to Alzheimer's disease discovered

 A research team led by Chunshui Yu and Mulin Jun Li of Tianjin Medical University has discovered two new genes potentially involved in Alzheimer's disease. They identified them by exploring which genes were turned on and off in the hippocampus of people who suffered from the disease. The team's new findings are published February 25th in PLOS Genetics.

Alzheimer's disease is a neurodegenerative disorder that involves worsening dementia and the formation of protein plaques and tangles in the brain. The hippocampus, part of the brain involved in memory, is one of the first regions to sustain damage. To better understand which genes contribute to the progression of this heritable disease, the researchers identified genes expressed at higher or lower levels in the hippocampus of people with Alzheimer's disease compared to healthy brains. Using previous genomic and hippocampus gene expression data, they identified 24 Alzheimer's-related genes that appear to have an effect through the hippocampus. Many of these genes were already known to contribute to the disease, such as APOE, but two were previously unknown: PTPN9 and PCDHA4. Additionally, several are involved in biological processes related to Alzheimer's disease, such as plaque formation and cell death.

The research team further validated their findings by comparing gene expression for the two dozen genes to images of the individuals' brains. In Alzheimer's disease, damage and loss of neurons causes the hippocampus to shrink, which can be measured through medical imaging. The researchers established that expression of two of the genes is related to the size of the hippocampus and a diagnosis of Alzheimer's disease.
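
The flavour of such an association test can be sketched in a few lines of Python; the simulated numbers below are purely illustrative stand-ins for the study's expression and imaging data.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    expression = rng.normal(10.0, 1.0, size=200)  # gene expression, a.u.
    # Simulated hippocampal volumes that shrink as expression rises.
    volume = 4000 - 150 * (expression - 10) + rng.normal(0, 100, size=200)

    result = stats.linregress(expression, volume)
    print(f"slope = {result.slope:.1f} mm^3 per expression unit, "
          f"p = {result.pvalue:.2g}")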

Overall, the new findings improve our understanding of the genetic and cellular mechanisms that cause Alzheimer's disease. The next step will be to investigate the roles of the two novel genes and how they contribute to this devastating disease.

The authors add, "The study identifies two novel genes associated with Alzheimer's disease in the context of hippocampal tissue and reveals candidate hippocampus-mediated neurobiological pathways from gene expression to Alzheimer's disease."

From Science Daily

Measuring the tRNA world

 Transfer RNAs (tRNAs) deliver specific amino acids to ribosomes during translation of messenger RNA into proteins. The abundance of tRNAs can therefore have a profound impact on cell physiology, but measuring the amount of each tRNA in cells has been limited by technical challenges. Researchers at the Max Planck Institute of Biochemistry have now overcome these limitations with mim-tRNAseq, a method that can be used to quantify tRNAs in any organism and will help improve our understanding of tRNA regulation in health and disease.

A cell contains several hundred thousand tRNA molecules, each of which consists of only 70 to 90 nucleotides folded into a cloverleaf-like pattern. At one end, tRNAs carry one of the twenty amino acids that serve as protein building blocks, while the opposite end pairs with the codon specifying this amino acid in messenger RNA during translation. Although there are only 61 codons for the twenty amino acids, cells from different organisms can contain hundreds of unique tRNA molecules, some of which differ from each other by only a single nucleotide. Many nucleotides in tRNAs are also decorated with chemical modifications, which help tRNAs fold or bind the correct codon.

The levels of individual tRNAs are dynamically regulated in different tissues and during development, and tRNA defects are linked to neurological diseases and cancer. The molecular origins of these links remain unclear, because quantifying the abundance and modifications of tRNAs in cells has long remained a challenge. The team of Danny Nedialkova at the MPI of Biochemistry has now developed mim-tRNAseq, a method that accurately measures the abundance and modification status of different tRNAs in cells.

Modification roadblocks and resolutions

To measure the levels of multiple RNAs simultaneously, scientists use an enzyme called reverse transcriptase to first rewrite RNA into DNA. Millions of these DNA copies can then be quantified in parallel by high-throughput sequencing. Rewriting tRNAs into DNA has been tremendously hard since many tRNA modifications block the reverse transcriptase, causing it to stop synthesizing DNA.

"Many researches have proposed elegant solutions to this problem, but all of them relieve only a fraction of the modification roadblocks in tRNAs," explains Danny Nedialkova, Max Planck Research Group Leader at the Max Planck Institute of Biochemistry. "We noticed that one specific reverse transcriptase seemed to be much better at reading through modified tRNA sites. By optimizing the reaction conditions, we could significantly improve the enzyme's efficiency, enabling it to read through nearly all tRNA modification roadblocks," adds Nedialkova. This made it possible to construct DNA libraries from full-length tRNA copies and use them for high-throughput sequencing.

The mim-tRNAseq computational toolkit

The analysis of the resulting sequencing data also presented significant challenges. "We identified two major issues: the first one is the extensive sequence similarity between different tRNA transcripts," explains Andrew Behrens, PhD student in Nedialkova's group and first author of the paper. "The second one comes from the fact that an incorrect nucleotide (a misincorporation) is introduced at many modified sites during reverse transcription. Both make it extremely challenging to assign each DNA read to the tRNA molecule it originated from," adds Behrens.
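
A toy example shows why both issues matter; the sequences and modified positions below are invented, and the real mim-tRNAseq toolkit is far more sophisticated, but the principle is that a mismatch at a known modified site should not count against an alignment.

    # Two near-identical tRNA references differing at a single position.
    references = {
        "tRNA-Ala-1": "GGGGAATTAGCTCAAATGGTAGAGC",
        "tRNA-Ala-2": "GGGGAATTAGCTCAGATGGTAGAGC",
    }
    modified_sites = {"tRNA-Ala-1": {9}, "tRNA-Ala-2": {9}}  # assumed sites

    def mismatches(read, ref, ignore):
        """Count mismatches, ignoring known modification-induced errors."""
        return sum(1 for i, (a, b) in enumerate(zip(read, ref))
                   if a != b and i not in ignore)

    # A read with a misincorporation at modified position 9.
    read = "GGGGAATTACCTCAGATGGTAGAGC"
    scores = {name: mismatches(read, seq, modified_sites[name])
              for name, seq in references.items()}
    print(scores)  # assigns cleanly to tRNA-Ala-2 despite the mismatch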

Read more at Science Daily

Did teenage 'tyrants' outcompete other dinosaurs?

 Paleo-ecologists from The University of New Mexico and the University of Nebraska-Lincoln have demonstrated that the offspring of enormous carnivorous dinosaurs, such as Tyrannosaurus rex, may have fundamentally re-shaped their communities by out-competing smaller rival species.

The study, released this week in the journal Science, is the first to examine community-scale dinosaur diversity while treating juveniles as their own ecological entity.

"Dinosaur communities were like shopping malls on a Saturday afternoon -- jam-packed with teenagers," explained Kat Schroeder, a graduate student in the UNM Department of Biology who led the study. "They made up a significant portion of the individuals in a species and would have had a very real impact on the resources available in communities."

Because they were born from eggs, dinosaurs like T. rex necessarily were born small -- about the size of a house cat. This meant that as they grew to the size of a city bus, these "megatheropods," weighing between one and eight tons, would have changed their hunting patterns and prey items. It's long been suspected by paleontologists that giant carnivorous dinosaurs would change behavior as they grew. But how that might have affected the world around them remained largely unknown.

"We wanted to test the idea that dinosaurs might be taking on the role of multiple species as they grew, limiting the number of actual species that could co-exist in a community," said Schroeder.

The number of different types of dinosaurs known from around the globe is low, particularly among small species.

"Dinosaurs had surprisingly low diversity. Even accounting for fossilization biases, there just really weren't that many dinosaur species," said Felisa Smith, professor of Biology at UNM and Schroeder's graduate advisor.

To approach the question of decreased dinosaur diversity, Schroeder and her coauthors collected data from well-known fossil localities from around the globe, including over 550 dinosaur species. Organizing dinosaurs by mass and diet, they examined the number of small, medium and large dinosaurs in each community.

They found a strikingly clear pattern:

"There is a gap -- very few carnivorous dinosaurs between 100-1000kg [200 pounds to one ton] exist in communities that have megatheropods," Schroeder said. "And the juveniles of those megatheropods fit right into that space."

Schroeder also notes that looking at dinosaur diversity through time was key. Jurassic communities (200-145 million years ago) had smaller gaps and Cretaceous communities (145-65 million years ago) had larger ones.

"Jurassic megatheropods don't change as much -- the teenagers are more like the adults, which leaves more room in the community for multiple families of megatheropods as well as some smaller carnivores," Schroeder explained. "The Cretaceous, on the other hand, is completely dominated by Tyrannosaurs and Abelisaurs, which change a lot as they grow."

To tell whether the gap was really caused by juvenile megatheropods, Schroeder and her colleagues rebuilt communities with the teens taken into account. By combining growth rates from lines found in cross-sections of bones, and the number of infant dinosaurs surviving each year based on fossil mass-death assemblages, the team calculated what proportion of a megatheropod species would have been juveniles.
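
A back-of-envelope version of that calculation, with all numbers assumed purely for illustration rather than taken from the team's data, combines annual survival with age at maturity to estimate the juvenile share of a stable population:

    annual_survival = 0.85   # assumed fraction surviving each year
    age_at_maturity = 18     # assumed years to adult size (growth lines)
    max_age = 30

    # Stable age structure under constant hatching: cohort sizes decay
    # geometrically with age, so most individuals are young.
    cohort = [annual_survival ** age for age in range(max_age + 1)]
    juvenile_share = sum(cohort[:age_at_maturity]) / sum(cohort)
    print(f"juvenile share of population: {juvenile_share:.0%}")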

Schroeder explained that this research is important because it (at least partially) elucidates why dinosaur diversity was lower than expected based on other fossil groups. It also explains why there are many more very large species of dinosaurs than small, which is the opposite of what would be expected. But most importantly, she added, it demonstrates the results of growth from very small infants to very large adults on an ecosystem.

Read more at Science Daily