Feb 19, 2022

Research advances knowledge of the battle between viruses and human cells

In the long-term battle between a herpesvirus and its human host, a University of Massachusetts virologist and her team of students have identified some human RNA able to resist the viral takeover -- and the mechanism by which that occurs.

This discovery, described in a paper published Feb. 15 in Proceedings of the National Academy of Sciences, represents an important step in the effort to develop anti-viral drugs to fight off infections.

"This paper is about trying to understand the mechanism that makes these RNA escape degradation," says senior author Mandy Muller, assistant professor of microbiology. "The next step is to figure out if we can manipulate this to our advantage."

In the Muller Lab, student researchers work with Muller studying how Kaposi sarcoma-associated herpesvirus (KSHV) hides for years inside the human body before seeking to gain control over human gene expression to complete the viral infection. At that point, people with a weakened immune system may develop Kaposi sarcoma cancer lesions in the mouth, skin or other organs.

The researchers use genome-wide sequencing, post-transcriptional sequencing and molecular biology to examine how the human cell or the virus knows how to prevent degradation.

"Viruses are very smart, that's what I love to say," Muller says. "They have lots of strategies to stick around, and they don't do a lot of damage for a very long time, because that's one way to hide from the immune system.

"But then, at some point -- many, many years later -- they reactivate. The way they do this is by triggering a massive RNA degradation event where the virus will wipe out the mRNA from the cell. That means the human system can no longer express the proteins that it needs to express, and that means also that a lot of resources are suddenly available for the virus."

How and why some RNA are able to escape the viral degradation are questions Muller's team -- including lead author and graduate student Daniel Macveigh-Fierro and co-authors and undergraduates Angelina Cicerchia, Ashley Cadorette and Vasudha Sharma -- has been investigating.

"We show that RNA that escape have a chemical tag on them -- a post-transcriptional modification -- that makes them different from the others," Muller explains. "By having this tag, M6A, they can recruit proteins that protect them from degradation."

Muller has been studying KSHV since she was an undergraduate in her native France, and her mission continues.

"We know you need this protein to protect the RNA from degradation, but we still don't know how that physically stops the degradation, so that's what we're going to look at now," she says.

Ultimately, understanding the mechanisms and pathways involved in KSHV infection may lead to the development of RNA therapeutics to treat viral diseases.

"By identifying the determinants of what makes an mRNA either resistant or susceptible to viral-induced decay, we could use those findings to our advantage to better design anti-viral drugs and reshape the outcome of infection," Muller says.

Read more at Science Daily

Breakthrough in converting carbon dioxide into fuel using solar energy

A research team led by Lund University in Sweden has shown how solar power can convert carbon dioxide into fuel, by using advanced materials and ultra-fast laser spectroscopy. The breakthrough could be an important piece of the puzzle in reducing the levels of greenhouse gases in the atmosphere in the future. The study is published in Nature Communications.

The sunlight that hits Earth during one hour corresponds roughly to humanity's total energy consumption for an entire year. Our global carbon dioxide emissions are also increasing. Using the sun's energy to capture greenhouse gases and convert them into fuel or other useful chemicals is a research focus for many today. There is still no satisfactory solution, but an international research team has now revealed a possible way forward.

"The study uses a combination of materials that absorb sunlight and use its energy to convert carbon dioxide. With the help of ultra-fast laser spectroscopy, we have mapped exactly what happens in that process," says Tönu Pullerits, chemistry researcher at Lund University.

The researchers have studied a porous organic material called COF -- covalent organic framework. The material is known for absorbing sunlight very efficiently. By adding a so-called catalytic complex to COF, they succeeded, without any additional energy, in converting carbon dioxide to carbon monoxide.

"The conversion to carbon monoxide requires two electrons. When we discovered that photons with blue light create long-lived electrons with high energy levels, we could simply charge COF with electrons and complete a reaction," says Kaibo Zheng, chemistry researcher at Lund University.

How can these results be useful? Tönu Pullerits and Kaibo Zheng hope that in the future the discovery can be used to develop larger units that can be used on a global level to, with the help of the sun, absorb carbon dioxide from the atmosphere and convert it into fuel or chemicals. That could be one of many solutions to overcome the climate crisis we are facing.

Read more at Science Daily

Feb 18, 2022

U.S. coastline to see up to a foot of sea level rise by 2050

The United States is expected to experience as much sea level rise by the year 2050 as it witnessed in the previous hundred years. That's according to a NOAA-led report updating sea level rise decision-support information for the U.S. released in partnership with half a dozen other federal agencies.

The Sea Level Rise Technical Report provides the most up-to-date sea level rise projections for all U.S. states and territories by decade for the next 100 years and beyond, based on a combination of tide gauge and satellite observations and all the model ensembles from the Sixth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC). The report projects sea levels along the coastline will rise an additional 10-12 inches by 2050 with specific amounts varying regionally, mainly due to land height changes.

The report updates the federal government's 2017 sea level rise projections, and provides additional information on tide, wind, and storm-driven extreme water levels affecting current and future coastal flood risk. A suite of federal tools, including the NOAA Sea Level Rise Viewer, is using this data; these tools are critical to the Administration's commitment to tackle the climate crisis by making actionable climate data accessible to those who need it.

"For businesses along the coast, knowing what to expect and how to plan for the future is critical," said U.S. Secretary of Commerce Gina M. Raimondo. "These updated projections will help businesses, and the communities they support, understand risks and make smart investments in the years ahead."

"This new data on sea rise is the latest reconfirmation that our climate crisis -- as the President has said -- is blinking 'code red,'" said Gina McCarthy, National Climate Advisor. "We must redouble our efforts to cut the greenhouse gases that cause climate change while, at the same time, help our coastal communities become more resilient in the face of rising seas."

"This is a global wake-up call and gives Americans the information needed to act now to best position ourselves for the future," said Rick Spinrad, Ph.D., NOAA Administrator. "As we build a Climate Ready Nation, these updated data can inform coastal communities and others about current and future vulnerabilities in the face of climate change and help them make smart decisions to keep people and property safe over the long run."

The report also finds that the sea level rise expected by 2050 will create a profound increase in the frequency of coastal flooding, even in the absence of storms or heavy rainfall.

"By 2050, moderate flooding ⁠ -- which is typically disruptive and damaging by today's weather, sea level and infrastructure standards ⁠ -- is expected to occur more than 10 times as often as it does today," said Nicole LeBoeuf, NOAA National Ocean Service Director. "These numbers mean a change from a single event every 2-5 years to multiple events each year, in some places."

"This report supports previous studies and confirms what we have long known: Sea levels are continuing to rise at an alarming rate, endangering communities around the world. Science is indisputable and urgent action is required to mitigate a climate crisis that is well underway," said Bill Nelson, NASA Administrator. "NASA is steadfast in our commitment to protecting our home planet by expanding our monitoring capabilities and continuing to ensure our climate data is not only accessible but understandable."

Read more at Science Daily

Scientists reveal how Venus fly trap plants snap shut

Scientists at Scripps Research have revealed the three-dimensional structure of Flycatcher1, an aptly named protein channel that may enable Venus fly trap plants to snap shut in response to prey. The structure of Flycatcher1, published February 14 in Nature Communications, helps shed light on longstanding questions about the remarkably sensitive touch response of Venus fly traps. The structure also gives the researchers a better understanding of how similar proteins in organisms including plants and bacteria, as well as proteins in the human body with similar functions (called mechanosensitive ion channels), might operate.

"Despite how different Venus fly traps are from humans, studying the structure and function of these mechanosensitive channels gives us a broader framework for understanding the ways that cells and organisms respond to touch and pressure," says co-senior author and Scripps Research professor Andrew Ward, PhD.

"Every new mechanosensitive channel that we study helps us make progress in understanding how these proteins can sense force and translate that to action and ultimately reveal more about human biology and health," adds co-senior author Ardem Patapoutian, PhD, a Scripps Research professor who won the Nobel Prize in Physiology or Medicine for research on the mechanosensitive channels that allow the body to sense touch and temperature.

Mechanosensitive ion channels are like tunnels that span the membranes of cells. When jostled by movement, the channels open, letting charged molecules rush across. In response, cells then alter their behavior -- a neuron might signal its neighbor, for instance. The ability of cells to sense pressure and movement is important for people's senses of touch and hearing, but also for many internal body processes -- from the ability of the bladder to sense that it's full to the ability of the lungs to sense how much air is being breathed.

Previously, scientists had homed in on three ion channels in Venus fly traps thought to be related to the ability of the carnivorous plant to snap its leaves shut when its sensitive trigger hairs get touched. One, Flycatcher1, caught researchers' attention because its genetic sequence looked similar to a family of mechanosensitive channels, MscS, found in bacteria.

"The fact that variants of this channel are found throughout evolution tells us that it must have some fundamental, important functions that have been maintained in different types of organisms," says co-first author Sebastian Jojoa-Cruz, a graduate student at Scripps Research.

In the new study, the researchers used cryo-electron microscopy -- a cutting-edge technique that reveals the locations of atoms within a frozen protein sample -- to analyze the precise arrangement of molecules that form the Flycatcher1 protein channel in Venus fly trap plants. They found that Flycatcher1 is, in many ways, similar to bacterial MscS proteins -- seven groups of identical helices surrounding a central channel. But, unlike other MscS channels, Flycatcher1 has an unusual linker region extending outward from each group of helices. Like a switch, each linker can be flipped up or down. When the team determined the structure of Flycatcher1, they found six linkers in the down position, and just one flipped up.

"The architecture of Flycatcher1's channel core was similar to other channels that have been studied for years, but these linker regions were surprising," says Kei Saotome, PhD, a former postdoctoral research associate at Scripps Research and co-first author of the new paper.

To help elucidate the function of these switches, the researchers altered the linker to disrupt the up position. Flycatcher1, they found, no longer functioned as usual in response to pressure; the channel remained open for a longer duration when it would normally close upon removal of pressure.

"The profound effect of this mutation tells us that the conformations of these seven linkers is likely relevant for how the channel works," says co-senior author Swetha Murthy, PhD, of Vollum Institute at Oregon Health and Science University, a former postdoctoral research associate at Scripps Research.

Now that they have solved the molecular structure, the research team is planning future studies to understand how Flycatcher1's different conformations affect its function. More work is also needed to determine whether Flycatcher1 is solely responsible for the snapping shut of Venus fly trap leaves, or whether other suspected channels play complementary roles.

Read more at Science Daily

Flies possess more sophisticated cognitive abilities than previously known

As they annoyingly buzz around a batch of bananas in our kitchens, fruit flies appear to have little in common with mammals. But as researchers study the minuscule fruit-loving insects as a model species, they are discovering increasing similarities between flies and us.

In a new study, researchers at the University of California San Diego's Kavli Institute for Brain and Mind (KIBM) have found that fruit flies (Drosophila melanogaster) have more advanced cognitive abilities than previously believed. Using a custom-built immersive virtual reality environment, neurogenetic manipulations and in vivo real-time brain-activity imaging, the scientists present new evidence Feb. 16 in the journal Nature of the remarkable links between the cognitive abilities of flies and mammals.

The multi-tiered approach of their investigations found attention, working memory and conscious awareness-like capabilities in fruit flies, cognitive abilities typically only tested in mammals. The researchers were able to watch the formation, distractibility and eventual fading of a memory trace in their tiny brains.

"Despite a lack of obvious anatomical similarity, this research speaks to our everyday cognitive functioning -- what we pay attention to and how we do it," said study senior author Ralph Greenspan, a professor in the UC San Diego Division of Biological Sciences and associate director of KIBM. "Since all brains evolved from a common ancestor, we can draw correspondences between fly and mammalian brain regions based on molecular characteristics and how we store our memories."

To arrive at the heart of their new findings, the researchers created an immersive virtual reality environment to test the fly's behavior via visual stimulation, coupling the displayed imagery with an infrared laser as an aversive heat stimulus. The near 360-degree panoramic arena allowed Drosophila to flap their wings freely while remaining tethered, and with the virtual reality constantly updating based on their wing movement (analyzed in real-time using high-speed machine-vision cameras), it gave the flies the illusion of flying freely in the world. This gave the researchers the ability to train and test flies on conditioning tasks by allowing the insect to orient away from an image associated with the negative heat stimulus and toward a second image not associated with heat.

They tested two variants of conditioning, one in which flies were given visual stimulation overlapping in time with the heat (delay conditioning), both ending together, or a second, trace conditioning, by waiting 5 to 20 seconds to deliver the heat after showing and removing the visual stimulation. The intervening time is considered the "trace" interval during which the fly retains a "trace" of the visual stimulus in its brain, a feature indicative of attention, working memory and conscious awareness in mammals.

The researchers also imaged the brain to track calcium activity in real-time using a fluorescent molecule they genetically engineered into their brain cells. This allowed the researchers to record the formation and duration of the fly's living memory since they saw the trace blinking on and off while being held in the fly's short-term (working) memory. They also found that a distraction introduced during training -- a gentle puff of air -- made the visual memory fade more quickly, marking the first time researchers have been able to prove such distractedness in flies and implicating an attentional requirement in memory formation in Drosophila.

"This work demonstrates not only that flies are capable of this higher form of trace conditioning, and that the learning is distractible just like in mammals and humans, but the neural activity underlying these attentional and working memory processes in the fly show remarkable similarity to those in mammals," said Dhruv Grover, a UC San Diego KIBM research faculty member and lead author of the new study. "This work demonstrates that fruit flies could serve as a powerful model for the study of higher cognitive functions. Simply put, the fly continues to amaze in how smart it really is."

The scientists also identified the area of the fly's brain where the memory formed and faded -- an area known as the ellipsoid body of the fly's central complex, a location that corresponds to the cerebral cortex in the human brain.

Further, the research team discovered that the neurochemical dopamine is required for such learning and higher cognitive functions. The data revealed that dopamine reactions increasingly occurred earlier in the learning process, eventually anticipating the coming heat stimulus.

The researchers are now investigating details of how attention is physiologically encoded in the brain. Grover believes the lessons learned from this model system are likely not only to directly inform our understanding of human cognition strategies and the neural disorders that disrupt them, but also to contribute to new engineering approaches that lead to performance breakthroughs in artificial intelligence designs.

Read more at Science Daily

Sudden evolutionary change in flowers

When Charles Darwin first codified the theory of evolution by means of natural selection, he thought of it as a gradual process. "We see nothing of these slow changes in progress, until the hand of time has marked the long lapse of ages," he wrote in his seminal work, "On the Origin of Species."

But Darwin didn't have the full picture. "Evolution doesn't necessarily take all these small changes like Darwin proposed," said Scott Hodges, a professor in UC Santa Barbara's Department of Ecology, Evolution, and Marine Biology.

Hodges, doctoral student Zachary Cabin and their colleagues have just identified a case of sudden evolutionary change. In the journal Current Biology, the scientists describe a population of columbines that have lost their petals, including the characteristic nectar spurs -- a drastic change caused by a mutation in a single gene. The finding adds weight to the idea that adaptation can occur in large jumps, rather than merely plodding along over extended timespans.

Ever since the theory of evolution was put forward, biologists have debated whether it always occurs in small, gradual steps over long timespans or sometimes as an equilibrium punctuated by abrupt changes. Often, large morphological changes appear within short geologic timescales where intermediate forms may not have fossilized. The question then remains whether many small changes occurred in a short period of time, or whether a single large-scale mutation might be responsible. So researchers really have to catch the development in action if they hope to build a case that sudden changes can drive evolution.

Enter the Colorado blue columbine. In one population, a mutation has caused many of the plants to lose their petals with the iconic nectar spurs. While not an uncommon occurrence in columbines, spurlessness seems to have stuck around in this area: About a quarter of the plants lack the distinctive feature.

A single gene

The team plumbed the plant's genome to find the source of the unusual morphology. They considered a gene, APETALA3-3, known to affect spur development. They found that this single gene controlled the entire development of the flower's spurs and nectaries.

"The gene is either on or off, so it's about as simple of a change you can get," said lead author Zachary Cabin. "But that simple difference causes a radical change in morphology."

A single broken gene causes mutant plants to develop flowers with no petals or nectar spurs.

If these flowers were preserved in the fossil record, scientists could well sort them into two wholly different genera. And there would also be a puzzling gap: no intermediate form documenting a transition from one morphology to the other.

"This finding shows that evolution can occur in a big jump if the right kind of gene is involved," Hodges said. APETALA3-3 tells the developing organ to become a petal. "When it's broken, those instructions aren't there anymore, and that causes it to develop into a completely different organ, a sepal," he explained.

APETALA3-3 is a type of homeotic gene, one that specifies the development of an entire organ. A mutation in one of these genes can have a drastic effect on an organism's morphology. For instance, one homeotic mutation causes a fly to develop legs where it should have antennae. "Most of the mutations of this nature are going to be like that, just awful," Hodges continued. "The animal won't have any chance of surviving. Biologist Richard Goldschmidt called them 'hopeless monsters.'"

But once in a very long while, one of these radical changes might provide a beneficial trait in a particular environment, creating a "hopeful monster." And a hopeful monster would show that evolution can proceed in single, large jumps, supporting the punctuated equilibrium hypothesis.

"We did not have a good example of a hopeful monster due to a single genetic change," said Hodges, "until now." Researchers have to catch these abrupt changes as they're happening, otherwise they disappear into an organism's genome. For example, other relatives of columbines have lost their petals and nectaries in the past, but it's now impossible to tell if these events occurred in one fell swoop. The fact that it is actively happening in the Colorado blue columbine enabled the team to confirm their status as a hopeful monster.

"There's definitely some luck involved with us being around at the right time to capture this," Cabin said.

Surprising selection
Catching the change in action offers another benefit as well: the opportunity to study the genetics and selective pressures at work.

The team discovered five versions, or alleles, of APETALA3-3, only one of which codes for a petal with a functional nectar spur. The other four were broken, as Hodges put it. They also determined that spurlessness is a recessive trait. The flower will develop normally as long as the plant has one copy of the functional allele. But any two of the mutant alleles together will prevent this. "You can mix and match them," Cabin explained.
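The inheritance pattern described above can be sketched in a few lines (allele names here are hypothetical; the article identifies one functional allele and four broken ones but does not name them): a plant develops spurred flowers if and only if it carries at least one copy of the functional allele, and any mix-and-match pair of broken alleles yields spurlessness.

```python
# One functional APETALA3-3 allele ("A") and four broken ones ("b1".."b4"),
# per the study's description. Spurlessness is recessive.
FUNCTIONAL = {"A"}
BROKEN = {"b1", "b2", "b3", "b4"}

def phenotype(allele1, allele2):
    """Return 'spurred' if at least one allele is functional, else 'spurless'."""
    if FUNCTIONAL & {allele1, allele2}:
        return "spurred"
    return "spurless"

print(phenotype("A", "b3"))   # one functional copy suffices: spurred
print(phenotype("b1", "b4"))  # any two broken alleles together: spurless
```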

About a quarter of Colorado blue columbines in this area display the recessive trait of spurlessness, more than can be attributed to mere chance.

Across all species of columbines it's possible to find rare individuals that develop flowers without nectar spurs. But with a quarter of the Colorado population missing the feature, Cabin and Hodges knew this was more than a chance occurrence. "To get that many of this mutant type really suggests that there's selection favoring it somehow," Hodges said, which he finds odd, since the spur produces nectar that attracts the plant's pollinators.

Hodges is deeply familiar with columbines, and all of his previous research suggests that nectar spurs are important to the group. Even slight changes to the structure have driven speciation and diversification in the genus. "So, how the heck can you lose your spurs and still be favored?" he asked.

Attracting pollinators is only one factor contributing to reproductive success. It turned out the mutant plants actually produced more seeds than their counterparts, much to the team's surprise. They began combing through their observations, searching for an explanation.

"The first time we really realized the pattern was at the airport on the way home," Cabin recalled. He was reading off data as Hodges entered it into the computer. "Scott could see the pattern developing, because he had all the data in front of him, and was getting more and more excited."

The team had recorded herbivory from caterpillars, aphids and deer on the different morphs. Damage from caterpillars and aphids can hamper seed production, Cabin explained, while deer can devastate an entire plant. And as the data built up, a clear trend emerged: Deer and aphids preferred flowers with nectar spurs.

Shifts in floral morphology are usually driven by pollinators, but spurlessness seems to be driven by herbivory. "Natural selection can come from very surprising sources," Hodges said. "It's not always what you'd expect it to be."

Timing it right

Now that they've identified their hopeful monster, Cabin and Hodges plan to investigate the DNA around APETALA3-3 to build a timeline of when the mutations may have occurred. When the gene first mutated, only one of the plant's chromosomes was affected. That means that every descendant with that mutation would have the same genetic code around APETALA3-3 for many generations, Hodges explained.

However, chromosomes do swap alleles occasionally in a process called recombination. By tracking the amount of recombination that has accumulated around the different versions of APETALA3-3, the scientists can estimate how long ago each mutation occurred. More variation requires more time to accumulate. And the closer this variation is to APETALA3-3 itself, the more recombination events there have been since a mutation first appeared.

Read more at Science Daily

Feb 17, 2022

Psyche, the iron giant of asteroids, may be less iron than researchers thought

The asteroid 16 Psyche, which NASA intends to visit with a spacecraft in 2026, may be less heavy metal and more hard rock than scientists have surmised, according to a new study by researchers from Brown and Purdue universities.

Psyche, which orbits the sun in the asteroid belt between Mars and Jupiter, is the largest of the M-type asteroids, which are composed chiefly of iron and nickel as opposed to the silicate rocks that make up most other asteroids. But when viewed from Earth, Psyche sends mixed signals about its composition.

The light it reflects tells scientists that the surface is indeed mostly metal. That has led to conjecture that Psyche may be the exposed iron core of a primordial planetary body -- one whose rocky crust and mantle were blasted away by an ancient collision. However, measurements of Psyche's mass and density tell a different story. The way its gravity tugs on neighboring bodies suggests that Psyche is far less dense than a giant hunk of iron should be. So if Psyche is indeed all metal, it would have to be highly porous -- a bit like a giant ball of steel wool with nearly equal parts void space and solid metal.

"What we wanted to do with this study was see whether it was possible for an iron body the size of Psyche to maintain that near-50% porosity," said Fiona Nichols-Fleming, a Ph.D. student at Brown and study's lead author. "We found that it's very unlikely."

For the study, published in Geophysical Research Letters, Nichols-Fleming worked with Alex Evans, an assistant professor at Brown, and Purdue professors Brandon Johnson and Michael Sori. The team created a computer model, based on known thermal properties of metallic iron, to estimate how the porosity of a large iron body would evolve over time.

The model shows that to remain highly porous, Psyche's internal temperature would have to cool below 800 Kelvin very shortly after its formation. At temperatures above that, iron would have been so malleable that Psyche's own gravity would have collapsed most of the pore space within its bulk. Based on what is known about conditions in the early solar system, the researchers say, it's extremely unlikely that a body of Psyche's size -- about 140 miles in diameter -- could have cooled so quickly.

In addition, any event that may have added porosity to Psyche after its formation -- a massive impact, for example -- would likely have also heated Psyche back up above 800 K. So any newly introduced porosity would have been unlikely to last.

Taken together, the results suggest that Psyche probably isn't a porous, all-iron body, the researchers conclude. More likely, it's harboring a hidden rocky component that drives its density down. But if Psyche does have a rocky component, why does its surface look so metallic when viewed from Earth? There are a few possible explanations, the researchers say.

One of those possibilities is ferrovolcanism -- iron-spewing volcanoes. It's possible, the researchers say, that Psyche is actually a differentiated body with a rocky mantle and an iron core. But widespread ferrovolcanic activity may have brought large amounts of Psyche's core up to the surface, putting an iron coating atop its rocky mantle. Prior research by Johnson and Evans has shown that ferrovolcanism is possible on a body like Psyche.

Whatever the case, scientists will soon get a much clearer picture of this mysterious asteroid. Later this year, NASA plans to launch a spacecraft that will rendezvous with Psyche after a four-year journey to the asteroid belt.

Read more at Science Daily

Ancestors of legionella bacteria infected cells two billion years ago

Researchers at Uppsala University have discovered that the ancestors of legionella bacteria infected eukaryotic cells as early as two billion years ago. It happened soon after eukaryotes began to feed on bacteria. These results, described in a new study published in Molecular Biology and Evolution, are also relevant in the chicken-or-egg debate about whether mitochondria or phagocytosis came first.

"Our study can help us understand how harmful bacteria arise and how complex cells evolved from simpler cells," says Lionel Guy, associate professor of evolutionary microbiology at the Department of Medical Biochemistry and Microbiology, who headed the study.

Two billion years ago, ancestors of legionella bacteria already had the ability to avoid being digested by eukaryotes. Instead, they began using eukaryotic cells -- complex cells with a nucleus that make up amoebas, fungi and human beings -- to multiply.

The legionella bacterium, which causes Legionnaires' disease, belongs to a large group of bacteria called Legionellales. All Legionellales bacteria can infect eukaryotic hosts: amoebas, insects or our own cells.

"We discovered that the ancestor of the whole group lived about two billion years ago, at a time when eukaryotes were still in the making, evolving from simpler cells to the complex cell structure they have now," says Andrei Guliaev, a researcher at the Department of Medical Biochemistry and Microbiology. "We believe Legionellales were among the first to infect eukaryotic cells."

The first step in an infection with legionella bacteria is for a eukaryotic host, such as an amoeba, to bring the bacterium into its cell through a process called phagocytosis. The next step for the amoeba would be to digest the bacterium and use its parts as an energy source. But legionella bacteria have molecular tools that keep them from being digested and allow them to instead use the amoeba as an energy source so they can multiply.

In the study, the researchers show that all Legionellales have the same kind of molecular tools as legionella. That suggests that the ability to infect eukaryotes already existed in the ancestor of all Legionellales. This means that phagocytosis is at least as old as Legionellales -- two billion years old -- when eukaryotes were in the early stages of their evolution.

This has implications for a hot chicken-or-egg debate in evolutionary biology about how eukaryotes came into being. Which came first? Was it the mitochondria, which originated from another group of bacteria and became our cells' own energy factories? Or was it phagocytosis, which is considered necessary to absorb mitochondria but is very costly from an energy standpoint?

Read more at Science Daily

Hidden diversity: When one wasp species is actually 16 wasp species

A common refrain among biologists holds that the majority of Earth's plant and animal species remain undiscovered. While many of those species inhabit narrow or hard-to-reach ranges, others may in fact be hiding right under our noses.

Take Ormyrus labotus, a tiny parasitoid wasp known to science since 1843. It has long been considered a generalist, laying its eggs in more than 65 different species of other insects. But a new study published today in Insect Systematics and Diversity suggests that the wasps currently called Ormyrus labotus are actually at least 16 different species, identical in appearance but genetically distinct.

It's not unusual, especially with advancing genetic techniques, to discover "cryptic" species within one known insect species, but the number of those found within Ormyrus labotus underlines the importance of seeking out the world's "hidden diversity," says Andrew Forbes, Ph.D., associate professor of biology at the University of Iowa and senior author of the study.

"We know so much from ecology about how important even the smallest species can be to an ecosystem," he says, "such that uncovering this hidden diversity -- and, maybe more importantly, understanding the biology of each species -- becomes a critical component of conservation and maintenance of ecosystem health."

Intriguing Insects That Emerge From Oak Galls

Parasitoid wasps lay their eggs on or in other insects and arthropods, and they commonly specialize in parasitizing a small number of host species, or even just one. Meanwhile, a wide variety of insects lay their eggs on plants where their larvae hatch and then induce the plant to form a protective structure called a "gall" around the larvae. Wasps in the genus Ormyrus parasitize these gall-forming insects.

For a separate research project between 2015 and 2019, Sofia Sheikh and Anna Ward, both graduate students in Forbes' lab, collected galls formed on oak trees and observed the insects that emerged. They noticed that wasps emerging from a large diversity of gall types all matched the description of Ormyrus labotus, and this got the researchers wondering.

"It seemed highly unusual for one parasitoid species to be able to exploit such a wide and dynamic set of hosts," says Sheikh, a master's student at the time in Forbes' lab (now a Ph.D. student at the University of Chicago) and lead author on the new study.

To test whether the wasps they collected were all truly one species or instead a band of look-alikes, Sheikh, Ward, and Forbes extracted DNA samples from each of the wasp specimens that emerged from the oak galls and analyzed the degree of genetic variation between them, with assistance from collaborators at Rice University and the U.S. Department of Agriculture. Then they combined this genetic analysis with data on the wasps' physical attributes and ecological factors -- e.g., which type of oak galls they emerged from, at what time of year, and so on -- to place the wasps in groups of likely separate species.
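The core idea of grouping look-alike specimens by genetic distance can be sketched in a few lines. The specimens, distances, and the cutoff below are invented for illustration; the study's actual analysis also weighed morphology and ecology, not genetics alone.

```python
# Toy sketch of species delimitation from pairwise genetic distances:
# link any two specimens whose distance falls below a cutoff (single
# linkage via union-find), then read off the resulting clusters as
# putative species. All numbers here are hypothetical.

def delimit(specimens, distances, cutoff=0.04):
    """Group specimens into putative species by linking any pair whose
    genetic distance is below `cutoff`."""
    parent = {s: s for s in specimens}

    def find(s):
        while parent[s] != s:
            parent[s] = parent[parent[s]]  # path compression
            s = parent[s]
        return s

    for (a, b), d in distances.items():
        if d < cutoff:
            parent[find(a)] = find(b)  # merge the two clusters

    groups = {}
    for s in specimens:
        groups.setdefault(find(s), []).append(s)
    return sorted(groups.values(), key=len, reverse=True)

# Hypothetical example: three wasps that look identical, but w3 is
# genetically distant from the other two.
specimens = ["w1", "w2", "w3"]
distances = {("w1", "w2"): 0.01, ("w1", "w3"): 0.12, ("w2", "w3"): 0.11}
print(delimit(specimens, distances))  # two groups: [w1, w2] and [w3]
```

In this toy run, the identical-looking trio splits into two genetic clusters, which is the pattern that, repeated across many specimens and corroborated by ecological data, separates one purported species into many.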

The final result? The collected wasps that originally appeared to be Ormyrus labotus instead comprise at least 16 distinct species, and possibly as many as 18.

The Hunt for Cryptic Species

In their review of other research, the team found several other studies that had uncovered cryptic species within purported generalist species but none that had found so many at once. And it's possible more distinct species that would otherwise match O. labotus remain to be found, the researchers say, because the original collection of oak-gall specimens that Sheikh and Ward conducted wasn't designed to encompass all known O. labotus hosts.

For now, Ormyrus labotus will remain a "species complex," with these newly delineated species known to exist but not yet formally described and named. Forbes says his lab "only dabbles" in formal taxonomy, but all specimens from the study have been preserved and are available for other researchers who want to conduct a taxonomic revision of the Ormyrus genus. "If someone wants to take a crack at naming these species of Ormyrus, we're ready to help however we can," he says.

Until then, the current findings underscore the importance of fundamental biodiversity research and its potential implications. For example, if O. labotus were ever enlisted for control of an invasive oak-galling pest, it would be critical to know which species within the complex targeted that specific pest species -- and the same dynamic applies in the use of any parasitoid wasp species for biological control. Meanwhile, failing to differentiate specialists from generalists hinders scientists' ability to understand actual generalist insects and what enables them to target a variety of hosts, the researchers note.

Read more at Science Daily

How long does it really take to recover from concussion?

A new study suggests that people with mild traumatic brain injuries may be more likely to have cognitive impairment, cognitive decline or both one year later, compared to people who were not injured. The research is published in the February 16, 2022, online issue of Neurology®, the medical journal of the American Academy of Neurology. People with poor cognitive outcomes were also more likely to have other symptoms like anxiety and lower satisfaction with life.

"Our results suggest that clinically meaningful poor cognitive outcomes, which we defined as cognitive impairment, cognitive decline or both, one year after a concussion may be more common than previously thought," said study author Raquel Gardner, MD, of the University of California San Francisco. "They also highlight the need to better understand the mechanisms underlying poor cognitive outcome, even after relatively mild brain injuries, to improve therapy for recovery."

The study looked at 656 people who had been admitted to trauma center emergency rooms with concussions and 156 healthy people without head injuries. Their average age was 40. Participants were given up to three neurological evaluations after their injury, at two weeks, six months and one year. Each of those evaluations provided five scores from three tests of recall, language skills and other cognitive domains.

Poor cognitive outcome was defined as satisfying the criteria for cognitive impairment, cognitive decline or both. Cognitive impairment was defined as lower-than-expected performance on at least two cognitive tests such as one memory test and one processing speed test. Cognitive decline was defined as clinically meaningful decline on at least two cognitive tests.

Researchers found that 86 out of 656 people with mild brain injuries, or 14%, had poor cognitive outcomes one year later. Of those, 10% had cognitive impairment only, 2% had cognitive decline only and 2% had both. That's compared to eight out of 156 people without concussions, or 5%, who had poor cognitive outcomes one year later. Of those healthy people, 3% had cognitive impairment only, none had cognitive decline only, and 1% had both.

Researchers also found that people who had depression before their injury, had no health insurance, or had a high school education or less were more likely to have a poor cognitive outcome than those who were not depressed before the injury, or had insurance or had more than a high school education.

Researchers found that people who had good cognitive outcomes were more likely to have higher life satisfaction one year after their concussion. The life satisfaction test given to participants ranges in score from five to 35, with lower scores indicating lower life satisfaction. The people with good cognitive outcomes scored an average of 26 on the test, compared to people with poor cognitive outcomes, who scored an average of 21.

The study does not prove that people with concussions will have worse cognitive outcomes one year later, but it shows an association.

"Previous studies of people with moderate to severe brain injuries show that early, intensive rehabilitation can improve people's cognitive outcomes over time. More research is needed to find out the role of cognitive rehabilitation on people with more mild brain injuries who are also at risk for poor cognitive outcomes, and how to predict who falls into this risk category," Gardner said.

A limitation of the study is that people were enrolled at the time of their concussion and their cognitive health before injury was not known.

Read more at Science Daily

Feb 16, 2022

Atomic clocks measure Einstein's general relativity at millimeter scale

JILA physicists have measured Albert Einstein's theory of general relativity, or more specifically, the effect called time dilation, at the smallest scale ever, showing that two tiny atomic clocks, separated by just a millimeter or the width of a sharp pencil tip, tick at different rates.

The experiments, described in the Feb. 17 issue of Nature, suggest how to make atomic clocks 50 times more precise than today's best designs and offer a route to perhaps revealing how relativity and gravity interact with quantum mechanics, a major quandary in physics.

JILA is jointly operated by the National Institute of Standards and Technology (NIST) and the University of Colorado Boulder.

"The most important and exciting result is that we can potentially connect quantum physics with gravity, for example, probing complex physics when particles are distributed at different locations in the curved space-time," NIST/JILA Fellow Jun Ye said. "For timekeeping, it also shows that there is no roadblock to making clocks 50 times more precise than today -- which is fantastic news."

Einstein's 1915 theory of general relativity explains large-scale effects such as the gravitational effect on time and has important practical applications such as correcting GPS satellite measurements. Although the theory is more than a century old, physicists remain fascinated by it. NIST scientists have used atomic clocks as sensors to measure relativity more and more precisely, which may help finally explain how its effects interact with quantum mechanics, the rulebook for the subatomic world.

According to general relativity, atomic clocks at different elevations in a gravitational field tick at different rates. The frequency of the atoms' radiation is reduced -- shifted toward the red end of the electromagnetic spectrum -- when observed in stronger gravity, closer to Earth. That is, a clock ticks more slowly at lower elevations. This effect has been demonstrated repeatedly; for example, NIST physicists measured it in 2010 by comparing two independent atomic clocks, one positioned 33 centimeters (about 1 foot) above the other.

The JILA researchers have now measured frequency shifts between the top and bottom of a single sample of about 100,000 ultracold strontium atoms loaded into an optical lattice, a lab setup similar to the group's earlier atomic clocks. In this new case the lattice, which can be visualized as a stack of pancakes created by laser beams, has unusually large, flat, thin cakes, and they are formed by less intense light than normally used. This design reduces the distortions in the lattice ordinarily caused by the scattering of light and atoms, homogenizes the sample, and extends the atoms' matter waves, whose shapes indicate the probability of finding the atoms in certain locations. The atoms' energy states are so well controlled that they all ticked between two energy levels in exact unison for 37 seconds, a record for what is called quantum coherence.

Crucial to the new results were the Ye group's imaging innovation, which provided a microscopic map of frequency distributions across the sample, and their method of comparing two regions of an atom cloud rather than the traditional approach of using two separate clocks.

The measured redshift across the atom cloud was tiny, in the realm of 0.0000000000000000001, consistent with predictions. (While much too small for humans to perceive directly, the differences add up to major effects on the universe as well as technology such as GPS.) The research team resolved this difference quickly for this type of experiment, in about 30 minutes of averaging data. After 90 hours of data, their measurement precision was 50 times better than in any previous clock comparison.
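The scale of that number can be checked against the standard weak-field prediction, where the fractional frequency shift between two clocks separated by a height Δh is approximately gΔh/c². This back-of-the-envelope calculation is for scale only; the experiment's actual analysis is far more involved.

```python
# Gravitational redshift across 1 mm, using the weak-field prediction
# delta_f / f ~ g * delta_h / c^2. A scale check, not the experiment's
# analysis.

g = 9.80665        # standard gravity, m/s^2
c = 299_792_458.0  # speed of light, m/s
delta_h = 1e-3     # height difference across the atom cloud, m

fractional_shift = g * delta_h / c**2
print(f"{fractional_shift:.2e}")  # ~1.1e-19
```

The result, about 1 part in 10^19, matches the "0.0000000000000000001" realm reported for the measurement.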

"This a completely new ballgame, a new regime where quantum mechanics in curved space-time can be explored," Ye said. "If we could measure the redshift 10 times even better than this, we will be able to see the atoms' whole matter waves across the curvature of space-time. Being able to measure the time difference on such a minute scale could enable us to discover, for example, that gravity disrupts quantum coherence, which could be at the bottom of why our macroscale world is classical."

Better clocks have many possible applications beyond timekeeping and navigation. Ye suggests atomic clocks can serve as both microscopes to see minuscule links between quantum mechanics and gravity and as telescopes to observe the deepest corners of the universe. He is using clocks to look for mysterious dark matter, believed to constitute most matter in the universe. Atomic clocks are also poised to improve models and understanding of the shape of the Earth through the application of a measurement science called relativistic geodesy.

Read more at Science Daily

Supermassive black hole caught hiding in a ring of cosmic dust

The European Southern Observatory's Very Large Telescope Interferometer (ESO's VLTI) has observed a cloud of cosmic dust at the centre of the galaxy Messier 77 that is hiding a supermassive black hole. The findings have confirmed predictions made around 30 years ago and are giving astronomers new insight into "active galactic nuclei," some of the brightest and most enigmatic objects in the universe.

Active galactic nuclei (AGNs) are extremely energetic sources powered by supermassive black holes and found at the centre of some galaxies. These black holes feed on large volumes of cosmic dust and gas. Before it is eaten up, this material spirals towards the black hole and huge amounts of energy are released in the process, often outshining all the stars in the galaxy.

Astronomers have been curious about AGNs ever since they first spotted these bright objects in the 1950s. Now, thanks to ESO's VLTI, a team of researchers, led by Violeta Gámez Rosas from Leiden University in the Netherlands, have taken a key step towards understanding how they work and what they look like up close. The results are published today in Nature.

By making extraordinarily detailed observations of the centre of the galaxy Messier 77, also known as NGC 1068, Gámez Rosas and her team detected a thick ring of cosmic dust and gas hiding a supermassive black hole. This discovery provides vital evidence to support a 30-year-old theory known as the Unified Model of AGNs.

Astronomers know there are different types of AGN. For example, some release bursts of radio waves while others don't; certain AGNs shine brightly in visible light, while others, like Messier 77, are more subdued. The Unified Model states that despite their differences, all AGNs have the same basic structure: a supermassive black hole surrounded by a thick ring of dust.

According to this model, any difference in appearance between AGNs results from the orientation at which we view the black hole and its thick ring from Earth. The type of AGN we see depends on how much the ring obscures the black hole from our view point, completely hiding it in some cases.

Astronomers had found some evidence to support the Unified Model before, including spotting warm dust at the centre of Messier 77. However, doubts remained about whether this dust could completely hide a black hole and hence explain why this AGN shines less brightly in visible light than others.

"The real nature of the dust clouds and their role in both feeding the black hole and determining how it looks when viewed from Earth have been central questions in AGN studies over the last three decades," explains Gámez Rosas. "Whilst no single result will settle all the questions we have, we have taken a major step in understanding how AGNs work."

The observations were made possible thanks to the Multi AperTure mid-Infrared SpectroScopic Experiment (MATISSE) mounted on ESO's VLTI, located in Chile's Atacama Desert. MATISSE combined infrared light collected by all four 8.2-metre telescopes of ESO's Very Large Telescope (VLT) using a technique called interferometry. The team used MATISSE to scan the centre of Messier 77, located 47 million light-years away in the constellation Cetus.

"MATISSE can see a broad range of infrared wavelengths, which lets us see through the dust and accurately measure temperatures. Because the VLTI is in fact a very large interferometer, we have the resolution to see what's going on even in galaxies as far away as Messier 77. The images we obtained detail the changes in temperature and absorption of the dust clouds around the black hole," says co-author Walter Jaffe, a professor at Leiden University.

Combining the changes in dust temperature (from around room temperature to about 1200 °C) caused by the intense radiation from the black hole with the absorption maps, the team built up a detailed picture of the dust and pinpointed where the black hole must lie. The dust -- in a thick inner ring and a more extended disc -- with the black hole positioned at its centre supports the Unified Model. The team also used data from the Atacama Large Millimeter/submillimeter Array, co-owned by ESO, and the National Radio Astronomy Observatory's Very Long Baseline Array to construct their picture.

"Our results should lead to a better understanding of the inner workings of AGNs," concludes Gámez Rosas. "They could also help us better understand the history of the Milky Way, which harbours a supermassive black hole at its centre that may have been active in the past."

Read more at Science Daily

What lies behind a baby’s eyes

We give meaning to our world through the categorisation of objects. When and how does this process begin? By studying the gaze of one hundred infants, scientists at the Institut des Sciences Cognitives Marc Jeannerod (CNRS/Université Claude Bernard Lyon 1) have demonstrated that, by the age of four months, babies can assign objects that they have never seen to the animate or inanimate category. These findings, published in PNAS on 15 February 2022, reveal measurable changes in neural organisation, which reflect the transition from simply viewing the world to understanding it.

The way babies look at the world is a great mystery. What do they really see? What information do they get from seeing? One might think they look at things that stand out the most -- by virtue of size or colour, for example. But when do babies begin to see and interpret the world like adults?

To answer this question, researchers from the Institut des Sciences Cognitives Marc Jeannerod (CNRS / Université Claude Bernard Lyon 1) studied one hundred babies aged between 4 and 19 months. The scientists recorded the babies' eye movements and the durations of their gaze as they looked at pairs of pictures representing animate or inanimate things from eight different categories (e.g., human faces and natural or artificial objects). The data obtained from eye tracking on babies were matched with measures of brain activity obtained from a group of adults using fMRI, in order to determine the correspondence between the categorical object organisation emerging from the babies' eyes and that mapped on the adults' visual cortex.

The methodology used in the study revealed a transition from visual exploration guided by the salience of objects, in the youngest babies, to an object representation approaching the mature categorical organisation of the adult brain, in the older babies. Already at four months, babies can distinguish between animate and inanimate objects. For instance, they can tell that a man and a crocodile, being animate, are more similar to each other than they are to a tree, which is an inanimate object. This ability appears astonishing as, at that age, babies are unlikely to know what a tree or crocodile is.

Between 10 and 19 months of age, more refined categories emerge and the infants' organisation of objects into categories increasingly approaches that in the adult brain. Children in this age range immediately recognise a soft, furry object with a face as a nonhuman animal.

Read more at Science Daily

Orangutans instinctively use hammers to strike and sharp stones to cut, study finds

Untrained, captive orangutans can complete two major steps in the sequence of stone tool use: striking rocks together and cutting using a sharp stone, according to a study by Alba Motes-Rodrigo at the University of Tübingen in Germany and colleagues, publishing February 16 in the open-access journal PLOS ONE.

The researchers tested tool making and use in two captive male orangutans (Pongo pygmaeus) at Kristiansand Zoo in Norway. Neither had previously been trained or exposed to demonstrations of the target behaviors. Each orangutan was provided with a concrete hammer, a prepared stone core, and two baited puzzle boxes requiring them to cut through a rope or a silicon skin in order to access a food reward. Both orangutans spontaneously hit the hammer against the walls and floor of their enclosure, but neither directed strikes towards the stone core. In a second experiment, the orangutans were also given a human-made sharp flint flake, which one orangutan used to cut the silicon skin, solving the puzzle. This is the first demonstration of cutting behavior in untrained, unenculturated orangutans.

To then investigate whether apes could learn the remaining steps from observing others, the researchers demonstrated how to strike the core to create a flint flake to three female orangutans at Twycross Zoo in the UK. After these demonstrations, one female went on to use the hammer to hit the core, directing the blows towards the edge as demonstrated.

This study is the first to report spontaneous stone tool use without close direction in orangutans that have not been enculturated by humans. The authors say their observations suggest that two major prerequisites for the emergence of stone tool use -- striking with stone hammers and recognizing sharp stones as cutting tools -- may have existed in our last common ancestor with orangutans, 13 million years ago.

The authors add: "Our study is the first to report that untrained orangutans can spontaneously use sharp stones as cutting tools. We also found that they readily engage in lithic percussion and that this activity occasionally leads to the detachment of sharp stone pieces."

From Science Daily

Key brain mechanisms for organizing memories in time

In a scientific first, researchers at the University of California, Irvine have discovered fundamental mechanisms by which the hippocampus region of the brain organizes memories into sequences and how this can be used to plan future behavior. The finding may be a critical early step toward understanding memory failures in cognitive disorders such as Alzheimer's disease and other forms of dementia.

Combining electrophysiological recording techniques in rodents with a statistical machine learning analysis of huge troves of data, the UCI researchers uncovered evidence suggesting that the hippocampal network encodes and preserves progressions of experiences to aid in decision-making. The team's work is the subject of a paper published recently in Nature Communications.

"Our brain keeps a pretty good record of when specific experiences or events occur. This ability helps us function in our daily life, but before this study, we didn't have a clear idea of the neuronal mechanisms behind these processes," said corresponding author Norbert Fortin, UCI associate professor of neurobiology and behavior. "Where it connects with everybody is that this type of memory is strongly impaired in a variety of neurological disorders or simply with aging, so we really need to know how this brain function works."

The project, which took more than three years to complete, involved experimental and data analysis phases. The researchers monitored the firing of neurons in rats' brains as they underwent a series of odor identification tests. By presenting five different smells in various sequences, the scientists were able to measure the animals' memory of the correct sequence and detect how their brains captured these sequential relationships.

"The analogy I would think about is computing," Fortin said. "If I were to stick electrodes in your brain -- we can't; that's why we use rats -- I could see which cells are firing and which ones are not firing at any given moment. That provides us with some insight into how the brain represents and computes information. When we record activity patterns in a structure, it's like we're seeing zeros and ones in a computer."

Obtained in millisecond intervals over several minutes, neuronal activity and inactivity measurements present a dynamic picture of the brain's functioning. Fortin said that he and his colleagues were, in some ways, able to "read the minds" of their subjects by viewing the "coding" of the cells -- which ones were firing and which were not -- in rapid succession.

"When you're thinking about something, it moves quickly," he said. "You're not stuck on that memory for long. Right now, it's being represented, but we can see how that changes very quickly."

Fortin knew early on that the readings of hippocampal activity would result in enormous quantities of raw data. From the beginning stages of the project, he enlisted the participation of statisticians in the Donald Bren School of Information & Computer Sciences.

"The neuroscience questions we had at the time in my lab were way too advanced for the statistical knowledge we had. That's why we needed to involve partners with data science expertise," Fortin said.

"These emerging neuroscience studies rely on data science methods because of the complexity of their data," said senior co-author Babak Shahbaba, UCI Chancellor's Fellow and professor of statistics. "Brain activities are recorded at millisecond scale, and these experiments run for more than an hour, so you can imagine how fast the amount of data grows. It gets to a point that neuroscientists need more advanced techniques to accomplish what they had imagined but weren't able to implement."

He noted that when neurons encode information such as memories, scientists can get a glimpse of that process by examining the pattern of spiking activity across all recorded neurons, known collectively as an ensemble.

"We found that we could treat these neural patterns as images, and this unlocked our ability to apply deep machine learning methods," Shahbaba said. "We analyzed the data with a convolutional neural network, which is a methodology used frequently in image processing applications such as facial recognition."

This way, the researchers were able to decode the firing of neurons to retrieve information.

"We know what the signature for odor B looks like, just as we know the ones for A, C and D," Fortin said. "Because of that, you can see when those signatures reappear at a different moment in time, such as when our subjects are anticipating something that has yet to happen. We're seeing these signatures being quickly replayed as they're thinking about the future."

Shahbaba said that the tools and methodologies developed during this project can be applied to a wide range of problems, and Fortin may extend his line of inquiry into other brain regions.

The study is an example of the power of convergence research at institutions such as UCI, Shahbaba said: "I could directly see the difference this is making for our students. Researchers in Norbert's neuroscience group are taking data science classes and can now ask some really important scientific questions they could not investigate in the past, and my own students are thinking fundamentally about the scientific method in an unprecedented way."

He added, "Through this collaboration, we are training the next generation of scientists, who have the required skills to conduct interdisciplinary research."

Read more at Science Daily

Feb 15, 2022

Astronomers discover a new type of star covered in helium burning ashes

A team of German astronomers, led by Professor Klaus Werner of the University of Tübingen, have discovered a strange new type of star covered in the by-product of helium burning. The stars might have been formed by a rare stellar merger event. The fascinating results are published in Monthly Notices of the Royal Astronomical Society.

While normal stars have surfaces composed of hydrogen and helium, the stars discovered by Werner and his colleagues have their surfaces covered with carbon and oxygen, the ashes of helium burning -- an exotic composition for a star. The situation becomes more puzzling as the new stars have temperatures and radii that indicate they are still burning helium in their cores -- a property typically seen in more evolved stars than those observed by Werner and his team in this study.

Published alongside the work of Professor Werner and his team, a second paper from a group of astronomers from the University of La Plata and the Max Planck Institute for Astrophysics offers a possible explanation for their formation. "We believe the stars discovered by our German colleagues might have formed in a very rare kind of stellar merger event between two white dwarf stars," says Dr Miller Bertolami of the Institute for Astrophysics of La Plata, lead author of the second paper. White dwarfs are the remnants of larger stars that have exhausted their nuclear fuel, and are typically very small and dense.

Stellar mergers are known to happen between white dwarfs in close binary systems due to the shrinking of the orbit caused by the emission of gravitational waves. "Usually, white dwarf mergers do not lead to the formation of stars enriched in carbon and oxygen," explains Miller Bertolami, "but we believe that, for binary systems formed with very specific masses, a carbon- and oxygen-rich white dwarf might be disrupted and end up on top of a helium-rich one, leading to the formation of these stars."

Yet no current stellar evolutionary models can fully explain the newly discovered stars. The team need refined models in order to assess whether these mergers can actually happen. These models could not only help the team to better understand these stars, but could also provide a deeper insight into the late evolution of binary systems and how their stars exchange mass as they evolve. Until astronomers develop more refined models for the evolution of binary stars, the origin of the helium covered stars will be up for debate.

"Normally we expect stars with these surface compositions to have already finished burning helium in their cores, and to be on their way to becoming white dwarfs. These new stars are a severe challenge to our understanding of stellar evolution." explains Professor Werner.

From Science Daily

Tilting of Earth’s crust governed the flow of ancient megafloods

As ice sheets began melting at the end of the last ice age, a series of cataclysmic floods called the Missoula megafloods scoured the landscape of eastern Washington, carving long, deep channels and towering cliffs through an area now known as the Channeled Scablands. They were among the largest known floods in Earth’s history, and geologists struggling to reconstruct them have now identified a crucial factor governing their flows.

In a study published February 14 in Proceedings of the National Academy of Sciences, researchers showed how the changing weight of the ice sheets would have caused the entire landscape to tilt, changing the course of the megafloods.

“People have been looking at high water marks and trying to reconstruct the size of these floods, but all of the estimates are based on looking at the present-day topography,” said lead author Tamara Pico, assistant professor of Earth and planetary sciences at UC Santa Cruz. “This paper shows that the ice age topography would have been different over broad scales due to the deformation of Earth’s crust by the weight of the ice sheets.”

During the height of the last ice age, vast ice sheets covered much of North America. They began to melt after about 20,000 years ago, and the Missoula megafloods occurred between 18,000 and 15,500 years ago. Pico’s team studied how the changing weight of the ice sheets during this period would have tilted the topography of eastern Washington, changing how much water would flow into different channels during the floods.

Glacial Lake Missoula formed in western Montana when a lobe of the Cordilleran ice sheet dammed the Clark Fork valley in the Idaho panhandle and melt water built up behind the dam. Eventually the water got so deep that the ice dam began to float, resulting in a glacial outburst flood. After enough water had been released, the ice dam resettled and the lake refilled. This process is thought to have been repeated dozens of times over a period of several thousand years.
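The flotation mechanism described above follows from Archimedes' principle: because glacier ice is slightly less dense than water, an ice dam begins to float once the lake behind it is roughly nine-tenths as deep as the ice is thick. A minimal sketch of that calculation (the densities are standard values; the 600 m dam thickness is an invented figure, not from the study):

```python
# Archimedes' principle: an ice dam starts to float once the water depth
# behind it exceeds (rho_ice / rho_water) * ice_thickness.
RHO_ICE = 917.0     # kg/m^3, typical density of glacier ice
RHO_WATER = 1000.0  # kg/m^3, fresh water

def flotation_depth(ice_thickness_m: float) -> float:
    """Water depth at which an ice dam of the given thickness begins to float."""
    return (RHO_ICE / RHO_WATER) * ice_thickness_m

# Hypothetical 600 m thick ice lobe damming the valley:
depth = flotation_depth(600.0)
print(f"Dam floats once the lake is ~{depth:.0f} m deep")  # ~550 m
```

The ratio 917/1000 is why the dam lifts before the lake ever overtops it, triggering the outburst rather than a gradual spill.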

Downstream from glacial Lake Missoula, the Columbia River was dammed by another ice lobe, forming glacial Lake Columbia. When Lake Missoula’s outburst floods poured into Lake Columbia, the water spilled over to the south onto the eastern Washington plateau, eroding the landscape and creating the Channeled Scablands.

During this period, the deformation of the Earth’s crust in response to the growing and shrinking of ice sheets would have changed the elevation of the topography by hundreds of meters, Pico said. Her team incorporated these changes into flood models to investigate how the tilting of the landscape would have changed the routing of the megafloods and their erosional power in different channels.

“We used flood models to predict the velocity of the water and the erosional power in each channel, and compared that to what would be needed to erode basalt, the type of rock on that landscape,” Pico said.

They focused on two major channel systems, the Cheney-Palouse and Telford-Crab Creek tracts. Their results showed that earlier floods would have eroded both tracts, but that in later floods the flow would have been concentrated in the Telford-Crab Creek system.

“As the landscape tilted, it affected both where the water overflowed out of Lake Columbia and how water flowed in the channels, but the most important effect was on the spillover into those two tracts,” Pico said. “What’s intriguing is that the topography isn’t static, so we can’t just look at the topography of today to reconstruct the past.”

The findings provide a new perspective on this fascinating landscape, she said. Steep canyons hundreds of feet deep, dry falls, and giant potholes and ripple marks are among the many remarkable features etched into the landscape by the massive floods.

“When you are there in person, it’s crazy to think about the scale of the floods needed to carve those canyons, which are now dry,” Pico said. “There are also huge dry waterfalls—it’s a very striking landscape.”

She also noted that the oral histories of Native American tribes in this region include references to massive floods. “Scientists were not the first people to look at this,” Pico said. “People may even have been there to witness these floods.”

Read more at Science Daily

Time crystals leave the lab

We have all seen crystals, whether a simple grain of salt or sugar, or an elaborate and beautiful amethyst. These crystals are made of atoms or molecules repeating in a symmetrical three-dimensional pattern called a lattice, in which atoms occupy specific points in space. By forming a periodic lattice, carbon atoms in a diamond, for example, break the symmetry of the space they sit in. Physicists call this "breaking symmetry."

Scientists have recently discovered that a similar effect can be witnessed in time. Symmetry breaking, as the name suggests, can arise only where some sort of symmetry exists. In the time domain, a cyclically changing force or energy source naturally produces a temporal pattern.

Breaking of the symmetry occurs when a system driven by such a force repeats its state (a déjà vu moment), but with a period different from that of the force. 'Time crystals' have in the past decade been pursued as a new phase of matter, and more recently observed under elaborate experimental conditions in isolated systems. These experiments require extremely low temperatures or other rigorous conditions to minimize undesired external influences, called noise.

In order for scientists to learn more about time crystals and employ their potential in technology, they need to find ways to produce time crystalline states and keep them stable outside the laboratory.

Cutting-edge research led by UC Riverside and published this week in Nature Communications has now observed time crystals in a system that is not isolated from its ambient environment. This major achievement brings scientists one step closer to developing time crystals for use in real-world applications.

"When your experimental system has energy exchange with its surroundings, dissipation and noise work hand-in-hand to destroy the temporal order," said lead author Hossein Taheri, an assistant research professor of electrical and computer engineering in UC Riverside's Marlan and Rosemary Bourns College of Engineering. "In our photonic platform, the system strikes a balance between gain and loss to create and preserve time crystals."

The all-optical time crystal is realized using a disk-shaped magnesium fluoride glass resonator one millimeter in diameter. When bombarded by two laser beams, the researchers observed subharmonic spikes, or frequency tones between the two laser beams, that indicated breaking of temporal symmetry and creation of time crystals.

The UCR-led team utilized a technique called self-injection locking of the two lasers to the resonator to achieve robustness against environmental effects. Signatures of the temporally repeating state of this system can readily be measured in the frequency domain. The proposed platform therefore simplifies the study of this new phase of matter.

Without the need for a low temperature, the system can be moved outside a complex lab for field applications. One such application could be highly accurate measurements of time. Because frequency and time are mathematical inverses of each other, accuracy in measuring frequency enables accurate time measurement.
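To see why frequency stability translates directly into timekeeping accuracy, consider a toy calculation (not from the paper): a clock whose frequency is off by a constant fractional amount accumulates a time error equal to that fraction times the elapsed time. The 1-part-in-10^12 stability figure below is an illustrative assumption, not a measured property of the UCR device:

```python
def timing_error(fractional_freq_error: float, elapsed_s: float) -> float:
    """Accumulated clock error for a constant fractional frequency offset.
    Since period T = 1/f, a relative error in f produces the same relative
    error in elapsed time."""
    return fractional_freq_error * elapsed_s

# A hypothetical oscillator stable to 1 part in 10^12:
seconds_per_year = 365.25 * 24 * 3600
err = timing_error(1e-12, seconds_per_year)
print(f"Drift after one year: ~{err * 1e6:.0f} microseconds")
```

This inverse relationship between frequency and time is why better oscillators immediately mean better clocks.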

"We hope that this photonic system can be utilized in compact and lightweight radiofrequency sources with superior stability as well as in precision timekeeping," said Taheri.

Read more at Science Daily

'Math neurons' identified in the brain

The brain has neurons that fire specifically during certain mathematical operations. This is shown by a recent study conducted by the Universities of Tübingen and Bonn. The findings indicate that some of the neurons detected are active exclusively during additions, while others are active during subtractions. They do not care whether the calculation instruction is written down as a word or a symbol. The results have now been published in the journal Current Biology.

Most elementary school children probably already know that three apples plus two apples add up to five apples. However, what happens in the brain during such calculations is still largely unknown. The current study by the Universities of Bonn and Tübingen now sheds light on this issue.

The researchers benefited from a special feature of the Department of Epileptology at the University Hospital Bonn. It specializes in surgical procedures on the brains of people with epilepsy. In some patients, seizures always originate from the same area of the brain. In order to precisely localize this defective area, the doctors implant several electrodes into the patients. The probes can be used to precisely determine the origin of the seizures. In addition, the activity of individual neurons can be measured via the wiring.

Some neurons fire only when summing up

Five women and four men participated in the current study. They had electrodes implanted in the so-called temporal lobe of the brain to record the activity of nerve cells. Meanwhile, the participants had to perform simple arithmetic tasks. "We found that different neurons fired during additions than during subtractions," explains Prof. Florian Mormann from the Department of Epileptology at the University Hospital Bonn.

It was not the case that some neurons responded only to a "+" sign and others only to a "-" sign: "Even when we replaced the mathematical symbols with words, the effect remained the same," explains Esther Kutter, who is doing her doctorate in Prof. Mormann's research group. "For example, when subjects were asked to calculate '5 and 3', their addition neurons sprang back into action; whereas for '7 less 4,' their subtraction neurons did."

This shows that the cells discovered actually encode a mathematical instruction for action. The brain activity thus showed with great accuracy what kind of tasks the test subjects were currently calculating: The researchers fed the cells' activity patterns into a self-learning computer program. At the same time, they told the software whether the subjects were currently calculating a sum or a difference. When the algorithm was confronted with new activity data after this training phase, it was able to accurately identify during which computational operation it had been recorded.
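The decoding step described above can be sketched in miniature. The study's actual pipeline is not specified here; the following is a hypothetical nearest-centroid decoder on made-up firing-rate vectors, just to show how activity patterns can be classified as "addition" or "subtraction" after training:

```python
from statistics import mean

def centroid(vectors):
    """Component-wise mean of a list of equal-length firing-rate vectors."""
    return [mean(col) for col in zip(*vectors)]

def decode(pattern, centroids):
    """Assign a pattern to the label whose centroid is closest (Euclidean)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(pattern, centroids[label]))

# Made-up training data: firing rates of three neurons during each task type.
train = {
    "addition":    [[9.0, 1.2, 4.0], [8.5, 1.0, 4.2], [9.2, 0.8, 3.9]],
    "subtraction": [[1.1, 8.8, 4.1], [0.9, 9.3, 4.0], [1.3, 9.0, 4.3]],
}
centroids = {label: centroid(vs) for label, vs in train.items()}

# Classify a new, unseen trial:
print(decode([8.8, 1.1, 4.1], centroids))  # addition
```

The real study used a self-learning program on recorded spike data; this sketch only illustrates the train-then-classify logic, not the researchers' method.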

Prof. Andreas Nieder from the University of Tübingen supervised the study together with Prof. Mormann. "We know from experiments with monkeys that neurons specific to certain computational rules also exist in their brains," he says. "In humans, however, there is hardly any data in this regard." During their analysis, the two working groups came across an interesting phenomenon: One of the brain regions studied was the so-called parahippocampal cortex. There, too, the researchers found nerve cells that fired specifically during addition or subtraction. However, when summing up, different addition neurons became alternately active during one and the same arithmetic task. Figuratively speaking, it is as if the plus key on the calculator were constantly changing its location. It was the same with subtraction. Researchers also refer to this as "dynamic coding."

"This study marks an important step towards a better understanding of one of our most important symbolic abilities, namely calculating with numbers," stresses Mormann. The two teams from Bonn and Tübingen now want to investigate exactly what role the nerve cells found play in this.

Read more at Science Daily

Feb 14, 2022

How galaxies can exist without dark matter

In a new Nature Astronomy study, an international team led by astrophysicists from the University of California, Irvine and Pomona College report how, when tiny galaxies collide with bigger ones, the bigger galaxies can strip the smaller galaxies of their dark matter -- matter that we can't see directly, but which astrophysicists think must exist because, without its gravitational effects, they couldn't explain things like the motions of a galaxy's stars.

It's a mechanism that has the potential to explain how galaxies might be able to exist without dark matter -- something once thought impossible.

It started in 2018 when astrophysicists Shany Danieli and Pieter van Dokkum of Princeton University and Yale University observed two galaxies that seemed to exist without most of their dark matter.

"We were expecting large fractions of dark matter," said Danieli, who's a co-author on the latest study. "It was quite surprising, and a lot of luck, honestly."

The lucky find, which van Dokkum and Danieli reported on in a Nature paper in 2018 and in an Astrophysical Journal Letters paper in 2020, threw the galaxies-need-dark-matter paradigm into turmoil, potentially upending what astrophysicists had come to see as a standard model for how galaxies work.

"It's been established for the last 40 years that galaxies have dark matter," said Jorge Moreno, an astronomy professor at Pomona College, who's the lead author of the new paper. "In particular, low-mass galaxies tend to have significantly higher dark matter fractions, making Danieli's finding quite surprising. For many of us, this meant that our current understanding of how dark matter helps galaxies grow needed an urgent revision."

The team ran computer models that simulated the evolution of a chunk of the universe -- one about 60 million light years across -- starting soon after the Big Bang and running all the way to the present.

The team found seven galaxies devoid of dark matter. After several collisions with neighboring galaxies 1,000 times more massive, they were stripped of most of their material, leaving behind nothing but stars and some residual dark matter.

"It was pure serendipity," said Moreno. "The moment I made the first images, I shared them immediately with Danieli, and invited her to collaborate."

Robert Feldmann, a professor at the University of Zurich who designed the new simulation, said that "this theoretical work shows that dark matter-deficient galaxies should be very common, especially in the vicinity of massive galaxies."

UCI's James Bullock, an astrophysicist who's an expert on low-mass galaxies, described how he and the team didn't build their model just so they could create galaxies without dark matter -- something he said makes the model stronger, because it wasn't designed in any way to create the collisions that they eventually found. "We don't presuppose the interactions," said Bullock.

Confirming that galaxies lacking dark matter can be explained in a universe where there's lots of dark matter is a sigh of relief for researchers like Bullock, whose career and everything he's discovered therein hinge on dark matter being the thing that makes galaxies behave the way they do.

"The observation that there are dark matter-free galaxies has been a little bit worrying to me," said Bullock. "We have a successful model, developed over decades of hard work, where most of the matter in the cosmos is dark. There is always the possibility that nature has been fooling us."

But, Moreno said, "you don't have to get rid of the standard dark matter paradigm."

Now that astrophysicists know how a galaxy might lose its dark matter, Moreno and his collaborators hope the findings inspire researchers who look at the night sky to search for real-world massive galaxies that might be in the process of stripping dark matter away from smaller ones.

"It still doesn't mean this model is right," Bullock said. "A real test will be to see if these things exist with the frequency and general characteristics that match our predictions."

As part of this new work, Moreno, who has indigenous roots, received permission from Cherokee leaders to name the seven dark matter-free galaxies found in their simulations in honor of the seven Cherokee clans: Bird, Blue, Deer, Long Hair, Paint, Wild Potato and Wolf.

"I feel a personal connection to these galaxies," said Moreno, who added that, just as the more massive galaxies robbed the smaller galaxies of their dark matter, "many people of indigenous ancestry were stripped of our culture. But our core remains, and we are still thriving."

Read more at Science Daily

Decolonize research to save heritage threatened by climate change

Climate change threatens to destroy invaluable heritage sites and traditions in marginalised countries -- but empowering local people is key to adaptation.

That's according to new research from the University of East Anglia (UEA), the University of Cape Town (UCT) and partner institutions in 11 countries. They contributed to the study, 'Decolonising climate change heritage research', which is published today in Nature Climate Change.

Traditional ways of life in Amazonia and the Pacific Island States, ancient Roman coastal cities such as Tipasa in Algeria, the iconic earthen architecture of Mali -- including the masonic craftmanship passed down through generations to maintain it -- all face the threat of loss and damage from climate change because heritage research and funding haven't historically been prioritised in these areas.

Locally led research and more equitable research funding are needed to address the true potential loss and damage to heritage from climate change, said lead researcher Prof Joanne Clarke, of UEA's School of Art, Media and American Studies.

Prof Clarke said: "Climate change poses a threat to heritage globally, but it is particularly acute in low- and middle-income countries (LMIC) where vulnerability to climate change is generally high, with climate hazards such as sea-level rise, flooding and wildfires.

"These physical risks are compounded by land-use change leading to loss of livelihoods and local and Indigenous knowledge, as well as migration. This knowledge is crucial for safeguarding other forms of heritage, such as traditional buildings and building methods."

As heritage includes all the inherited traditions, monuments, objects, places and culture, as well as contemporary activities, knowledge, meanings and behaviours that are drawn from them, it is vital that research and knowledge generation becomes more inclusive, equitable and diverse.

Climate change is an increasing focus of heritage research across Europe and North America, including identification of site-specific adaptation options for heritage preservation.

In contrast, climate change research in LMICs is limited by systemic gaps in access to funding and its associated knowledge generation, which act to reinforce historical colonial structures deeply embedded in heritage management.

The researchers said decolonial approaches are not yet widely established in climate change-heritage scholarship and practice, and decolonial efforts can begin to address systemic inequities, recognise the breadth of heritage, and strengthen adaptation action globally.

Dr Nicholas Simpson, of the University of Cape Town, also led on the research. He said: "Euro-American centricity, dispossession, racism, and ongoing power imbalances have perpetuated a narrow view, mirroring colonial legacies that continue to shape priorities for climate research questions, funding, and outputs globally.

"We need to commit to actively undoing those systems and ways of thinking through transformations to agenda-setting, funding, training, access to data and governance."

Climate adaptation funding for many vulnerable LMICs is heavily dependent on international aid organisations that are commonly located in high-income countries (HIC). This inevitably leads to an unequal balance in the types of heritage earmarked for research or development, with a bias toward heritage valued by those living in HICs while commonly side-lining pre-colonial heritage.

Moreover, belief systems or other forms of intangible heritage historically haven't been deemed as valuable to Western researchers and funders as places or things, such as temples or statues.

Research agendas and funding, along with the policy agendas to which they are linked, need to be decentred from the HICs. Priorities for research and practice should be informed by Indigenous and local communities and should integrate their values, preferences and judgements with climate change risk and vulnerability assessments, the study suggests.

Specific efforts also will be required to train scholars to accommodate multiple knowledges and world views in the formulation of research questions and the co-creation of solutions. Collaborating with and mentoring scholars from LMICs will create the next generation of climate change heritage scholars, while aiming to make research findings and data more accessible to them.

Prof Clarke said: "It is critical to overcome geographic, intersectional, and distributional blind spots associated with colonial research legacies. Failure to actively transform in these ways will further entrench these long-standing inequities, as well as exacerbate inequalities in heritage-relevant responses to climate change."

Read more at Science Daily

Harvesting baker's yeast for aging-related therapeutics

Around the world, more people are growing older. According to the World Health Organisation, 1 in 6 people in the world will be aged 60 years or over by 2030. By 2050, the world's population of people aged 60 years and older will double to 2.1 billion. The number of persons aged 80 years or older is expected to triple between 2020 and 2050 to reach 426 million.

In line with the growing number of seniors, the number of people living with age-related diseases such as dementia, including Alzheimer's Disease and Parkinson's Disease, is also expected to increase exponentially. These age-related diseases are an emerging impediment to healthy and functional aging.

A class of medicines used in the treatment of neuro-cognitive diseases and other neurological ailments (migraines, headaches, etc.) is currently obtained from extracts of the ergot fungus. However, continued cultivation of the ergot fungus for medicine is not sustainable, as industrial agriculture is one of the largest contributors to carbon emissions worldwide.

To meet the global demand for such medication, between 10 and 15 tons of D-lysergic acid (DLA), an ingredient used in producing the medicine, are produced each year. The ergot fungi are parasites to cereal crops such as rye, and their cultivation entails growing them on top of fields of such crops that could otherwise be used for food production. In order to reduce the use of arable land to produce such medicine, a group of researchers from the Yong Loo Lin School of Medicine at the National University of Singapore (NUS Medicine) and Imperial College London have trialled an alternative way of producing DLA.

Using yeast commonly known to make bread, and synthetic biology techniques, the team introduced the enzymes from the ergot fungus into baker's yeast, which also happens to be another fungus. Through a process known as fermentation, the modified yeast was then grown using sugar to produce DLA. Natural fermentation has been used throughout human history for food production, most notably in the production of bread and beer. Just like how baker's yeast has been used to produce the alcohol and flavours in beer, fermentation using the modified yeast can now produce DLA.

The study was published in Nature Communications on 7 Feb 2022.

"It is possible to produce up to five tons of DLA annually using the current yeast strain; and with further optimisation, commercial production levels could be attainable," explained Associate Professor Yew Wen Shan from the Department of Biochemistry at NUS Medicine and the co-lead Principal Investigator of the study. "This research builds upon the growing body of work that uses microbes such as yeast for the sustainable production of medicine and functional food ingredients."

Read more at Science Daily

Cell groups push, rather than pull, themselves into place as organs form and cancers spread

Cells push and pull on surrounding tissue to move in groups as they form organs in an embryo, track down invading bacteria, and as they become cancerous and spread.

Published online in Nature Cell Biology on February 14, a new study found in a living embryo that the back ends of moving cell groups push the group forward. This runs contrary to previous findings, where cell groups grown in dishes of nutrients (cultures) pulled themselves forward with their front edges.

Led by researchers from NYU Grossman School of Medicine and the NYU Courant Institute of Mathematical Sciences, the study used a new technique to measure the forces applied by a cell group as it moved along a "road-like" tissue membrane and into place in a developing animal. Specifically, the study found for the first time in an animal tissue that proteins called integrins on the surfaces of the cells at the rear attach in greater numbers to the membrane as they move along, and exert more force in one direction, than the cells in the group's front. The integrin clusters (focal adhesions) observed in the embryo were smaller than those seen in culture studies, and broke down faster.

Confirmation of such mechanistic details in living tissue has important implications, say the researchers, as many cancers spread in cell groups, and may use the newfound "rear engine propulsion."

"Our results clarify how cell groups that will become organs move into place, and reaffirm that cells behave differently when removed from their natural environments," said senior study author Holger Knaut, PhD, associate professor in the Department of Cell Biology at NYU Langone Health.

Study Details

The study results are based on mechanisms of cell movement established by past studies. For instance, a protein called actin is known to form the protein "skeleton" of cells, with actin chains able to grow in a certain direction and apply force that changes a cell's shape. Integrins, proteins built into outer cell membranes, interact with both actin networks and proteins outside of cells. These and other proteins form a system that a cell uses to briefly attach to and "roll along" a basement membrane, a pliable mesh of proteins and sugars. What was unknown going into the current study was how tissues in living animals apply force in groups to generate this motion.

The new study examined cell group motion in a zebrafish embryo, a major model in the study of development because it shares many cellular mechanisms with human cells, and because zebrafish embryos develop externally, such that each stage in development can be directly observed using high-powered microscopes. In this way the team tracked the movement of the primordium -- a tissue made up of about 140 cells -- as it migrated during development from behind the ear to the tip of the zebrafish tail, where it matures into an organ that senses water flow.

"In the first study of its kind, we combined advanced microscopy with automated, high-throughput computational modeling to measure cellular forces in living organisms," says co-corresponding author Daniele Panozzo, PhD, an associate professor at the Courant Institute of Mathematical Sciences at New York University.

Using "bleached" dots on the basement membrane to measure shape changes (deformations) on a minute scale, and a new software called embryogram to calculate how far the dots move as the primordium "grips" the membrane, the researchers determined how much the cells pulled and pushed on the membrane, "like a tire on pavement." The effect is much like the high school physics experiment where students draw two dots on a rubber band, and calculate the force applied as they stretch the band by measuring the change in distance between the dots.
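The rubber-band analogy amounts to Hooke's law: if you know the material's stiffness, the force follows from how far the two marked dots move apart. A toy version of that classroom calculation (the stiffness and dot spacings below are invented numbers, not values from the study):

```python
def hooke_force(stiffness_n_per_m: float, d0_m: float, d1_m: float) -> float:
    """Force inferred from the change in distance between two marked dots,
    assuming a linear (Hookean) elastic response: F = k * (d1 - d0)."""
    return stiffness_n_per_m * (d1_m - d0_m)

# Dots start 2.0 cm apart and end 2.5 cm apart on a band with k = 40 N/m:
f = hooke_force(40.0, 0.020, 0.025)
print(f"Applied force: {f:.2f} N")  # 0.20 N
```

The embryogram software works on the same principle, but over a dense grid of bleached dots and a deformable membrane rather than a single stretched band.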

With these tools in hand, the team showed that the primordium cells link the force-generating actin-myosin network at the back end of the moving group through integrin clusters on the side closest to the basement membrane. The team theorizes that cells attached to the membrane toward the back push on the cells in front of them to move the entire group. The researchers also gained new insights on an established mechanism where cells have surface proteins that let them "sense" and follow a guidance cue called a chemokine, from low concentration to high concentration. The new study found, however, that cells toward the back end of the primordium sense the chemokine gradient more strongly.

Interestingly, the study found that the primordium moved in a "continuous breaststroke" by pushing the basement membrane downward, sideways and backwards, much like the arms of a swimmer. The authors do not know why this is, but speculate that this is the most efficient way to move forward. They note that banana slugs also use the rear edge of the "foot" they apply to the ground, suggesting that evolution favors rear engine propulsions because they are most efficient at different size scales.

The study suggests that group cell movement has the potential to be harnessed to stop cancer spread, perhaps by designing treatments that block the action of integrins, say the authors. Integrin inhibitors have been tested as drugs for cardiovascular and autoimmune disease in clinical trials, but their use against cancer spread has been limited by the need for a better understanding of the mechanisms.

Read more at Science Daily

Feb 13, 2022

Chemical history of the Milky Way revealed by new catalog of tens of millions of stars

University of Notre Dame researchers, along with collaborators in China and Australia, published a new sample catalog of more than 24 million stars that can be used to decipher the chemical history of elements in the Milky Way Galaxy.

The research, published this month in The Astrophysical Journal, represents about one-hundredth of a percent of the roughly 240 billion stars in the Milky Way. It marks a milestone for Timothy Beers, the Grace-Rupley Professor of Physics at Notre Dame, who has spent most of his career planning and executing ever-larger surveys of stars to decipher the galaxy's formation and chemical evolution -- a field called galactic archaeology. Researchers employed a new approach to measure the light from each star to infer the abundances of heavy metals such as iron. They also measured their distances, motions and ages.

"The elemental abundances of individual stars trace the chemical enrichment of the Milky Way galaxy, from when it first began to form stars shortly after the Big Bang to the present," Beers said.

"Combining this information with the stellar distances and motions allows us to constrain the origin of different components in the galaxy, such as the halo and disk populations," he continued. "Adding age estimates puts a 'clock' on the process, so that a much more complete picture of the entire process can be drawn."

Previous spectroscopic work by Beers and collaborators provided the information for the tens of thousands of stars that were used to calibrate the new approach, based on precision photometric measurements. The recent research used large photometric samples obtained with the Australian SkyMapper Southern Survey and the European Gaia satellite mission to calibrate estimates of metallicity.

Until recently, the only means to obtain accurate estimates of the content of heavy metals, such as iron, for large numbers of stars was by taking low- and medium-resolution spectra that could be analyzed to extract this information. The process was long and painstaking.

Beers is most interested in the stars with the lowest metallicities -- very metal-poor stars with iron abundances less than 1 percent that of the sun -- because they were born early in the history of the universe, and therefore reveal the origin of elements in the periodic table. In the early 1980s, when Beers started his work, researchers knew of only about 20 very metal-poor stars. This new catalog brings the total of what Beers refers to as "fossils of the night sky" to more than 500,000.
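Astronomers express iron abundance on a logarithmic scale relative to the Sun: [Fe/H] = log10((Fe/H)_star / (Fe/H)_sun), so "less than 1 percent of the solar value" corresponds to [Fe/H] < -2. A quick sketch of that standard conversion (the definition is conventional in the field; this is not code from the catalog pipeline):

```python
import math

def feh_to_solar_fraction(feh: float) -> float:
    """Convert [Fe/H] (in dex) to iron abundance as a fraction of solar."""
    return 10.0 ** feh

def solar_fraction_to_feh(fraction: float) -> float:
    """Inverse conversion: fraction of solar iron abundance to [Fe/H]."""
    return math.log10(fraction)

print(feh_to_solar_fraction(-2.0))   # 0.01, i.e. a 'very metal-poor' star
print(solar_fraction_to_feh(0.01))   # -2.0
```

On this scale each unit of [Fe/H] is a factor of ten, which is why the most ancient "fossil" stars sit several dex below the Sun.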

Containing more than 19 million dwarf and five million giant stars, the new catalog is expected to advance the knowledge of how the Milky Way was formed in a variety of ways, Beers said. These include characterizing the structure of the galactic thin/thick disks -- the structural components of spiral galaxies -- as well as the population of stars and globular clusters that surround most disk galaxies, called the stellar halo. The catalog of stars will also help researchers identify the trails of stars left behind from disrupted dwarf galaxies and globular clusters.

Read more at Science Daily