Nov 6, 2021

What sponges can tell us about the evolution of the brain

As you read these lines, the highly sophisticated biological 'machine' that is your brain is at work. The human brain is made up of approximately 86 billion neurons, and it not only controls our bodily functions, from vision to movement, but also gives rise to consciousness and understanding.

Despite its central importance, the brain's origins have yet to be uncovered. The first animal brains appeared hundreds of millions of years ago. Today, only the most primitive animal species, such as aquatic sponges, lack brains. Paradoxically, these species may hold the key to unlocking the mystery of how neurons and brains first evolved.

Individual neurons in a brain communicate via synapses. These connections between cells lie at the heart of brain function and are regulated by a number of different genes. Sponges do not have synapses, but their genome still encodes many of the synaptic genes. EMBL scientists asked why this might be the case. Their latest findings are published today in the journal Science.

"We know that these synaptic genes are involved in neuronal function in higher animals. Finding them in primitive species like sponges begs the question: if these animals don't have brains, what is the role of these genes?" explained Detlev Arendt, EMBL Group Leader and Senior Scientist at EMBL Heidelberg. "As simple as that sounds, answering this question was beyond our technological abilities so far."

To study the role of these synaptic genes in sponges, the Arendt lab established microfluidic and genomic technologies in the freshwater sponge Spongilla lacustris. Using these techniques, the scientists captured individual cells from several sponges inside microfluidic droplets and then profiled each cell's genetic activity.

"We showed that certain cells in the sponge digestive chambers activate the synaptic genes. So even in a primitive animal lacking synapses, the synaptic genes are active in specific parts of its body," said Jacob Musser, Research Scientist in the Arendt group and lead author on the study.

Sponges use their digestive chambers to filter food out of the water and to interact with environmental microbes. To understand what the cells expressing synaptic genes do, the Arendt group joined forces with six EMBL teams as well as collaborators in Europe and worldwide. Working with EMBL's Electron Microscopy Core Facility (Yannick Schwab's team) and with Thomas Schneider's group, which operates synchrotron beamlines at EMBL Hamburg, the researchers developed a new correlative imaging approach. "By combining electron microscopy with X-ray imaging on a synchrotron beamline, we were able to visualize the stunning behaviour of these cells," Dr Schwab explained.

The scientists captured three-dimensional snapshots of cells crawling through the digestive chamber to clear out bacterial invaders and sending out long arms that enwrap the feeding apparatus of specific digestive cells. This behaviour creates an interface for targeted cell-cell communication, much as happens across synapses between neurons in our brains.

Read more at Science Daily

Spiders' web secrets unraveled

Johns Hopkins University researchers discovered precisely how spiders build webs by using night vision and artificial intelligence to track and record every movement of all eight legs as spiders worked in the dark.

Their creation of a web-building playbook or algorithm brings new understanding of how creatures with brains a fraction of the size of a human's are able to create structures of such elegance, complexity and geometric precision. The findings, now available online, are set to publish in the November issue of Current Biology.

"I first got interested in this topic while I was out birding with my son. After seeing a spectacular web I thought, 'if you went to a zoo and saw a chimpanzee building this you'd think that's one amazing and impressive chimpanzee.' Well this is even more amazing because a spider's brain is so tiny and I was frustrated that we didn't know more about how this remarkable behavior occurs," said senior author Andrew Gordus, a Johns Hopkins behavioral biologist. "Now we've defined the entire choreography for web building, which has never been done for any animal architecture at this fine of a resolution."

Web-weaving spiders, which build blindly using only their sense of touch, have fascinated humans for centuries. Not all spiders build webs, but those that do belong to a subset of animal species known for their architectural creations, like nest-building birds and pufferfish that create elaborate sand circles when mating.

The first step to understanding how the relatively small brains of these animal architects support their high-level construction projects is to systematically document and analyze the behaviors and motor skills involved, which until now had never been done, mainly because of the challenges of capturing and recording the actions, Gordus said.

His team studied a hackled orb weaver, a spider native to the western United States that's small enough to sit comfortably on a fingertip. To observe the spiders during their nighttime web-building work, the lab designed an arena with infrared cameras and infrared lights. With that setup, they monitored and recorded six spiders every night as they constructed webs, tracking millions of individual leg actions with machine vision software designed specifically to detect limb movement.

"Even if you video record it, that's a lot of legs to track, over a long time, across many individuals," said lead author Abel Corver, a graduate student studying web-making and neurophysiology. "It's just too much to go through every frame and annotate the leg points by hand so we trained machine vision software to detect the posture of the spider, frame by frame, so we could document everything the legs do to build an entire web."

They found that web-making behaviors are quite similar across spiders, so much so that the researchers were able to predict the part of a web a spider was working on just from seeing the position of a leg.

"Even if the final structure is a little different, the rules they use to build the web are the same," Gordus said. "They're all using the same rules, which confirms the rules are encoded in their brains. Now we want to know how those rules are encoded at the level of neurons."

Future work for the lab includes experiments with mind-altering drugs to determine which circuits in the spider's brain are responsible for the various stages of web-building.

"The spider is fascinating," Corver said, "because here you have an animal with a brain built on the same fundamental building blocks as our own, and this work could give us hints on how we can understand larger brain systems, including humans, and I think that's very exciting.

Read more at Science Daily

Nov 5, 2021

Rocky exoplanets are even stranger than we thought

An astronomer from NSF's NOIRLab has teamed up with a geologist from California State University, Fresno, to make the first estimates of rock types that exist on planets orbiting nearby stars. After studying the chemical composition of "polluted" white dwarfs, they have concluded that most rocky planets orbiting nearby stars are more diverse and exotic than previously thought, with types of rocks not found anywhere in our Solar System.

Astronomers have discovered thousands of planets orbiting stars in our galaxy -- known as exoplanets. However, it's difficult to know what exactly these planets are made of, or whether any resemble Earth. To try to find out, astronomer Siyi Xu of NSF's NOIRLab partnered with geologist Keith Putirka of California State University, Fresno, to study the atmospheres of what are known as polluted white dwarfs. These are the dense, collapsed cores of once-normal stars like the Sun that contain foreign material from planets, asteroids, or other rocky bodies that once orbited the star but eventually fell into the white dwarf and "contaminated" its atmosphere. By looking for elements that wouldn't naturally exist in a white dwarf's atmosphere (anything other than hydrogen and helium), scientists can figure out what the rocky planetary objects that fell into the star were made of.

Putirka and Xu looked at 23 polluted white dwarfs, all within about 650 light-years of the Sun, where calcium, silicon, magnesium, and iron had been measured with precision using the W. M. Keck Observatory in Hawai'i, the Hubble Space Telescope, and other observatories. The scientists then used the measured abundances of those elements to reconstruct the minerals and rocks that would form from them. They found that these white dwarfs have a much wider range of compositions than any of the inner planets in our Solar System, suggesting their planets had a wider variety of rock types. In fact, some of the compositions are so unusual that Putirka and Xu had to create new names (such as "quartz pyroxenites" and "periclase dunites") to classify the novel rock types that must have existed on those planets.
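As a rough illustration of that reconstruction step, the sketch below converts element abundances measured by mass into the kind of molar ratios geologists use to infer mineralogy. The atomic masses are standard values, but the example composition and the interpretive comment are placeholders, not figures from the study.

    # Illustrative only: mass abundances -> molar ratios for rock typing.
    ATOMIC_MASS = {"Mg": 24.305, "Si": 28.085, "Fe": 55.845, "Ca": 40.078}

    def molar_ratio(mass_abundances, a, b):
        # Molar ratio a/b from abundances given in the same mass units.
        moles_a = mass_abundances[a] / ATOMIC_MASS[a]
        moles_b = mass_abundances[b] / ATOMIC_MASS[b]
        return moles_a / moles_b

    # A hypothetical debris composition, magnesium-rich relative to silicon:
    debris = {"Mg": 30.0, "Si": 20.0, "Fe": 35.0, "Ca": 5.0}
    print(f"molar Mg/Si = {molar_ratio(debris, 'Mg', 'Si'):.2f}")  # ~1.73
    # Higher Mg/Si points toward olivine-rich, mantle-like rock types.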

"While some exoplanets that once orbited polluted white dwarfs appear similar to Earth, most have rock types that are exotic to our Solar System," said Xu. "They have no direct counterparts in the Solar System."

Putirka describes what these new rock types might mean for the rocky worlds they belong to. "Some of the rock types that we see from the white dwarf data would dissolve more water than rocks on Earth and might impact how oceans are developed," he explained. "Some rock types might melt at much lower temperatures and produce thicker crust than Earth rocks, and some rock types might be weaker, which might facilitate the development of plate tectonics."

Earlier studies of polluted white dwarfs had found elements from rocky bodies, including calcium, aluminum, and lithium. However, Putirka and Xu explain that those are minor elements (which typically make up a small part of an Earth rock) and measurements of major elements (which make up a large part of an Earth rock), especially silicon, are needed to truly know what kind of rock types would have existed on those planets.

In addition, Putirka and Xu state that the high levels of magnesium and low levels of silicon measured in the white dwarfs' atmospheres suggest that the rocky debris detected likely came from the interiors of the planets -- from the mantle, not their crust. Some previous studies of polluted white dwarfs reported signs that continental crust existed on the rocky planets that once orbited those stars, but Putirka and Xu found no evidence of crustal rocks. However, the observations do not completely rule out that the planets had continental crust or other crust types. "We believe that if crustal rock exists, we are unable to see it, probably because it occurs in too small a fraction compared to the mass of other planetary components, like the core and mantle, to be measured," Putirka stated.

Read more at Science Daily

Increasingly frequent wildfires linked to human-caused climate change

Research by scientists from UCLA and Lawrence Livermore National Laboratory strengthens the case that climate change has been the main cause of the growing amount of land in the western U.S. that has been destroyed by large wildfires over the past two decades.

Rong Fu, a UCLA professor of atmospheric and oceanic sciences and the study's corresponding author, said the trend is likely to worsen in the years ahead. "I am afraid that the record fire seasons in recent years are only the beginning of what will come, due to climate change, and our society is not prepared for the rapid increase of weather contributing to wildfires in the American West."

The dramatic increase in destruction caused by wildfires is borne out by U.S. Geological Survey data. In the 17 years from 1984 to 2000, the average burned area in 11 western states was 1.69 million acres per year. For the next 17 years, through 2018, the average burned area was approximately 3.35 million acres per year. And in 2020, according to a National Interagency Coordination Center report, the amount of land burned by wildfires in the West reached 8.8 million acres -- an area larger than the state of Maryland.

But the factors that have caused that massive increase have been the subject of debate: How much of the trend was caused by human-induced climate change and how much could be explained by changing weather patterns, natural climate variation, forest management, earlier springtime snowmelt and reduced summer rain?

For the study, published in the Nov. 9 edition of the journal Proceedings of the National Academy of Sciences, the researchers applied artificial intelligence to climate and fire data in order to estimate the roles that climate change and other factors play in determining the key climate variable tied to wildfire risk: vapor pressure deficit.

Vapor pressure deficit is the difference between the amount of moisture the air could hold when saturated and the amount of moisture actually in the air. When vapor pressure deficit, or VPD, is higher, the air can draw more moisture from soil and plants. Large wildfire-burned areas, especially those not located near urban areas, tend to have high vapor pressure deficits, conditions associated with warm, dry air.
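In formula terms, VPD is the saturation vapor pressure at the air temperature minus the actual vapor pressure. The short sketch below computes it with the widely used Tetens approximation; this is a generic illustration, not necessarily the exact formulation used in the study.

    import math

    def saturation_vapor_pressure_kpa(temp_c):
        # Tetens approximation: saturation vapor pressure (kPa) at temp_c (C).
        return 0.6108 * math.exp(17.27 * temp_c / (temp_c + 237.3))

    def vapor_pressure_deficit_kpa(temp_c, relative_humidity_pct):
        # VPD = what saturated air could hold minus what it actually holds.
        e_sat = saturation_vapor_pressure_kpa(temp_c)
        e_actual = e_sat * relative_humidity_pct / 100.0
        return e_sat - e_actual

    # Hot, dry fire weather yields a far larger deficit than mild, humid air:
    print(vapor_pressure_deficit_kpa(35.0, 15.0))  # ~4.8 kPa
    print(vapor_pressure_deficit_kpa(20.0, 60.0))  # ~0.9 kPa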

The study found that 68% of the increase in vapor pressure deficit across the western U.S. between 1979 and 2020 was likely due to human-caused global warming. The remaining 32% of the change, the authors concluded, was likely caused by naturally occurring variations in weather patterns.

The findings suggest that human-induced climate change is the main cause for increasing fire weather in the western United States.

"And our estimates of the human-induced influence on the increase in fire weather risk are likely to be conservative," said Fu, director of UCLA's Joint Institute for Regional Earth System Science and Engineering, a collaboration with NASA's Jet Propulsion Laboratory.

The researchers analyzed the so-called August Complex wildfire of 2020, which burned more than a million acres in Northern California. They concluded that human-induced warming likely explains 50% of the unprecedentedly high VPD in the region during the month the fire began.

Fu said she expects wildfires to continue to become more intense and more frequent in the western states overall, even though wetter and cooler conditions could offer brief respites. And areas where vast swaths of plant life have already been lost to fires, drought, heatwaves and the building of roads likely would not see increases in wildfires despite the increase of the vapor pressure deficit.

"Our results suggest that the western United States appears to have passed a critical threshold -- that human-induced warming is now more responsible for the increase of vapor pressure deficit than natural variations in atmospheric circulation," Fu said. "Our analysis shows this change has occurred since the beginning of the 21st century, much earlier than we anticipated."

Read more at Science Daily

Just a game? Study shows no evidence that violent video games lead to real-life violence

As the latest Call of Duty video game is released in the UK today, and with Battlefield 2042 and a remastered Grand Theft Auto trilogy to follow later this month, new research finds no evidence that violence increases after a new video game is released.

The mass media and the general public often link violent video games to real-life violence, although there is limited evidence to support that link.

Debate on the topic generally intensifies after mass public shootings, with some commentators linking these violent acts to the perpetrators' interests in violent video games.

However, others have pointed out that different factors, such as mental health issues and/or easy access to guns, are more likely explanations.

In light of these conflicting claims, President Obama called in 2013 for more government funding for research on video games and violence.

But before governments introduce any policies restricting access to violent video games, it is important to establish whether violent video games do indeed make players behave violently in the real world.

Research by Dr Agne Suziedelyte, Senior Lecturer in the Department of Economics at City, University of London, provides evidence of the effects of violent video game releases on children's violent behaviour using data from the US. Dr Suziedelyte examined the effects of violent video games on two types of violence: aggression against other people, and destruction of things/property.

The study, published in the Journal of Economic Behavior & Organization, focused on boys aged 8-18 years -- the group most likely to play violent video games.

Dr Suziedelyte used econometric methods that identify plausibly causal effects of violent video games on violence, rather than only associations. She found no evidence that violence against other people increases after a new violent video game is released. Parents reported, however, that children were more likely to destroy things after playing violent video games.

Dr Suziedelyte said: "Taken together, these results suggest that violent video games may agitate children, but this agitation does not translate into violence against other people -- which is the type of violence which we care about most.

"A likely explanation for my results is that video game playing usually takes place at home, where opportunities to engage in violence are lower. This 'incapacitation' effect is especially important for violence-prone boys who may be especially attracted to violent video games.

Read more at Science Daily

Save the planet (and your health) by steering clear of sweets and pastries

Keen to do your bit for the environment? Cut back on sweets, pastries, fried foods and processed meat. According to a new study published this month, reducing these foods in our diet is better not only for our health but also for the planet.

Australian and New Zealand households eat more discretionary and junk foods than dietary guidelines recommend, contributing to food-related greenhouse gas emissions (GHGe) and other environmental impacts.

University of South Australia (UniSA) dietitian Sara Forbes, who led a review examining 20 studies on the environmental impacts of food consumption in both countries, says the findings highlight the need for more sustainable dietary choices.

According to a Federal Government report released in 2020, Australia emitted an estimated 510 million tonnes of carbon dioxide, with food-related emissions accounting for 14.2 per cent of this total. The report found that the average Australian produces the equivalent of 19.7 kg of carbon dioxide each day via their diet.

Another report, from 2017, found that food waste accounts for approximately six per cent of Australia's GHGe, considering the water, energy and pesticides used to produce and package food that ends up in landfill, where it releases methane as it decomposes.

Unlike New Zealand's guidelines, the current Australian Dietary Guidelines (ADG) do not consider the environmental impacts of food and need to be updated, researchers say.

The existing ADG recommend daily servings of 'core' foods: fruit and vegetables, grains, lean meats, fish, eggs, nuts, seeds, legumes, milk, cheese, yoghurt and alternatives.

These core foods are estimated to contribute between 67 and 73 per cent of total food-related GHGe in Australia, with meat, grains and dairy contributing the most emissions. Fruit and vegetables are two of the lowest contributors.

Non-core or 'discretionary' foods include sugar-sweetened drinks, alcohol, confectionery and processed meats, and account for between 27 and 33 per cent of food-related GHGe. While that share is lower than for core foods, the large amounts of avoidable energy-rich, nutrient-poor foods that Australians consume are not helping the environment.

In New Zealand, the highest greenhouse gas emitters are meat, seafood and eggs (35 per cent), followed by highly processed foods such as pastries and ice cream (34 per cent).

Other studies examined the environmental impacts of water use in food production.

Australian irrigators soak up eight million megalitres of water each year to grow crops, but the majority of those crops are exported, making it difficult to accurately assess the nation's water footprint.

The 20 articles the researchers assessed were published over the past decade and reported varying findings. Despite the differences, clear trends emerged.

"Discretionary foods have a higher cropland, water scarcity and Ecological Footprint. Meat also emits greenhouse gases, although its water scarcity footprint is lower compared to dairy products, cereals, grains, fruit and vegetables," Forbes says.

"It is time we better acknowledged the environmental impacts of the type and amount of food we eat, considering the planet as well as our health.

"By 2050, the world's population is projected to reach 10 billion people. There is no way we can feed that amount of people unless we change the way we eat and produce food."

Read more at Science Daily

Nov 4, 2021

Astronomers make most distant detection yet of fluorine in star-forming galaxy

A new discovery is shedding light on how fluorine -- an element found in our bones and teeth as fluoride -- is forged in the Universe. Using the Atacama Large Millimeter/submillimeter Array (ALMA), in which the European Southern Observatory (ESO) is a partner, a team of astronomers have detected this element in a galaxy that is so far away its light has taken over 12 billion years to reach us. This is the first time fluorine has been spotted in such a distant star-forming galaxy.

"We all know about fluorine because the toothpaste we use every day contains it in the form of fluoride," says Maximilien Franco from the University of Hertfordshire in the UK, who led the new study, published today in Nature Astronomy. Like most elements around us, fluorine is created inside stars but, until now, we did not know exactly how this element was produced. "We did not even know which type of stars produced the majority of fluorine in the Universe!"

Franco and his collaborators spotted fluorine (in the form of hydrogen fluoride) in the large clouds of gas of the distant galaxy NGP-190387, which we see as it was when the Universe was only 1.4 billion years old, about 10% of its current age. Since stars expel the elements they form in their cores as they reach the end of their lives, this detection implies that the stars that created fluorine must have lived and died quickly.

The team believes that Wolf-Rayet stars, very massive stars that live only a few million years, a blink of the eye in the Universe's history, are the most likely production sites of fluorine. They are needed to explain the amounts of hydrogen fluoride the team spotted, they say. Wolf-Rayet stars had been suggested as possible sources of cosmic fluorine before, but astronomers did not know until now how important they were in producing this element in the early Universe.

"We have shown that Wolf-Rayet stars, which are among the most massive stars known and can explode violently as they reach the end of their lives, help us, in a way, to maintain good dental health!" jokes Franco.

Besides these stars, other scenarios for how fluorine is produced and expelled have been put forward in the past. An example includes pulsations of giant, evolved stars with masses up to a few times that of our Sun, called asymptotic giant branch stars. But the team believes these scenarios, some of which take billions of years to occur, might not fully explain the amount of fluorine in NGP-190387.

"For this galaxy, it took just tens or hundreds of millions of years to have fluorine levels comparable to those found in stars in the Milky Way, which is 13.5 billion years old. This was a totally unexpected result," says Chiaki Kobayashi, a professor at the University of Hertfordshire. "Our measurement adds a completely new constraint on the origin of fluorine, which has been studied for two decades."

The discovery in NGP-190387 marks one of the first detections of fluorine beyond the Milky Way and its neighbouring galaxies. Astronomers have previously spotted this element in distant quasars, bright objects powered by supermassive black holes at the centre of some galaxies. But never before had this element been observed in a star-forming galaxy so early in the history of the Universe.

The team's detection of fluorine was a chance discovery made possible thanks to the use of space and ground-based observatories. NGP-190387, originally discovered with the European Space Agency's Herschel Space Observatory and later observed with the Chile-based ALMA, is extraordinarily bright for its distance. The ALMA data confirmed that the exceptional luminosity of NGP-190387 was partly caused by another known massive galaxy, located between NGP-190387 and the Earth, very close to the line of sight. This massive galaxy amplified the light observed by Franco and his collaborators, enabling them to spot the faint radiation emitted billions of years ago by the fluorine in NGP-190387.

Read more at Science Daily

On ancient Earth, it never rained but it poured

Today, we are experiencing the dramatic impacts that even a small increase in global temperatures can have on a planet's climate. Now, imagine an Earth 20 to 30 degrees Fahrenheit hotter than today. Earth likely experienced these temperatures at various times in the distant past and will experience them again hundreds of millions of years from now as the sun continues to brighten.

Little is known about how the atmosphere and climate behaved during these so-called hothouse periods. In a new study, researchers from Harvard University found that during these epochs of extreme heat, Earth may have experienced cycles of dryness followed by massive rain storms hundreds of miles wide that could dump more than a foot of rain in a matter of hours.

"If you were to look at a large patch of the deep tropics today, it's always raining somewhere," said Jacob Seeley, a Postdoctoral Fellow in Environmental Science and Engineering at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and the Department of Earth and Planetary Science at Harvard and first author of the paper. "But we found that in extremely warm climates, there could be multiple days with no rain anywhere over a huge part of the ocean. Then, suddenly, a massive rainstorm would erupt over almost the entire domain, dumping a tremendous amount of rain. Then it would be quiet for a couple of days and repeat."

"This episodic cycle of deluges is a new and completely unexpected atmospheric state" said Robin Wordsworth, the Gordon McKay Professor of Environmental Science and Engineering at SEAS and senior author of the study.

The research not only sheds light on Earth's distant past and far-flung future but may also help to understand the climates of exoplanets orbiting distant stars.

The research is published in Nature.

In an atmospheric model, Seeley and Wordsworth cranked up Earth's sea surface temperature to a scalding 130 degrees Fahrenheit, either by adding more CO2 -- about 64 times the amount currently in the atmosphere -- or by increasing the brightness of the sun by about 10 percent.

At those temperatures, surprising things start happening in the atmosphere. When the air near the surface becomes extremely warm, absorption of sunlight by atmospheric water vapor heats the air above the surface and forms what's known as an "inhibition layer," a barrier that prevents convective clouds from rising into the upper atmosphere and forming rain clouds.

Instead, all that evaporation gets stuck in the near-surface atmosphere.

At the same time, clouds form in the upper atmosphere, above the inhibition layer, as heat is lost to space. The rain produced in those upper-level clouds evaporates before reaching the surface, returning all that water to the system.

"It's like charging a massive battery," said Seeley. "You have a ton of cooling high in the atmosphere and a ton of evaporation and heating near the surface, separated by this barrier. If something can break through that barrier and allow the surface heat and humidity to break into the cool upper atmosphere, it's going to cause an enormous rainstorm."

That's exactly what happens. After several days, the evaporative cooling from the upper atmosphere's rainstorms erodes the barrier, triggering an hours-long deluge. In one simulation, the researchers observed more rainfall in a six-hour period than some tropical cyclones drop in the U.S. across several days.

After the storm, the clouds dissipate, and precipitation stops for several days as the atmospheric battery recharges and the cycle continues.

"Our research goes to show that there are still a lot of surprises in the climate system," said Seeley. "Although a 30-degree increase in sea surface temperatures is way more than is being predicted for human-caused climate change, pushing atmospheric models into unfamiliar territory can reveal glimpses of what the Earth is capable of."

Read more at Science Daily

One and done: Researchers urge testing eyewitness memory only once

We all know the scene from countless courtroom dramas: A witness points at the defendant and confidently declares to judge and jury: "That's the one, that's who did it!" But is it? Perhaps -- but only if that same witness was also confident the very first time their memory was tested, writes a team of psychological scientists and criminologists led by memory expert John Wixted of the University of California San Diego. Otherwise, there's too high a chance that a contaminated memory will convict an innocent person.

As most of us also know, people have been convicted of crimes they didn't commit on the basis of eyewitness memory. Some of these wrongful convictions have later been overturned by DNA or other physical evidence. But that type of evidence doesn't always exist. To reduce the likelihood of injustice, the researchers suggest a simple, no-cost reform to our system of jurisprudence. "Test a witness's memory of a suspect only once," the researchers urge in a paper published by Psychological Science in the Public Interest, a journal of the Association for Psychological Science.

"The first test is the most reliable test," says Wixted, a professor of psychology at UC San Diego, who has been working on memory for more than 30 years and eyewitness memory specifically for the past decade. "The first test probes the witness's memory but also unavoidably contaminates the witness's memory. All tests beyond that very first one only serve to test contaminated memory and to contaminate it further. And once a memory is contaminated, there is no way to decontaminate it."

In their paper, Wixted and his co-authors -- Gary Wells of Iowa State University, Elizabeth Loftus of UC Irvine and Brandon Garrett of the Duke University School of Law -- explain how many wrongful convictions of innocent prisoners in which a witness conclusively identified the defendant in court began with something other than a conclusive initial eyewitness identification.

It's not that witnesses are vindictive or malicious, or that anyone else in the process is either. Nor is it the case that eyewitness memory is so hopelessly faulty that it shouldn't be admitted as evidence at all. But our system of jurisprudence ignores the confidence with which first identifications are made and relies too often on subsequent identifications, usually the very last one made in the courtroom. At that point, at trial, perhaps a year or more after the crime has been committed, witnesses have usually become so familiar with a suspect's face that they are certain they're remembering the face. And in fact, they are remembering -- but very possibly not from the time the crime was committed. Rather, they're remembering having seen the person in a line-up (sometimes multiple times) or even on news or social media.

"Memory is malleable," Wixted says. "And because it's malleable, we must avoid repeated identification procedures with the same witness and suspect. This recommendation applies not only to additional tests conducted by police investigators butalso to the final test conducted in the courtroom."

In their paper, the researchers describe the latest science on eyewitness memory, including findings based on signal detection theory, elaborative processing and source misattribution. To make a decision about a face in a lineup (signal detection theory), the witness has to compare that face to their memory of the perpetrator (elaborative processing). Doing so automatically creates a memory of that face. Even if the initial decision is "no, that is not him," the face will seem more familiar on any later test. Often, the witness loses sight of the fact that the face is familiar because of the previous lineup test and comes to believe that the face is familiar because it is in fact the face of the perpetrator (source misattribution).

The researchers also detail three real-life cases to underscore the theoretical and experimental points: the cases of John Jerome White and Steven Gary Titus, both of whom were convicted of rape on the basis of witness memories and whose convictions were later overturned, and the case of Charles Don Flores.

The Flores case is especially instructive, Wixted says. It inspired him to assemble the research team for this paper -- outlining the latest scientific understanding of eyewitness memory and calling for reform.

On January 29, 1998, in a suburb of Dallas, two men entered the home of Elizabeth Black, who was later found shot dead. A neighbor saw the men enter Black's home shortly before the murder, and she became a key witness. When the police captured suspected triggerman Richard Childs, the witness immediately identified Childs from a photo lineup as one of the two men she saw that morning. Childs also confessed to the murder and was sentenced to 35 years in prison. The police suspected Flores as the accomplice because he was engaged in a drug deal with Childs only hours before the murder, and at his 1999 trial, the same witness confidently identified Flores as the other man she saw enter her neighbor's house. However, on the day of the crime in January of 1998, the witness told police that the accomplice was a white male with shoulder-length hair. After being hypnotized to calm her nerves, she helped to make a composite sketch of the perpetrator with a police artist. Consistent with her initial description, the sketch was that of a white male with shoulder-length hair. The police then showed her a photo lineup containing Flores -- a Hispanic male with a crew cut -- along with five similar-looking Hispanic males. She rejected the lineup, presumably because none of the faces even remotely matched her memory of the accomplice. Yet, Wixted says, while examining the faces on that first and only uncontaminated test of her memory for Flores, she became unavoidably familiarized with his face. By the time of the trial, she no longer had any doubt that he was the man she saw that morning.

Read more at Science Daily

Bilingualism comes naturally to our brains

The brain uses a shared mechanism for combining words from a single language and for combining words from two different languages, a team of neuroscientists has discovered. Its findings indicate that language switching is natural for those who are bilingual because the brain has a mechanism that does not detect that the language has switched, allowing for a seamless transition in comprehending more than one language at once.

"Our brains are capable of engaging in multiple languages," explains Sarah Phillips, a New York University doctoral candidate and the lead author of the paper, which appears in the journal eNeuro. "Languages may differ in what sounds they use and how they organize words to form sentences. However, all languages involve the process of combining words to express complex thoughts."

"Bilinguals show a fascinating version of this process -- their brains readily combine words from different languages together, much like when combining words from the same language," adds Liina Pylkkänen, a professor in NYU's Department of Linguistics and Department of Psychology and the senior author of the paper.

An estimated 60 million people in the U.S. use two or more languages, according to the U.S. Census. However, despite the widespread nature of bi- and multilingualism, domestically and globally, the neurological mechanisms used to understand and produce more than one language are not well understood.

This terrain is an intriguing one; bilinguals often mix their two languages together as they converse with one another, raising questions about how the brain functions in such exchanges.

To better understand these processes, Phillips and Pylkkänen, who is also part of the NYU Abu Dhabi Institute, explored whether bilinguals interpret these mixed-language expressions using the same mechanisms as when comprehending single-language expressions or, alternatively, if understanding mixed-language expressions engages the brain in a unique way.

To test this, the scientists measured the neural activity of Korean/English bilinguals.

Here, the study's subjects viewed a series of word combinations and pictures on a computer screen. They then had to indicate whether or not the picture matched the preceding words. The words either formed a two-word sentence or were simply a pair of verbs that did not combine with each other into a meaningful phrase (e.g., "icicles melt" vs. "jump melt"). In some instances, the two words came from a single language (English or Korean) while in others both languages were used, with the latter mimicking mixed-language conversations.

In order to measure the study subjects' brain activity during these experiments, the researchers deployed magnetoencephalography (MEG), a technique that maps neural activity by recording magnetic fields generated by the electrical currents produced by our brains.

The recordings showed that Korean/English bilinguals, in interpreting mixed-language expressions, used the same neural mechanism as they did while interpreting single-language expressions.

Specifically, the brain's left anterior temporal lobe, a region well studied for its role in combining the meanings of multiple words, was insensitive to whether the words it received came from the same language or from different languages. It simply combined words into a more complex meaning whenever their meanings could be composed.

These findings suggest that language switching is natural for bilinguals because the brain has a combinatory mechanism that does not "see" that the language has switched.

Read more at Science Daily

Nov 3, 2021

Gravitational ‘kick’ may explain the strange shape at the center of Andromeda

When two galaxies collide, the supermassive black holes at their cores release a devastating gravitational "kick," similar to the recoil from a shotgun. New research led by CU Boulder suggests that this kick may be so powerful it can knock millions of stars into wonky orbits.

The research, published Oct. 29 in The Astrophysical Journal Letters, helps solve a decades-old mystery surrounding a strangely-shaped cluster of stars at the heart of the Andromeda Galaxy. It might also help researchers better understand the process of how galaxies grow by feeding on each other.

"When scientists first looked at Andromeda, they were expecting to see a supermassive black hole surrounded by a relatively symmetric cluster of stars," said Ann-Marie Madigan, a fellow of JILA, a joint research institute between CU Boulder and the National Institute of Standards and Technology (NIST). "Instead, they found this huge, elongated mass."

Now, she and her colleagues think they have an explanation.

In the 1970s, scientists launched balloons high into Earth's atmosphere to take a close look in ultraviolet light at Andromeda, the nearest major galaxy to the Milky Way. The Hubble Space Telescope followed up on those initial observations in the 1990s and delivered a surprising finding: Like our own galaxy, Andromeda is shaped like a giant spiral. But the area rich in stars near that spiral's center doesn't look like it should -- the orbits of these stars take on an odd, ovalish shape like someone stretched out a wad of Silly Putty.

And no one knew why, said Madigan, also an assistant professor of astrophysics. Scientists call the pattern an "eccentric nuclear disk."

In the new study, the team used computer simulations to track what happens when two supermassive black holes go crashing together -- Andromeda likely formed during a similar merger billions of years ago. Based on the team's calculations, the force generated by such a merger could bend and pull the orbits of stars near a galactic center, creating that telltale elongated pattern.

"When galaxies merge, their supermassive black holes are going to come together and eventually become a single black hole," said Tatsuya Akiba, lead author of the study and a graduate student in astrophysics. "We wanted to know: What are the consequences of that?"

Bending space and time

He added that the team's findings help to reveal some of the forces that may be driving the diversity of the estimated two trillion galaxies in the universe today -- some of which look a lot like the spiral-shaped Milky Way, while others look more like footballs or irregular blobs.

Mergers may play an important role in shaping these masses of stars: When galaxies collide, Akiba said, the black holes at the centers may begin to spin around each other, moving faster and faster until they eventually slam together. In the process, they release huge pulses of "gravitational waves," or literal ripples in the fabric of space and time.

"Those gravitational waves will carry momentum away from the remaining black hole, and you get a recoil, like the recoil of a gun," Akiba said.

He and Madigan wanted to know what such a recoil could do to the stars within 1 parsec, or roughly 19 trillion miles, of a galaxy's center. Andromeda, which can be seen with the naked eye from Earth, stretches tens of thousands of parsecs from end to end.

It gets pretty wild.

Galactic recoil

The duo used computers to build models of fake galactic centers containing hundreds of stars -- then kicked the central black hole to simulate the recoil from gravitational waves.

Madigan explained that the gravitational waves produced by this kind of disastrous collision won't affect the stars in a galaxy directly. But the recoil will throw the remaining supermassive black hole back through space -- at speeds that can reach millions of miles per hour, not bad for a body with a mass millions or billions of times greater than that of our sun.

"If you're a supermassive black hole, and you start moving at thousands of kilometers per second, you can actually escape the galaxy you're living in," Madigan said.

When black holes don't escape, however, the team discovered they may pull on the orbits of the stars right around them, causing those orbits to stretch out. The result winds up looking a lot like the shape scientists see at the center of Andromeda.
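A toy version of that numerical experiment can be sketched in a few lines of Python: stars on circular orbits around a dominant central point mass, which then receives a sudden velocity kick. The units, star count and kick speed below are arbitrary, and the integrator is deliberately simple; this captures only the basic setup, not the authors' simulations.

    import math
    import random

    G, M_BH = 1.0, 1.0  # code units; the black hole's gravity dominates

    def init_stars(n, r_min=0.5, r_max=1.5):
        # Stars on circular orbits around the origin; state is [x, y, vx, vy].
        stars = []
        for _ in range(n):
            r = random.uniform(r_min, r_max)
            th = random.uniform(0.0, 2.0 * math.pi)
            v = math.sqrt(G * M_BH / r)  # circular orbital speed
            stars.append([r * math.cos(th), r * math.sin(th),
                          -v * math.sin(th), v * math.cos(th)])
        return stars

    def step(stars, bh_pos, bh_vel, dt):
        # Black hole drifts ballistically; stars feel only its gravity.
        bh_pos[0] += bh_vel[0] * dt
        bh_pos[1] += bh_vel[1] * dt
        for s in stars:
            dx, dy = bh_pos[0] - s[0], bh_pos[1] - s[1]
            inv_r3 = (dx * dx + dy * dy) ** -1.5
            s[2] += G * M_BH * dx * inv_r3 * dt
            s[3] += G * M_BH * dy * inv_r3 * dt
            s[0] += s[2] * dt
            s[1] += s[3] * dt

    stars = init_stars(500)
    bh_pos, bh_vel = [0.0, 0.0], [0.1, 0.0]  # the gravitational-wave "kick"
    for _ in range(2000):
        step(stars, bh_pos, bh_vel, 0.001)
    # The kicked hole drags nearby orbits into stretched, lopsided shapes,
    # a toy analog of the elongated pattern described above.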

Madigan and Akiba said they want to grow their simulations so they can directly compare their computer results to that real-life galaxy core -- which contains many times more stars. They noted their findings might also help scientists to understand the unusual happenings around other objects in the universe, such as planets orbiting mysterious bodies called neutron stars.

Read more at Science Daily

ALMA scientists detect signs of water in a galaxy far, far away

Water has been detected in the most massive galaxy in the early Universe, according to new observations from the Atacama Large Millimeter/submillimeter Array (ALMA). Scientists studying SPT0311-58 found H2O, along with carbon monoxide, in the galaxy, which is located nearly 12.88 billion light years from Earth. Detection of these two molecules in abundance suggests that the molecular Universe was going strong shortly after the elements were forged in early stars. The new research comprises the most detailed study of the molecular gas content of a galaxy in the early Universe to date and the most distant detection of H2O in a regular star-forming galaxy. The research is published in The Astrophysical Journal.

SPT0311-58 is actually made up of two galaxies, and was first seen by ALMA scientists in 2017 at its location, or time, in the Epoch of Reionization. This epoch occurred when the Universe was just 780 million years old -- roughly 5 percent of its current age -- and the first stars and galaxies were being born. Scientists believe that the two galaxies may be merging, and that their rapid star formation is not only using up their gas, or star-forming fuel, but may eventually evolve the pair into massive elliptical galaxies like those seen in the Local Universe.

"Using high-resolution ALMA observations of molecular gas in the pair of galaxies known collectively as SPT0311-58 we detected both water and carbon monoxide molecules in the larger of the two galaxies. Oxygen and carbon, in particular, are first-generation elements, and in the molecular forms of carbon monoxide and water, they are critical to life as we know it," said Sreevani Jarugula, an astronomer at the University of Illinois and the principal investigator on the new research. "This galaxy is the most massive galaxy currently known at high redshift, or the time when the Universe was still very young. It has more gas and dust compared to other galaxies in the early Universe, which gives us plenty of potential opportunities to observe abundant molecules and to better understand how these life-creating elements impacted the development of the early Universe."

Water, in particular, is the third most abundant molecule in the Universe after molecular hydrogen and carbon monoxide. Previous studies of galaxies in the local and early Universe have correlated water emission and the far-infrared emission from dust. "The dust absorbs the ultraviolet radiation from the stars in the galaxy and re-emits it as far-infrared photons," said Jarugula. "This further excites the water molecules, giving rise to the water emission that scientists are able to observe. In this case, it helped us to detect water emission in this massive galaxy. This correlation could be used to develop water as a tracer of star formation, which could then be applied to galaxies on a cosmological scale."

Studying the first galaxies to form in the Universe helps scientists to better understand the birth, growth, and evolution of the Universe, and everything in it, including the Solar System and Earth. "Early galaxies are forming stars at a rate thousands of times that of the Milky Way," said Jarugula. "Studying the gas and dust content of these early galaxies informs us of their properties, such as how many stars are being formed, the rate at which gas is converted into stars, how galaxies interact with each other and with the interstellar medium, and more."

According to Jarugula, there's plenty left to learn about SPT0311-58 and the galaxies of the early Universe. "This study not only provides answers about where, and how far away, water can exist in the Universe, but also has given rise to a big question: How has so much gas and dust assembled to form stars and galaxies so early in the Universe? The answer requires further study of these and similar star-forming galaxies to get a better understanding of the structural formation and evolution of the early Universe."

Read more at Science Daily

Mammals’ noses come from reptiles’ jaws

New examinations of skeletons and animal embryos have allowed researchers to discover how mammals developed protruding, flexible noses. This study contributes to uncovering the origin of mammals' strong sense of smell and creates the potential for new animal models, like chickens or frogs, that are often used in lab experiments to investigate facial development disorders such as cleft palate.

The traditional scientific understanding of facial evolution is that both mammalian and reptilian jaws develop in almost the same way. Even though mammals have a unique nose, the evolution of this structure has remained unknown.

"Existing fossils of four-legged animals, both reptilian and mammalian ancestors, have the same number of upper jaw bones. It's very easy to think that the bones are the same, but now we can study embryos and track cellular development to study these bones in much greater detail," explained postdoctoral researcher Hiroki Higashiyama, who studies evolutionary development at the University of Tokyo Graduate School of Medicine. The research, recently published in Proceedings of the National Academy of Sciences, is the first to examine the evolution of facial structure using cellular studies comparing multiple embryos of multiple species.

Higashiyama and his colleagues in the laboratory of Professor Hiroki Kurihara designed experiments to track facial development in embryos of different species, including birds (chickens), reptiles (geckos) and mammals (mice). They focused on a group of cells known as the facial prominences in embryos that produce the physical structures of the face. Researchers stained the cells to track them as they moved and grew. A group of cells called the frontonasal prominence forms the jaw tip in reptiles, but becomes the protruding nose in mammals. Mammals' jaw tips form instead from a separate group of cells called the maxillary prominence.

Using this new perspective from their cellular experiments, researchers then examined fossil specimens.

As species' ancestors accumulated more physical and genetic differences, the bone at the tip of reptiles' upper jaw, the premaxilla, became smaller and migrated upwards, while the bone behind it, the septomaxilla, grew larger and moved forward to become mammals' jaw tip. Researchers say that the facial bones of egg-laying mammals, like the Australian platypus and echidna, provide additional living examples of transitional bone structures between the evolutionarily older reptilian arrangement and the more recently evolved mammalian one.

This separation of the nose and jaw gives mammals their unique ability to "sniff," using muscles to flare the nostrils and deeply inhale odors from the environment.

"This finding is a key innovation in the evolution of our and other mammals' motile nose, which contributes to mammals' highly sensitive sense of smell," said Higashiyama.

Distinguishing and recognizing so many odors may have also helped mammals develop larger, more complex brains than earlier ancestor species.

The recent research has provided physical evidence of the evolutionary shift in premaxilla and septomaxilla arrangement, but separate studies will be needed to identify the genetic causes.

"Now we know the composition of facial prominences and embryonic development in multiple species, so we can compare facial development disorders in chickens or frogs to humans. We have mainly just improved textbook knowledge for now, but in the future, these animal models will be a practical application of our studies," said Higashiyama.

Read more at Science Daily

Three ways to reduce the carbon footprint of food purchased by US households

Most consumers want to make food purchases that are smart for their wallets, their health and the environment. And while switching to a vegetarian or vegan diet can lower one's impact on greenhouse gas emissions, it may not be realistic or healthful for everyone. Now, researchers in ACS' Environmental Science & Technology report three ways that Americans can reduce the carbon footprint of their food purchases, without requiring drastic dietary changes.

Getting food from farms to people's plates contributes a sizeable portion of global greenhouse gas emissions. And animals are inefficient at converting the plants they eat into energy, so meat and dairy products result in higher emissions than fruit, vegetables and grains. Based on that knowledge, previous researchers have suggested changes that individuals or households can make to reduce the emissions generated by food production. However, most of these recommendations have been based on an "average American diet." In reality, not everyone eats the same types or quantities of foods, so to account for this diversity, Hua Cai and colleagues wanted to assess the actual groceries purchased by U.S. households and identify the hotspots of carbon emissions in those purchases.

The researchers analyzed detailed grocery purchase records of over 57,000 U.S. households in 2010 and, for each home, summed the greenhouse gas emissions from growing and harvesting the food items. Data for packaging and transportation were not included because that information was unavailable. They then compared each household's emissions to those that would be generated by buying foods for a benchmark healthy and sustainable diet.
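The accounting step amounts to multiplying each purchased item by a per-kilogram emission factor and summing, as in this minimal sketch; the factors shown are illustrative placeholders, not the study's values.

    # Hedged sketch of the emissions accounting described above.
    EMISSION_FACTORS_KG_CO2E_PER_KG = {  # illustrative farm-stage factors
        "beef": 27.0,
        "cheese": 13.5,
        "bread": 1.3,
        "vegetables": 0.5,
    }

    def household_footprint(purchases):
        # purchases: list of (item, kilograms bought). Returns kg CO2e for
        # the growing/harvesting stage only, mirroring the study's scope
        # (packaging and transportation data were unavailable).
        return sum(EMISSION_FACTORS_KG_CO2E_PER_KG[item] * kg
                   for item, kg in purchases)

    print(household_footprint([("beef", 2.0), ("bread", 1.5),
                               ("vegetables", 4.0)]))  # 57.95 kg CO2e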

The team's analysis revealed that 71% of the homes surveyed could decrease their food carbon footprint, and it identified three main ways for consumers to do so:

  • Small households of one or two people should buy less food in bulk quantities, which is often more than will be eaten, and manufacturers should offer cost-effective package sizes.
  • Cutting out foods with high caloric content and low nutritional value would reduce total potential emissions by 29%, while also potentially improving health outcomes.
  • People should buy fewer savory bakery products and ready-made foods. Though those foods carry relatively low carbon emissions per item, the large amounts purchased add up to significant emissions.

In summary, the researchers say these strategies are initial ways people can reduce their at-home food-based carbon footprint.

Read more at Science Daily

Forest fires linked to low birth weight in newborns

Women exposed to smoke from landscape fires during pregnancy are more likely to give birth to babies with low or very low birth weights, according to findings published in eLife.

The study is the first to report a link between low birth weight and exposure to fire smoke in low and middle-income countries (LMICs), where 90% of low birth weight infants are born and landscape fires are prevalent.

Landscape fires, such as wildfires, tropical deforestation fires and agricultural biomass burning, play an important role in maintaining terrestrial ecosystems. Yet, landscape fire smoke is triggering a costly and growing global public health problem, causing recurrent episodes of pollution mostly affecting LMICs.

Previous studies have shown that exposure to fire smoke during pregnancy is linked to low birth weight, which itself is a public health problem in LMICs. Reducing the risk of low birth weight is one of the World Health Organization's global targets for 2025.

"Babies with low birth weights are at higher risk of a range of diseases in later life compared to normal weight newborns," explains co-first author Jiajianghui Li, a PhD student at the Institute of Reproductive and Child Health, School of Public Health Science Centre, Peking University, China. "Several studies have shown the effects of landscape fire smoke on acute lung and heart conditions, but the health impacts of these pollutants on susceptible pregnant women are not well known. We wanted to explore the association between birth weight and exposure to fire source pollution across several countries and over a long time period."

The researchers conducted a case-control study in 54 LMICs where they matched 108,137 groups of siblings to their mothers. They used surveys conducted by the US Agency for International Development between 2000 and 2014 to find out information about sibling birth weights and other health and demographic factors. They then assessed exposure to landscape fire pollutants using data on fire emissions from the Global Fire Emission Database and a model that converted this data into ground-surface concentrations of particulate matter in different regions.

Their analysis showed that an increase in exposure of one microgram per cubic metre of fire-sourced particulate matter was associated with a 2.17-gram reduction in birth weight. "The effect was even more pronounced when we looked at whether exposure to fire smoke was linked to low or very low birth weight; for every microgram per cubic metre increase in particulate matter exposure, the risks of low and very low birth weight increased by around three and 12 per cent, respectively," says co-first author Tianjia Guan, an assistant professor at the Department of Health Policy, School of Health Policy and Management, Chinese Academy of Medical Sciences and Peking Union Medical College, China.
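As a back-of-envelope illustration of those effect sizes, the sketch below applies the quoted per-unit numbers to a hypothetical rise of 10 micrograms per cubic metre in exposure, assuming the relative risks compound log-linearly (an assumption made here for illustration; the paper's actual model is more involved).

    # Illustrative arithmetic only, using the effect sizes quoted above.
    BW_REDUCTION_G_PER_UGM3 = 2.17  # grams of birth weight per ug/m3 of PM
    LBW_INCREASE_PER_UGM3 = 0.03    # ~3% relative risk increase per ug/m3
    VLBW_INCREASE_PER_UGM3 = 0.12   # ~12% relative risk increase per ug/m3

    def mean_weight_reduction_g(delta_pm):
        # Expected average birth-weight reduction for a delta_pm increase.
        return BW_REDUCTION_G_PER_UGM3 * delta_pm

    def relative_risk(per_unit_increase, delta_pm):
        # Assumes the per-unit effect compounds log-linearly (an assumption).
        return (1.0 + per_unit_increase) ** delta_pm

    print(mean_weight_reduction_g(10))               # ~21.7 g lighter
    print(relative_risk(LBW_INCREASE_PER_UGM3, 10))  # ~1.34x low-BW risk
    print(relative_risk(VLBW_INCREASE_PER_UGM3, 10)) # ~3.1x very-low-BW risk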

The researchers found that very low birth weight was most strongly linked to the pollution. To find out why, they developed a model that looked at the average birth weight of infants within single families. Newborns in families that had lower birth weights on average were more susceptible to the risks of fire smoke pollution than those who had moderate baseline birthweights. "This suggests that other factors affecting maternal and foetal health, such as nutrition or maternal employment status, might make mothers and their developing infants even more susceptible to the risks of pollution," says co-first author Qian Guo, a PhD student at the School of Energy and Environmental Engineering, University of Science and Technology, China.

Read more at Science Daily

Nov 2, 2021

Engineers develop better method for cleaning up orbiting space junk

Space near Earth has become a trash heap.

According to NASA, there are more than 27,000 pieces of space debris larger than a softball currently orbiting Earth, and they are traveling at speeds of up to 17,500 mph, fast enough for a small chunk to damage a satellite or spacecraft like an intergalactic cannonball.

Consequently, cleaning up this space junk will be an important task if agencies are to launch more rockets and satellites into orbit. University of Utah mechanical engineering professor Jake J. Abbott is leading a team of researchers that has discovered a method to manipulate orbiting debris with spinning magnets. With this technology, robots could one day gently maneuver the scrap into a decaying orbit or farther out into space without actually touching it, or repair malfunctioning objects to extend their life.

Their research is detailed in the paper "Dexterous magnetic manipulation of conductive non-magnetic objects," published this month in the journal Nature. The co-authors include U graduate students Lan Pham, Griffin Tabor and Ashkan Pourkand, former graduate student Jacob L. B. Aman, and U School of Computing associate professor Tucker Hermans.

The concept involves moving metallic, non-magnetized objects in space with spinning magnets. When the metallic debris is subjected to a changing magnetic field, electrons circulate within the metal in circular loops, "like when you swirl your cup of coffee and it goes around and around," says Abbott.

The process essentially turns the piece of debris into an electromagnet, generating torque and force that can be used to control where the debris goes without physically grabbing it.
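
The physics behind that induced torque can be sketched with a simple scaling law. The snippet below is a rough illustration only, not the published model: for a solid conducting sphere in a slowly rotating magnetic field, the eddy-current torque scales with conductivity, field rotation rate, field strength squared, and radius to the fifth power, so only ratios between scenarios are meaningful here. All numeric values are assumed.

import math

def torque_scale(sigma, omega, B, a):
    """Relative eddy-current torque on a conducting sphere (arbitrary
    units): N is proportional to sigma * omega * B**2 * a**5. The
    dimensionless geometric prefactor is deliberately omitted."""
    return sigma * omega * B**2 * a**5

SIGMA_ALUMINUM = 3.5e7    # S/m, assumed conductivity of aluminum debris
OMEGA = 2 * math.pi * 10  # assumed 10 Hz field rotation rate
B_FIELD = 0.01            # assumed 10 mT field strength at the debris

small = torque_scale(SIGMA_ALUMINUM, OMEGA, B_FIELD, a=0.10)
large = torque_scale(SIGMA_ALUMINUM, OMEGA, B_FIELD, a=0.20)
print(large / small)  # doubling the radius raises the torque scale 2**5 = 32x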

While the idea of using these kinds of induced currents to manipulate objects in space is not new, what Abbott and his team have discovered is that using multiple magnetic-field sources in a coordinated fashion allows them to move objects in six degrees of freedom, including rotating them. Previously, it was only known how to move them in a single degree of freedom, such as simply pushing them.

"What we wanted to do was to manipulate the thing, not just shove it but actually manipulate it like you do on Earth," he says. "That form of dexterous manipulation has never been done before."

With this new knowledge, scientists for example could stop a damaged satellite from wildly spinning in order to repair it, something that would not have been possible before.

"You have to take this crazy object floating in space, and you have to get it into a position where it can be manipulated by a robot arm," Abbott says. "But if it's spinning out of control, you could break the robot arm doing that, which would just create more debris."

This method also allows scientists to manipulate objects that are especially fragile. While a robot arm could damage an object because its claw applies force to one part of it, these magnets would apply a gentler force to the entire object so no one section is harmed.

To test their research, the team used a series of magnets to move a copper ball on a plastic raft in a tank of water (the best way to simulate slow-moving objects in microgravity). The magnets not only moved the sphere along a square path but also rotated it.

Abbott says this newly discovered process could be used with a spinning magnet on a robotic arm, a stationary magnet that creates spinning magnetic fields, or a spinning superconducting electromagnet like those used in MRI scanners.

Abbott believes this principle of manipulating non-magnetic metallic objects with magnets could also have applications beyond the clearing of space debris.

"I'm starting to open my mind to what potential applications there are," he says. "We have a new way to apply a force to an object for precise alignment without touching it."

But for now, this idea could immediately be applied to help fix the problem of space junk orbiting the Earth.

Read more at Science Daily

A life less obvious: Study sheds light on the evolution of underground microbes

Deep, dark fractures reaching far down into the oldest rocks on Earth may seem about as hospitable to life as outer space, but some estimates suggest that microbes dwelling deep in the Earth's crust account for the majority of microbial life. These underground lifeforms, which make up what's known as the deep biosphere, could account for as much as 20% of all biomass on Earth.

These ecosystems host microbial lineages that are of great interest for understanding the origin and evolution of life on our planet, yet they remain among the least explored and understood ecosystems on Earth, according to the authors of a new study that takes a closer look at how deep habitats changed during Earth's tumultuous past.

"Understanding the history of the deep biosphere can provide insight into the evolution of life on Earth," said Peter Reiners, a professor of geosciences and associate dean of the University of Arizona College of Science, who co-authored the paper with Henrik Drake, an associate professor at the Linnaeus University in Sweden. "This requires understanding the complex evolution of habitable conditions in these underground environments, but such assessment had not been presented until now."

While microbes have been known to eke out a living as deep as 3 miles below Earth's surface, and possibly beyond, very little is known about how the deep biosphere has evolved over geologic history, and how modern microbes are related to their ancient ancestors in the subsurface.

Reiners and Drake focused on Precambrian cratons, which are some of the oldest rocks still present today, to find out where and when subsurface microbes should have been active on Earth hundreds of millions to billions of years ago. The results of their study, published this week in the Proceedings of the National Academy of Sciences, reveal that many cratons were uninhabitable for microbes for much of their existence, with the longest period of habitability not much beyond a billion years, and many cratons have only been habitable for the past 50 million to 300 million years.

"We showed that because microbial habitability generally requires temperatures less than about 100 degrees Celsius (212 degrees Fahrenheit), in only a few places do we expect to find evidence of subsurface microbial life older than about a billion years," Reiners said. "Just because these rocks are really old, and the fluids in them may be old, too, doesn't mean that they could've supported life until relatively recently, when they got very close to the surface by erosion."

Precambrian cratons are home to microorganisms that get their energy from the consumption of nutrients, including sparsely available organic carbon, but also from chemical reactions between fluids and rocks. Drake and Reiners estimate that subsurface bacteria and archaea (single-celled prokaryotes similar to bacteria), which now compose up to 90% of all microbial life on Earth, probably composed an even larger fraction of total life hundreds of millions to billions of years ago.

"Their evolution, particularly the evolution of their metabolisms -- how they get energy and what chemical elements they 'eat' and 'poop' -- provide key insights into the evolution of all other critters," Reiners said, adding that some researchers think that life may have first evolved beneath Earth's surface.

The researchers used a combination of records of deep ancient life found within craton fractures and recent advances in intermediate- and low-temperature thermochronology, a technique that allows scientists to reconstruct the temperature histories of rocks. Rocks may have endured higher temperatures and pressure during periods when sediments accumulated on top of them, only to be brought closer to the surface and into more habitable conditions once those sedimentary layers eroded away.

"By combining thermochronologic results from several different radioisotopic dating systems, we can reconstruct their thermal histories through the ups and downs of burial and erosion over time," Reiners said. "This approach gives us context for prospecting and interpreting the little-explored geologic record of the deep biosphere of Earth's cratons."

By assessing when these rock environments became habitable, and in some cases when they may have been buried and sterilized again, the study provides new insights into the evolutionary aspect of the deep biosphere.

"Cratonic rocks formed billions of years ago, often deep in the crust, at temperatures too high for any life," Reiners said. "It was only much later, following erosion, that the currently exposed rocks reached levels in the crust where temperatures were habitable."

Drake said thermochronology could help identify areas where researchers could look for the oldest records of subsurface microorganisms on Earth.

Read more at Science Daily

Scientists identify new antibody for COVID-19 and variants

A research collaboration between scientists at Duke University and the University of North Carolina at Chapel Hill has identified and tested an antibody that limits the severity of infections from a variety of coronaviruses, including those that cause COVID-19 as well as the original SARS illness.

The antibody was identified by a team at the Duke Human Vaccine Institute (DHVI) and tested in animal models at UNC-Chapel Hill. Researchers published their findings Nov. 2 in the journal Science Translational Medicine.

"This antibody has the potential to be a therapeutic for the current epidemic," said co-senior author Barton Haynes, M.D., director of DHVI. "It could also be available for future outbreaks, if or when other coronaviruses jump from their natural animal hosts to humans."

Haynes and colleagues at DHVI isolated the antibody by analyzing the blood from a patient who had been infected with the original SARS-CoV-1 virus, which caused the SARS outbreak in the early 2000s, and from a current COVID-19 patient.

They identified more than 1,700 antibodies, which the immune system produces to bind at specific sites on specific viruses to block the pathogen from infecting cells. When viruses mutate, many binding sites are altered or eliminated, leaving antibodies ineffectual. But there are often sites on the virus that remain unchanged despite mutations. The researchers focused on antibodies that target these sites because of their potential to be highly effective across different lineages of a virus.

Of the 1,700 antibodies from the two individuals, the Duke researchers found 50 that were able to bind to both SARS-CoV-1 and SARS-CoV-2, the virus that causes COVID-19.

Further analysis found that one of those cross-binding antibodies was especially potent -- able to bind to a multitude of animal coronaviruses in addition to the two human-infecting pathogens.

"This antibody binds to the coronavirus at a location that is conserved across numerous mutations and variations," Haynes said. "As a result, it can neutralize a wide range of coronaviruses."

With the antibody isolated, the DHVI team turned to researchers at UNC who have expertise in animal coronaviruses. The UNC team, led by co-senior author Ralph S. Baric, Ph.D., epidemiology professor at UNC Gillings School of Global Public Health, tested it in mice to determine whether it could effectively block infections, or minimize the infections that occurred.

They found that it did both. When given before the animals were infected, the antibody protected mice against developing SARS, COVID-19 and its variants such as Delta, and many animal coronaviruses that have the potential to cause human pandemics.

"The findings provide a template for the rational design of universal vaccine strategies that are variant-proof and provide broad protection from known and emerging coronaviruses," Baric said.

When given after infections, the antibody reduced severe lung symptoms compared to animals that were not treated with the antibody.

"The therapeutic activity even after mice were infected suggests that this could be a treatment deployed in the current pandemic, but also stockpiled to prevent the spread of a future outbreak or epidemic with a SARS-related virus," said David Martinez, Ph.D., a post-doctoral researcher in the Department of Epidemiology at UNC's Gillings School.

"This antibody could be harnessed to prevent maybe SARS-CoV-3 or SARS-CoV-4," Martinez said.

Read more at Science Daily

Potential strategy for fighting obesity

UT Southwestern scientists may have identified a method of safely mimicking the weight-loss benefits of a plant compound that -- despite its harmful side effects -- holds critical answers to developing therapies for obesity.

Celastrol, derived from the root extracts of a white-flowered plant in China, has drawn increased attention in recent years after studies showed it can both prevent and reverse obesity in mice. However, because celastrol can cause reactions such as high blood pressure and lethargy in mice, researchers have sought to understand how the compound works and use that knowledge to develop safe weight-loss treatments for people.

UT Southwestern may have solved part of the puzzle in a new study that shows celastrol requires a specific protein in a type of neuron that influences metabolism. Scientists found they can mimic a "fed" signal to mouse brains by deleting this protein from the neurons, resulting in mice losing 7% of their body weight in two weeks despite being fed a high-fat diet.

Key to the findings: The mice did not appear to endure the same physical ailments documented in previous research in which celastrol was administered.

"This new understanding of how celastrol works on the cellular level opens more possibilities for targeting pathways that can improve our metabolism without the negative health impact," said study author Kevin W. Williams, Ph.D., an investigator at UT Southwestern's Center for Hypothalamic Research. "We haven't uncovered all the cell populations that influence weight loss, but each of these findings brings us closer to developing effective, safe therapies for obesity."

The study, published in JCI Insight, is the latest research from Dr. Williams that may someday help improve glucose metabolism in patients with obesity-driven conditions such as diabetes. More than 30 million Americans have diabetes, accounting for nearly 10% of the population, according to the Centers for Disease Control and Prevention.

The new research focused on a class of cells in the brain called POMC neurons, which are associated with reduced appetite, lower blood glucose levels, and higher energy burning when activated. A 2019 study from Dr. Williams showed a single bout of exercise can boost the activity of POMC neurons for up to two days.

In the latest research, the Williams lab found this neuron also plays a critical part in how celastrol impacts weight loss. Mice given the compound saw decreased activity of a protein called PERK within the region of the brain where POMC neurons reside. The lab further found that deleting PERK from these neurons can replicate much of the weight-loss effects of celastrol, and appears to do so without causing harmful side effects often associated with anti-obesity drugs.

"The mice were leaner and had the same activity levels; they didn't appear lethargic, sickly or ill," Dr. Williams said. "But this is through observation only. Further study is needed to verify how targeting this pathway may be influencing their cardiovascular systems and other functions."

The Food and Drug Administration cautions people against the use of celastrol, a compound derived from the thunder god vine, a plant used in traditional Chinese medicine. Although extracts from the plant are sold as supplements, the National Institutes of Health (NIH) has posted cautionary statements saying scientists do not yet have enough data about celastrol's safety and effectiveness.

But the compound has already given scientists important insight into how safer strategies for weight loss may be developed in the lab. In the new study, for instance, deleting PERK from the POMC neurons blocked about half the food intake-reducing effect of celastrol.

Read more at Science Daily

Nov 1, 2021

A lab in the sky: Physics experiment in Earth’s atmosphere could help improve GPS performance

The Earth's atmosphere has been used as a 'laboratory' to carry out a physics experiment in a research collaboration, involving the University of Strathclyde, which could help to improve the performance of GPS.

The study demonstrates a new method of remotely monitoring the plasma in the ionosphere and of controlling wave modes in a way that could help GPS make better calculations in the face of extreme space weather.

The researchers conducted a controlled radar wave experiment by injecting radio waves at slightly different frequencies into the ionosphere.

The returned signal was then recorded and analysed. The researchers found that plasma waves were excited in the ionosphere and mixed non-linearly, producing a wide spectrum of new frequencies in the returned signal.

Plasma in the ionosphere plays a significant role in reflecting and modifying radio waves used for communication and radio navigation systems such as GPS, but the accuracy of these can be affected by 'space weather' events such as solar storms.

The experiment was carried out at the EISCAT facility near Tromsø, Norway and the research has been published in the journal Nature Communications.

Dr Bengt Eliasson, a Reader in Strathclyde's Department of Physics, was a partner in the research and said: "The ionosphere is part of Earth's upper atmosphere, between 80 and about 1,000 km, where extreme ultraviolet and x-ray solar radiation ionizes atoms and molecules, creating a layer of plasma.

"Other phenomena, such as energetic charged particles and cosmic rays, also have an ionizing effect and can contribute to the ionospheric plasma density.

"The discovery of the Earth's ionosphere came from early radio wave observations more than a century ago, and the recognition that only a reflecting layer composed of electrons and ions could explain the observations. Early research was aimed at explaining the various layers in the ionosphere and their variability through factors such as local time, latitude and season.

"Today, the emphasis of ionospheric research has shifted toward understanding the dynamics and plasma physics of ionospheric phenomena, particularly due to disturbances driven by the sun, known as space weather events. These space weather events dynamically increase the total number of ionospheric electrons; GPS systems cannot correctly model this dynamic enhancement and errors occur in position calculations.

"The active control of the wave modes excited in the ionosphere, described in our article, has the potential of providing new and improved diagnostics of temperature, density, magnetic field and ion composition, with the potential of improving GPS position calculations during times of disturbance."

Read more at Science Daily

How bread wheat got its gluten: Tracing the impact of a long-lost relative on modern bread wheat

Genetic detective work has uncovered an obscure ancestor of modern bread wheat, in a finding similar to uncovering a famous long-lost relative through DNA analysis in humans.

In a study appearing in Nature Biotechnology, researchers sequenced the DNA from 242 unique accessions of Aegilops tauschii gathered over decades from across its native range -- from Turkey to Central Asia.

Population genome analysis led by Dr Kumar Gaurav from the John Innes Centre revealed the existence of a distinct lineage of Aegilops tauschii restricted to present-day Georgia, in the Caucasus region -- some 500 kilometers from the Fertile Crescent where wheat was first cultivated, an area stretching across modern-day Iraq, Syria, Lebanon, Palestine, Israel, Jordan, and Egypt.

First author of the study, Dr Kumar Gaurav, said: "The discovery of this previously unknown contribution to the bread wheat genome is akin to discovering the introgression of Neanderthal DNA into the out-of-Africa human genome.

"It is most likely to have occurred though a hybridization outside the Fertile Crescent. This group of Georgian accessions form a distinct lineage that contributed to the wheat genome by leaving a footprint in the DNA."

The discovery comes via a major international collaboration to improve crops by exploring useful genetic diversity in Aegilops tauschii, a wild relative of bread wheat. The Open Wild Wheat Consortium brought together 38 research groups and researchers from 17 countries.

Further research by Dr Jesse Poland's group at Kansas State University was published in a companion paper in Communications Biology and shows that the ancestral Aegilops tauschii DNA found in modern bread wheat includes the gene that gives superior strength and elasticity to dough.

Dr. Poland said, "We were amazed to discover that this lineage has provided the best-known gene for superior dough quality."

The researchers speculate that the newly discovered lineage may have been more geographically widespread in the past, and that it may have become separated as a refugium population during the last ice age.

Reflecting on all that has come together to make this work possible, Dr Brande Wulff, corresponding author of the study, remarked: "Fifty or sixty years ago, at a time when we barely understood DNA, my scientific forebears were traversing the Zagros Mountains in the Middle East, and Syria and Iraq. They were collecting seeds, perhaps having an inkling that one day these could be used for improving wheat. Now we are so close to unlocking that potential, and for me that is extraordinarily exciting."

Deciphering Wheat's Complex Genome

Modern "hexaploid" wheat, is a complex genetic combination of different grasses with a huge genetic code, split into A, B and D sub-genomes. Hexaploid wheat accounts for 95 percent of all cultivated wheat. Hexaploid means that the DNA contains six sets of chromosomes -- three pairs of each.

Through a combination of natural hybridizations and human cultivation, Aegilops tauschii provided the D-genome to modern wheat. The D-genome added the properties for making dough, and enabled bread wheat to flourish in different climates and soils.

The origin of modern hexaploid bread wheat has long been the subject of intense scrutiny with archeological and genetic evidence suggesting that the first wheat was cultivated 10,000 years ago in the Fertile Crescent.

Domestication, while increasing yield and agronomic performance, came at the cost of a pronounced genetic bottleneck, eroding the genetic diversity behind protective traits still found in Aegilops tauschii, such as disease resistance and heat tolerance.

Analysis performed by Dr. Gaurav and the research team revealed that just 25% of the genetic diversity present in Aegilops tauschii made it into hexaploid wheat. To explore this diversity in the wild gene pool, they used a technique called association mapping to discover new candidate genes for disease and pest resistance, yield and environmental resilience.
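
In essence, association mapping tests each genetic marker for a statistical link to a trait across many accessions. Below is a minimal, self-contained sketch of the idea on simulated data; real pipelines such as the study's also correct for population structure, kinship and multiple testing, which this toy version omits.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_accessions, n_markers = 242, 1000  # 242 mirrors the accessions sequenced

# Simulated presence/absence genotypes and a quantitative trait in which
# marker 42 is planted as the true causal locus.
genotypes = rng.integers(0, 2, size=(n_accessions, n_markers))
trait = rng.normal(size=n_accessions) + 1.5 * genotypes[:, 42]

# Test each marker: does the trait differ between carriers and non-carriers?
p_values = np.array([
    stats.ttest_ind(trait[genotypes[:, m] == 1],
                    trait[genotypes[:, m] == 0]).pvalue
    for m in range(n_markers)
])
print("Top candidate marker:", p_values.argmin())  # recovers marker 42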

Dr. Sanu Arora, who had earlier led a study to clone disease resistance genes from Aegilops tauschii, said: "Previously we were restricted to exploring a very small subset of the genome for disease resistance, but in the current study, we have generated data and techniques to undertake an unbiased exploration of the species diversity."

Further experiments demonstrated the transfer of candidate genes for a subset of these traits into wheat using genetic transformation and conventional crossing -- facilitated by a library of synthetic wheats -- specially bred material which incorporates Aegilops tauschii genomes.

This publicly available library of synthetic wheats captures 70 per cent of the diversity present across all three known Aegilops tauschii lineages, enabling researchers to assess traits rapidly in a background of hexaploid wheats.

"Our study provides an end-to-end pipeline for rapid and systematic exploration of the Aegilops tauschii gene pool for improving modern bread wheat." says Dr Wulff.

Read more at Science Daily

Study shows how 1.5°C temperature rise can cause significant changes in coastal species

A temperature increase of around 1.5°C -- just under the maximum target agreed at the COP21 meeting in Paris in 2015 -- can have a marked impact on algae and animal species living on UK coastlines, new research has found.

The study, by ecologists at the University of Plymouth, examined how increases in rock surface temperature were affecting the quantity and behaviour of species commonly found on the shorelines of Devon and Cornwall.

It focused on two sites on the region's north coast (at Bude and Croyde) and two on the south coast (Bantham and South Milton Sands), all of which have deep gullies with both north-facing and south-facing surfaces.

Their findings showed the average annual temperature on the south-facing surfaces at low tide was 1.6°C higher than on those facing north, and that temperature extremes (i.e. > 30°C) were six-fold more frequent on south-facing aspects.

Across the four sites, these differences had a significant effect on species abundance, with 45 different species found on north-facing sites during the summer of 2018 compared to 30 on south-facing ones.

In winter, the figures were 42 and 24 respectively, while some species -- including the red seaweed Plumaria plumosa and sea cauliflower (Leathesia marina) -- were restricted to north-facing surfaces.

The different temperatures also had an impact on species' breeding patterns, with five times more dog whelk (Nucella lapillus) eggs found on north-facing surfaces than south-facing ones.

And while limpet reproduction generally occurred earlier on south-facing surfaces, these key grazers also exhibited greater levels of stress.

The research, published in Marine Environmental Research, is the first to explore how temperature and site geography together affect species abundance, physiology and reproductive behaviour in coastal areas.

Its authors say it provides evidence of how temperature variation at local scales can affect species, while also offering insight into how future changes in global temperatures might have a negative impact over the coming decades.

The research was led by Dr Axelle Amstutz as part of her PhD, working alongside Associate Professor of Marine Ecology Dr Louise Firth, Professor of Marine Zoology John Spicer, and Associate Professor in Plant-Animal Interactions Dr Mick Hanley.

Dr Hanley, the study's senior author, said: "We have all heard for some time about the importance of limiting average global temperature increases to 1.5°C, and it will undoubtedly be one of the key topics discussed at the forthcoming COP26 conference. This study shows the impact even that kind of increase could have on important species that contribute to the health and biodiversity of our planet. As such, it does add to overwhelming evidence of the threats posed by human-induced climate change."

Read more at Science Daily

New type of nerve cell discovered in the retina

Scientists at the John A. Moran Eye Center at the University of Utah have discovered a new type of nerve cell, or neuron, in the retina.

In the central nervous system, a complex circuitry of neurons communicates to relay sensory and motor information; so-called interneurons serve as intermediaries in the chain of communication. Publishing in the Proceedings of the National Academy of Sciences of the United States of America, a research team led by Ning Tian, PhD, identifies a previously unknown type of interneuron in the mammalian retina.

The discovery marks a notable development for the field as scientists work toward a better understanding of the central nervous system by identifying all classes of neurons and their connections.

"Based on its morphology, physiology, and genetic properties, this cell doesn't fit into the five classes of retinal neurons first identified more than 100 years ago," said Tian. "We propose they might belong to a new retinal neuron class by themselves."

The research team named their discovery the Campana cell after its shape, which resembles a hand bell. Campana cells relay visual signals from both types of light-sensing photoreceptors in the retina -- rods and cones -- but their precise purpose is the subject of ongoing research. Experiments showed Campana cells remain activated for an unusually long time -- as long as 30 seconds -- in response to a 10-millisecond light flash stimulation.

"In the brain, persistent firing cells are believed to be involved in memory and learning," said Tian. "Since Campana cells have a similar behavior, we theorize they could play a role in prompting a temporal 'memory' of a recent stimulation."

The published research study is: "An uncommon neuronal class conveys visual signals from rods and cones to retinal ganglion cells." Authors are: Brent K. Young, Charu Ramakrishnan, Tushar Ganjawala, Ping Wang, Karl Deisseroth, and Ning Tian.

Read more at Science Daily

Oct 31, 2021

The upside-down orbits of a multi-planetary system

When planets form, they usually continue their orbital evolution in the equatorial plane of their star. However, an international team, led by astronomers from the University of Geneva (UNIGE), Switzerland, has discovered that the exoplanets of a star in the constellation Pisces orbit in planes perpendicular to each other, with the innermost planet the only one still orbiting in the equatorial plane. Why so? This radically different configuration from our solar system could be due to the influence of a distant companion of the star that is as yet unknown. This study, published in the journal Astronomy & Astrophysics, was made possible by the extreme precision achieved by ESPRESSO and CHEOPS, two instruments whose development was led by Switzerland.

Theories of the origin of planetary systems predict that planets form in the equatorial plane of their star and continue to evolve there, unless disturbed by special events. This is the case in the solar system, where the planets lie close to the solar equatorial plane; such planets are said to be aligned with their star. However, a study showed in 2019 that two of the three planets around the star HD3167 are not aligned with it. HD3167c and HD3167d, two mini-Neptunes that orbit in 8.5 and 29.8 days respectively, actually pass over the star's poles, nearly 90 degrees from its equatorial plane.

Synergies between instruments

By re-observing this system with more efficient instruments, a team led by astronomers from UNIGE was able to measure the orientation of the orbital plane of the third planet, the super-Earth HD3167b, which orbits its star in less than a day (23 hours, to be exact). When a planet transits its star, the orientation of its orbit can be determined with a spectrograph, which measures the motion of the stellar regions occulted by the planet and thus reveals its trajectory. The smaller the planet, the more difficult this motion is to detect. It is therefore with ESPRESSO on one of the four 8.2-metre telescopes of the VLT in Chile that the researchers were able to determine the orbit of HD3167b, which happens to be aligned with the star and perpendicular to the orbital plane of its two siblings. "We needed a maximum of light and a very precise spectrograph to be able to measure the signal of such a small planet," comments Vincent Bourrier, researcher at the Department of Astronomy of the Faculty of Science of the UNIGE. "Two conditions that are met by the precision of ESPRESSO, combined with the collecting power of the VLT."
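
The signal involved here, known as the Rossiter-McLaughlin effect, scales roughly with the planet-to-star area ratio times the star's projected rotation speed, which is why a super-Earth is so demanding. A rough order-of-magnitude sketch with illustrative values, not the measured HD3167 parameters:

R_EARTH_OVER_R_SUN = 0.00916  # Earth radius expressed in solar radii

def rm_amplitude_ms(planet_radius_re, star_radius_rsun, v_sin_i_ms):
    """Approximate Rossiter-McLaughlin velocity anomaly in m/s:
    dV ~ (Rp / Rstar)**2 * v_sin_i."""
    ratio = (planet_radius_re * R_EARTH_OVER_R_SUN) / star_radius_rsun
    return ratio**2 * v_sin_i_ms

# A ~1.7 Earth-radius super-Earth transiting a roughly Sun-sized star
# rotating at ~2 km/s projected velocity:
print(rm_amplitude_ms(1.7, 0.9, 2000.0))  # ~0.6 m/s -- hence ESPRESSO on the VLT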

This result could not have been obtained without precise knowledge of when HD3167b transits its star; the transit time predicted in the literature was only accurate to within 20 minutes -- an eternity for a transit that lasts 97 minutes. The researchers therefore turned to the CHEOPS satellite consortium, whose main mission is precisely to measure transits with very high precision. "CHEOPS allowed us to know the time of transit with a precision better than one minute. This is a good illustration of the synergy there can be between different instruments, here CHEOPS and ESPRESSO, and the teams that operate them," says Christophe Lovis, a researcher in the Department of Astronomy of the UNIGE and member of the two consortia.

Read more at Science Daily

Is it worth trying to sway the most staunch climate deniers?

Thanks to algorithms that learn about social media users' content preferences, Facebook timelines, Twitter feeds, suggested YouTube videos, and other news streams can look startlingly different from one person's online account to the next. Media and communication experts often wrestle with how to rein in the forces that further polarize people with different views, especially people who sit on opposite sides of the political aisle. When it comes to online content that contains disinformation -- inaccurate messages or propaganda intended to deceive and influence readers -- why are some people more likely to believe falsehoods often spread via social media and the internet?

Arunima Krishna, a Boston University College of Communication researcher who studies public perceptions of controversial social issues, is studying the spread of disinformation, specifically related to climate science -- an issue that has been manipulated by climate change deniers for decades. In her latest study, Krishna surveyed 645 Americans about their beliefs on climate change -- whether or not those beliefs are informed by fact or fiction -- to assess their communication behaviors about climate change.

"I think a lot of folks don't see how close to home climate change is. Even though we're seeing climate refugees, [worsening] hurricanes, and other [natural] disasters, there is still a level of distance from the problem," says Krishna, a College of Communication assistant professor of public relations.

She points out that physical distance from the effects of climate change could be partly why some people find it easier to separate themselves from the climate crisis. Plus, climate solutions are often things many people don't readily want to do, like eating less meat, using less plastic, and buying fewer material goods. Fossil fuel companies and lobbyists for the industry have also worked extremely hard to keep the public from knowing the full extent of the damaging impact of burning fossil fuels, she says.

According to Krishna's survey of Americans, 7 in 10 people who are susceptible to believing climate disinformation self-identified as politically conservative. In contrast, 8 in 10 Americans who self-identified as liberal were found to be immune to disinformation about climate change. Those findings reinforce past research from the Yale Program on Climate Change Communication, which found liberals and Democrats are significantly more worried about climate change than conservatives and Republicans, and are more likely to believe humans are causing the climate crisis.

Krishna also detected a difference in age between those who were more susceptible to disinformation and those who weren't. More than half of the respondents immune to false information about climate were under 45. Those more receptive to climate disinformation were, on average, over the age of 46.

Diving deeper into the responses, Krishna categorized the survey results into four different groups. The first segment, made up of people she calls the "disinformation immune," have not accepted any disinformation about climate change and humans' role in it, and they likely never will. The second group, the "disinformation vulnerable," have negative attitudes about how humans are influencing climate. While they haven't yet accepted disinformation, some of their responses to facts about climate change -- as well as their attitudes and motivations -- indicate they could possibly believe climate disinformation in the future. The third group, the "disinformation receptive," have accepted false information about climate change already. Lastly, the fourth group, the "disinformation amplifying," is made up of people who hold extremely negative attitudes about climate change and doubt humans' role in accelerating it, have already accepted disinformation, and are highly motivated to spread the disinformation they believe.

"My study found that [disinformation amplifiers] are more likely to spread their opinions about climate change compared to everybody else in the survey," Krishna says. The amplifiers are known as what Krishna calls "lacuna publics," a term she coined in 2017 when she was researching vaccine hesitant groups. (The word "publics" refers to groups connected by issue-specific motivation, and "lacuna" means a gap in knowledge.) Though the disinformation amplifiers, or lacuna publics, are in the minority, they are different from groups that are disinformation vulnerable or receptive because of their willingness to spread disinformation.

The United States has more climate skeptics than anywhere else in the world, Krishna says, but their ranks have started to shrink. Climate scientists around the world have found unequivocally that the more we continue to emit heat-trapping greenhouse gases into the atmosphere, the worse the consequences will be for humans, most species, and ecosystems on Earth.

Though there is no single solution to stopping the spread of climate disinformation, Krishna emphasizes the importance of engaging with people most vulnerable to believing disinformation. Lacuna publics, or amplifiers, however, might be difficult or impossible to sway.

Read more at Science Daily