Apr 3, 2021

From stardust to pale blue dot: Carbon's interstellar journey to Earth

We are made of stardust, the saying goes, and a pair of studies including University of Michigan research finds that the saying may be truer than we previously thought.

The first study, led by U-M researcher Jie (Jackie) Li and published in Science Advances, finds that most of the carbon on Earth was likely delivered from the interstellar medium, the material that exists in space between stars in a galaxy. This likely happened well after the protoplanetary disk, the cloud of dust and gas that circled our young sun and contained the building blocks of the planets, formed and warmed up.

Carbon was also likely sequestered into solids within one million years of the sun's birth -- which means that carbon, the backbone of life on Earth, survived an interstellar journey to our planet.

Previously, researchers thought carbon in the Earth came from molecules that were initially present in nebular gas, which then accreted into a rocky planet when the gases were cool enough for the molecules to precipitate. Li and her team, which includes U-M astronomer Edwin Bergin, Geoffrey Blake of the California Institute of Technology, Fred Ciesla of the University of Chicago and Marc Hirschmann of the University of Minnesota, point out in this study that the gas molecules that carry carbon wouldn't be available to build the Earth because once carbon vaporizes, it does not condense back into a solid.

"The condensation model has been widely used for decades. It assumes that during the formation of the sun, all of the planet's elements got vaporized, and as the disk cooled, some of these gases condensed and supplied chemical ingredients to solid bodies. But that doesn't work for carbon," said Li, a professor in the U-M Department of Earth and Environmental Sciences.

Much of the carbon was delivered to the disk in the form of organic molecules. However, when carbon is vaporized, it produces much more volatile species that require very low temperatures to form solids. More importantly, carbon does not condense back into an organic form. Because of this, Li and her team inferred that most of Earth's carbon was likely inherited directly from the interstellar medium, avoiding vaporization entirely.

To better understand how Earth acquired its carbon, Li estimated the maximum amount of carbon Earth could contain. To do this, she compared how quickly a seismic wave travels through the core to the known sound velocities of the core. This told the researchers that carbon likely makes up less than half a percent of Earth's mass. Understanding the upper bounds of how much carbon the Earth might contain tells the researchers information about when the carbon might have been delivered here.
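The half-percent ceiling can be turned into an absolute mass with one line of arithmetic. A minimal sketch in Python, using the standard value for Earth's mass (the 0.5% cap is the article's figure):

```python
# Back-of-the-envelope check of the study's upper bound: if carbon makes up
# at most half a percent of Earth's mass, how much carbon is that in
# absolute terms?
EARTH_MASS_KG = 5.97e24  # standard value for Earth's mass

carbon_cap_kg = 0.005 * EARTH_MASS_KG
print(f"Upper bound on Earth's carbon budget: {carbon_cap_kg:.2e} kg")
```

Even this "less than half a percent" ceiling corresponds to roughly 3 x 10^22 kg of carbon, which is the kind of bound the researchers use to constrain delivery scenarios.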

"We asked a different question: We asked how much carbon could you stuff in the Earth's core and still be consistent with all the constraints," said Bergin, professor and chair of the U-M Department of Astronomy. "There's uncertainty here. Let's embrace the uncertainty to ask what are the true upper bounds for how much carbon is very deep in the Earth, and that will tell us the true landscape we're within."

A planet's carbon must exist in the right proportion to support life as we know it. Too much carbon, and the Earth's atmosphere would be like Venus, trapping heat from the sun and maintaining a temperature of about 880 degrees Fahrenheit. Too little carbon, and Earth would resemble Mars: an inhospitable place unable to support water-based life, with temperatures around minus 60.

In a second study by the same group of authors, but led by Hirschmann of the University of Minnesota, the researchers looked at how the small precursors of planets, known as planetesimals, retain carbon during their early formation. By examining the metallic cores of these bodies, now preserved as iron meteorites, they found that during this key step of planetary origin, much of the carbon must be lost as the planetesimals melt, form cores and lose gas. This upends previous thinking, Hirschmann says.

"Most models have the carbon and other life-essential materials such as water and nitrogen going from the nebula into primitive rocky bodies, and these are then delivered to growing planets such as Earth or Mars," said Hirschmann, professor of earth and environmental sciences. "But this skips a key step, in which the planetesimals lose much of their carbon before they accrete to the planets."

Hirschmann's study was recently published in Proceedings of the National Academy of Sciences.

"The planet needs carbon to regulate its climate and allow life to exist, but it's a very delicate thing," Bergin said. "You don't want to have too little, but you don't want to have too much."

Bergin says the two studies describe different aspects of carbon loss -- and suggest that carbon loss is a central part of constructing the Earth as a habitable planet.

"Answering whether or not Earth-like planets exist elsewhere can only be achieved by working at the intersection of disciplines like astronomy and geochemistry," said Ciesla, a U. of C. professor of geophysical sciences. "While approaches and the specific questions that researchers work to answer differ across the fields, building a coherent story requires identifying topics of mutual interest and finding ways to bridge the intellectual gaps between them. Doing so is challenging, but the effort is both stimulating and rewarding."

Blake, a co-author on both studies and a Caltech professor of cosmochemistry and planetary science, and of chemistry, says this kind of interdisciplinary work is critical.

"Over the history of our galaxy alone, rocky planets like the Earth or a bit larger have been assembled hundreds of millions of times around stars like the Sun," he said. "Can we extend this work to examine carbon loss in planetary systems more broadly? Such research will take a diverse community of scholars."

Read more at Science Daily

450-million-year-old sea creatures had a leg up on breathing

A new study has found the first evidence of sophisticated breathing organs in 450-million-year-old sea creatures. Contrary to previous thought, trilobites were leg breathers, with structures resembling gills hanging off their thighs.

Trilobites were a group of marine animals with half-moon-like heads that resembled horseshoe crabs, and they were wildly successful in terms of evolution. Though they are now extinct, they survived for more than 250 million years -- longer than the dinosaurs.

Thanks to new technologies and an extremely rare set of fossils, scientists from UC Riverside can now show that trilobites breathed oxygen and explain how they did so. Published in the journal Science Advances, these findings help piece together the puzzle of early animal evolution.

"Up until now, scientists have compared the upper branch of the trilobite leg to the non-respiratory upper branch in crustaceans, but our paper shows, for the first time, that the upper branch functioned as a gill," said Jin-Bo Hou, a UCR paleontology doctoral student who led the research.

Trilobites were among the oldest animals on Earth, and this work helps situate them more securely on the evolutionary tree, between older arthropods, a large group of animals with exoskeletons, and crustaceans.

The research was possible, in part, because of unusually preserved fossil specimens. More than 22,000 trilobite species have been discovered, but the soft parts of the animals are visible in only about two dozen.

"These were preserved in pyrite -- fool's gold -- but it's more important than gold to us, because it's key to understanding these ancient structures," said UCR geology professor and paper co-author Nigel Hughes.

A CT scanner was able to read the differences in density between the pyrite and the surrounding rock and helped create three-dimensional models of these rarely seen gill structures.

"It allowed us to see the fossil without having to do a lot of drilling and grinding away at the rock covering the specimen," said paleontologist Melanie Hopkins, a research team member at the American Museum of Natural History.

"This way we could get a view that would even be hard to see under a microscope -- really small trilobite anatomical structures on the order of 10 to 30 microns wide," she said. For comparison, a human hair is roughly 100 microns thick.

Though these specimens were first described in the late 1800s and others have used CT scans to examine them, this is the first study to use the technology to examine this part of the animal.

The researchers could see how blood would have filtered through chambers in these delicate structures, picking up oxygen as it moved. They appear much the same as gills in modern marine arthropods like crabs and lobsters.

Comparing the specimens in pyrite to another trilobite species gave the team additional detail about how the filaments were arranged relative to one another, and to the legs.

Most trilobites scavenged the ocean floor, using spikes on their lower legs to catch and grind prey. Above those parts, on the upper branch of the limbs, were these additional structures that some believed were meant to help with swimming or digging.

"In the past, there was some debate about the purpose of these structures because the upper leg isn't a great location for breathing apparatus," Hopkins said. "You'd think it would be easy for those filaments to get clogged with sediment where they are. It's an open question why they evolved the structure in that place on their bodies."

The Hughes lab uses fossils to answer questions about how life developed in response to changes in Earth's atmosphere. Roughly 540 million years ago, there was an explosive diversification in the variety and complexity of animals living in the oceans.

Read more at Science Daily

Apr 2, 2021

First-of-its-kind mechanical model simulates bending of mammalian whiskers

Researchers have developed a new mechanical model that simulates how whiskers bend within a follicle in response to an external force, paving the way toward better understanding of how whiskers contribute to mammals' sense of touch. Yifu Luo and Mitra Hartmann of Northwestern University and colleagues present these findings in the open-access journal PLOS Computational Biology.

With the exception of some primates, most mammals use whiskers to explore their environment through the sense of touch. Whiskers have no sensors along their length, but when an external force bends a whisker, that deformation extends into the follicle at the base of the whisker, where the whisker pushes or pulls on sensor cells, triggering touch signals in the nervous system.

Few previous studies have examined how whiskers deform within follicles in order to impinge on the sensor cells -- mechanoreceptors -- inside. To better understand this process, Luo and colleagues drew on data from experimental studies of whisker follicles to create the first mechanical model capable of simulating whisker deformation within follicles.

The simulations suggest that whisker deformation within follicles most likely occurs in an "S" shape, although future experimental data may show that the deformation is "C" shaped. The researchers demonstrate that these shape estimates can be used to predict how whiskers push and pull on different kinds of mechanoreceptors located in different parts of the follicle, influencing touch signals sent to the brain.

The new model applies to both passive touch and active "whisking," when an animal uses muscles to move its whiskers. The simulations suggest that, during active whisking, the tactile sensitivity of the whisker system is enhanced by increased blood pressure in the follicle and by increased stiffness of follicular muscle and tissue structures.

"It is exciting to use simulations, constrained by anatomical observations, to gain insights into biological processes that cannot be directly measured experimentally," Hartmann says. "The work also underscores just how important mechanics are to understanding the sensory signals that the brain has evolved to process."

Future research will be needed to refine the model, both computationally and by incorporating new experimental data.

From Science Daily

Distant, spiralling stars give clues to the forces that bind sub-atomic particles

Space scientists at the University of Bath in the UK have found a new way to probe the internal structure of neutron stars, giving nuclear physicists a novel tool for studying the structures that make up matter at an atomic level.

Neutron stars are dead stars that have been compressed by gravity to the size of small cities. They contain the most extreme matter in the universe, meaning they are the densest objects in existence (for comparison, if Earth were compressed to the density of a neutron star, it would measure just a few hundred meters in diameter, and all humans would fit in a teaspoon). This makes neutron stars unique natural laboratories for nuclear physicists, whose understanding of the force that binds sub-atomic particles is limited to their work on Earth-bound atomic nuclei. Studying how this force behaves under more extreme conditions offers a way to deepen their knowledge.
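The parenthetical claim about Earth's size at neutron-star density can be checked with a few lines of arithmetic. A minimal sketch, assuming a representative core-averaged density of about 5 x 10^17 kg/m^3 (this density value is an assumption for illustration, not a figure from the article):

```python
import math

# Compress Earth to a typical neutron-star density and compute the
# diameter of the resulting sphere.
EARTH_MASS_KG = 5.97e24
NEUTRON_STAR_DENSITY = 5e17  # kg/m^3, rough representative figure (assumed)

volume = EARTH_MASS_KG / NEUTRON_STAR_DENSITY        # m^3 of compressed Earth
radius = (3 * volume / (4 * math.pi)) ** (1 / 3)     # sphere radius in meters
print(f"Diameter of compressed Earth: {2 * radius:.0f} m")
```

The result lands in the hundreds of meters, consistent with the article's "just a few hundred meters in diameter."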

Step in astrophysicists, who look to distant galaxies to unravel the mysteries of physics.

In a study described in the Monthly Notices of the Royal Astronomical Society, Bath astrophysicists have found that the action of two neutron stars moving ever faster as they spiral towards a violent collision gives a clue to the composition of neutron-star material. From this information, nuclear physicists will be in a stronger position to calculate the forces that determine the structure of all matter.

RESONANCE

It is through the phenomenon of resonance that the Bath team has made its discovery. Resonance occurs when force is applied to an object at its natural frequency, generating a large, often catastrophic, vibrational motion. A well-known example of resonance is found when an opera singer shatters a glass by singing loudly enough at a frequency that matches the oscillation modes of the glass.
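The resonance behavior described above can be sketched with the textbook formula for a driven, damped harmonic oscillator: the steady-state amplitude blows up as the driving frequency approaches the natural frequency. All parameter values here are illustrative, not drawn from the study:

```python
import math

def amplitude(drive_freq, natural_freq=1.0, damping=0.05, force=1.0):
    """Steady-state amplitude of a driven, lightly damped oscillator."""
    w, w0 = drive_freq, natural_freq
    return force / math.sqrt((w0**2 - w**2) ** 2 + (2 * damping * w) ** 2)

# Driving at the natural frequency dwarfs the off-resonance response --
# the same effect that lets a matched note shatter a glass.
on_resonance = amplitude(1.0)
off_resonance = amplitude(0.5)
print(f"On resonance: {on_resonance:.2f}, off resonance: {off_resonance:.2f}")
```

With light damping, the on-resonance amplitude here is several times the off-resonance one, which is why a force applied at just the right frequency can be catastrophic.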

When a pair of in-spiralling neutron stars reaches a state of resonance, their solid crust -- which is thought to be 10 billion times stronger than steel -- shatters. This results in the release of a bright burst of gamma-rays (called a Resonant Shattering Flare) that can be seen by satellites. The in-spiralling stars also release gravitational waves that can be detected by instruments on Earth. The Bath researchers found that by measuring both the flare and the gravitational-wave signal, they can calculate the 'symmetry energy' of the neutron star.

Symmetry energy is one of the properties of nuclear matter. It controls the ratio of the sub-atomic particles (protons and neutrons) that make up a nucleus, and how this ratio changes when subjected to the extreme densities found in neutron stars. A reading for symmetry energy would therefore give a strong indication of the makeup of neutron stars, and by extension, the processes by which all protons and neutrons couple, and the forces that determine the structure of all matter.

The researchers stress that measurements obtained by studying the resonance of neutron stars using a combination of gamma-rays and gravitational-waves would be complementary to, rather than a replacement for, the lab experiments of nuclear physicists.

"By studying neutron stars, and the cataclysmic final motions of these massive objects, we're able to understand something about the tiny, tiny nuclei that make up extremely dense matter," said Bath astrophysicist Dr David Tsang. "The enormous difference in scale makes this fascinating."

Astrophysics PhD student Duncan Neill, who led the research, added: "I like that this work looks at the same thing being studied by nuclear physicists. They look at tiny particles and we astrophysicists look at objects and events from many millions of light years away. We are looking at the same thing in a completely different way."

Dr Will Newton, astrophysicist at the Texas A&M University-Commerce and project collaborator, said: "Though the force that binds quarks into neutrons and protons is known, how it actually works when large numbers of neutrons and protons come together is not well understood. The quest to improve this understanding is helped by experimental nuclear physics data, but all the nuclei we probe on Earth have similar numbers of neutrons and protons bound together at roughly the same density.

Read more at Science Daily

Sugar not so nice for your child's brain development

Sugar practically screams from the shelves of your grocery store, especially those products marketed to kids.

Children are the highest consumers of added sugar, even as high-sugar diets have been linked to health effects like obesity and heart disease and even impaired memory function.

However, less is known about how high sugar consumption during childhood affects the development of the brain, specifically a region known to be critically important for learning and memory called the hippocampus.

New research led by a University of Georgia faculty member in collaboration with a University of Southern California research group has shown in a rodent model that daily consumption of sugar-sweetened beverages during adolescence impairs performance on a learning and memory task during adulthood. The group further showed that changes in the bacteria in the gut may be the key to the sugar-induced memory impairment.

Supporting this possibility, they found that similar memory deficits were observed even when the bacteria, called Parabacteroides, were experimentally enriched in the guts of animals that had never consumed sugar.

"Early life sugar increased Parabacteroides levels, and the higher the levels of Parabacteroides, the worse the animals did in the task," said Emily Noble, assistant professor in the UGA College of Family and Consumer Sciences who served as first author on the paper. "We found that the bacteria alone was sufficient to impair memory in the same way as sugar, but it also impaired other types of memory functions as well."

Guidelines recommend limiting sugar

The Dietary Guidelines for Americans, a joint publication of the U.S. Departments of Agriculture and of Health and Human Services, recommends limiting added sugars to less than 10 percent of calories per day.

Data from the Centers for Disease Control and Prevention show Americans between the ages of 9 and 18 exceed that recommendation, with the bulk of the calories coming from sugar-sweetened beverages.
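The guideline's 10 percent cap translates into a concrete daily gram limit with simple arithmetic. A sketch assuming a 2,000-calorie diet and the standard 4 kcal per gram of sugar; the per-soda sugar content is an illustrative assumption, not a figure from the article:

```python
# Convert the "less than 10% of calories from added sugars" guideline
# into grams per day, and compare against a typical sugary drink.
DAILY_CALORIES = 2000          # reference diet (assumed)
SUGAR_KCAL_PER_GRAM = 4        # standard energy content of sugar
SODA_SUGAR_GRAMS = 39          # typical 12-oz sugar-sweetened soda (assumed)

sugar_cap_grams = 0.10 * DAILY_CALORIES / SUGAR_KCAL_PER_GRAM
print(f"Added-sugar cap: {sugar_cap_grams:.0f} g/day")
print(f"Sodas needed to exceed it: {sugar_cap_grams / SODA_SUGAR_GRAMS:.2f}")
```

On those assumptions the cap works out to about 50 grams a day, which a little more than one typical soda already exceeds -- helping explain why sugar-sweetened beverages dominate the excess.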

Considering the role the hippocampus plays in a variety of cognitive functions and the fact that the area is still developing into late adolescence, researchers sought to understand more about its vulnerability to a high-sugar diet via gut microbiota.

Juvenile rats were given their normal chow and an 11% sugar solution, which is comparable to commercially available sugar-sweetened beverages.

Researchers then had the rats perform a hippocampus-dependent memory task designed to measure episodic contextual memory, or remembering the context where they had seen a familiar object before.

"We found that rats that consumed sugar in early life had an impaired capacity to discriminate that an object was novel to a specific context, a task the rats that were not given sugar were able to do," Noble said.

A second memory task measured basic recognition memory, a hippocampal-independent memory function that involves the animals' ability to recognize something they had seen previously.

In this task, sugar had no effect on the animals' recognition memory.

"Early life sugar consumption seems to selectively impair their hippocampal learning and memory," Noble said.

Additional analyses determined that high sugar consumption led to elevated levels of Parabacteroides in the gut microbiome, the more than 100 trillion microorganisms in the gastrointestinal tract that play a role in human health and disease.

To better identify the mechanism by which the bacteria impacted memory and learning, researchers experimentally increased levels of Parabacteroides in the microbiome of rats that had never consumed sugar. Those animals showed impairments in both hippocampal dependent and hippocampal-independent memory tasks.

"(The bacteria) induced some cognitive deficits on its own," Noble said.

Noble said future research is needed to better identify specific pathways by which this gut-brain signaling operates.

Read more at Science Daily

Toddler TV time not to blame for attention problems

A comprehensive review published in the journal Psychological Science re-examines previous work that claimed to show a direct link between early screen time and attention problems in children. Although other studies do not reflect these findings, the earlier research continues to be widely reported by the media.

"The findings from the original study, upon further scrutiny, are not borne out. We found that there is still no evidence that TV, by itself, causes ADHD or any kind of attention problems in young children," said Wallace E. Dixon, Jr., a professor of psychology and department head at East Tennessee State University and coauthor of the study. "Our research also tells us that it's important to be skeptical of earth-shattering findings that come in the form of 'something that everybody is doing harms our children.' Extraordinary claims require extraordinary evidence.

"What excites us about the research is that we can ease up on blaming parents or making them feel guilty for letting their children watch television when they are very young," said Dixon.

The newly reported research involved looking at the same data as the 2004 study and using multiverse analyses -- a technique that involves asking a research question hundreds of different ways to determine if the answers are similar each time. This method was used to create 848 analyses to find out if early TV viewing causes later attention problems. A vast majority of results showed no link between the two. The few that did, the authors believe, reflect some oddities in the data set that are not likely to represent the real world.
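The multiverse idea is just systematic enumeration: cross every defensible analysis choice with every other and run the same question once per combination. A toy sketch in Python; the variable names and choices below are hypothetical stand-ins, not the 848 specifications used in the actual study:

```python
from itertools import product

# Hypothetical analysis choices; a real multiverse enumerates every
# defensible combination of such decisions.
covariate_sets = ["none", "demographics", "demographics+parenting"]
outcome_cutoffs = [1.0, 1.5, 2.0]                 # z-score thresholds (assumed)
exposure_windows = ["age 1", "age 3", "ages 1-3"]

specifications = list(product(covariate_sets, outcome_cutoffs, exposure_windows))
print(f"{len(specifications)} specifications in this toy multiverse")

def shows_link(spec):
    # Placeholder: a real analysis would fit one model per specification
    # and record whether the TV-attention association is significant.
    return False

significant = sum(shows_link(s) for s in specifications)
print(f"{significant} of {len(specifications)} specifications show a link")
```

If, as in the published reanalysis, the vast majority of specifications show no effect, the few that do are more plausibly quirks of particular analysis choices than a real-world signal.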

From Science Daily

Apr 1, 2021

First X-rays from Uranus discovered

Astronomers have detected X-rays from Uranus for the first time, using NASA's Chandra X-ray Observatory. This result may help scientists learn more about this enigmatic ice giant planet in our solar system.

Uranus is the seventh planet from the Sun and has two sets of rings around its equator. The planet, which has four times the diameter of Earth, rotates on its side, making it different from all other planets in the solar system. Since Voyager 2 was the only spacecraft to ever fly by Uranus, astronomers currently rely on telescopes much closer to Earth, like Chandra and the Hubble Space Telescope, to learn about this distant and cold planet that is made up almost entirely of hydrogen and helium.

In the new study, researchers used Chandra observations of Uranus taken in 2002 and then again in 2017. They saw a clear detection of X-rays from the first observation, just analyzed recently, and a possible flare of X-rays in those obtained fifteen years later. The main graphic shows a Chandra X-ray image of Uranus from 2002 (in pink) superimposed on an optical image from the Keck-I Telescope obtained in a separate study in 2004. The latter shows the planet at approximately the same orientation as it was during the 2002 Chandra observations.

What could cause Uranus to emit X-rays? The answer: mainly the Sun. Astronomers have observed that both Jupiter and Saturn scatter X-ray light given off by the Sun, similar to how Earth's atmosphere scatters the Sun's light. While the authors of the new Uranus study initially expected that most of the X-rays detected would also be from scattering, there are tantalizing hints that at least one other source of X-rays is present. If further observations confirm this, it could have intriguing implications for understanding Uranus.

One possibility is that the rings of Uranus are producing X-rays themselves, which is the case for Saturn's rings. Uranus is surrounded by charged particles such as electrons and protons in its nearby space environment. If these energetic particles collide with the rings, they could cause the rings to glow in X-rays. Another possibility is that at least some of the X-rays come from auroras on Uranus, a phenomenon that has previously been observed on this planet at other wavelengths.

On Earth, we can see colorful light shows in the sky called auroras, which happen when high-energy particles interact with the atmosphere. X-rays are emitted in Earth's auroras, produced by energetic electrons after they travel down the planet's magnetic field lines to its poles and are slowed down by the atmosphere. Jupiter has auroras, too. The X-rays from auroras on Jupiter come from two sources: electrons traveling down magnetic field lines, as on Earth, and positively charged atoms and molecules raining down at Jupiter's polar regions. However, scientists are less certain about what causes auroras on Uranus. Chandra's observations may help figure out this mystery.

Read more at Science Daily

Ancient meteoritic impact over Antarctica 430,000 years ago

A research team of international space scientists, led by Dr Matthias van Ginneken from the University of Kent's School of Physical Sciences, has found new evidence of a low-altitude meteoritic touchdown event reaching the Antarctic ice sheet 430,000 years ago.

Extra-terrestrial particles (condensation spherules) recovered on the summit of Walnumfjellet (WN) within the Sør Rondane Mountains, Queen Maud Land, East Antarctica, indicate an unusual touchdown event where a jet of melted and vaporised meteoritic material resulting from the atmospheric entry of an asteroid at least 100 m in size reached the surface at high velocity.

This type of explosion caused by a single-asteroid impact is described as intermediate, as it is larger than an airburst, but smaller than an impact cratering event.

The chondritic bulk major- and trace-element chemistry and high nickel content of the debris demonstrate the extra-terrestrial nature of the recovered particles. Their unique oxygen isotopic signatures indicate that they interacted with oxygen derived from the Antarctic ice sheet during their formation in the impact plume.

The findings indicate an impact much more hazardous than the Tunguska and Chelyabinsk events over Russia in 1908 and 2013, respectively.

This research, published in Science Advances, marks an important discovery for the geological record, where evidence of such events is scarce. This is primarily due to the difficulty of identifying and characterising impact particles.

The study highlights the importance of reassessing the threat of medium-sized asteroids, as it is likely that similar touchdown events will produce similar particles. Such an event would be entirely destructive over a large area, corresponding to the area of interaction between the hot jet and the ground.

Dr van Ginneken said: 'To complete Earth's asteroid impact record, we recommend that future studies should focus on the identification of similar events on different targets, such as rocky or shallow oceanic basements, as the Antarctic ice sheet only covers 9% of Earth's land surface. Our research may also prove useful for the identification of these events in deep sea sediment cores and, if plume expansion reaches landmasses, the sedimentary record.

'While touchdown events may not threaten human activity if occurring over Antarctica, if it was to take place above a densely populated area, it would result in millions of casualties and severe damages over distances of up to hundreds of kilometres.'

Read more at Science Daily

Multilingual people have an advantage over those fluent in only two languages

Multilingual people have trained their brains to learn languages, making it easier to acquire more new languages after mastering a second or third. In addition to demystifying the seemingly herculean genius of multilinguals, researchers say these results provide some of the first neuroscientific evidence that language skills are additive, a theory known as the cumulative-enhancement model of language acquisition.

"The traditional idea is, if you understand bilinguals, you can use those same details to understand multilinguals. We rigorously checked that possibility with this research and saw multilinguals' language acquisition skills are not equivalent, but superior to those of bilinguals," said Professor Kuniyoshi L. Sakai from the University of Tokyo, an expert in the neuroscience of language and last author of the research study recently published in Scientific Reports. This joint research project includes collaboration with Professor Suzanne Flynn from the Massachusetts Institute of Technology (MIT), a specialist in linguistics and multilanguage acquisition, who first proposed the cumulative-enhancement model.

Neuroscientists measured brain activity while 21 bilingual and 28 multilingual adult volunteers tried to identify words and sentences in Kazakh, a language brand new to them.

All participants were native speakers of Japanese whose second language was English. Most of the multilingual participants had learned Spanish as a third language, but others had learned Chinese, Korean, Russian or German. Some knew up to five languages.

Fluency in multiple languages requires command of different sounds, vocabularies, sentence structures and grammar rules. Sentences in English and Spanish are usually structured with the noun or verb at the start of a phrase, but Japanese and Kazakh consistently place nouns or verbs at the end of a phrase. English, Spanish and Kazakh grammars require subject-verb agreement (she walks, they walk), but Japanese grammar does not.

Instead of grammar drills or conversation skills in a classroom, researchers simulated a more natural language learning environment where volunteers had to figure out the fundamentals of a new language purely by listening. Volunteers listened to recordings of individual Kazakh words or short sentences including those words while watching a screen with plus or minus symbols to signal if the sentence was grammatically correct or not. Volunteers were given a series of four increasingly difficult listening tests while researchers measured their brain activity using functional magnetic resonance imaging (fMRI).

In the simplest test, volunteers had to determine if they were hearing a word from the earlier learning session or if it was a grammatically different version of the same word; for example: run/ran or take/takes. In the next test levels, volunteers listened to example sentences and were asked if the sentences were grammatically correct and to decipher sentence structures by identifying noun-verb pairs. For example, "We understood that John thought," is translated in Kazakh as "Biz John oyladï dep tüsindik." The sentence would be grammatically incorrect if volunteers heard tüsindi instead of tüsindik. The correct noun-verb pairs are we understood (Biz tüsindik) and John thought (John oyladï).

Volunteers could retake the learning session and repeat the test an unlimited number of times until they passed and progressed to the next level of difficulty.

Multilingual participants who were more fluent in their second and third languages were able to pass the Kazakh tests with fewer repeated learning sessions than their less-fluent multilingual peers. More-fluent multilinguals also became faster at choosing an answer as they progressed from the third to fourth test level, a sign of increased confidence and that knowledge acquired during easier tests was successfully transferred to higher levels.

"For multilinguals, in Kazakh, the pattern of brain activation is similar to that for bilinguals, but the activation is much more sensitive, and much faster," said Sakai.

The pattern of brain activation in bilingual and multilingual volunteers fits current understanding of how the brain understands language, specifically that portions of the left frontal lobe become more active when understanding both the content and meaning of a sentence. When learning a second language, it is normal for the corresponding areas on the right side of the brain to become active and assist in efforts to understand.

Multilingual volunteers had no detectable right-side activation during the initial, simple Kazakh grammar test level, but brain scans showed strong activity in those assisting areas of bilingual volunteers' brains.

Researchers also detected differences in the basal ganglia, often considered a more fundamental area of the brain. Bilingual volunteers' basal ganglia had low levels of activation that spiked as they progressed through the test and then returned to a low level at the start of the next test. Multilingual volunteers began the first test level with similarly low basal ganglia activity that spiked and then remained high throughout the subsequent test levels.

The UTokyo-MIT research team says this activation pattern in the basal ganglia shows that multilingual people can make generalizations and build on prior knowledge, rather than approach each new grammar rule as a separate idea to understand from scratch.

Prior studies by Sakai and others have found a three-part timeline of changes in brain activation while learning a new language: an initial increase, a high plateau and a decline to the same low level of activation required to understand the native language.

These new results confirm that pattern in multilinguals and support the possibility that prior experience progressing through those stages of language learning makes it easier to do again, supporting the cumulative-enhancement model of language acquisition.

Read more at Science Daily

Undetected coronavirus variant was in at least 15 countries before its discovery, study finds

 A highly contagious SARS-CoV-2 variant was already spreading unnoticed in the United States by October 2020, according to a new study from researchers with The University of Texas at Austin COVID-19 Modeling Consortium. Scientists first identified the variant in early December in the United Kingdom, where the highly contagious and more lethal lineage is thought to have originated. The study, published in an early-release version by the journal Emerging Infectious Diseases, provides evidence that the coronavirus variant B117 (501Y) had spread across the globe undetected for months before scientists discovered it.

"By the time we learned about the U.K. variant in December, it was already silently spreading across the globe," said Lauren Ancel Meyers, the director of the COVID-19 Modeling Consortium at The University of Texas at Austin and a professor of integrative biology. "We estimate that the B117 variant probably arrived in the U.S. by October of 2020, two months before we knew it existed."

Analyzing data from 15 countries, researchers estimated the chance that travelers from the U.K. had introduced the variant into each of those countries between Sept. 22 and Dec. 7, 2020. They found that the variant had almost certainly arrived in all 15 countries by mid-November. In the U.S., it had probably arrived by mid-October.

"This study highlights the importance of laboratory surveillance," Meyers said. "Rapid and extensive sequencing of virus samples is critical for early detection and tracking of new variants of concern."

In conjunction with the paper's publication, consortium members developed a new tool that decision-makers anywhere in the United States can use in planning for genetic sequencing that helps to detect the presence of variants. To help the U.S. expand national surveillance of variants, the new online calculator indicates the number of virus samples that must be sequenced in order to detect new variants when they first emerge. For example, if the goal is to detect an emerging variant by the time it is causing 1 out of every 1,000 new COVID-19 infections, approximately 3,000 SARS-CoV-2 positive specimens per week need to be sequenced.
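The arithmetic behind such a threshold can be sketched with a simple binomial detection model. This is a hypothetical illustration, not the consortium's actual calculator, and the 95 percent detection probability is an assumed target:

```python
import math

def specimens_needed(variant_share, detect_prob=0.95):
    """Number of sequenced specimens needed so that at least one of them
    carries the variant with probability `detect_prob`, assuming specimens
    are sampled independently of variant status."""
    # P(missing the variant in every specimen) = (1 - variant_share)**n,
    # so solve (1 - variant_share)**n <= 1 - detect_prob for n.
    return math.ceil(math.log(1 - detect_prob) / math.log(1 - variant_share))

# Variant causing 1 in every 1,000 new infections:
print(specimens_needed(1 / 1000))  # ~3,000 specimens, matching the figure above
```

Under these assumptions the model reproduces the roughly 3,000-specimens-per-week figure quoted in the text.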

"Health officials are looking for better ways to manage the unpredictability of this virus and future variants," said Spencer Woody, a postdoctoral fellow at the UT COVID-19 Modeling Consortium. "Our new calculator determines how many positive SARS-CoV-2 specimens must be sequenced to ensure that new threats are identified as soon as they start spreading."

He explained that the calculator has a second feature. "It also helps labs figure out how quickly they will detect new variants, given their current sequencing capacity."

"We created this tool to support federal, state and local health officials in building credible early warning systems for this and future pandemic threats," Meyers said.

In addition to Meyers, authors of the Emerging Infectious Diseases paper are Zhanwei Du, Bingyi Yang, Sheikh Taslim Ali, Tim K. Tsang, Songwei Shan, Peng Wu, Eric H.Y. Lau and Benjamin J. Cowling of the WHO Collaborating Centre for Infectious Disease Epidemiology and Control in Hong Kong and Lin Wang of the University of Cambridge.

Read more at Science Daily

Mar 31, 2021

Preconditions for life present 3.5 billion years ago

Microbial life already had the necessary conditions to exist on our planet 3.5 billion years ago. This was the conclusion reached by a research team after studying microscopic fluid inclusions in barium sulfate (barite) from the Dresser Mine in Marble Bar, Australia. In their publication "Ingredients for microbial life preserved in 3.5-billion-year-old fluid inclusions," the researchers suggest that organic carbon compounds which could serve as nutrients for microbial life already existed at this time. The study by first author Helge Mißbach (University of Göttingen, Germany) was published in the journal Nature Communications. Co-author Volker Lüders from the GFZ German Research Center for Geosciences carried out carbon isotope analyses on gases in fluid inclusions.

Fluid inclusions show potential for prehistoric life

Lüders assesses the results as surprising, although he cautions against misinterpreting them. "One should not take the study results as direct evidence for early life," says the GFZ researcher. Rather, the findings on the 3.5-billion-year-old fluids showed the existence of the potential for just such prehistoric life. Whether life actually arose from it at that time cannot be determined. Based on the results, "we now know a point in time from which we can say it would have been possible," explains Lüders.

Australian barites as geo-archives

Fluid inclusions in minerals are microscopic geo-archives for the migration of hot solutions and gases in the Earth's crust. Primary fluid inclusions form directly during mineral growth and provide important information about the conditions under which they formed, including pressure, temperature and the composition of the solution. In addition to an aqueous phase, fluid inclusions can also contain gases whose chemistry can persist for billions of years. The inclusions investigated in this study originate from the Dresser Mine in Australia and were trapped during crystallisation of their host mineral, barium sulphate (barite). The research team analysed them extensively for their formation conditions, biosignatures and carbon isotopes.

In the course of the analyses, it turned out that the inclusions contained organic compounds that could have fuelled primordial metabolism -- and thus energy sources for life. The results of Lüders' carbon isotope analysis provided additional evidence for different carbon sources. While the gas-rich inclusions of gray barites contained traces of magmatic carbon, clear evidence of an organic origin of the carbon was found in the fluid inclusions of black barites.

Read more at Science Daily

Early Earth's hot mantle may have led to Archean 'water world'

 A vast global ocean may have covered early Earth during the early Archean eon, 4 to 3.2 billion years ago, a side effect of having a hotter mantle than today, according to new research.

The new findings challenge earlier assumptions that the size of the Earth's global ocean has remained constant over time and offer clues to how its size may have changed throughout geologic time, according to the study's authors.

Most of Earth's surface water exists in the oceans. But there is a second reservoir of water deep in Earth's interior, in the form of hydrogen and oxygen attached to minerals in the mantle.

A new study in AGU Advances, which publishes high-impact, open-access research and commentary across the Earth and space sciences, estimates how much water the mantle potentially could hold today and how much water it could have stored in the past.

The findings suggest that, since early Earth was hotter than it is today, its mantle may have held less water, because mantle minerals retain less water at higher temperatures. Assuming that the mantle currently stores more than 0.3-0.8 times the mass of the surface ocean in water, a larger surface ocean might have existed during the early Archean. At that time, the mantle was about 1,900-3,000 kelvin (2,960-4,940 degrees Fahrenheit), compared to 1,600-2,600 kelvin (2,420-4,220 degrees Fahrenheit) today.

If early Earth had a larger ocean than today, that could have altered the composition of the early atmosphere and reduced how much sunlight was reflected back into space, according to the authors. These factors would have affected the climate and the habitat that supported the first life on Earth.

"It's sometimes easy to forget that the deep interior of a planet is actually important to what's going on with the surface," said Rebecca Fischer, a mineral physicist at Harvard University and co-author of the new study. "If the mantle can only hold so much water, it's got to go somewhere else, so what's going on thousands of kilometers below the surface can have pretty big implications."

Earth's sea level has remained fairly constant during the last 541 million years. Sea levels from earlier in Earth's history are more challenging to estimate, however, because little evidence has survived from the Archean eon. Over geologic time, water can move from the surface ocean to the interior through plate tectonics, but the size of that water flux is not well understood. Because of this lack of information, scientists had assumed the global ocean size remained constant over geologic time.

In the new study, co-author Junjie Dong, a mineral physicist at Harvard University, developed a model to estimate the total amount of water that Earth's mantle could potentially store based on its temperature. He incorporated existing data on how much water different mantle minerals can hold and considered which of 23 such minerals would have occurred at different depths and times in Earth's past. He and his co-authors then related those storage estimates to the volume of the surface ocean as Earth cooled.
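A toy version of that bookkeeping, with illustrative numbers rather than values from the study, treats the planet's total water inventory as fixed: whatever the hot early mantle could not hold must have sat at the surface.

```python
def surface_ocean(total_water, mantle_capacity):
    """Surface ocean size (in present-day ocean masses) for a fixed total
    water inventory and a temperature-limited mantle storage capacity.
    The mantle is assumed to be filled up to its capacity."""
    return total_water - min(total_water, mantle_capacity)

# Illustrative inventory: 1 ocean mass at the surface today plus an assumed
# 0.5 ocean masses stored in the mantle (within the study's 0.3-0.8 range).
today = surface_ocean(1.5, mantle_capacity=0.5)  # present, cooler mantle
early = surface_ocean(1.5, mantle_capacity=0.1)  # hotter mantle holds less
print(today, early)  # the early surface ocean comes out larger
```

The real model replaces the single `mantle_capacity` number with mineral-by-mineral, depth-by-depth storage estimates as a function of temperature.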

Jun Korenaga, a geophysicist at Yale University who was not involved in the research, said this is the first time scientists have linked mineral physics data on water storage in the mantle to ocean size. "This connection has never been raised in the past," he said.

Dong and Fischer point out that their estimates of the mantle's water storage capacity carry a lot of uncertainty. For example, scientists don't fully understand how much water can be stored in bridgmanite, the main mineral in the mantle.

The new findings shed light on how the global ocean may have changed over time and can help scientists better understand the water cycles on Earth and other planets, which could be valuable for understanding where life can evolve.

"It is definitely useful to know something quantitative about the evolution of the global water budget," said Suzan van der Lee, a seismologist at Northwestern University who did not participate in the study. "I think this is important for nitty-gritty seismologists like myself, who do imaging of current mantle structure and estimate its water content, but it's also important for people hunting for water-bearing exoplanets and asking about the origins of where our water came from."

Dong and Fischer are now using the same approach to calculate how much water may be held inside Mars.

Read more at Science Daily

New study sows doubt about the composition of 70 percent of our universe

 Until now, researchers have believed that dark energy accounted for nearly 70 percent of the ever-expanding, accelerating universe.

For many years, this mechanism has been associated with the so-called cosmological constant, developed by Einstein in 1917, that refers to an unknown repellent cosmic power.

But because the cosmological constant -- known as dark energy -- cannot be measured directly, numerous researchers, including Einstein, have doubted its existence -- without being able to suggest a viable alternative.

Until now. In a new study, researchers at the University of Copenhagen tested a model that replaces dark energy with dark matter endowed with a kind of magnetic force.

"If what we discovered is accurate, it would upend our belief that what we thought made up 70 percent of the universe does not actually exist. We have removed dark energy from the equation and added in a few more properties for dark matter. This appears to have the same effect upon the universe's expansion as dark energy," explains Steen Harle Hansen, an associate professor at the Niels Bohr Institute's DARK Cosmology Centre.

The universe expands no differently without dark energy

The usual understanding of how the universe's energy is distributed is that it consists of five percent normal matter, 25 percent dark matter and 70 percent dark energy.

In the UCPH researchers' new model, the 25 percent share of dark matter is accorded special qualities that make the 70 percent of dark energy redundant.

"We don't know much about dark matter other than that it is a heavy and slow particle. But then we wondered -- what if dark matter had some quality that was analogous to magnetism in it? We know that as normal particles move around, they create magnetism. And, magnets attract or repel other magnets -- so what if that's what's going on in the universe? That this constant expansion of dark matter is occurring thanks to some sort of magnetic force?" asks Steen Hansen.

Computer model tests dark matter with a type of magnetic energy

Hansen's question served as the foundation for the new computer model, where researchers included everything that they know about the universe -- including gravity, the speed of the universe's expansion and X, the unknown force that expands the universe.

"We developed a model that worked from the assumption that dark matter particles have a type of magnetic force and investigated what effect this force would have on the universe. It turns out that it would have exactly the same effect on the speed of the university's expansion as we know from dark energy," explains Steen Hansen.

However, there remains much about this mechanism that has yet to be understood by the researchers.

Read more at Science Daily

Open-label placebo works as well as double-blind placebo in irritable bowel syndrome

 For decades, the power of the placebo effect was thought to lie in patients' belief that they were -- or at least, could be -- receiving a pharmacologically active treatment. A new study by physician-researchers at Beth Israel Deaconess Medical Center (BIDMC) suggests that patients don't need to be deceived to receive benefit from treatment with placebo.

In a randomized clinical trial published in the journal PAIN, researchers found participants with moderate to severe irritable bowel syndrome (IBS) who were knowingly treated with a pharmacologically inactive pill -- referred to as an honest or open-label placebo -- reported clinically meaningful improvements in their IBS symptoms. People who received the open-label placebo experienced improvements that were significantly greater than those reported by participants assigned to a no-pill control group. There was no difference in symptom improvement between those who received open-label or double-blind placebos. The results build on the research team's previous findings and challenge the long-held notion that concealment or deception are key elements in the placebo effect.

"The clinical response to open-label placebo in this six-week trial was high, with 69 percent of participants who received open-label placebo reporting a clinically meaningful improvement in their symptoms," said first and corresponding author Anthony J. Lembo, MD, Professor of Medicine in the Division of Gastroenterology at BIDMC. "IBS is one of the most common reasons for healthcare consultations and absenteeism from work or school. Effective treatment options for IBS are limited, and we hypothesized it may be possible to ethically harness the placebo effect for clinical benefit."

For the rigorously designed clinical trial, researchers enrolled 262 adult participants, 18 to 80 years old, with at least moderately severe irritable bowel syndrome as measured by the validated IBS Severity Scoring System (IBS-SSS), which scores the frequency and severity of abdominal pain and distention, quality of life and other relevant factors on a scale of 0-500. Participants were examined, filled out baseline questionnaires and were randomized into one of three study arms: open-label placebo; double-blind (which included double-blind placebo or double-blind peppermint oil); or no-pill control. During their examinations, all participants discussed the placebo effect, the trial and its aims with their physicians.

The open-label group received pill bottles labeled "open-label placebo," and were told that the pills inside were pharmacologically inert, but could make them feel better through the placebo effect. The double-blind group received pill bottles labeled "double-blind placebo or peppermint oil." Participants in the double-blind group received either a placebo or an identical pill containing peppermint oil, but neither they nor the research team knew which they received. All participants who received pills were instructed to take one pill three times a day, 30 minutes before meals. The no-pill control group received no pills but otherwise followed identical study protocol. During return visits three and six weeks into the study, all participants completed questionnaires, were verbally asked about adverse events and briefly met with a study physician.

Lembo and colleagues -- including senior author, Ted J. Kaptchuk, Director of the Program in Placebo Studies and the Therapeutic Encounter at BIDMC -- found that improvement in IBS-SSS scores from baseline to the six-week endpoint was significantly greater in the open-label placebo group compared to the no-pill control group. Additionally, participants in the double-blind placebo group also saw superior symptom improvement compared to the no-pill control group, but the double-blind and open-label groups were not different from each other.

Next, the researchers performed a post hoc analysis of the participants who experienced large clinical improvements -- those who improved by at least 50 points and by at least 150 points, considered strong and very strong clinical responses, respectively. A greater percentage of participants in the open-label placebo and double-blind placebo groups reported a 50-point reduction in IBS severity score compared to the no-pill control group (approximately 70 percent in each placebo group compared to 54 percent in the no-pill control group). Similarly, approximately 30 percent of open-label placebo and double-blind placebo participants reported a 150-point reduction in IBS symptoms, compared to only 12 percent of the no-pill group.

Read more at Science Daily

Infants' language skills more advanced than first words suggest

 Babies can recognise combinations of words even before they have uttered their first word, a study suggests, challenging ideas of how children learn language.

Assessments of 11- to 12-month-olds show that infants at the cusp of talking are already processing multiword phrases such as 'clap your hands'.

Researchers say the study is the first to provide evidence that young children can pick up and understand multiword sequences before they can talk or begin producing such combinations themselves.

The findings suggest that babies learn individual words and more complex phrases at the same time, which challenges the perspective that they progress from single words to phrases and sentences, experts say.

It may also explain why adults who learn a new language in later life by focusing on individual words often do not achieve native-like proficiency.

Linguists at the University of Edinburgh assessed 36 infants' language learning behaviour in a series of attention tests using recorded adult speech.

They looked at how the babies responded to multiword combinations of three-word sequences used in parent-child conversations.

The researchers compared the infants' responses using a testing method called central fixation, which measures infants' looking behaviour in response to sounds.

They assessed if the babies could distinguish more frequently used three-word sequences such as 'clap your hands' from similar but less common phrases such as 'take your hands'.

On average, fixation times were longer for the frequently used phrases. This pattern was found in 23 of the 36 infants.
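As a rough illustration of what "23 of 36" means statistically (this is not an analysis from the paper itself): if each infant were equally likely to look longer at either phrase type, the chance of 23 or more infants favouring the frequent phrases can be computed exactly from the binomial distribution.

```python
from math import comb

def binom_tail(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance that at least k of n
    infants favour the frequent phrases if each independently favours
    either phrase type with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 23 of 36 infants looked longer at the frequently used phrases.
print(round(binom_tail(23, 36), 3))  # ~0.066 under this chance-only null
```

So an outcome this lopsided would arise by chance only about 7 percent of the time under the illustrative null model above.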

Researchers say this suggests babies who are still learning their first words are simultaneously learning word combinations.

This development happens months before parents hear their children's first attempts at sequences of words, experts say.

Dr Barbora Skarabela, of the School of Philosophy, Psychology and Language Sciences, said: "Previous research has shown that young infants recognise many common words. But this is the first study that shows that infants extract and store more than just single words from everyday speech. This suggests that when children learn language, they build on linguistic units of varying sizes, including multiword sequences, and not just single words as we often assume. This may explain why adults learning a second language, who tend to rely on individual words, often fall short of reaching native-like proficiency in the way they string words together into phrases and sentences."

Read more at Science Daily

Mar 30, 2021

Mystery of photosynthetic algae evolution finally solved

 An evolutionary mystery that had eluded molecular biologists for decades may never have been solved if it weren't for the COVID-19 pandemic.

"Being stuck at home was a blessing in disguise, as there were no experiments that could be done. We just had our computers and lots of time," says Professor Paul Curmi, a structural biologist and molecular biophysicist with UNSW Sydney.

Prof. Curmi is referring to research published this month in Nature Communications that details the painstaking unravelling and reconstruction of a key protein in a single-celled, photosynthetic organism called a cryptophyte, a type of algae that evolved over a billion years ago.

Up until now, how cryptophytes acquired the proteins used to capture and funnel sunlight for the cell had molecular biologists scratching their heads. They already knew that the protein was part of a sort of antenna that the organism used to convert sunlight into energy. They also knew that the cryptophyte had inherited some antenna components from its photosynthetic ancestors -- red algae, and before that cyanobacteria, one of the earliest lifeforms on Earth and the organisms responsible for stromatolites.

But how the protein structures fit together in the cryptophyte's own, novel antenna structure remained a mystery -- until Prof. Curmi, PhD student Harry Rathbone and colleagues from University of Queensland and University of British Columbia pored over the electron microscope images of the antenna protein from a progenitor red algal organism made public by Chinese researchers in March 2020.

Unravelling the mystery meant the team could finally tell the story of how this protein had enabled these ancient single-celled organisms to thrive in the most inhospitable conditions -- metres under water with very little direct sunlight to convert into energy.

Prof. Curmi says the major implications of the work are for evolutionary biology.

"We provide a direct link between two very different antenna systems and open the door for discovering exactly how one system evolved into a different system -- where both appear to be very efficient in capturing light," he says.

"Photosynthetic algae have many different antenna systems which have the property of being able to capture every available light photon and transferring it to a photosystem protein that converts the light energy to chemical energy."

By working to understand the algal systems, the scientists hope to uncover the fundamental physical principles that underlie the exquisite photon efficiency of these photosynthetic systems. Prof. Curmi says these may one day have application in optical devices including solar energy systems.

EATING FOR TWO

To better appreciate the significance of the protein discovery, it helps to understand the very strange world of single-celled organisms which take the adage "you are what you eat" to a new level.

As study lead author, PhD student Harry Rathbone explains, when a single-celled organism swallows another, it can enter a relationship of endosymbiosis, where one organism lives inside the other and the two become inseparable.

"Often with algae, they'll go and find some lunch -- another alga -- and they'll decide not to digest it. They'll keep it to do its bidding, essentially," Mr Rathbone says. "And those new organisms can be swallowed by other organisms in the same way, sort of like a matryoshka doll."

In fact, this is likely what happened about one and a half billion years ago, when a cyanobacterium was swallowed by another single-celled organism. The cyanobacterium already had a sophisticated antenna of proteins that trapped every photon of light. But instead of digesting it, the host organism effectively stripped it for parts -- retaining the antenna protein structure that the new organism -- the red algae -- used for energy.

And when another organism swallowed a red alga to become the first cryptophyte, it was a similar story. Except this time the antenna was brought to the other side of the membrane of the host organism and completely remoulded into new protein shapes that were equally as efficient at trapping sunlight photons.

EVOLUTION

As Prof. Curmi explains, these were the first tiny steps towards the evolution of modern plants and other photosynthetic organisms such as seaweeds.

"In going from cyanobacteria that are photosynthetic, to everything else on the planet that is photosynthetic, some ancient ancestor gobbled up a cyanobacteria which then became the cell's chloroplast that converts sunlight into chemical energy.

"And the deal between the organisms is sort of like, I'll keep you safe as long as you do photosynthesis and give me energy."

One of the collaborators on this project, Dr Beverley Green, Professor Emerita with the University of British Columbia's Department of Botany says Prof. Curmi was able to make the discovery by approaching the problem from a different angle.

"Paul's novel approach was to search for ancestral proteins on the basis of shape rather than similarity in amino acid sequence," she says.

Read more at Science Daily

Early Universe explosion sheds light on elusive black hole

 Scientists have discovered one of the first known black holes of its kind. Intermediate-mass black holes (100 to 100,000 times the mass of the sun) have been directly detected only once before, by the LIGO gravitational-wave observatory in 2020. They form an important link between the smaller black holes left behind after the deaths of stars and the supermassive black holes that lurk at the heart of every galaxy.

The astrophysicists also find that there are about 40,000 of these objects in the neighbourhood of our galaxy.

A new black hole breaks the record -- not for being the smallest or the biggest -- but for being right in the middle. The recently discovered 'Goldilocks' black hole is part of a missing link between two populations of black holes: small black holes made from stars and supermassive giants in the nucleus of most galaxies.

In a joint effort, researchers from the University of Melbourne and Monash University have uncovered a black hole approximately 55,000 times the mass of the sun, a fabled "intermediate-mass" black hole.

The discovery was published today in the journal Nature Astronomy, in a paper titled "Evidence for an intermediate mass black hole from a gravitationally lensed gamma-ray burst."

Lead author and University of Melbourne PhD student, James Paynter, said the latest discovery sheds new light on how supermassive black holes form. "While we know that these supermassive black holes lurk in the cores of most, if not all galaxies, we don't understand how these behemoths are able to grow so large within the age of the Universe," he said.

The new black hole was found through the detection of a gravitationally lensed gamma-ray burst. The gamma-ray burst, a half-second flash of high-energy light emitted by a pair of merging stars, was observed to have a tell-tale 'echo'. This echo is caused by the intervening intermediate-mass black hole, which bends the path of the light on its way to Earth, so that astronomers see the same flash twice.
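The characteristic scale of such an echo delay is set by the light-crossing time of the lens's gravitational radius. A rough order-of-magnitude sketch, ignoring the geometry-dependent factor and the lens redshift that a full analysis must model:

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def lensing_delay_scale(mass_solar):
    """Characteristic point-lens time delay, ~4GM/c^3, in seconds.
    The true delay also depends on source-lens alignment and redshift."""
    return 4 * G * mass_solar * M_SUN / C**3

print(lensing_delay_scale(55_000))  # ~1 second for a 55,000 solar-mass lens
```

A delay of order a second is comparable to the duration of a short gamma-ray burst, which is why the two lensed images appear as a flash and a tell-tale repeat rather than as separate events.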

Powerful software developed to detect black holes from gravitational waves was adapted to establish that the two flashes are images of the same object.

"This newly discovered black hole could be an ancient relic -- a primordial black hole -- created in the early Universe before the first stars and galaxies formed," said study co-author, Professor Eric Thrane from the Monash University School of Physics and Astronomy and Chief Investigator for the ARC Centre of Excellence for Gravitational Wave Discovery (OzGrav).

"These early black holes may be the seeds of the supermassive black holes that live in the hearts of galaxies today." Paper co-author, gravitational lensing pioneer, Professor Rachel Webster from the University of Melbourne, said the findings have the potential to help scientists make even greater strides.

"Using this new black hole candidate, we can estimate the total number of these objects in the Universe. We predicted that this might be possible 30 years ago, and it is exciting to have discovered a strong example."

Read more at Science Daily

Researchers discover new type of ancient crater lake on Mars

 Researchers from Brown University have discovered a previously unknown type of ancient crater lake on Mars that could reveal clues about the planet's early climate.

In a study published in Planetary Science Journal, a research team led by Brown Ph.D. student Ben Boatwright describes an as-yet unnamed crater with some puzzling characteristics. The crater's floor has unmistakable geologic evidence of ancient stream beds and ponds, yet there's no evidence of inlet channels where water could have entered the crater from outside, and no evidence of groundwater activity where it could have bubbled up from below.

So where did the water come from?

The researchers conclude that the system was likely fed by runoff from a long-lost Martian glacier. Water flowed into the crater atop the glacier, which meant it didn't leave behind a valley as it would have had it flowed directly on the ground. The water eventually emptied into the low-lying crater floor, where it left its geological mark on the bare Martian soil.

The type of lake described in this study differs starkly from other Martian crater lakes, like those at Gale and Jezero craters, where NASA rovers are currently exploring.

"This is a previously unrecognized type of hydrological system on Mars," Boatwright said. "In lake systems characterized so far, we see evidence of drainage coming from outside the crater, breaching the crater wall and in some cases flowing out the other side. But that's not what is happening here. Everything is happening inside the crater, and that's very different than what's been characterized before."

Importantly, Boatwright says, the crater provides key clues about the early climate of Mars. There's little doubt that the Martian climate was once warmer and wetter than the frozen desert the planet is today. What's less clear, however, is whether Mars had an Earthlike climate with continually flowing water for millennia, or whether it was mostly cold and icy with fleeting periods of warmth and melting. Climate simulations for early Mars suggest temperatures that rarely peaked above freezing, but geological evidence for cold and icy conditions has been sparse, Boatwright says. This new evidence of ancient glaciation could change that.

"The cold and icy scenario has been largely theoretical -- something that arises from climate models," Boatwright said. "But the evidence for glaciation we see here helps to bridge the gap between theory and observation. I think that's really the big takeaway here."

Boatwright was able to map out the details of the crater's lake system using high-resolution images taken by NASA's Mars Reconnaissance Orbiter. The images revealed a telltale signature of ancient streambeds -- features called inverted fluvial channels. When water flows across a rocky surface, it can leave behind coarse-grained sediment inside the valley it erodes. When these sediments interact with water, they can form minerals that are harder than the surrounding rock. As further erosion over millions of years whittles the surrounding rock away, the mineralized channels are left behind as raised ridges spidering across the landscape. These features, along with sediment deposits and shoreline features, clearly show where water flowed and ponded on the crater floor.

But without any sign of an inlet channel where water entered the crater, "the question becomes, 'How did these get here?'" Boatwright said.

To figure it out, Boatwright worked with Jim Head, his advisor and a research professor at Brown. They ruled out groundwater activity, as the crater lacked telltale sapping channels that form in groundwater systems. These channels usually appear as short, stubby channels that lack tributaries -- completely opposite from the dense, branching networks of inverted channels observed in the crater. A careful examination of the crater wall also revealed a distinct set of ridges that face upward toward the crater wall. The features are consistent with ridges formed where a glacier terminates and deposits mounds of rocky debris. Taken together, the evidence points to a glacier-fed system, the researchers concluded.

Subsequent research has shown that this crater isn't the only one of its kind. At this month's Lunar and Planetary Science Conference, Boatwright presented research revealing more than 40 additional craters that appear to have related features.

Head says that these new findings could be critical in understanding the climate of early Mars.

Read more at Science Daily

Fasting can be an effective way to start a diet

 One in four Germans suffers from metabolic syndrome. In this 'deadly quartet', several of four diseases of affluence occur at the same time: obesity, high blood pressure, lipid metabolism disorder and diabetes mellitus. Each of these is a risk factor for severe cardiovascular conditions, such as heart attack and stroke. Treatment aims to help patients lose weight and normalise their lipid and carbohydrate metabolism and blood pressure. In addition to exercise, doctors prescribe a low-calorie and healthy diet. Medication is often also required. However, it is not fully clear what effects nutrition has on the microbiome, immune system and health.

A research group led by Dr Sofia Forslund and Professor Dominik N. Müller from the Max Delbrück Center for Molecular Medicine in the Helmholtz Association (MDC) and the Experimental and Clinical Research Center (ECRC) has now examined the effect a change of diet has on people with metabolic syndrome. The ECRC is jointly run by the MDC and Charité Universitätsmedizin Berlin. "Switching to a healthy diet has a positive effect on blood pressure," says Andras Maifeld, summarising the results. "If the diet is preceded by a fast, this effect is intensified." Maifeld is the first author of the paper, which was recently published in the journal Nature Communications.

Broccoli over roast beef

Dr Andreas Michalsen, Senior Consultant of the Naturopathy Department at Immanuel Hospital Berlin and Endowed Chair of Clinical Naturopathy at the Institute for Social Medicine, Epidemiology and Health Economics at Charité -- Universitätsmedizin Berlin, and Professor Gustav J. Dobos, Chair of Naturopathy and Integrative Medicine at the University of Duisburg-Essen, recruited 71 volunteers with metabolic syndrome and raised systolic blood pressure. The researchers divided them into two groups at random.

Both groups followed the DASH (Dietary Approach to Stop Hypertension) diet for three months, which is designed to combat high blood pressure. This Mediterranean-style diet includes lots of fruit and vegetables, wholemeal products, nuts and pulses, fish and lean white meat. One of the two groups did not consume any solid food at all for five days before starting the DASH diet.

On the basis of immunophenotyping, the scientists observed how the immune cells of the volunteers changed when they altered their diet. "The innate immune system remains stable during the fast, whereas the adaptive immune system shuts down," explains Maifeld. During this process, the number of proinflammatory T cells drops, while regulatory T cells multiply.

A Mediterranean diet is good, but fasting first is better

The researchers used stool samples to examine the effects of the fast on the gut microbiome. Gut bacteria work in close contact with the immune system. Some strains of bacteria metabolise dietary fibre into anti-inflammatory short-chain fatty acids that benefit the immune system. The composition of the gut bacteria ecosystem changes drastically during fasting. Health-promoting bacteria that help to reduce blood pressure multiply. Some of these changes remain even after resumption of food intake. The following is particularly noteworthy: "Body mass index, blood pressure and the need for antihypertensive medication remained lower in the long term among volunteers who started the healthy diet with a five-day fast," explains Dominik Müller. Blood pressure normally shoots back up again when even one antihypertensive tablet is forgotten.

Blood pressure remains lower in the long term -- even three months after fasting

Together with scientists from the Helmholtz Centre for Infection Research and McGill University, Montreal, Canada, Forslund's working group conducted a statistical evaluation of these results using artificial intelligence to ensure that this positive effect was actually attributable to the fast and not to the medication that the volunteers were taking. They used methods from a previous study in which they had examined the influence of antihypertensive medication on the microbiome. "We were able to isolate the influence of the medication and observe that whether someone responds well to a change of diet or not depends on the individual immune response and the gut microbiome," says Forslund.

Read more at Science Daily

Mar 29, 2021

Scientists discover a new auroral feature on Jupiter

The SwRI-led Ultraviolet Spectrograph (UVS) orbiting Jupiter aboard NASA's Juno spacecraft has detected new faint aurora features, characterized by ring-like emissions, which expand rapidly over time. SwRI scientists determined that charged particles coming from the edge of Jupiter's massive magnetosphere triggered these auroral emissions.

"We think these newly discovered faint ultraviolet features originate millions of miles away from Jupiter, near the Jovian magnetosphere's boundary with the solar wind," said Dr. Vincent Hue, lead author of a paper accepted by the Journal of Geophysical Research: Space Physics. "The solar wind is a supersonic stream of charged particles emitted by the Sun. When they reach Jupiter, they interact with its magnetosphere in a way that is still not well understood."

Both Jupiter and Earth have magnetic fields that provide protection from the solar wind. The stronger the magnetic field, the larger the magnetosphere. Jupiter's magnetic field is 20,000 times stronger than Earth's and creates a magnetosphere so large it begins to deflect the solar wind 2-4 million miles before it reaches Jupiter.

"Despite decades of observations from Earth combined with numerous in-situ spacecraft measurements, scientists still do not fully understand the role the solar wind plays in moderating Jupiter's auroral emissions," said SwRI's Dr. Thomas Greathouse, a co-author on this study. "Jupiter's magnetospheric dynamics, the motion of charged particles within its magnetosphere, is largely controlled by Jupiter's 10-hour rotation, the fastest in the solar system. The solar wind's role is still debated."

One of the goals of the Juno mission, recently approved by NASA for an extension until 2025, is to explore Jupiter's magnetosphere by measuring its auroras with the UVS instrument. Previous observations with the Hubble Space Telescope and Juno have allowed scientists to determine that most of Jupiter's powerful auroras are generated by internal processes, that is, the motion of charged particles within the magnetosphere. However, on numerous occasions, UVS has detected a faint type of aurora, characterized by rings of emissions expanding rapidly with time.

"The high-latitude location of the rings indicates that the particles causing the emissions are coming from the distant Jovian magnetosphere, near its boundary with the solar wind," said Bertrand Bonfond, a co-author on this study from Belgium's Liège University. In this region, plasma from the solar wind often interacts with the Jovian plasma in a way that is thought to form "Kelvin-Helmholtz" instabilities. These phenomena occur when there are shear velocities, such as at the interface between two fluids moving at different speeds. Another potential candidate to produce the rings is dayside magnetic reconnection, where oppositely directed Jovian and interplanetary magnetic fields converge, rearrange and reconnect.

Read more at Science Daily

Stellar eggs near galactic center hatching into baby stars

Astronomers found a number of stellar eggs containing baby stars around the center of the Milky Way using the Atacama Large Millimeter/submillimeter Array (ALMA). Previous studies had suggested that the environment there is too harsh to form stars. These findings indicate that star formation is more resilient than researchers thought.

Stars form in stellar eggs, cosmic clouds of gas and dust that collapse due to gravity. If something interferes with the gravity-driven contraction, star formation will be suppressed. There are many potential sources of interference near the Galactic Center. Strong turbulence can stir up the clouds and prevent them from contracting, or strong magnetic fields can support the gas against self-gravitational collapse. Previous observations indicated that star formation near the Galactic Center is much less efficient.

To investigate the mysteries of the suppressed star formation, a team led by Xing Lu, an astronomer at the National Astronomical Observatory of Japan, used ALMA to observe regions near the Galactic Center which contain ample gas, but no known star formation. Surprisingly, the team discovered more than 800 dense cores of gas and dust.

"The discovery leads to the question of whether they are actually 'stellar eggs' or not," explains Lu. To answer this question, the team again used ALMA to search for energetic gas outflows, which are indicative of stars forming in stellar eggs. Thanks to ALMA's high sensitivity and high spatial resolution, they detected 43 small and faint outflows in the clouds. Lu comments, "Our observations prove that even in the strongly disturbed areas around the Galactic Center, baby stars still form."

The research team is now analyzing ALMA's higher resolution observation data to better understand the processes driving the gas outflows and star formation near the Galactic Center.

From Science Daily

First steps towards revolutionary ULTRARAM™ memory chips

A new type of universal computer memory -- ULTRARAM™ -- has taken a step closer towards development with a successful experiment by Lancaster physicists.

Professor Manus Hayne, who is leading the research, commented: "These new results confirm the astonishing properties of ULTRARAM™, allowing us to demonstrate its potential as a fast and efficient non-volatile memory with high endurance."

Currently, the two main types of memory, dynamic RAM (DRAM) and flash, have complementary characteristics and roles:

  • DRAM is fast, so it is used for active (working) memory, but it is volatile, meaning that information is lost when power is removed. Indeed, DRAM continually 'forgets' and needs to be constantly refreshed.
  • Flash is non-volatile, allowing you to carry data in your pocket, but it is very slow and wears out. It is well-suited for data storage but can't be used for active memory.
"Universal memory" is a memory in which data is very robustly stored but can also easily be changed, something that was widely considered to be unachievable until now.

The Lancaster team has solved the paradox of universal memory by exploiting a quantum mechanical effect called resonant tunnelling that allows a barrier to switch from opaque to transparent by applying a small voltage.

Their new non-volatile RAM, called ULTRARAM™, is a working implementation of so-called 'universal memory', promising all the advantages of DRAM and flash, with none of the drawbacks.

In their latest work, published in IEEE Transactions on Electron Devices, the researchers have integrated ULTRARAM™ devices into small (4-bit) arrays for the first time. This has allowed them to experimentally verify a novel, patent-pending, memory architecture that would form the basis of future ULTRARAM™ memory chips.

They have also modified the device design to take full advantage of the physics of resonant tunnelling, resulting in devices that are 2,000 times faster than the first prototypes, and with program/erase cycling endurance that is at least ten times better than flash, without any compromise in data retention.

From Science Daily

The brain area with which we interpret the world

 Usually, the different areas in the cerebrum take on a very specific function. For example, they process our movements or things we see or hear, i.e. direct physical information. However, some areas of the brain come into play when dealing with more advanced mental tasks. They process incoming information that has already been pre-processed and is thus already at an abstract level.

It was already known that the inferior parietal lobe (IPL) is one of these regions in the human brain. Nevertheless, it was unclear how this area is able to process such very different functions. In a large study, scientists from the Max Planck Institute for Human Cognitive and Brain Sciences (MPI CBS) in Leipzig and McGill University in Montreal have helped to solve this question. According to their findings, the different parts of the IPL specialize in different cognitive functions -- such as attention, language, and social cognition, with the latter reflecting the ability for perspective taking. At the same time, these areas work together with many other brain regions in a process-specific way. When it comes to language comprehension, the anterior IPL in the left hemisphere of the brain becomes active. For attention, it is the anterior IPL in the right side of the brain. If, on the other hand, social skills are required, the posterior parts of the IPL in both hemispheres of the brain spring into action simultaneously. "Social cognition requires the most complex interpretation," explains Ole Numssen, first author of the underlying study, which has now been published in the journal eLife. "Therefore, the IPLs on both sides of the brain probably work together here."

Moreover, these individual sub-areas then cooperate with different regions of the rest of the brain. In the case of attention and language, each IPL subregion links primarily to areas on one side of the brain. With social skills, it's areas on both sides. Again, this shows that the more complex the task, the more intensive the interaction with other areas.

"Our results provide insight into the basic functioning of the human brain. We show how our brains dynamically adapt to changing requirements. To do this, it links specialized individual areas, such as the IPL, with other more general regions. The more demanding the tasks, the more intensively the individual areas interact with each other. This makes highly complex functions such as language or social skills possible," says Ole Numssen. "The IPL may ultimately be considered as one of the areas with which we interpret the world."

Even in great apes, Numssen says, brain regions that correspond to the IPL process not only purely physical stimuli but also more complex information. Throughout evolution, they seem to have always been responsible for processing increasingly complex content. However, parts of the IPL are unique to the human brain and are not found in great apes -- a hint that this region developed over the course of evolution to enable key functions of human cognition.

The researchers from Leipzig and Montreal investigated such brain-behavior correlations with the help of three tasks that the study participants had to solve while lying in the MRI scanner. In the first task, they had to prove their understanding of language. To do this, they saw meaningful words such as "pigeon" and "house," but also words without meaning (known as pseudowords) such as "pulre," and had to decide whether it was a real word or not. A second task tested visual-spatial attention. For this task, they had to react to stimuli on one side of the screen, although they expected something to happen on the other side. The third task probed their ability for perspective taking using the so-called Sally Anne test. This is a comic strip consisting of four pictures in which two people interact with each other. A question in the end could only be answered correctly if the study participants were able to put themselves in the shoes of the respective persons.

From Science Daily

Scientists use nanotechnology to detect bone-healing stem cells

 Researchers at the University of Southampton have developed a new way of using nanomaterials to identify and enrich skeletal stem cells -- a discovery which could eventually lead to new treatments for major bone fractures and the repair of lost or damaged bone.

Working together, a team of physicists, chemists and tissue engineering experts used specially designed gold nanoparticles to 'seek out' specific human bone stem cells -- creating a fluorescent glow to reveal their presence among other types of cells and allow them to be isolated or 'enriched'.

The researchers concluded their new technique is simpler and quicker than other methods, and between 50 and 500 times more effective at enriching stem cells.

The study, led by Professor of Musculoskeletal Science, Richard Oreffo and Professor Antonios Kanaras of the Quantum, Light and Matter Group in the School of Physics and Astronomy, is published in ACS Nano -- an internationally recognised multidisciplinary journal.

In laboratory tests, the researchers used gold nanoparticles -- tiny spherical particles made up of thousands of gold atoms -- coated with oligonucleotides (strands of DNA), to optically detect the specific messenger RNA (mRNA) signatures of skeletal stem cells in bone marrow. When detection takes place, the nanoparticles release a fluorescent dye, making the stem cells distinguishable from other surrounding cells, under microscopic observation. The stem cells can then be separated using a sophisticated fluorescence cell sorting process.

Stem cells are cells that are not yet specialised and can develop to perform different functions. Identifying skeletal stem cells allows scientists to grow these cells in defined conditions to enable the growth and formation of bone and cartilage tissue -- for example, to help mend broken bones.

Among the challenges posed by our ageing population is the need for novel and cost-effective approaches to bone repair. With one in three women and one in five men at risk of osteoporotic fractures worldwide, the costs are significant, with bone fractures alone costing the European economy €17 billion and the US economy $20 billion annually.

Within the University of Southampton's Bone and Joint Research Group, Professor Richard Oreffo and his team have been looking at bone stem cell based therapies for over 15 years to understand bone tissue development and to generate bone and cartilage. Over the same time-period, Professor Antonios Kanaras and his colleagues in the Quantum, Light and Matter Group have been designing novel nanomaterials and studying their applications in the fields of biomedical sciences and energy. This latest study effectively brings these disciplines together and is an exemplar of the impact collaborative, interdisciplinary working can bring.

Professor Oreffo said: "Skeletal stem cell based therapies offer some of the most exciting and promising areas for bone disease treatment and bone regenerative medicine for an aging population. The current studies have harnessed unique DNA sequences from targets we believe would enrich the skeletal stem cell and, using Fluorescence Activated Cell Sorting (FACS), we have been able to enrich bone stem cells from patients. Identification of unique markers is the holy grail in bone stem cell biology and, while we still have some way to go, these studies offer a step change in our ability to target and identify human bone stem cells and the exciting therapeutic potential therein."

Professor Oreffo added: "Importantly, these studies show the advantages of interdisciplinary research to address a challenging problem with state of the art molecular/cell biology combined with nanomaterials' chemistry platform technologies."

Professor Kanaras said: "The appropriate design of materials is essential for their application in complex systems. By customizing the chemistry of nanoparticles, we are able to program specific functions into their design.

"In this research project, we designed nanoparticles coated with short sequences of DNA, which are able to sense HSPA8 mRNA and Runx2 mRNA in skeletal stem cells and together with advanced FACS gating strategies, to enable the assortment of the relevant cells from human bone marrow.

"An important aspect of the nanomaterial design involves strategies to regulate the density of oligonucleotides on the surface of the nanoparticles, which help to avoid DNA enzymatic degradation in cells. Fluorescent reporters on the oligonucleotides enable us to observe the status of the nanoparticles at different stages of the experiment, ensuring the quality of the endocellular sensor."

Both lead researchers also recognise that the accomplishments were possible due to the work of all the experienced research fellows and PhD students involved in this research as well as collaboration with Professor Tom Brown and Dr Afaf E-Sagheer of the University of Oxford, who synthesised a large variety of functional oligonucleotides.

The scientists are currently applying single cell RNA sequencing to the platform technology developed with partners in Oxford and the Institute for Life Sciences (IfLS) at Southampton to further refine and enrich bone stem cells and assess functionality. The team propose to then move to clinical application with preclinical bone formation studies to generate proof of concept studies.

Read more at Science Daily

Mar 28, 2021

California's diesel emissions rules reduce air pollution, protect vulnerable communities

 Extending California's stringent diesel emissions standards to the rest of the U.S. could dramatically improve the nation's air quality and health, particularly in lower income communities of color, finds a new analysis published today in the journal Science.

Since 1990, California has used its authority under the federal Clean Air Act to enact more aggressive rules on emissions from diesel vehicles and engines compared to the rest of the U.S. These policies, crafted by the California Air Resources Board (CARB), have helped the state reduce diesel emissions by 78% between 1990 and 2014, while diesel emissions in the rest of the U.S. dropped by just 51% during the same time period, the new analysis found.

The study estimates that by 2014, improved air quality cut the annual number of diesel-related cardiopulmonary deaths in the state in half, compared to the number of deaths that would have occurred if California had followed the same trajectory as the rest of the U.S. Adopting similar rules nationwide could produce the same kinds of benefits, particularly for communities that have suffered the worst impacts of air pollution.

"Everybody benefits from cleaner air, but we see time and again that it's predominantly lower income communities of color that are living and working in close proximity to sources of air pollution, like freight yards, highways and ports. When you target these sources, it's the highly exposed communities that stand to benefit most," said study lead author Megan Schwarzman, a physician and environmental health scientist at the University of California, Berkeley's School of Public Health. "It's about time, because these communities have suffered a disproportionate burden of harm."

The study also points out that exposure to fine particulate matter (PM2.5) has been linked to poor outcomes from COVID-19, adding urgency to the need to reduce air pollution, particularly for communities of color that are disproportionately affected by both.

Diesel exhaust consists of both particles and gases and contributes significantly to PM2.5 air pollution worldwide. PM2.5 exposure from any source can compromise children's lung development and can trigger airway inflammation and exacerbate asthma and cardiopulmonary diseases. Diesel exhaust has also been designated a human carcinogen by California's Office of Environmental Health Hazard Assessment (OEHHA).

"There are hundreds of studies around the world that link particulate matter exposure and premature death," said study co-author Álvaro Alvarado, a former air pollution specialist at CARB who now works for OEHHA. "In cities with higher levels of air pollution, there are also higher hospitalization rates for respiratory and cardiovascular illnesses and more emergency room visits for asthma."

To improve air quality, CARB's policies have gone beyond federal standards to limit diesel emissions from a variety of mobile sources, including heavy-duty trucks and buses, ships and port equipment, train locomotives, and the engines that power construction equipment and agricultural machinery.

In their study, Schwarzman and colleagues catalogued the wide range of CARB policies that target each emissions sector and tracked how changes in diesel emissions corresponded to the implementation of those rules. They then show the impact of CARB policies by comparing California's reductions in diesel emissions to those in the rest of the U.S. Their analysis reveals that CARB's policies reduced emissions to the extent that, by 2014, California was emitting less than half the diesel particulate matter that would be expected had the state followed the same trajectory as the rest of the U.S.
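The "less than half" figure is consistent with the statewide percentages reported earlier (a 78% reduction in California versus 51% elsewhere). A back-of-envelope sketch of that comparison, not the study's actual sector-by-sector methodology:

```python
# Illustrative check using the reported statewide figures: California cut
# diesel emissions by 78% between 1990 and 2014, versus 51% in the rest
# of the U.S. over the same period.
baseline = 100.0                        # arbitrary 1990 emissions index
ca_actual = baseline * (1 - 0.78)       # California's actual 2014 level
counterfactual = baseline * (1 - 0.51)  # 2014 level had CA followed the U.S. trend

ratio = ca_actual / counterfactual
print(f"California's 2014 emissions vs. counterfactual: {ratio:.0%}")  # about 45%
assert ratio < 0.5  # consistent with "less than half"
```

The exact fraction the study derives comes from modeling each emissions sector separately, but the headline numbers alone already imply roughly a 22:49 ratio.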

One key policy approach that sets California apart is the requirement that older diesel engines be retrofitted to meet strict emissions standards, Schwarzman said. In the rest of the U.S., new diesel engines must meet updated emissions standards, but older, dirtier engines are allowed to operate without upgrades.

"The average lifetime of a diesel engine is about 20 years, or a million miles, so waiting for fleet turnover is just too slow," Schwarzman said. "California requires retrofits for existing trucks so that all diesel engines are held to a higher standard. This has made an enormous difference for air quality."

Requiring upgrades for the engines that power heavy-duty trucks and buses has reduced California's diesel emissions in that sector by 85% since 1990, the study found. By comparison, the study estimates that if California's heavy-duty vehicle sector had followed the trajectory of other U.S. states, the sector's emissions would have dropped by only 58% in that period.

Because the highways, ports and rail yards where diesel engines operate are more likely to be located near lower income communities of color than affluent, white communities, regulating diesel emissions can help correct persistent disparities in air quality and health, said senior study author John Balmes, a Berkeley Public Health professor and professor of medicine at the University of California, San Francisco.

"There are truly different levels of exposure to air pollution, and those differences in exposure have been linked to differential health outcomes," said Balmes, who also serves as the physician member of CARB.

The study reports that every dollar the state has spent controlling air pollution has generated $38 in benefits through lower rates of illness, premature death and lost productivity attributable to air pollution. As a result, there is no reason why the U.S. as a whole shouldn't adopt diesel emissions standards similar to California's, the authors argue.

"In terms of public health, federal air quality policy should be moving toward that of California, because we've shown that it works, and we've also shown that greening transportation can be good for economic growth," Balmes said. "These environmental regulations not only save lives and improve public health, they actually drive innovation and grow the green economy, which is the future."

Read more at Science Daily

Cephalopods: Older than was thought?

The possibly oldest cephalopods in Earth's history stem from the Avalon Peninsula in Newfoundland (Canada). They were discovered by earth scientists from Heidelberg University. The 522-million-year-old fossils could turn out to be the first known form of these highly evolved invertebrate organisms, whose living descendants today include species such as the cuttlefish, octopus and nautilus. In that case, the find would indicate that cephalopods evolved about 30 million years earlier than has been assumed.

"If they should actually be cephalopods, we would have to backdate the origin of cephalopods into the early Cambrian period," says Dr Anne Hildenbrand from the Institute of Earth Sciences. Together with Dr Gregor Austermann, she headed the research projects carried out in cooperation with the Bavarian Natural History Collections. "That would mean that cephalopods emerged at the very beginning of the evolution of multicellular organisms during the Cambrian explosion."

The chalky shells of the fossils found on the eastern Avalon Peninsula are shaped like a longish cone and subdivided into individual chambers. These are connected by a tube called the siphuncle. The cephalopods were thus the first organisms able to move actively up and down in the water and thus settle in the open ocean as their habitat. The fossils are distant relatives of the spiral-shaped nautilus, but clearly differ in shape from early finds and the still existing representatives of that class.

"This find is extraordinary," says Dr Austermann. "In scientific circles it was long suspected that the evolution of these highly developed organisms had begun much earlier than hitherto assumed. But there was a lack of fossil evidence to back up this theory." According to the Heidelberg scientists, the fossils from the Avalon Peninsula might supply this evidence, as on the one hand, they resemble other known early cephalopods but, on the other, differ so much from them that they might conceivably form a link leading to the early Cambrian.

The former and little explored micro-continent of Avalonia, which -- besides the east coast of Newfoundland -- comprises parts of Europe, is particularly suited to paleontological research, since many different creatures from the Cambrian period are still preserved in its rocks. The researchers hope that other, better preserved finds will confirm the classification of their discoveries as early cephalopods.

Read more at Science Daily