Jul 15, 2022

Deep dive into the dusty Milky Way

An animated dive into the dusty Milky Way reveals the outlines of our galaxy taking shape as we look out further and further from Earth.

Using an interactive tool that exploits data from the European Space Agency's Gaia mission and other space science data sets, astronomers have created an animation to model dust in the Milky Way. The work was presented this week at the National Astronomy Meeting (NAM 2022) at the University of Warwick.

The animation shows the cumulative build-up of dust looking from Earth's local neighbourhood out to ~13,000 light-years towards the galactic centre -- around 10% of the overall distance across the Milky Way. Close by, dust swirls all around but, further out, the concentration of dust along the galactic plane becomes clear. Two 'windows', one above and one below the galactic plane, are also revealed.

"Dust clouds are related to the formation and death of stars, so their distribution tells a story of how structures formed in the galaxy and how the galaxy evolves," said Nick Cox, coordinator of the EXPLORE project which is developing the tools. "The maps are also important for cosmologists in revealing regions where there is no dust and we can have a clear, unobstructed view out of the Milky Way to study the Universe beyond, such as to make Deep Field observations with Hubble or the new James Webb Space Telescope."

The tools used to create the animation combine data from the Gaia mission and the 2MASS All Sky Survey. The tools are part of a suite of applications designed to support studies of stars and galaxies, as well as lunar exploration, and have been developed through funding from the European Union's Horizon 2020 Programme.

"State-of-the-art machine learning and visual analytics have the power to greatly enhance scientific return and discovery for space science missions, but their use is still relatively novel in the field of astronomy," said Albert Zijlstra, of the University of Manchester and the EXPLORE project. "With a constant stream of new data, such as the recent third release of Gaia data in June 2022, we have an increasing wealth of information to mine -- beyond the scope of what humans could process in a lifetime. We need tools like the ones we are developing for EXPLORE to support scientific discovery, such as by helping us to characterise properties within the data, or to pick out the most interesting or unusual features and structures."

Read more at Science Daily

Virtual reality app trial shown to reduce common phobias

Results from a University of Otago, Christchurch trial suggest fresh hope for the estimated one-in-twelve people worldwide suffering from a fear of flying, needles, heights, spiders or dogs.

The trial, led by Associate Professor Cameron Lacey, from the Department of Psychological Medicine, studied phobia patients using a headset and a smartphone app treatment programme -- a combination of Virtual Reality (VR) 360-degree video exposure therapy and cognitive behavioural therapy (CBT).

Participants downloaded a fully self-guided smartphone app called "oVRcome," developed by Christchurch tech entrepreneur Adam Hutchinson, aimed at treating patients with phobia and anxiety.

The app was paired with a headset to immerse participants in virtual environments to help treat their phobia.

The results from the trial, just published in the Australian and New Zealand Journal of Psychiatry, showed a 75 per cent reduction in phobia symptoms after six weeks of the treatment programme.

"The improvements they reported suggests there's great potential for the use of VR and mobile phone apps as a means of self-guided treatment for people struggling with often-crippling phobias," Associate Professor Lacey says.

"Participants demonstrated a strong acceptability of the app, highlighting its potential for delivering easily accessible, cost-effective treatment at scale, of particular use for those unable to access in-person exposure therapy to treat their phobias."

A total of 129 people took part in the six-week randomised, controlled trial, between May 2021 and December 2021, with a 12-week follow-up. Participants needed to be aged between 18 and 64 years and have a fear of flying, heights, needles, spiders or dogs. They were emailed weekly questionnaires to record their progress. Those experiencing adverse events could request contact from a clinical psychologist at any stage.

"Participants experiencing all five types of phobia showed comparable improvements in the Severity Measures for Specific Phobia scale over the course of the trial. The average severity score decreased from 28/40 (moderate to severe symptoms) to 7/40 (minimal symptoms) after six weeks. There were no participant withdrawals due to intervention-related adverse events.

"The oVRcome app involves what's called "exposure therapy," a form of CBT exposing participants to their specific phobias in short bursts, to build up their tolerance to the phobia in a clinically-approved and controlled way," Associate Professor Lacey says.

"Some participants reported significant progress in overcoming their phobias after the trial period, with one feeling confident enough to now book an overseas family holiday, another lining up for a Covid vaccine and another reporting they now felt confident not only knowing there was a spider in the house but that they could possibly remove it themselves."

The app programme consisted of standard CBT components including psychoeducation, relaxation, mindfulness, cognitive techniques, exposure through VR, and a relapse prevention model. Participants were able to select their own exposure levels to their particular phobia from a large library of VR videos.

"This means the levels of exposure therapy could be tailored to an individual's needs which is a particular strength. The more traditional in-person exposure treatment for specific phobias have a notoriously high dropout rate due to discomfort, inconvenience and a lack of motivation in people seeking out fears to expose themselves to. With this VR app treatment, triallists had increased control in exposure to their fears, as well as control over when and where exposure occurs," says Associate Professor Lacey.

The researchers say this trial was novel, due to the cost-effective availability of the app and headsets and the fact that multiple phobias were tested at once. They say most comparative VR studies to date have investigated high-end VR devices which are only available in research and limited clinical settings. One Dutch study examined a low-cost, Dutch-language VR program using animated imagery that demonstrated improvement in fear-of-height symptoms; however, that study examined only a single type of specific phobia.

Associate Professor Lacey says public demand to take part in the trial was unprecedented, demonstrating the increasing need and desire for phobia treatment in the community.

"An estimated ten per cent of New Zealanders have been hesitant to take part in the government's COVID-19 vaccination programme due to needle phobia. This hasn't been helped by a significant shortage of psychologists. A petition to Parliament last year claimed New Zealand is 1,000 psychologists short, causing ballooning wait times nationwide, making it difficult for people to access help if needed. We need to further research and explore the use of more cost-effective, easily-accessible, home-based solutions such as this oVRcome app, to provide people with the treatment and support they need."

Read more at Science Daily

Link between air pollution and child brain development strengthened

Air pollution is not just a problem for lungs. Increasingly, research suggests air pollution can influence childhood behavioral problems and even IQ. A new study led by the University of Washington has added evidence showing that both prenatal and postnatal exposure to air pollution can harm kids.

The study, published in Environmental Health Perspectives, found that children whose mothers experienced higher nitrogen dioxide (NO2) exposure during pregnancy, particularly in the first and second trimester, were more likely to have behavioral problems.

Researchers also reported that higher exposures to small-particle air pollution (PM2.5) when children were 2 to 4 years old was associated with poorer child behavioral functioning and cognitive performance.

"Even in cities like Seattle or San Francisco, which have a lot of traffic but where the pollution levels are still relatively low, we found that children with higher prenatal NO2 exposure had more behavioral problems, especially with NO2exposure in the first and second trimester," said Yu Ni, lead author and a postdoctoral scholar in the Department of Environmental & Occupational Health Sciences.

The study involved data gathered from 1,967 mothers recruited during pregnancy from six cities: Memphis, Tennessee; Minneapolis; Rochester, N.Y.; San Francisco; and two in Washington, Seattle and Yakima. Originally, these participants were enrolled as part of three separate studies: CANDLE, GAPPS and TIDES. The three studies have been combined under a major NIH initiative called ECHO, which brings together multiple pregnancy cohorts to address key child health concerns. These three combined cohorts are known as the ECHO PATHWAYS consortium.

The study employed a state-of-the-art model of air pollution levels in the United States over time and space that was developed at the University of Washington. Using participant address information, the researchers were able to estimate each mother and child's exposures during the pregnancy period and early childhood.

Exposure to NO2 and PM2.5 pollution in early life is important to understand, Ni said, because "there are known biological mechanisms that can link a mother's inhalation of these pollutants to effects on placenta and fetal brain development."

Furthermore, once the child is born, the first few years are a critical time of ongoing brain development as the number of neural connections explodes and the brain reaches 90% of its future adult size, the researchers write. For young children, inhaled pollutants that invade deep in the lung and enter the central nervous system can cause damage in areas relevant for behavioral and cognitive function.

"This study reinforces the unique vulnerability of children to air pollution -- both in fetal life where major organ development and function occurs as well as into childhood when those processes continue. These early life perturbations can have lasting impacts on lifelong brain function. This study underscores the importance of air pollution as a preventable risk factor for healthy child neurodevelopment," said senior author Dr. Catherine Karr, a professor in the UW School of Public Health and School of Medicine.

More specifically, the researchers found that exposure to PM2.5 pollution was generally associated with more behavioral problems in girls than in boys, and that the adverse effect of PM2.5 exposure in the second trimester on IQ was stronger in boys.

"We hope the evidence from this study will contribute to informed policymaking in the future," Ni said. "In terms of reducing air pollution, the U.S. has gone a long way under the Clean Air Act, but there are threats to continued improvement in the nation's air quality. The evidence suggests there is reason to bring the level of air pollution down even further as we better understand the vulnerability of pregnant women and children."

Read more at Science Daily

Loss of male sex chromosome leads to earlier death for men

The loss of the male sex chromosome as many men age causes the heart muscle to scar and can lead to deadly heart failure, new research from the University of Virginia School of Medicine shows. The finding may help explain why men die, on average, several years younger than women.

UVA researcher Kenneth Walsh, PhD, says the new discovery suggests that men who suffer Y chromosome loss -- estimated to include 40% of 70-year-olds -- may particularly benefit from an existing drug that targets dangerous tissue scarring. The drug, he suspects, may help counteract the harmful effects of the chromosome loss -- effects that may manifest not just in the heart but in other parts of the body as well.

On average, women live five years longer than men in the United States. The new finding, Walsh estimates, may explain nearly four years of the five-year difference.

"Particularly past age 60, men die more rapidly than women. It's as if they biologically age more quickly," said Walsh, the director of UVA's Hematovascular Biology Center. "There are more than 160 million males in the United States alone. The years of life lost due to the survival disadvantage of maleness is staggering. This new research provides clues as to why men have shorter lifespans than women."

Chromosome Loss and Heart Health

While women have two X chromosomes, men have an X and a Y. But many men begin to lose their Y chromosome in a fraction of their cells as they age. This appears to be particularly true for smokers. The loss occurs predominantly in cells that undergo rapid turnover, such as blood cells. (Loss of the Y chromosome does not occur in male reproductive cells, so it is not inherited by the children of men who exhibit Y chromosome loss.) Scientists previously observed that men who suffer Y chromosome loss are more likely to die at a younger age and suffer age-associated maladies such as Alzheimer's disease. Walsh's new research, however, is believed to be the first hard evidence that the chromosome loss directly causes harmful effects on men's health.

Walsh, of UVA's Division of Cardiovascular Medicine and the Robert M. Berne Cardiovascular Research Center, and his team used cutting-edge CRISPR gene-editing technology to develop a special mouse model to better understand the effects of Y chromosome loss in the blood. They found that the loss accelerated age-related diseases, made the mice more prone to heart scarring and led to earlier death. This wasn't the result of just inflammation, the scientists determined. Instead, the mice suffered a complex series of responses in the immune system, leading to a process referred to as fibrosis throughout the body. This tug-of-war within the immune system, the researchers believe, may accelerate disease development.

The scientists also looked at the effects of Y chromosome loss in human men. They conducted three analyses of data compiled from the UK Biobank, a massive biomedical database, and found that Y chromosome loss was associated with cardiovascular disease and heart failure. As chromosome loss increased, the scientists found, so did the risk of death.

Potential Treatment

The findings suggest that targeting the effects of Y chromosome loss could help men live longer, healthier lives. Walsh notes that one potential treatment option might be a drug, pirfenidone, that has already been approved by the federal Food and Drug Administration for the treatment of idiopathic pulmonary fibrosis, a form of lung scarring. The drug is also being tested for the treatment of heart failure and chronic kidney disease, two conditions for which tissue scarring is a hallmark. Based on his research, Walsh believes that men with Y chromosome loss could respond particularly well to this drug, and other classes of antifibrotic drugs that are being developed, though more research will be needed to determine that.

At the moment, doctors have no easy way to determine which men suffer Y chromosome loss. Walsh's collaborator Lars A. Forsberg, of Uppsala University in Sweden, has developed an inexpensive polymerase chain reaction (PCR) test, like those used for COVID-19 testing, that can detect Y chromosome loss, but the test is largely confined to his and Walsh's labs. Walsh, however, can foresee that changing: "If interest in this continues and it's shown to have utility in terms of being prognostic for men's disease and can lead to personalized therapy, maybe this becomes a routine diagnostic test," he said.

"The DNA of all our cells inevitably accumulate mutations as we age. This includes the loss of the entire Y chromosome within a subset of cells within men. Understanding that the body is a mosaic of acquired mutations provides clues about age-related diseases and the aging process itself," said Walsh, a member of UVA's Department of Biochemistry and Molecular Genetics. "Studies that examine Y chromosome loss and other acquired mutations have great promise for the development of personalized medicines that are tailored to these specific mutations."

Read more at Science Daily

Jul 14, 2022

What a Martian meteorite can teach us about Earth's origins

What do Mars and Iceland have in common?

These days, not so much. But more than 4.5 billion years ago, it's possible the Red Planet had a crust comparable to Iceland today. This discovery, hidden in the oldest martian fragments found on Earth, could provide information about our planet that was lost over billions of years of geological movement and could help explain why the Earth developed into a planet that sustains a broad diversity of life and Mars did not.

These insights into Earth's past came out of a new study, published today in Nature Communications, by an international team that includes an NAU researcher. The study details how they found the likely martian origin of the 4.48-billion-year-old meteorite, informally named Black Beauty. Its origin is one of the oldest regions of Mars.

"This meteorite recorded the first stage of the evolution of Mars and, by extension, of all terrestrial planets, including the Earth," said Valerie Payré, a postdoctoral researcher in the Department of Astronomy and Planetary Science. "As the Earth lost its old surface mainly due to plate tectonics, observing such settings in extremely ancient terrains on Mars is a rare window into the ancient Earth surface that we lost a long time ago."

What Mars can tell us about Earth

The team, led by Anthony Lagain from Curtin University in Australia, searched for the location of origin of a martian meteorite (officially named NWA -- Northwest Africa -- 7034, for where it was found on Earth). This meteorite, whose chemistry indicates that Mars had volcanic activity similar to that found on Earth, recorded the first stage of Mars' evolution. Although it was ejected from the surface of Mars five to 10 million years ago after an asteroid impact, its source region and geological context have remained a mystery.

This team studied chemical and physical properties of Black Beauty to pinpoint where it came from; they determined it was from Terra Cimmeria-Sirenum, one of the most ancient regions of Mars. It may have a surface similar to Earth's continents. Planetary bodies like Mars have impact craters all over their surface, so finding the right one is challenging. In a previous study, Lagain's team developed a crater detection algorithm that uses high-resolution images of the surface of Mars to identify small impact craters, finding about 90 million as small as 50 meters in diameter. In this study, they were able to isolate the most plausible ejection site -- the Karratha crater, which excavated ejecta of an older crater named Khujirt.

"For the first time, we know the geological context of the only brecciated Martian sample available on Earth, 10 years before the NASA's Mars Sample Return mission is set to send back samples collected by the Perseverance rover currently exploring the Jezero crater," said Lagain, a research fellow in the School of Earth and Planetary Sciences at Curtin. "This research paved the way to locate the ejection site of other Martian meteorites, in order to create the most exhaustive view of the Red Planet's geological history."

Payré studies the nature and formation of Mars' crust to determine if Earth and Mars share a common past that includes both a continent-like and an ocean-like crust. She uses orbital observations captured in this region to investigate whether traces of volcanism similar to Iceland's exist on Mars.

"As of today, Mars' crust complexity is not understood, and knowing about the origin of these amazing ancient fragments could lead future rover and spatial missions to explore the Terra Sirenum-Cimmeria region that hides the truth of Mars' evolution, and perhaps the Earth's," she said. "This work paves the road to locate the ejection site of other martian meteorites that will provide the most exhaustive view of the geological history of Mars and will answer one of the most intriguing questions: why Mars, now dry and cold, evolved so differently from Earth, a flourishing planet for life?"

Read more at Science Daily

An ocean of galaxies awaits

Sometime around 400 million years after the birth of our universe, the first stars began to form. The universe's so-called dark ages came to an end and a new light-filled era began. More and more galaxies began to take shape and served as factories for churning out new stars, a process that reached a peak about 4 billion years after the Big Bang.

Luckily for astronomers, this bygone era can be observed. Distant light takes time to reach us, and our telescopes can pick up light emitted by galaxies and stars billions of years ago (our universe is 13.8 billion years old). But the details of this chapter in our universe's history are murky since most of the stars being formed are faint and hidden by dust.

A new Caltech project, called COMAP (CO Mapping Array Project), will offer us a new glimpse into this epoch of galaxy assembly, helping to answer questions about what really caused the universe's rapid increase in the production of stars.

"Most instruments might see the tip of an iceberg when looking at galaxies from this period," says Kieran Cleary, the project's principal investigator and the associate director of Caltech's Owens Valley Radio Observatory (OVRO). "But COMAP will see what lies underneath, hidden from view."

The current phase of the project uses a 10.4-meter "Leighton" radio dish at OVRO to study the most common kinds of star-forming galaxies spread across space and time, including those that are too difficult to view in other ways because they are too faint or hidden by dust. The radio observations trace the raw material from which stars are made: cold hydrogen gas. This gas is not easy to pinpoint directly, so instead COMAP measures bright radio signals from carbon monoxide (CO) gas, which is always present along with the hydrogen. COMAP's radio camera is the most powerful ever built to detect these radio signals.

The first science results from the project have just been published in seven papers in The Astrophysical Journal. Based on observations taken one year into a planned five-year survey, COMAP set upper limits on how much cold gas must be present in galaxies at the epoch being studied, including the ones that are normally too faint and dusty to see. While the project has not yet made a direct detection of the CO signal, these early results demonstrate that it is on track to do so by the end of the initial five-year survey and ultimately will paint the most comprehensive picture yet of the universe's history of star formation.

"Looking to the future of the project, we aim to use this technique to successively look further and further back in time," Cleary says. "Starting 4 billion years after the Big Bang, we will keep pushing back in time until we reach the epoch of the first stars and galaxies, a couple of billion years earlier."

Anthony Readhead, the co-principal investigator and the Robinson Professor of Astronomy, Emeritus, says that COMAP will see not only the first epoch of stars and galaxies, but also their epic decline. "We will observe star formation rising and falling like an ocean tide," he says.

COMAP works by capturing blurry radio images of clusters of galaxies over cosmic time rather than sharp images of individual galaxies. This blurriness enables the astronomers to efficiently catch all the radio light coming from a larger pool of galaxies, even the faintest and dustiest ones that have never been seen.

"In this way, we can find the average properties of typical, faint galaxies without needing to know very precisely where any individual galaxy is located," explains Cleary. "This is like finding the temperature of a large volume of water using a thermometer rather than analyzing the motions of the individual water molecules."

Read more at Science Daily

Researchers use quantum-inspired approach to increase lidar resolution

Researchers have shown that a quantum-inspired technique can be used to perform lidar imaging with a much higher depth resolution than is possible with conventional approaches. Lidar, which uses laser pulses to acquire 3D information about a scene or object, is usually best suited for imaging large objects such as topographical features or built structures due to its limited depth resolution.

"Although lidar can be used to image the overall shape of a person, it typically doesn't capture finer details such as facial features," said research team leader Ashley Lyons from the University of Glasgow in the United Kingdom. "By adding extra depth resolution, our approach could capture enough detail to not only see facial features but even someone's fingerprints."

In the Optica Publishing Group journal Optics Express, Lyons and first author Robbie Murray describe the new technique, which they call imaging two-photon interference lidar. They show that it can distinguish reflective surfaces less than 2 millimeters apart and create high-resolution 3D images with micron-scale resolution.

"This work could lead to much higher resolution 3D imaging than is possible now, which could be useful for facial recognition and tracking applications that involve small features," said Lyons. "For practical use, conventional lidar could be used to get a rough idea of where an object might be and then the object could be carefully measured with our method."

Using classically entangled light

The new technique uses "quantum inspired" interferometry, which extracts information from the way that two light beams interfere with each other. Entangled pairs of photons -- or quantum light -- are often used for this type of interferometry, but approaches based on photon entanglement tend to perform poorly in situations with high levels of light loss, which is almost always the case for lidar. To overcome this problem, the researchers applied what they've learned from quantum sensing to classical (non-quantum) light.

"With quantum entangled photons, only so many photon pairs per unit time can be generated before the setup becomes very technically demanding," said Lyons. "These problems don't exist with classical light, and it is possible to get around the high losses by turning up the laser power."

When two identical photons meet at a beam splitter at the same time they will always stick together, or become entangled, and leave in the same direction. Classical light shows the same behavior but to a lesser degree -- most of the time classical photons go in the same direction. The researchers used this property of classical light to very precisely time the arrival of one photon by looking at when two photons simultaneously arrive at detectors.

Enhancing depth resolution

"The time information gives us the ability to perform depth ranging by sending one of those photons out onto the 3D scene and then timing how long it takes for that photon to come back," said Lyons. "Thus, two-photon interference lidar works much like conventional lidar but allows us to more precisely time how long it takes for that photon to reach the detector, which directly translates into greater depth resolution."

The researchers demonstrated the high depth resolution of two-photon interference lidar by using it to detect the two reflective surfaces of a piece of glass about 2 millimeters thick. Traditional lidar wouldn't be able to distinguish these two surfaces, but the researchers were able to clearly measure the two surfaces. They also used the new method to create a detailed 3D map of a 20-pence coin with 7-micron depth resolution. This shows that the method could capture the level of detail necessary to differentiate key facial features or other differences between people.
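As a back-of-envelope illustration (not a calculation from the paper), the reported 7-micron depth resolution can be translated into the round-trip timing precision it implies, since lidar ranging converts travel time to depth and light must travel out and back:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def round_trip_time(depth_m: float) -> float:
    """Round-trip travel-time difference corresponding to a depth difference."""
    return 2.0 * depth_m / C

# 7-micron depth resolution reported for the 20-pence coin scan
dt = round_trip_time(7e-6)
print(f"Implied timing precision: {dt * 1e15:.0f} fs")  # ≈ 47 femtoseconds
```

Resolving depth differences at this scale corresponds to femtosecond-level timing, far finer than direct electronic timing of photon arrivals, which is why an interference-based measurement is needed.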

Two-photon interference lidar also works very well at the single-photon level, which could enhance more complex imaging approaches used for non-line-of-sight imaging or imaging through highly scattering media.

Read more at Science Daily

Study confirms lead-in-water causes adverse fetal health outcomes

Lehigh University and Bentley University health economics researchers have published the first study to confirm a causal relationship between lead-in-water and adverse fetal health outcomes. Although many studies have found a correlation between lead exposure and health, a causal link had been lacking in the literature -- until now.

The study has recently been published in the Journal of Health Economics in an article titled: Lead in Drinking Water and Birth Outcomes: A Tale of Two Water Treatment Plants.

The researchers, Muzhe Yang, professor of economics at Lehigh University, and Dhaval M. Dave of Bentley University, used data on the exact home addresses of pregnant women living in the City of Newark together with information on the spatial boundary separating areas within the city serviced by two water treatment plants. Their study exploits an exogenous, or external, change in the water's pH level that caused lead to leach into the drinking water of one plant's service area, but not the other's, to identify the causal effect of prenatal lead exposure on fetal health.

Yang and his colleague found robust evidence of adverse health impacts. Among the findings: prenatal lead exposure increased the chance of low-birth-weight by 18% and increased the probability of preterm birth by 19%.

"These findings have important policy implications," says Yang, "especially in light of the substantial number of lead water pipes that remain in use as part of the aging infrastructure and the cost-benefit calculus of lead abatement interventions."

Yang notes that the crisis in Newark is not singular, but rather emblematic of the nation's aging water infrastructure.

According to the American Academy of Pediatrics, no safe threshold for lead exposure has been identified for children. Lead collects over time in the human body through repeated exposure and is stored in the bones alongside calcium. In utero exposure is of particular concern, as lead in the mother's bones can be mobilized during pregnancy and released as a calcium substitute to aid in the formation of the bones of the fetus, and lead in a mother's blood can also cross the placenta, exposing the fetus to lead poisoning. Prenatal lead exposure has been associated with impaired neural development, putting children at risk of cognitive impairment later in life.

The Environmental Protection Agency (EPA) estimates that drinking water may account for more than 20 percent of total lead exposure for adults and 40 to 60 percent for infants.

In the introduction to their paper, Yang and Dave write: "Drinking water contamination is becoming an increasingly important and widespread source of prenatal exposure to environmental pollution. Between 2018 and 2020, nearly 30 million people received their drinking water from community water systems that were in violation of the EPA's Lead and Copper Rule, which sets maximum enforceable levels of these metals in drinking water…"

Read more at Science Daily

Jul 13, 2022

Astronomers detect a radio 'heartbeat' billions of light-years from Earth

Astronomers at MIT and elsewhere have detected a strange and persistent radio signal from a far-off galaxy that appears to be flashing with surprising regularity.

The signal is classified as a fast radio burst, or FRB -- an intensely strong burst of radio waves of unknown astrophysical origin that typically lasts a few milliseconds at most. However, this new signal persists for up to three seconds, about 1,000 times longer than the average FRB. Within this window, the team detected bursts of radio waves that repeat every 0.2 seconds in a clear periodic pattern, similar to a beating heart.
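The kind of strict periodicity described here can be illustrated with a toy calculation. The sketch below builds a synthetic 3-second burst train with a 0.2-second period and recovers that period from the signal's autocorrelation; the pulse shape, noise level and sampling rate are invented for illustration and are not CHIME data.

```python
import numpy as np

# Synthetic stand-in for a 3-second burst train (not real CHIME data):
# pulses every 0.2 s, sampled at 1 ms resolution, plus Gaussian noise.
dt = 0.001                       # sample spacing in seconds
t = np.arange(0, 3.0, dt)
rng = np.random.default_rng(0)
signal = rng.normal(0, 0.1, t.size)
for tp in np.arange(0.1, 3.0, 0.2):                   # strict 0.2 s period
    signal += np.exp(-0.5 * ((t - tp) / 0.005) ** 2)  # narrow Gaussian pulses

# Autocorrelation: a periodic train shows a strong peak at the period.
x = signal - signal.mean()
acf = np.correlate(x, x, mode="full")[x.size - 1:]
acf /= acf[0]

# The first strong peak beyond zero lag estimates the period.
min_lag = int(0.05 / dt)         # ignore lags shorter than 50 ms
peak_lag = min_lag + np.argmax(acf[min_lag:])
print(f"estimated period: {peak_lag * dt:.3f} s")
```

Real FRB searches work with dedispersed dynamic spectra rather than a clean time series, but the same idea -- a repeating pattern producing a peak at the period lag -- underlies the periodicity claim.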

The researchers have labeled the signal FRB 20191221A, and it is currently the longest-lasting FRB, with the clearest periodic pattern, detected to date.

The source of the signal lies in a distant galaxy, several billion light-years from Earth. Exactly what that source might be remains a mystery, though astronomers suspect the signal could emanate from either a radio pulsar or a magnetar, both of which are types of neutron stars -- extremely dense, rapidly spinning collapsed cores of giant stars.

"There are not many things in the universe that emit strictly periodic signals," says Daniele Michilli, a postdoc in MIT's Kavli Institute for Astrophysics and Space Research. "Examples that we know of in our own galaxy are radio pulsars and magnetars, which rotate and produce a beamed emission similar to a lighthouse. And we think this new signal could be a magnetar or pulsar on steroids."

The team hopes to detect more periodic signals from this source, which could then be used as an astrophysical clock. For instance, the frequency of the bursts, and how they change as the source moves away from Earth, could be used to measure the rate at which the universe is expanding.

The discovery is reported today in the journal Nature, and is authored by members of the CHIME/FRB Collaboration, including MIT co-authors Calvin Leung, Juan Mena-Parra, Kaitlyn Shin, and Kiyoshi Masui at MIT, along with Michilli, who led the discovery first as a researcher at McGill University, and then as a postdoc at MIT.

"Boom, boom, boom"

Since the first FRB was discovered in 2007, hundreds of similar radio flashes have been detected across the universe, most recently by the Canadian Hydrogen Intensity Mapping Experiment, or CHIME, an interferometric radio telescope consisting of four large parabolic reflectors that is located at the Dominion Radio Astrophysical Observatory in British Columbia, Canada.

CHIME continuously observes the sky as the Earth rotates, and is designed to pick up radio waves emitted by hydrogen in the very earliest stages of the universe. The telescope also happens to be sensitive to fast radio bursts, and since it began observing the sky in 2018, CHIME has detected hundreds of FRBs emanating from different parts of the sky.

The vast majority of FRBs observed to date are one-offs -- ultrabright bursts of radio waves that last for a few milliseconds before blinking off. Recently, researchers discovered the first periodic FRB that appeared to emit a regular pattern of radio waves. This signal consisted of a four-day window of random bursts that then repeated every 16 days. This 16-day cycle indicated a periodic pattern of activity, though the signal of the actual radio bursts was random rather than periodic.

On Dec. 21, 2019, CHIME picked up a signal of a potential FRB, which immediately drew the attention of Michilli, who was scanning the incoming data.

"It was unusual," he recalls. "Not only was it very long, lasting about three seconds, but there were periodic peaks that were remarkably precise, emitting every fraction of a second -- boom, boom, boom -- like a heartbeat. This is the first time the signal itself is periodic."

Brilliant bursts

In analyzing the pattern of FRB 20191221A's radio bursts, Michilli and his colleagues found similarities with emissions from radio pulsars and magnetars in our own galaxy. Radio pulsars are neutron stars that emit beams of radio waves, appearing to pulse as the star rotates, while a similar emission is produced by magnetars due to their extreme magnetic fields.

The main difference between the new signal and radio emissions from our own galactic pulsars and magnetars is that FRB 20191221A appears to be more than a million times brighter. Michilli says the luminous flashes may originate from a distant radio pulsar or magnetar that is normally far less bright and that, for some unknown reason, ejected a train of brilliant bursts during a rare three-second window that CHIME was luckily positioned to catch.

"CHIME has now detected many FRBs with different properties," Michilli says. "We've seen some that live inside clouds that are very turbulent, while others look like they're in clean environments. From the properties of this new signal, we can say that around this source, there's a cloud of plasma that must be extremely turbulent."

The astronomers hope to catch additional bursts from the periodic FRB 20191221A, which can help to refine their understanding of its source, and of neutron stars in general.

Read more at Science Daily

Undead planets: The unusual conditions of the first exoplanet detection

The first ever exoplanets were discovered 30 years ago around a rapidly rotating star, called a pulsar. Now, astronomers have revealed that these planets may be incredibly rare. The new work will be presented tomorrow (Tuesday 12 July) at the National Astronomy Meeting (NAM 2022) by Iuliana Nițu, a PhD student at the University of Manchester.

The processes that cause planets to form, and survive, around pulsars are currently unknown. A survey of 800 pulsars followed by the Jodrell Bank Observatory over the last 50 years has revealed that this first detected exoplanet system may be extraordinarily uncommon: less than 0.5% of all known pulsars could host Earth-mass planets.

Pulsars are a type of neutron star, the densest stars in the universe, born during powerful explosions at the end of a typical star's life. They are exceptionally stable, rapidly rotating, and have incredibly strong magnetic fields. Pulsars emit beams of bright radio emission from their magnetic poles that appear to pulse as the star rotates.

"[Pulsars] produce signals which sweep the Earth every time they rotate, similarly to a cosmic lighthouse," says Nițu. "These signals can then be picked up by radio telescopes and turned into a lot of amazing science."

In 1992, the first ever exoplanets were discovered orbiting a pulsar called PSR B1257+12. The planetary system is now known to host at least three planets similar in mass to the rocky planets in our Solar System. Since then, a handful of pulsars have been found to host planets. However, the extremely violent conditions surrounding the births and lives of pulsars make 'normal' planet formation unlikely, and many of these detected planets are exotic objects (such as planets made mostly of diamond) unlike those we know in our Solar System.

A team of astronomers at the University of Manchester performed the largest search for planets orbiting pulsars to date. In particular, the team looked for signals that indicate the presence of planetary companions with masses up to 100 times that of the Earth, and orbital periods between 20 days and 17 years. Of the 10 potential detections, the most promising is the system PSR J2007+3120, which may host at least two planets with masses a few times that of Earth and orbital periods of 1.9 and ~3.6 years.

The results of the work indicate no bias for particular planet masses or orbital periods in pulsar systems. However, the results do yield information about the shape of these planets' orbits: in contrast to the near-circular orbits found in our Solar System, these planets would orbit their stars on highly elliptical paths. This indicates that the formation process for pulsar-planet systems is vastly different from that of traditional star-planet systems.

Read more at Science Daily

Longer lasting sodium-ion batteries on the horizon

Cheap and abundant, sodium is a promising candidate for new battery technology. But the limited performance of sodium-ion batteries has hindered their large-scale application.

Now, a research team from the Department of Energy's Pacific Northwest National Laboratory has developed a sodium-ion battery with greatly extended longevity in laboratory tests. An ingenious shift in the ingredients that make up the liquid core of the battery prevents the performance issues that have bedeviled sodium-based batteries. The findings, described in the journal Nature Energy, provide a promising recipe for a battery that may one day power electric vehicles and store energy from the sun.

"Here, we have shown in principle that sodium-ion batteries have the potential to be a long-lasting and environmentally friendly battery technology," said PNNL lead author Jiguang (Jason) Zhang, a pioneer of battery technologies with more than 23 patented inventions in energy storage technology.

The right salt

In batteries, electrolyte is the circulating "blood" that keeps the energy flowing. The electrolyte forms by dissolving salts in solvents, resulting in charged ions that flow between the positive and negative electrodes. Over time, the electrochemical reactions that keep the energy flowing get sluggish, and the battery can no longer recharge. In current sodium-ion battery technologies, this process happens much faster than in similar lithium-ion batteries.

The PNNL team, led by scientists Yan Jin and Phung Le, attacked that problem by switching out the liquid solution and the type of salt flowing through it to create a wholly new electrolyte recipe. In laboratory tests, the new design proved durable, holding 90 percent of its cell capacity after 300 cycles at 4.2 V, which is higher than most sodium-ion batteries previously reported.
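As a rough worked example of what that retention figure implies: if capacity fade were a constant fraction per cycle (an idealization for illustration, not the paper's model), 90 percent capacity after 300 cycles corresponds to roughly 0.035 percent loss per cycle.

```python
# Idealized fade model: constant fractional capacity loss each cycle.
# Starting from the reported lab result of 90% retention after 300 cycles:
cycles = 300
final_fraction = 0.90
per_cycle_retention = final_fraction ** (1 / cycles)
loss_percent_per_cycle = (1 - per_cycle_retention) * 100
print(f"per-cycle retention: {per_cycle_retention:.5f}")
print(f"per-cycle loss: {loss_percent_per_cycle:.4f}%")
```

Real cells rarely fade geometrically -- early formation cycles and late-life degradation behave differently -- so this is only a way to put the headline number on a per-cycle scale.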

The current electrolyte recipe for sodium-ion batteries results in the protective film on the negative end (the anode) dissolving over time. This film is critical because it allows sodium ions to pass through while preserving battery life. The PNNL-designed technology works by stabilizing this protective film. The new electrolyte also generates an ultra-thin protective layer on the positive pole (the cathode) that contributes to additional stability of the entire unit.

Non-flammable technology

The new PNNL-developed sodium-ion technology uses a naturally fire-extinguishing solution that is also impervious to temperature changes and can operate at high voltages. One key to this feature is the ultra-thin protective layer that forms on the anode. This ultra-thin layer remains stable once formed, providing the long cycle life reported in the research article.

"We also measured the production of gas vapor at the cathode," said Phung Le, a PNNL battery chemist and one of the lead authors of the study. "We found very minimal gas production. This provides new insights to develop stable electrolyte for sodium-ion batteries that may operate at elevated temperatures."

For now, the sodium-ion technology still lags behind lithium in energy density. But it has its own advantages, such as imperviousness to temperature changes, stability and long cycle life, which are valuable for applications of certain light-duty electric vehicles and even grid energy storage in the future.

The research team continues to refine their design. Le noted that the team is experimenting with other designs in an effort to reduce -- and eventually eliminate -- the need to include cobalt, which is toxic and expensive if not recovered or recycled.

Read more at Science Daily

During sleep the brain's reaction to sound remains strong, but one critical feature of conscious attention disappears

A new discovery from Tel Aviv University may provide a key to a great scientific enigma: How does the awake brain transform sensory input into a conscious experience? The groundbreaking study relied on data collected from electrodes implanted, for medical purposes, deep in the human brain. The information was utilized to examine differences between the response of the cerebral cortex to sounds in sleep vs. wakefulness, at a resolution of single neurons.

The researchers were surprised to discover that the brain's response to sound remains powerful during sleep in all parameters but one: the level of alpha-beta waves associated with attention to the auditory input and related expectations. This means that during sleep, the brain analyzes the auditory input but is unable to focus on the sound or identify it, and therefore no conscious awareness ensues.

The study was led by Dr. Hanna Hayat, with major contributions from Dr. Amit Marmelshtein, at the lab of Prof. Yuval Nir from the School of Medicine, the Sagol School of Neuroscience, and the Department of Biomedical Engineering, and co-supervised by Prof. Itzhak Fried from the UCLA Medical Center. Other participants included: Dr. Aaron Krom and Dr. Yaniv Sela from Prof. Nir's group, and Dr. Ido Strauss and Dr. Firas Fahoum from the Tel Aviv Sourasky Medical Center (Ichilov). The paper was published in the journal Nature Neuroscience.

Prof. Nir: "This study is unique in that it builds upon rare data from electrodes implanted deep inside the human brain, enabling high-resolution monitoring, down to the level of individual neurons, of the brain's electrical activity. For understandable reasons, electrodes cannot be implanted in the brain of living humans just for the sake of scientific research. But in this study, we were able to utilize a special medical procedure in which electrodes were implanted in the brains of epilepsy patients, monitoring activity in different parts of their brain for purposes of diagnosis and treatment. The patients volunteered to help examine the brain's response to auditory stimulation in wakefulness vs. sleep."

The researchers placed speakers emitting various sounds at the patients' bedside and compared data from the implanted electrodes -- neural activity and electrical waves in different areas of the brain -- during wakefulness vs. various stages of sleep. Altogether, the team collected data from over 700 neurons, about 50 neurons in each patient, over the course of 8 years.

Dr. Hayat: "After sounds are received in the ear, the signals are relayed from one station to the next within the brain. Until recently it was believed that during sleep these signals decay rapidly once they reach the cerebral cortex. But looking at the data from the electrodes, we were surprised to discover that the brain's response during sleep was much stronger and richer than we had expected. Moreover, this powerful response spread to many regions of the cerebral cortex. The strength of brain response during sleep was similar to the response observed during wakefulness, in all but one specific feature, where a dramatic difference was recorded: the level of activity of alpha-beta waves."

The researchers explain that alpha-beta waves (10-30 Hz) are linked to processes of attention and expectation that are controlled by feedback from higher regions in the brain. As signals travel 'bottom-up' from the sensory organs to higher regions, a 'top-down' motion also occurs: the higher regions, relying on prior information that had accumulated in the brain, act as a guide, sending down signals to instruct the sensory regions as to which input to focus on, which should be ignored, etc. Thus, for example, when a certain sound is received in the ear, the higher regions can tell whether it is new or familiar, and whether it deserves attention or not. This kind of brain activity is manifested in the suppression of alpha-beta waves, and indeed, previous studies have shown a high level of these waves in states of rest and anesthesia. According to the current study, the strength of alpha-beta waves is the main difference between the brain's response to auditory inputs in states of wakefulness vs. sleep.
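As an illustration of what "activity in the 10-30 Hz band" means in practice, the sketch below estimates the fraction of signal power in that band for a synthetic, EEG-like trace. The signal content, sampling rate and noise level are invented for illustration; this is not the study's analysis pipeline.

```python
import numpy as np

# Synthetic EEG-like trace: a 20 Hz oscillation (inside the alpha-beta
# band) buried in broadband noise, sampled at 250 Hz for 4 seconds.
fs = 250
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(1)
trace = np.sin(2 * np.pi * 20 * t) + 0.5 * rng.normal(size=t.size)

# Power spectrum via the real FFT, then the share of power in 10-30 Hz.
spectrum = np.abs(np.fft.rfft(trace)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)
band = (freqs >= 10) & (freqs <= 30)
alpha_beta_power = spectrum[band].sum() / spectrum.sum()
print(f"fraction of power in 10-30 Hz: {alpha_beta_power:.2f}")
```

Comparing such band-power estimates between conditions (here, wakefulness vs. sleep) is the standard way a rise or suppression of a frequency band is quantified.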

Prof. Nir summarizes: "Our findings have wide implications beyond this specific experiment. First, they provide an important key to an ancient, fascinating enigma: What is the secret of consciousness? What is the 'X-factor', the brain activity that is unique to consciousness, allowing us to be aware of things happening around us when we are awake, and disappearing when we sleep? In this study we discovered a new lead, and in future research we intend to further explore the mechanisms responsible for this difference."

Read more at Science Daily

Jul 12, 2022

President Biden reveals first image from NASA's Webb Telescope

President Joe Biden released the first full-color image from NASA's James Webb Space Telescope Monday, during a public event at the White House in Washington. This first image showcases the powerful capabilities of the Webb mission, a partnership with ESA (European Space Agency) and CSA (Canadian Space Agency).

"These images are going to remind the world that America can do big things, and remind the American people -- especially our children -- that there's nothing beyond our capacity," said President Biden in remarks during the event. "We can see possibilities no one has ever seen before. We can go places no one has ever gone before."

Webb's first full-color image reveals thousands of galaxies, including the faintest objects ever observed in the infrared.

"Webb's First Deep Field is not only the first full-color image from the James Webb Space Telescope, it's the deepest and sharpest infrared image of the distant universe, so far. This image covers a patch of sky approximately the size of a grain of sand held at arm's length. It's just a tiny sliver of the vast universe," said NASA Administrator Bill Nelson. "This mission was made possible by human ingenuity -- the incredible NASA Webb team and our international partners at the European Space Agency and the Canadian Space Agency. Webb is just the start of what we can accomplish in the future when we work together for the benefit of humanity."

This record-setting deep field provides a preview of the full set of Webb's first images, which will be released at 10:30 a.m. EDT Tuesday, July 12, in a live broadcast on NASA Television. The images will be available at: https://www.nasa.gov/webbfirstimages

More information about how to watch the live reveal of the full set of Webb's first images on Tuesday, July 12, is available online: https://www.nasa.gov/press-release/nasa-updates-coverage-for-webb-telescope-s-first-images-reveal

"Scientists are thrilled that Webb is alive and as powerful as we hoped, far beyond Hubble, and that it survived all hazards to be our golden eye in the sky," said John Mather, Webb senior project scientist at NASA's Goddard Space Flight Center in Greenbelt, Maryland. "What happened after the big bang? How did the expanding universe cool down and make black holes and galaxies and stars and planets and people? Astronomers see everything twice: first with pictures, and then with imagination and calculation. But there's something out there that we've never imagined, and I will be as amazed as you are when we find it."

Learn more about this deep field image: https://www.nasa.gov/image-feature/goddard/2022/nasa-s-webb-delivers-deepest-infrared-image-of-universe-yet/

Read more at Science Daily

The ultimate fate of a star shredded by a black hole

In 2019, astronomers observed the nearest example to date of a star that was shredded, or "spaghettified," after approaching too close to a massive black hole.

That tidal disruption of a sun-like star by a black hole 1 million times more massive than the star took place 215 million light-years from Earth. Luckily, this was the first such event bright enough that astronomers from the University of California, Berkeley, could study the optical light from the stellar death, specifically the light's polarization, to learn more about what happened after the star was torn apart.

Their observations on Oct. 8, 2019, suggest that a lot of the star's material was blown away at high speed -- up to 10,000 kilometers per second -- and formed a spherical cloud of gas that blocked most of the high-energy emissions produced as the black hole gobbled up the remainder of the star.

Earlier, other observations of optical light from the blast, called AT2019qiz, revealed that much of the star's matter was launched outward in a powerful wind. But the new data on the light's polarization, which was essentially zero at visible or optical wavelengths when the event was at its brightest, tells astronomers that the cloud was likely spherically symmetric.

"This is the first time anyone has deduced the shape of the gas cloud around a tidally spaghettified star," said Alex Filippenko, UC Berkeley professor of astronomy and a member of the research team.

The results support one answer to why astronomers don't see high-energy radiation, such as X-rays, from many of the dozens of tidal disruption events observed to date: The X-rays, which are produced by material ripped from the star and dragged into an accretion disk around the black hole before falling inward, are obscured from view by the gas blown outward by powerful winds from the black hole.

"This observation rules out a class of solutions that have been proposed theoretically and gives us a stronger constraint on what happens to gas around a black hole," said UC Berkeley graduate student Kishore Patra, lead author of the study. "People have been seeing other evidence of wind coming out of these events, and I think this polarization study definitely makes that evidence stronger, in the sense that you wouldn't get a spherical geometry without having a sufficient amount of wind. The interesting fact here is that a significant fraction of the material in the star that is spiraling inward doesn't eventually fall into the black hole -- it's blown away from the black hole."

Polarization reveals symmetry

Many theorists have hypothesized that the stellar debris forms an eccentric, asymmetric disk after disruption, but an eccentric disk is expected to show a relatively high degree of polarization, which would mean that perhaps several percent of the total light is polarized. This was not observed for this tidal disruption event.

"One of the craziest things a supermassive black hole can do is to shred a star by its enormous tidal forces," said team member Wenbin Lu, UC Berkeley assistant professor of astronomy. "These stellar tidal disruption events are one of very few ways astronomers know the existence of supermassive black holes at the centers of galaxies and measure their properties. However, due to the extreme computational cost in numerically simulating such events, astronomers still do not understand the complicated processes after a tidal disruption."

A second set of observations on Nov. 6, 29 days after the October observation, revealed that the light was very slightly polarized, about 1%, suggesting that the cloud had thinned enough to reveal the asymmetric gas structure around the black hole. Both observations came from the 3-meter Shane telescope at Lick Observatory near San Jose, California, fitted with the Kast spectrograph, an instrument that can determine the polarization of light over the full optical spectrum. The light becomes polarized -- its electric field vibrates primarily in one direction -- when it scatters off electrons in the gas cloud.
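The degree of linear polarization quoted here is conventionally computed from the Stokes parameters I, Q and U as p = sqrt(Q^2 + U^2) / I. A minimal sketch, with made-up Stokes values chosen to reproduce the ~0% and ~1% levels reported for AT2019qiz:

```python
import math

def polarization_degree(i_flux, q, u):
    """Linear polarization fraction p = sqrt(Q^2 + U^2) / I."""
    return math.sqrt(q**2 + u**2) / i_flux

# October-like case: essentially unpolarized light (Q, U ~ 0).
print(polarization_degree(1.0, 0.0, 0.0))      # 0.0
# November-like case: ~1% polarization (illustrative Stokes values).
print(polarization_degree(1.0, 0.008, 0.006))
```

A spectropolarimeter like Kast measures Q and U as functions of wavelength, so the "zero polarization of the entire spectrum" mentioned below corresponds to p staying near zero at every optical wavelength.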

"The accretion disk itself is hot enough to emit most of its light in X-rays, but that light has to come through this cloud, and there are many scatterings, absorptions and reemissions of light before it can escape out of this cloud," Patra said. "With each of these processes, the light loses some of its photon energy, going all the way down to ultraviolet and optical energies. The final scatter then determines the polarization state of the photon. So, by measuring polarization, we can deduce the geometry of the surface where the final scatter happens."

Patra noted that this deathbed scenario may apply only to normal tidal disruptions -- not "oddballs," in which relativistic jets of material are expelled out the poles of the black hole. Only more measurements of the polarization of light from these events will answer that question.

"Polarization studies are very challenging, and very few people are well-versed enough in the technique around the world to utilize this," he said. "So, this is uncharted territory for tidal disruption events."

Patra, Filippenko, Lu and UC Berkeley researcher Thomas Brink, graduate student Sergiy Vasylyev and postdoctoral fellow Yi Yang reported their observations in a paper that has been accepted for publication in the journal Monthly Notices of the Royal Astronomical Society.

A cloud 100 times larger than Earth's orbit

The UC Berkeley researchers calculated that the polarized light was emitted from the surface of a spherical cloud with a radius of about 100 astronomical units (au), 100 times farther from the star than Earth is from the sun. An optical glow from hot gas emanated from a region at about 30 au.

The 2019 spectropolarimetric observations -- a technique that measures polarization across many wavelengths of light -- were of AT2019qiz, a tidal disruption located in a spiral galaxy in the constellation of Eridanus. The zero polarization of the entire spectrum in October indicates a spherically symmetric cloud of gas -- all the polarized photons balance one another. The slight polarization of the November measurements indicates a small asymmetry. Because these tidal disruptions occur so far away, in the centers of distant galaxies, they appear as only a point of light, and polarization is one of few indications of the shapes of objects.

"These disruption events are so far away that you can't really resolve them, so you can't study the geometry of the event or the structure of these explosions," Filippenko said. "But studying polarized light actually helps us to deduce some information about the distribution of the matter in that explosion or, in this case, how the gas -- and possibly the accretion disk -- around this black hole is shaped."

Read more at Science Daily

Video game players show enhanced brain activity and decision-making skills, study finds

Frequent players of video games show superior sensorimotor decision-making skills and enhanced activity in key regions of the brain as compared to non-players, according to a recent study by Georgia State University researchers.

The authors, who used functional magnetic resonance imaging (fMRI) in the study, said the findings suggest that video games could be a useful tool for training in perceptual decision-making.

"Video games are played by the overwhelming majority of our youth more than three hours every week, but the beneficial effects on decision-making abilities and the brain are not exactly known," said lead researcher Mukesh Dhamala, associate professor in Georgia State's Department of Physics and Astronomy and the university's Neuroscience Institute.

"Our work provides some answers on that," Dhamala said. "Video game playing can effectively be used for training -- for example, decision-making efficiency training and therapeutic interventions -- once the relevant brain networks are identified."

Dhamala was the adviser for Tim Jordan, the lead author of the paper, who offered a personal example of how such research could inform the use of video games for training the brain.

Jordan, who received a Ph.D. in physics and astronomy from Georgia State in 2021, had weak vision in one eye as a child. As part of a research study when he was about 5, he was asked to cover his good eye and play video games as a way to strengthen the vision in the weak one. Jordan credits video game training with helping him go from legally blind in one eye to building strong capacity for visual processing, allowing him to eventually play lacrosse and paintball. He is now a postdoctoral researcher at UCLA.

The Georgia State research project involved 47 college-age participants, with 28 categorized as regular video game players and 19 as non-players.

The subjects lay inside an fMRI machine with a mirror that allowed them to see a cue immediately followed by a display of moving dots. Participants were asked to press a button in their right or left hand to indicate the direction the dots were moving, or to resist pressing either button if there was no directional movement.

The study found that video game players were faster and more accurate with their responses.

Analysis of the resulting brain scans found that the differences were correlated with enhanced activity in certain parts of the brain.

"These results indicate that video game playing potentially enhances several of the subprocesses for sensation, perception and mapping to action to improve decision-making skills," the authors wrote. "These findings begin to illuminate how video game playing alters the brain in order to improve task performance and their potential implications for increasing task-specific activity."

The study also notes there was no trade-off between speed and accuracy of response -- the video game players were better on both measures.

Read more at Science Daily

How breastfeeding offers immune benefits

When infants breastfeed, they receive an immune boost that helps them fight off infectious diseases, according to recent research from Binghamton University Associate Professor of Anthropology Katherine Wander.

She is the lead author of "Tradeoffs in milk immunity affect infant infectious disease risk," published this June in Evolution, Medicine, and Public Health. Co-authors include Masako Fujita from Michigan State University's Anthropology Department; Siobhan Mattison from the University of New Mexico's Anthropology Department and the National Science Foundation; and Frida Mowo, Ireen Kiwelu and Blandina Mmbaga in Tanzania, whose affiliations include the Kilimanjaro Christian Medical Centre and the Kilimanjaro Clinical Research Institute. Binghamton University graduate students were also part of the research team, with tasks ranging from data collection in Tanzania to data-cleaning and analysis. They include Margaret Duris, Megan Gauck, Tessa Hopt, Katherine Lacy, Angela Foligno, Rebecca Ulloa and Connor Dodge.

For the project, the research team studied almost 100 mother and baby pairs in rural Kilimanjaro. Prolonged breastfeeding is the norm in this population and infectious diseases during infancy are very common, even compared to other areas of East Africa. This makes Kilimanjaro an ideal setting to begin to understand how immune protection from milk might affect infectious disease risk, Wander said.

"You most often hear about the immune system of milk in terms of transferring maternal antibodies to infants via milk -- which is probably very important -- but it seems there's much more going on as well. The immune system of milk is a whole system, capable of mounting immune responses," Wander said. "We're only beginning to understand the full extent and role of the immune system of milk."

Milk and immunity

Mother's milk contains everything needed to mount immune responses, from antibodies to multiple types of immune cells and more. While they originate from the mother's immune system, these components of milk appear to be curated rather than selected at random from the mother's blood, although that mechanism remains poorly understood, Wander explained.

To test the impact of milk's immune system on infant health, the researchers combined a few milliliters of milk with a small amount of bacteria, then placed the mixture in an incubator overnight. They then measured the increase of interleukin-6, an immune cell communication molecule that promotes inflammation. This in-vitro response gives an indication of how the milk's immune system is likely to respond to bacteria encountered in the infant's body -- the gut, for example.

The research team also followed the Tanzanian infants to assess whether those who received milk that mounted stronger immune responses during the in-vitro tests were at lower risk for infectious diseases. That appeared to be the case: infants whose mothers' milk mounted larger responses to Salmonella had fewer infectious diseases, particularly respiratory infections such as pneumonia.

But milk that mounted larger responses to Salmonella also tended to mount stronger responses to a benign strain of E. coli, which is common in the human intestinal tract, and these responses weren't beneficial to infants. Infants who received milk that mounted stronger responses to E. coli were at higher risk for gastrointestinal infections. This may indicate that inappropriate responses by milk's immune system -- for example, to bacteria normally present in the gut -- can be disruptive. Gut bacteria play an important role in preventing diarrhea and other infectious disease, the authors note.

While all immune responses involve tradeoffs, the fact that milk's immune responses carry a downside -- one both immediate and common -- was a surprising discovery.

"With so much at stake, we really expected the immune system of milk to be very finely tuned to protecting infants against infection," Wander said.

Researchers expected to see, at most, negative effects of inappropriate immune responses somewhere down the line, such as in slower growth or less than ideal microbial flora. But differentiating between microbial friend or foe is a tricky business even for adults' mature immune systems, as is eliminating an infection without damaging the person's own tissues. So, the authors say, maybe they shouldn't have been surprised to see these tradeoffs play out in infants, as well.

In addition to reducing the risk of respiratory infections, milk immune responses may help "train" the infant's developing immune system to respond to dangerous bacteria. More research is needed to determine how immune development calibrates to input, such as experience with infectious diseases, microbial flora and the immune system within milk.

"These findings are interesting, but the implications for public health and healthcare will only become clear with additional research," said co-author Mmbaga of the Kilimanjaro Clinical Research Institute. "We need to understand how milk immune responses are affected by things we can design public health programs around, like HIV infection or malnutrition."

This research may have applications that go beyond infancy and breastfeeding. Figuring out how the immune system has evolved to strike a balance between protection and harm could help shed light on health problems from infant diarrhea and pneumonia to autoimmune diseases.

Read more at Science Daily

Jul 11, 2022

Shedding new light on dark matter

A team of physicists has developed a method for predicting the composition of dark matter -- invisible matter detected only by its gravitational pull on ordinary matter and whose discovery has been long sought by scientists.

Its work, which appears in the journal Physical Review Letters, centers on predicting "cosmological signatures" for models of dark matter with a mass between that of the electron and the proton. Previous methods had predicted similar signatures for simpler models of dark matter; this research establishes new ways to find such signatures in the more complex models that experiments continue to search for, the paper's authors note.

"Experiments that search for dark matter are not the only way to learn more about this mysterious type of matter," says Cara Giovanetti, a Ph.D. student in New York University's Department of Physics and the lead author of the paper.

"Precision measurements of different parameters of the universe -- for example, the amount of helium in the universe, or the temperatures of different particles in the early universe -- can also teach us a lot about dark matter," adds Giovanetti, outlining the method described in the Physical Review Letters paper.

In the research, conducted with Hongwan Liu, an NYU postdoctoral fellow, Joshua Ruderman, an associate professor in NYU's Department of Physics, and Princeton physicist Mariangela Lisanti, Giovanetti and her co-authors focused on big bang nucleosynthesis (BBN) -- a process by which light forms of matter, such as helium, hydrogen, and lithium, are created. The presence of invisible dark matter affects how each of these elements will form. Also vital to these phenomena is the cosmic microwave background (CMB) -- the electromagnetic radiation, released when electrons and protons first combined into atoms, that has persisted since the early universe.

The team sought a means to spot the presence of a specific category of dark matter -- that with a mass between that of the electron and the proton -- by creating models that took into account both BBN and CMB.

"Such dark matter can modify the abundances of certain elements produced in the early universe and leave an imprint in the cosmic microwave background by modifying how quickly the universe expands," Giovanetti explains.

In its research, the team made predictions of cosmological signatures linked to the presence of certain forms of dark matter. These signatures are the result of dark matter changing the temperatures of different particles or altering how fast the universe expands.

Their results showed that dark matter that is too light would lead to abundances of light elements that differ from what astrophysical observations actually see.

"Lighter forms of dark matter might make the universe expand so fast that these elements don't have a chance to form," says Giovanetti, outlining one scenario.
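The expansion-rate argument can be illustrated with a back-of-envelope scaling. This is not the authors' analysis pipeline; it is a sketch assuming the standard radiation-era Friedmann relation, in which the expansion rate H scales with the square root of g*, the effective number of relativistic degrees of freedom. Adding a light dark-matter species raises g*, speeds up expansion, and leaves less time for nucleosynthesis:

```python
import math

# Standard Model effective relativistic degrees of freedom around
# BBN temperatures (~1 MeV); a commonly quoted textbook value.
G_STAR_SM = 10.75

def expansion_speedup(delta_g):
    """Factor by which H increases when delta_g extra relativistic
    degrees of freedom are added, using H proportional to sqrt(g*)."""
    return math.sqrt((G_STAR_SM + delta_g) / G_STAR_SM)

# One extra neutrino-like species contributes 7/8 * 2 = 1.75
# degrees of freedom, speeding up expansion by a few per cent.
print(expansion_speedup(1.75))
```

Even a few per cent change in the expansion rate at BBN shifts the predicted helium abundance, which is why precision element-abundance measurements constrain light dark matter.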

Read more at Science Daily

Nanoparticles can save historic buildings

Many historical buildings were built of limestone, such as Vienna's St. Stephen's Cathedral. Limestone is easy to work with, but does not withstand weathering well. It consists mainly of calcite minerals that are relatively weakly bound to each other, which is why parts of the stone keep crumbling away over the years, often requiring costly restoration and conservation treatments.

However, it is possible to increase the resistance of the stone by treating it with special silicate nanoparticles. The method is already being used, but what exactly happens in the process and which nanoparticles are best suited for this purpose has been unclear until now. A research team from TU Wien and the University of Oslo has now been able to clarify exactly how this artificial hardening process takes place through elaborate experiments at the DESY synchrotron in Hamburg and with microscopic examinations in Vienna. That way, the team could determine which nanoparticles are best suited for this purpose.

An aqueous suspension with nanoparticles

"We use a suspension, a liquid, in which the nanoparticles initially float around freely," says Prof. Markus Valtiner from the Institute of Applied Physics at TU Wien. "When this suspension gets into the rock, then the aqueous part evaporates, the nanoparticles form stable bridges between the minerals and give the rock additional stability."

This method is already used in restoration technology, but until now it was not known exactly what physical processes take place. When the water evaporates, a very special kind of crystallisation occurs: Normally, a crystal is a regular arrangement of individual atoms. However, not only atoms, but also entire nanoparticles can arrange themselves in a regular structure -- this is then referred to as a "colloidal crystal."

The silicate nanoparticles come together to form such colloidal crystals when they dry in the rock and thus jointly create new connections between the individual mineral surfaces. This increases the strength of the natural stone.

Measurements at the large-scale research facility DESY and in Vienna

To observe this crystallisation process in detail, the TU Wien research team used the DESY synchrotron facility in Hamburg. Extremely strong X-rays can be generated there, which can be used to analyse the crystallisation during the drying process.

"This was very important to understand exactly what the strength of the bonds that form depends on," says Joanna Dziadkowiec (University of Oslo and TU Wien), the first author of the publication in which the research results have now been presented. "We used nanoparticles of different sizes and concentrations and studied the crystallisation process with X-ray analyses." It was shown that the size of the particles is decisive for optimal strength gain.

In Vienna, the TU Wien team also measured the adhesive force created by the colloidal crystals. For this purpose, a special interference microscope was used, which is perfectly suited to measuring tiny forces between two surfaces.

Read more at Science Daily

Long term high-fat diet expands waistline and shrinks brain

New research shows that fatty foods may not only be adding to your waistline but also playing havoc with your brain.

An international study led by UniSA neuroscientists Professor Xin-Fu Zhou and Associate Professor Larisa Bobrovskaya has established a clear link between a 30-week high-fat diet in mice, which resulted in diabetes, and a subsequent deterioration in their cognitive abilities, including anxiety, depression and worsening Alzheimer's disease.

Mice with impaired cognitive function were also more likely to gain excessive weight due to poor metabolism caused by brain changes.

Researchers from Australia and China have published their findings in Metabolic Brain Disease.

UniSA neuroscientist and biochemist Associate Professor Larisa Bobrovskaya says the research adds to the growing body of evidence linking chronic obesity and diabetes with Alzheimer's disease, predicted to reach 100 million cases by 2050.

"Obesity and diabetes impair the central nervous system, exacerbating psychiatric disorders and cognitive decline. We demonstrated this in our study with mice," Assoc Prof Bobrovskaya says.

In the study, mice were randomly allocated to a standard diet or a high-fat diet for 30 weeks, starting at eight weeks of age. Food intake, body weight and glucose levels were monitored at regular intervals, and the mice underwent glucose and insulin tolerance tests as well as assessments of cognitive dysfunction.

The mice on the high-fat diet gained a lot of weight, developed insulin resistance and started behaving abnormally compared to those fed a standard diet.

Genetically modified Alzheimer's disease mice showed a significant deterioration of cognition and pathological changes in the brain while fed the high fat diet.

"Obese individuals have about a 55 per cent increased risk of developing depression, and diabetes will double that risk," Assoc Prof Bobrovskaya says.

Read more at Science Daily

Adding salt to your food at the table is linked to higher risk of premature death

People who add extra salt to their food at the table are at higher risk of dying prematurely from any cause, according to a study of more than 500,000 people, published in the European Heart Journal today (Monday).

Compared to those who never or rarely added salt, those who always added salt to their food had a 28% increased risk of dying prematurely. In the general population about three in every hundred people aged between 40 and 69 die prematurely. The increased risk from always adding salt to food seen in the current study suggests that one more person in every hundred may die prematurely in this age group.
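The jump from the 28% relative increase to "one more person in every hundred" is simple arithmetic, and can be checked directly. The inputs below are the figures quoted above (a 3-in-100 baseline risk for ages 40 to 69 and a 28% relative increase); the rounding to "one more per hundred" is the article's:

```python
# Back-of-envelope check of the absolute risk figures quoted above.
baseline_risk = 3 / 100       # premature-death risk, never/rarely add salt
relative_increase = 0.28      # 28% higher risk when always adding salt

elevated_risk = baseline_risk * (1 + relative_increase)
extra_per_hundred = (elevated_risk - baseline_risk) * 100

print(f"risk if always adding salt: {elevated_risk:.4f}")    # 0.0384
print(f"extra deaths per hundred:   {extra_per_hundred:.2f}")  # 0.84
```

The absolute increase works out to roughly 0.8 extra deaths per hundred people, which the study rounds up to "one more person in every hundred."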

In addition, the study found a lower life expectancy among people who always added salt compared to those who never, or rarely added salt. At the age of 50, 1.5 years and 2.28 years were knocked off the life expectancy of women and men, respectively, who always added salt to their food compared to those who never, or rarely, did.

The researchers, led by Professor Lu Qi, of Tulane University School of Public Health and Tropical Medicine, New Orleans, USA, say their findings have several public health implications.

"To my knowledge, our study is the first to assess the relation between adding salt to foods and premature death," he said. "It provides novel evidence to support recommendations to modify eating behaviours for improving health. Even a modest reduction in sodium intake, by adding less or no salt to food at the table, is likely to result in substantial health benefits, especially when it is achieved in the general population."

Assessing overall sodium intake is notoriously difficult as many foods, particularly pre-prepared and processed foods, have high levels of salt added before they even reach the table. Studies assessing salt intake by means of urine tests often only take one urine test and so do not necessarily reflect usual behaviour. In addition, foods that are high in salt are often accompanied by foods rich in potassium, such as fruit and vegetables, which is good for us. Potassium is known to protect against the risk of heart diseases and metabolic diseases such as diabetes, whereas sodium increases the risk of conditions such as cancer, high blood pressure and stroke.

For these reasons, the researchers chose to look at whether or not people added salt to their foods at the table, independent of any salt added during cooking.

"Adding salt to foods at the table is a common eating behaviour that is directly related to an individual's long-term preference for salty-tasting foods and habitual salt intake," said Prof. Qi. "In the Western diet, adding salt at the table accounts for 6-20% of total salt intake and provides a unique way to evaluate the association between habitual sodium intake and the risk of death."

The researchers analysed data from 501,379 people taking part in the UK Biobank study. When joining the study between 2006 and 2010, the participants were asked, via a touch-screen questionnaire, whether they added salt to their foods (i) never/rarely, (ii) sometimes, (iii) usually, (iv) always, or (v) prefer not to answer. Those who preferred not to answer were not included in the analysis. The researchers adjusted their analyses to take account of factors that could affect outcomes, such as age, sex, race, deprivation, body mass index (BMI), smoking, alcohol intake, physical activity, diet and medical conditions such as diabetes, cancer and heart and blood vessel diseases. They followed the participants for a median of nine years. Premature death was defined as death before the age of 75 years.

As well as finding that always adding salt to foods was linked to a higher risk of premature death from all causes and a reduction in life expectancy, the researchers found that these risks tended to be reduced slightly in people who consumed the highest amounts of fruit and vegetables, although these results were not statistically significant.

"We were not surprised by this finding as fruits and vegetables are major sources of potassium, which has protective effects and is associated with a lower risk of premature death," said Prof. Qi.

He added: "Because our study is the first to report a relation between adding salt to foods and mortality, further studies are needed to validate the findings before making recommendations."

In an editorial to accompany the paper, Professor Annika Rosengren, a senior researcher and professor of medicine at the Sahlgrenska Academy, University of Gothenburg, Sweden, who was not involved with the research, writes that the net effect of a drastic reduction in salt intake for individuals remains controversial.

"Given the various indications that a very low intake of sodium may not be beneficial, or even harmful, it is important to distinguish between recommendations on an individual basis and actions on a population level," she writes.

She concludes: "Classic epidemiology argues that a greater net benefit is achieved by the population-wide approach (achieving a small effect in many people) than from targeting high-risk individuals (a large effect but only achieved in a small number of people). The obvious and evidence-based strategy with respect to preventing cardiovascular disease in individuals is early detection and treatment of hypertension, including lifestyle modifications, while salt-reduction strategies at the societal level will lower population mean blood pressure levels, resulting in fewer people developing hypertension, needing treatment, and becoming sick. Not adding extra salt to food is unlikely to be harmful and could contribute to strategies to lower population blood pressure levels."

A strength of Prof. Qi's study is the large number of people included. It also has some limitations, which include: the possibility that adding salt to food is an indication of an unhealthy lifestyle and lower socio-economic status, although analyses attempted to adjust for this; there was no information on the quantity of salt added; adding salt may be related to total energy intake and intertwined with intake of other foods; participation in UK Biobank is voluntary and therefore the results are not representative of the general population, so further studies are needed to confirm the findings in other populations.

Read more at Science Daily