Mar 11, 2023

Underused satellite, radar data may improve thunderstorm forecasts

Tens of thousands of thunderstorms may rumble around the world each day, but accurately predicting the time and location where they will form remains a grand challenge of computer weather modeling. A new technique combining underused satellite and radar data in weather models may improve these predictions, according to a Penn State-led team of scientists.

"Thunderstorms are so ubiquitous it's hard to count how many you get in Pennsylvania, or the United States or globally every day," said Keenan Eure, doctoral student in the Department of Meteorology and Atmospheric Science at Penn State. "A lot of our challenges, even today, are figuring out how to correctly predict the time and location of the initiation of thunderstorms."

The scientists found that by combining data from the geostationary weather satellite GOES-16 and ground-based Doppler radar they could capture a more accurate picture of initial conditions in the boundary layer, the lowest part of the atmosphere, where storms form.

"There's value in improving thunderstorm predictions from both Doppler radar observations and satellite observations that are currently underused and we showed that not only can they be used to improve predictions but putting them together has lots of benefits," said Eure, lead author on the study. "The sum is greater than the individual parts."

The technique showed promise in improving forecasts of convection initiation, the conditions that spawn storms, several hours before the thunderstorms occurred in a case study from May 2018 in the Texas panhandle. The scientists reported their findings in the journal Monthly Weather Review.

"Keenan focused on using satellite observations to better define the environment in which the storms would later form, and on using radar observations to improve the low-level wind fields that eventually helped to create the storms," said David Stensrud, professor of meteorology at Penn State and Eure's advisor and co-author on the study. "This observation combination had not been studied previously and ended up adding significant value to the model forecasts on this day."

The scientists used data assimilation, a statistical method that paints the most accurate possible picture of current weather conditions in the weather model. This matters because even small errors in the initial state of the atmosphere can grow into large forecast discrepancies over time.
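
To make the idea concrete, here is a hedged Python sketch of one common data assimilation update, the ensemble Kalman filter: the forecast ensemble's own spread supplies the error statistics that decide how strongly each observation corrects the model state. The method choice, variable names, and numbers are illustrative assumptions, not details of the system used in the study.

```python
# A minimal sketch of an ensemble Kalman filter analysis step, one common
# statistical method for data assimilation. Purely illustrative: the names,
# numbers, and the choice of method are assumptions, not details of the
# Penn State system.
import numpy as np

def enkf_update(ensemble, obs, obs_error_var, H):
    """Blend a forecast ensemble (n_state x n_members) with observations."""
    x_mean = ensemble.mean(axis=1, keepdims=True)
    X = ensemble - x_mean                         # ensemble perturbations
    n_members = ensemble.shape[1]
    B = X @ X.T / (n_members - 1)                 # background error covariance
    R = obs_error_var * np.eye(len(obs))          # observation error covariance
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)  # Kalman gain
    innovations = obs[:, None] - H @ ensemble     # observation minus forecast
    return ensemble + K @ innovations             # analysis ensemble

# Toy usage: 3 state variables, 10 members, one observation of variable 0.
rng = np.random.default_rng(0)
ens = rng.normal(280.0, 1.0, size=(3, 10))        # e.g., temperatures in Kelvin
H = np.array([[1.0, 0.0, 0.0]])                   # observation operator
analysis = enkf_update(ens, np.array([281.2]), 0.25, H)
```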

Understanding conditions in the boundary layer is particularly important because it strongly influences the ingredients for convection -- near-surface moisture, lift and instability -- a process that causes warm air near the Earth's surface to rise and form clouds.

"We obviously can't model every molecule in the atmosphere, but we want to get as close as possible," Eure said. We really believe this work adds a lot of valuable information that models currently don't have and that we can help the depiction of the lowest part of the atmosphere."

The team assimilated satellite and radar data separately and simultaneously and found the best results came from combining infrared brightness temperature observations from the satellite and radial wind velocity and boundary height observations from the radar.

The work uses all-sky satellite data assimilation, developed by Penn State's Center for Advanced Data Assimilation and Predictability Techniques, that assimilates satellite data from all weather conditions, including cloudy and clear skies. Forecasting previously relied on clear-sky observations, due to challenges in diagnosing the complex physical processes within clouds, the scientists said.

"While more cases need to be explored, these observations are currently available and could be used to improve thunderstorm prediction over the coming decade as NOAA continues to advance its Warn-on-Forecast paradigm in which computer model predictions help to make severe weather warnings more accurate and timely," Stensrud said.

Other Penn State researchers on the project were Matthew Kumjian and Steven Greybush, associate professors; Yunji Zhang, assistant professor; and Paul Mykolajtchuk, former graduate student, all in the Department of Meteorology and Atmospheric Science.

Read more at Science Daily

Surprising similarities in stone tools of early humans and monkeys

Researchers from the Max Planck Institute for Evolutionary Anthropology have discovered artefacts produced by Old World monkeys in Thailand that resemble the stone tools which historically have been identified as intentionally made by early hominins. Until now, sharp-edged stone tools were thought to represent the onset of intentional stone tool production, one of the defining and unique characteristics of hominin evolution. This new study challenges long-held beliefs about the origins of intentional tool production in our own lineage.

The research is based on new analyses of stone tools used by long-tailed macaques in the Phang Nga National Park in Thailand. These monkeys use stone tools to crack open hard-shelled nuts. In that process, the monkeys often break their hammerstones and anvils. The resulting assemblage of broken stones is substantial and widespread across the landscape. Moreover, many of these artefacts bear all of the same characteristics that are commonly used to identify intentionally made stone tools in some of the earliest archaeological sites in East Africa.

"The ability to intentionally make sharp stone flakes is seen as a crucial point in the evolution of hominins, and understanding how and when this occurred is a huge question that is typically investigated through the study of past artefacts and fossils. Our study shows that stone tool production is not unique to humans and our ancestors," says lead author Tomos Proffitt, a researcher at the Max Planck Institute for Evolutionary Anthropology. "The fact that these macaques use stone tools to process nuts is not surprising, as they also use tools to gain access to various shellfish as well. What is interesting is that, in doing so they accidently produce a substantial archaeological record of their own that is partly indistinguishable from some hominin artefacts."

New insights into the evolution of stone tool technology

By comparing the accidentally produced stone fragments made by the macaques with those from some of the earliest archaeological sites, the researchers were able to show that many of the artefacts produced by monkeys fall within the range of those commonly associated with early hominins. Co-lead author Jonathan Reeves highlights: "The fact that these artifacts can be produced through nut cracking has implications for the range of behaviours we associate with sharp-edged flakes in the archaeological record."

The newly discovered macaque stone tools offer new insights into how the first technology might have emerged in our earliest ancestors, suggesting that its origin may have been linked to nut cracking behaviour substantially older than the current earliest archaeological record. "Cracking nuts using stone hammers and anvils, similar to what some primates do today, has been suggested by some as a possible precursor to intentional stone tool production. This study, along with previous ones published by our group, opens the door to being able to identify such an archaeological signature in the future," says Lydia Luncz, senior author of the study and head of the Technological Primates Research Group at the Max Planck Institute for Evolutionary Anthropology. "This discovery shows how living primates can help researchers investigate the origin and evolution of tool use in our own lineage."

Read more at Science Daily

Mar 10, 2023

Scientists call for global push to eliminate space junk

Scientists have called for a legally-binding treaty to ensure Earth's orbit isn't irreparably harmed by the future expansion of the global space industry.

In the week that nearly 200 countries agreed to a treaty to protect the High Seas after a 20-year process, the experts believe society needs to take the lessons learned from one part of our planet to another.

The number of satellites in orbit is expected to increase from 9,000 today to over 60,000 by 2030, with estimates suggesting there are already more than 100 trillion untracked pieces of old satellites circling the planet.

While such technology is used to provide a huge range of social and environmental benefits, there are fears the predicted growth of the industry could make large parts of Earth's orbit unusable.

Writing in the journal Science, an international collaboration of experts in fields including satellite technology and ocean plastic pollution say this demonstrates the urgent need for global consensus on how best to govern Earth's orbit.

They acknowledge that a number of industries and countries are starting to focus on satellite sustainability, but say such measures should be enforced so that they include any nation with plans to use Earth's orbit.

Any agreement, they add, should include measures to implement producer and user responsibility for satellites and debris, from the time they launch onwards. Commercial costs should also be considered when looking at ways to incentivise accountability. Such considerations are consistent with current proposals to address ocean plastic pollution as countries begin negotiations for the Global Plastics Treaty.

The experts also believe that unless action is taken immediately, large parts of our planet's immediate surroundings risk the same fate as the High Seas, where insubstantial governance has led to overfishing, habitat destruction, deep-sea mining exploration, and plastic pollution.

The article was co-authored by researchers from the University of Plymouth, Arribada Initiative, The University of Texas at Austin, California Institute of Technology, NASA Jet Propulsion Laboratory, Spaceport Cornwall, and ZSL (Zoological Society of London).

They include the academic who led the first ever study into marine microplastics, also published in Science almost 20 years ago, and scientists who contributed to the commitment to develop a Global Plastics Treaty signed by 170 world leaders at the United Nations Environment Assembly in March 2022.

Dr Imogen Napper, Research Fellow at the University of Plymouth, led the newly-published study with funding from the National Geographic Society. She said: "The issue of plastic pollution, and many of the other challenges facing our ocean, is now attracting global attention. However, there has been limited collaborative action and implementation has been slow. Now we are in a similar situation with the accumulation of space debris. Taking into consideration what we have learnt from the high seas, we can avoid making the same mistakes and work collectively to prevent a tragedy of the commons in space. Without a global agreement we could find ourselves on a similar path."

Heather Koldewey, ZSL's Senior Marine Technical Advisor, said: "To tackle planetary problems, we need to bring together scientists from across disciplines to identify and accelerate solutions. As a marine biologist I never imagined writing a paper on space, but through this collaborative research identified so many parallels with the challenges of tackling environmental issues in the ocean. We just need to get better at the uptake of science into management and policy."

Dr Moriba Jah, Associate Professor of Aerospace Engineering and Engineering Mechanics at The University of Texas at Austin, said: "Ancient TEK (traditional ecological knowledge) informs us how we must embrace stewardship because our lives depend on it. I'm excited to work with others in highlighting the links and interconnectedness amongst all things and that marine debris and space debris are both an anthropogenic detriment that is avoidable."

Dr Kimberley Miner, Scientist at the NASA Jet Propulsion Laboratory, said: "Mirroring the new UN ocean initiative, minimizing the pollution of the lower Earth orbit will allow continued space exploration, satellite continuity, and the growth of life-changing space technology."

Melissa Quinn, Head of Spaceport Cornwall, said: "Satellites are vital to the health of our people, economies, security and Earth itself. However, using space to benefit people and planet is at risk. By comparing how we have treated our seas, we can be proactive before we damage the use of space for future generations. Humanity needs to take responsibility for our behaviours in space now, not later. I encourage all leaders to take note, to recognise the significance of this next step and to become jointly accountable."

Read more at Science Daily

Diverse approach key to carbon removal

Diversification reduces risk. That's the spirit of one key takeaway from a new study led by scientists at the Department of Energy's Pacific Northwest National Laboratory. The effective path to limiting global warming to 1.5 degrees Celsius by the end of this century likely requires a mix of technologies that can pull carbon dioxide from Earth's atmosphere and oceans.

Overreliance on any one carbon removal method may bring undue risk, the authors caution. And we'll likely need them all to remove the necessary amount of carbon dioxide -- 10 gigatons annually -- to hold warming to 1.5 degrees by 2100.

The new work, published today in the journal Nature Climate Change, outlines the carbon-removing potential of six different methods. They range from restoring deforested lands to spreading crushed rock across landscapes, a method known as enhanced weathering.

This study marks the first attempt to incorporate all carbon dioxide removal approaches recognized in U.S. legislation into a single integrated model that projects how their interactions could measure up on a global scale. It does so while demonstrating how those methods could influence factors like water use, energy demand or available crop land.

The authors explore the potential of these carbon removal methods by modeling decarbonization scenarios: hypothetical futures that demonstrate what kind of interactions could crop up if the technologies were deployed under varying conditions. They explore pathways, for example, where no climate policy is applied (and warming rises to 3.5 degrees as a result).

A second pathway demonstrates what amount of carbon would need to be removed using the technologies under an ambitious policy in which carbon emissions are constrained to decline to net-zero by mid-century and net-negative by late-century to limit end-of-century warming to below 1.5 degrees.

The third scenario follows the same emissions pathway but is paired with behavioral and technological changes, like low material consumption and rapid electrification. In this scenario, these societal changes translate to fewer overall emissions released, which helps reduce the amount of residual greenhouse gas emissions that would need to be offset with carbon removal to meet the 1.5-degree goal.

To meet that target -- the original goal of the Paris Agreement -- the authors find that roughly 10 gigatons of carbon dioxide must be removed per year. That amount remains the same even if countries were to strengthen efforts to reduce carbon dioxide emissions from all sources.

"Bringing us back down to 1.5 degrees by the end of the century will require a balanced approach," said lead author PNNL scientist Jay Fuhrman, whose work stems from the Joint Global Change Research Institute. "If one of these technologies fails to materialize or scale up, we don't want too many eggs in that basket. If we use a globally diverse portfolio of carbon removal strategies, we can mitigate risk while mitigating emissions."

Some of the technologies stand to contribute a great deal, with the potential to remove several gigatons of carbon dioxide per year. Others offer less, yet still stand to play an important role. Enhanced weathering, for example, could remove up to four gigatons of carbon dioxide annually by mid-century.

Under this method, finely ground rock spread over cropland converts carbon dioxide in the atmosphere into carbonate minerals on the ground. It is among the most cost-effective methods identified in the study.
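
The underlying chemistry can be summarized with a textbook weathering reaction, written here for the calcium silicate wollastonite as one illustrative example; the study treats enhanced weathering generically rather than for this specific mineral.

```latex
% Textbook silicate weathering, drawing down CO2 as dissolved bicarbonate:
\mathrm{CaSiO_3} + 2\,\mathrm{CO_2} + \mathrm{H_2O} \rightarrow \mathrm{Ca^{2+}} + 2\,\mathrm{HCO_3^{-}} + \mathrm{SiO_2}
```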

In comparison, direct ocean capture with carbon storage, where carbon dioxide is stripped from seawater and stored in Earth's subsurface, would likely remove much less carbon. On its own, the nascent technology is prohibitively expensive, according to the authors. Pairing this method with desalination plants in regions where demand for desalinated water is high, however, could drive down the cost while delivering more meaningful carbon reductions.

In addition to the removal methods mentioned above, the technologies under study include biochar, direct air capture with carbon storage, and bioenergy paired with carbon capture and storage.

Each of the technologies modeled brings unique advantages, costs and consequences. Many of those factors are tied to specific regions. The authors point to Sub-Saharan Africa as an example: there, biochar, enhanced weathering and bioenergy with carbon capture and storage stand to contribute significant reductions.

Yet the authors find much work is needed to address greenhouse gases other than carbon dioxide, like methane and nitrous oxide. Many of these non-CO2 gases are several times more potent than carbon dioxide and simultaneously more difficult to target.

While some of the removal methods examined within the new paper are well-studied, their interactions with other, newer methods are less clearly understood. The work originates from the Joint Global Change Research Institute, a partnership between PNNL and the University of Maryland where researchers explore interactions between human, energy and environmental systems.

Their work focuses on projecting what tradeoffs may flow from a range of possible decarbonization scenarios. The authors seek to better understand how these methods interact so that policymakers may be informed in their efforts to decarbonize.

"This study underscores the need for continued research on carbon dioxide removal approaches and their potential impacts," said corresponding author and PNNL scientist Haewon McJeon. "While each approach has its own unique benefits and costs, a diverse portfolio of carbon dioxide removal approaches is essential for effectively addressing climate change. By better understanding the potential impacts of each approach, we can develop a more comprehensive and effective strategy for reducing greenhouse gas emissions and limiting global warming."

Read more at Science Daily

What 'Chornobyl dogs' can tell us about survival in contaminated environments

In the first step toward understanding how dogs -- and perhaps humans -- might adapt to intense environmental pressures such as exposure to radiation, heavy metals, or toxic chemicals, researchers at North Carolina State, Columbia University Mailman School of Public Health, the University of South Carolina, and the National Institutes of Health found that two groups of dogs living within the Chornobyl Exclusion Zone, one at the site of the former Chornobyl reactors and another 16.5 km away in Chornobyl City, showed significant genetic differences between them. The results indicate that these are two distinct populations that rarely interbreed. While earlier studies focused on the effects of the Chornobyl Nuclear Power Plant disaster on various species of wildlife, this is the first investigation into the genetic structure of stray dogs living near the Chornobyl nuclear power plant.

The 1986 Chornobyl nuclear power plant disaster displaced more than 300,000 people living nearby and led to the establishment of an Exclusion Zone, a "no man's land" of an approximately 30 km radius surrounding the damaged reactor complex. While a massive steam explosion releasing enormous amounts of ionizing radiation into the air, water, and soil was the direct cause of the catastrophe, radiation exposure is not the only environmental hazard resulting from the disaster. Chemicals, toxic metals, pesticides, and organic compounds left behind by years-long cleanup efforts and from abandoned and decaying structures, including the nearby abandoned city of Pripyat and the Duga-1 military base, all contribute to an ecological and environmental disaster.

"Somehow, two small populations of dogs managed to survive in that highly toxic environment," noted Norman J. Kleiman, PhD, assistant professor of Environmental Health Sciences at Columbia Mailman School of Public Health, and a co-author. "In addition to classifying the population dynamics within these dogs at both locations, we took the first steps towards understanding how chronic exposure to multiple environmental hazards may have impacted these populations."

"The overarching question here is: does an environmental disaster of this magnitude have a genetic impact on life in the region?" says Matthew Breen, Oscar J. Fletcher Distinguished Professor of Comparative Oncology Genetics at NC State, and a corresponding author. "And we have two populations of dogs living at and near the site of a major environmental disaster that may provide key information to help us answer that question."

Earlier research by the co-authors, led by collaborators at NIH, used a much smaller set of genetic variants, but a larger number of dogs, to show that the two populations were separate and that each had complicated family structures.

In this parallel study, the team analyzed the dog DNA samples with four times the number of genetic variants, which provided a closer look at the genomes. In addition to confirming that the two populations are indeed genetically distinct, the team was able to identify 391 outlier regions in the genome that differed between the dogs living at the two locations. "Think of these regions as markers, or signposts, on a highway," Breen says. "They identify areas within the genome where we should look more closely at nearby genes. Moreover, some of these markers are pointing to genes associated with genetic repair; specifically, with genetic repair after exposures similar to those experienced by the dogs in Chornobyl." He went on to say, "At this stage we cannot say for sure that any genetic alterations are in response to the multigenerational and complex exposures; we have a lot more work to do to determine if that is the case."
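
For a sense of what an "outlier region" scan can look like in practice, below is a small Python sketch that computes a simple two-population Fst per variant and flags genomic windows in the extreme tail. The statistic, window size, and threshold are illustrative assumptions; the study's actual pipeline is not described in the article.

```python
# Hypothetical sketch of a windowed divergence scan between two populations.
# p1 and p2 are arrays of per-variant allele frequencies in each population.
import numpy as np

def fst_per_site(p1, p2):
    """Simple two-population Fst from allele frequencies (Nei's Gst)."""
    p_bar = (p1 + p2) / 2
    h_t = 2 * p_bar * (1 - p_bar)            # expected heterozygosity, pooled
    h_s = p1 * (1 - p1) + p2 * (1 - p2)      # mean within-population
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(h_t > 0, (h_t - h_s) / h_t, 0.0)

def outlier_windows(fst, window=100, quantile=0.99):
    """Mean Fst in non-overlapping windows; flag the extreme upper tail."""
    n = len(fst) // window * window
    means = fst[:n].reshape(-1, window).mean(axis=1)
    return np.where(means > np.quantile(means, quantile))[0]

# Toy usage with random frequencies for 10,000 variants.
rng = np.random.default_rng(1)
p1, p2 = rng.uniform(size=10_000), rng.uniform(size=10_000)
print(outlier_windows(fst_per_site(p1, p2)))
```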

"The question we must answer now are why are there striking genetic differences between the two dog populations?" says Megan Dillion, PhD candidate at NC State and a lead author of the published study. "Are the differences just due to genetic drift, or are they due to the unique environmental stressors at each location?"

"The dog is a sentinel species," Breen says. "By and teasing out whether or not the genetic changes we detected in these dogs are the canine genome's response to the exposures the populations have faced, we may be able to understand how the dogs survived in such a hostile environment and what that might mean for any population -- animal or human -- that experiences similar exposures."

"Though 37 years have passed since the accident, the ~30-year-long half-lives of lingering radioisotopes means the danger posed by radiation exposure is still very much real," notes Kleiman, who is also director of the Columbia University Radiation Safety Officer Training course. "When radiation exposure is combined with a complex toxic chemical mixture of uncertain composition, there are very real human health concerns raised for the thousands of people who continue to work within the Exclusion Zone on continuing cleanup efforts as well as at two newly constructed nuclear fuel reprocessing plants."

Read more at Science Daily

Scientists complete first map of an insect brain

Researchers have completed the most advanced brain map to date, that of an insect, a landmark achievement in neuroscience that brings scientists closer to true understanding of the mechanism of thought.

The international team led by Johns Hopkins University and the University of Cambridge produced a breathtakingly detailed diagram tracing every neural connection in the brain of a larval fruit fly, an archetypal scientific model with brains comparable to those of humans.

The work, likely to underpin future brain research and to inspire new machine learning architectures, appears today in the journal Science.

"If we want to understand who we are and how we think, part of that is understanding the mechanism of thought," said senior author Joshua T. Vogelstein, a Johns Hopkins biomedical engineer who specializes in data-driven projects including connectomics, the study of nervous system connections. "And the key to that is knowing how neurons connect with each other."

The first attempt at mapping a brain -- a 14-year study of the roundworm begun in the 1970s -- resulted in a partial map and a Nobel Prize. Since then, partial connectomes have been mapped in many systems, including flies, mice, and even humans, but these reconstructions typically represent only a tiny fraction of the total brain. Comprehensive connectomes have only been generated for several small species with a few hundred to a few thousand neurons in their bodies: a roundworm, a larval sea squirt, and a larval marine annelid worm.

This team's connectome of a baby fruit fly, Drosophila melanogaster larva, is the most complete as well as the most expansive map of an entire insect brain ever completed. It includes 3,016 neurons and every connection between them: 548,000.

"It's been 50 years and this is the first brain connectome. It's a flag in the sand that we can do this," Vogelstein said. "Everything has been working up to this."

Mapping whole brains is difficult and extremely time-consuming, even with the best modern technology. Getting a complete cellular-level picture of a brain requires slicing the brain into hundreds or thousands of individual tissue samples, all of which have to be imaged with electron microscopes before the painstaking process of reconstructing all those pieces, neuron by neuron, into a full, accurate portrait of a brain. It took more than a decade to do that with the baby fruit fly. The brain of a mouse is estimated to be a million times larger than that of a baby fruit fly, meaning that mapping anything close to a human brain isn't likely in the near future, maybe not even in our lifetimes.

The team purposely chose the fruit fly larva because, for an insect, the species shares much of its fundamental biology with humans, including a comparable genetic foundation. It also has rich learning and decision-making behaviors, making it a useful model organism in neuroscience. And for practical purposes, its relatively compact brain can be imaged and its circuits reconstructed within a reasonable time frame.

Even so, the work took the University of Cambridge and Johns Hopkins 12 years. The imaging alone took about a day per neuron.

Cambridge researchers created the high-resolution images of the brain and manually studied them to find individual neurons, rigorously tracing each one and linking their synaptic connections.

Cambridge handed off the data to Johns Hopkins, where the team spent more than three years using original code they created to analyze the brain's connectivity. The Johns Hopkins team developed techniques to find groups of neurons based on shared connectivity patterns, and then analyzed how information could propagate through the brain.
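
As one plausible illustration of "finding groups of neurons based on shared connectivity patterns," the sketch below embeds the connectome's adjacency matrix with a truncated SVD and clusters the neurons with k-means -- a standard spectral approach, offered as a stand-in rather than the team's actual published code. The dimensions and group counts are arbitrary.

```python
# Hedged sketch: spectral embedding of a directed connectome followed by
# k-means, one standard way to group neurons by shared connectivity.
import numpy as np
from scipy.sparse.linalg import svds
from sklearn.cluster import KMeans

def cluster_neurons(adjacency, n_dims=8, n_groups=10, seed=0):
    """Cluster neurons of a directed adjacency matrix (n x n)."""
    u, s, vt = svds(adjacency.astype(float), k=n_dims)
    # Concatenate outgoing and incoming connectivity profiles per neuron.
    embedding = np.hstack([u * s, vt.T * s])
    km = KMeans(n_clusters=n_groups, random_state=seed, n_init=10)
    return km.fit_predict(embedding)

# Toy usage on a random sparse-ish directed graph of 300 neurons.
rng = np.random.default_rng(2)
A = (rng.uniform(size=(300, 300)) < 0.05).astype(float)
labels = cluster_neurons(A)
```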

In the end, the full team charted every neuron and every connection, and categorized each neuron by the role it plays in the brain. They found that the brain's busiest circuits were those that led to and away from neurons of the learning center.

The methods Johns Hopkins developed are applicable to any brain connection project, and their code is available to whoever attempts to map an even larger animal brain, Vogelstein said, adding that despite the challenges, scientists are expected to take on the mouse, possibly within the next decade. Other teams are already working on a map of the adult fruit fly brain. Co-first author Benjamin Pedigo, a Johns Hopkins doctoral candidate in Biomedical Engineering, expects the team's code could help reveal important comparisons between connections in the adult and larval brain. As connectomes are generated for more larva and from other related species, Pedigo expects their analysis techniques could lead to better understanding of variations in brain wiring.

The fruit fly larva work showed circuit features that were strikingly reminiscent of prominent and powerful machine learning architectures. The team expects continued study will reveal even more computational principles and potentially inspire new artificial intelligence systems.

"What we learned about code for fruit flies will have implications for the code for humans," Vogelstein said. "That's what we want to understand -- how to write a program that leads to a human brain network."

Read more at Science Daily

Mar 9, 2023

Flat, pancake-sized metalens images lunar surface in an engineering first

Astronomers and amateurs alike know the bigger the telescope, the more powerful the imaging capability. To keep the power but streamline one of the bulkier components, a Penn State-led research team created the first ultrathin, compact metalens telescope capable of imaging far-away objects, including the moon.

Metalenses comprise tiny, antenna-like surface patterns that can focus light to magnify distant objects in the same way as traditional curved glass lenses, but they have the advantage of being flat. Though millimeters-wide metalenses have been developed in the past, the researchers scaled the lens up to eight centimeters in diameter, roughly three inches, making it possible to use in large optical systems, such as telescopes. They published their approach in Nano Letters.

"Traditional camera or telescope lenses have a curved surface of varying thickness, where you have a bump in the middle and thinner edges, which causes the lens to be bulky and heavy," said corresponding author Xingjie Ni, associate professor of electrical engineering and computer science at Penn State. "Metalenses use nano-structures on the lens instead of curvature to contour light, which allows them to lay flat."

That is one of the reasons, Ni said, modern cellphone camera lenses protrude from the body of the phone: the thickness of the lenses takes up space, though they appear flat since they are hidden behind a glass window.

Metalenses are typically made using electron beam lithography, which involves scanning a focused beam of electrons onto a piece of glass, or other transparent substrate, to create antenna-like patterns point by point. However, the scanning process of the electron beam limits the size of the lens that can be created, as scanning each point is time-consuming and has low throughput.

To create a bigger lens, the researchers adapted a fabrication method known as deep ultraviolet (DUV) photolithography, which is commonly used to produce computer chips.

"DUV photolithography is a high-throughput and high-yield process that can produce many computer chips within seconds," Ni said. "We found this to be a good fabrication method for metalenses because it allows for much larger pattern sizes while still maintaining small details, which allows the lens to work effectively."

The researchers modified the method with their own novel procedure, called rotating wafer and stitching. Researchers divided the wafer, on which the metalens was fabricated, into four quadrants, which were further divided into 22 by 22 millimeter regions -- smaller than a standard postage stamp. Using a DUV lithography machine at Cornell University, they projected a pattern onto one quadrant through projection lenses, which they then rotated by 90 degrees and projected again. They repeated the rotation until all four quadrants were patterned.

"The process is cost-effective because the masks containing the pattern data for each quadrant can be reused due to the rotation symmetry of the metalens," Ni said. "This reduces the manufacturing and environmental costs of the method."

As the size of the metalens increased, the digital files required to process the patterns became significantly larger, which would take a long time for the DUV lithography machine to process. To overcome this issue, the researchers compressed the files using data approximations and by referencing non-unique data.

"We utilized every possible method to reduce the file size," Ni said. "We identified identical data points and referenced existing ones, gradually reducing the data until we had a usable file to send to the machine for creating the metalens."

Using the new fabrication method, the researchers developed a single-lens telescope and captured clear images of the lunar surface -- achieving greater resolution of objects and much farther imaging distance than previous metalenses. Before the technology can be applied to modern cameras, however, researchers must address the issue of chromatic aberration, which causes image distortion and blurriness when different colors of light, which bend in different directions, enter a lens.

"We are exploring smaller and more sophisticated designs in the visible range, and will compensate for various optical aberrations, including chromatic aberration," Ni said.

Read more at Science Daily

Short-distance migration critical for climate change adaptation

Short-distance migration, which accounts for the vast majority of migratory movements in the world, is crucial for climate change adaptation, according to new research from the University of East Anglia (UEA).

Contrary to common assumptions, most migratory movements are people moving short distances, largely due to economic, social and environmental factors, such as climate change.

A study of people living in the drylands of India and parts of Africa was carried out by UEA researchers in the School of International Development.

The paper, 'Everyday mobility and changing livelihood trajectories: implications for vulnerability and adaptation in dryland regions', is published today in a special issue on Everyday Adaptations in the journal Ecology and Society.

The research was led by Dr Mark Tebboth, Associate Professor in the Environment and International Development.

Dr Tebboth said: "Most attention is on international migration and how climate change will lead to huge numbers of people fleeing across borders, but actually the vast majority of people move short distances within their own country in order to take advantage of opportunities or in response to shocks and stresses in their lives.

"Supporting and enabling this migration will help people to continue to adapt the pressures in their lives."

The research looked at drivers and outcomes of people's mobility in the drylands of India, Ghana, Kenya and Namibia. Interviews were conducted during 2016 and 2017 with people living in those regions.

Drylands are the largest global biome, covering about 45 per cent of the Earth's land surface and accommodating more than a third of the globe's population.

Drylands are characterized by low and highly variable water availability and high temperatures. These regions are experiencing multiple pressures, including increasing rates of aridity and soil degradation; poorly planned and implemented development interventions; rapid population growth; historically high rates of poverty; poor communication infrastructure; and isolation from national centres of power -- all of which stress livelihoods reliant on natural resources.

In India, the study sites were in Karnataka's Kolar district, where diversification into non-farm labour and daily commuting to Bangalore are common, and the Gulbarga district, where agricultural livelihoods dominate and there has been historical outmigration to large cities.

In Kenya, the study sites were in Isiolo, the 'gateway to the north', where pastoralism, farming and tourism are common. Water is a scarce resource there, and this scarcity looks set to become more severe in the future.

The study also included locations in the Upper West region of Ghana and the Omusati region of north-central Namibia.

Dr Tebboth said: "Far from being exceptional, this everyday mobility is ubiquitous and much removed from alarmist discourses of 'climate migration' that view movement as solely climate-driven."

Read more at Science Daily

Major North American oil source yields clues to one of earth's deadliest mass extinctions

The Bakken Shale Formation -- a 200,000-square-mile shale deposit below parts of Canada and North Dakota -- has supplied billions of barrels of oil and natural gas to North America for 70 years. A new discovery reveals that the rocks also open a uniquely informative window into Earth's complicated geological history.

A research team, which included geologists from the University of Maryland, George Mason University and the Norwegian oil and gas company Equinor, developed a new framework for analyzing paleontological and biogeochemical data extracted from the formation's rock. Using this technique, the team pinpointed a major trigger of several closely spaced biotic crises during the late Devonian Period almost 350 million years ago: euxinia, or the depletion of oxygen and expansion of hydrogen sulfide in large bodies of water. Published in the journal Nature on March 8, 2023, the team's findings demonstrate links between sea level, climate, ocean chemistry and biotic disruption.

"For the first time, we can point to a specific kill mechanism responsible for a series of significant biotic disruptions during the late Devonian Period," said UMD Geology Professor Alan Jay Kaufman, a senior author of the paper. "There have been other mass extinctions presumably caused by expansions of hydrogen sulfide before, but no one has ever studied the effects of this kill mechanism so thoroughly during such a critical period of Earth's history."

According to Kaufman, the late Devonian Period was a "perfect storm" of factors that played a large role in how Earth is today. Vascular plants and trees were especially crucial to the process; as they expanded on land, plants stabilized soil structure, helped spread nutrients to the ocean, and added oxygen and water vapor to the atmosphere while pulling carbon dioxide out of it.

"The introduction of terrestrial plants capable of photosynthesis and transpiration stimulated the hydrological cycle, which kick-started the Earth's capacity for more complex life as we know it today," Kaufman said.

The Devonian Period ended around the same time the Bakken sediments accumulated, allowing the layers of organic-rich shale to 'record' the environmental conditions that occurred there. Because the Earth's continents were flooded during that time, various sediments including black shale gradually accumulated in inland seas that formed within geological depressions like the Williston Basin, which preserved the Bakken Formation.

Undergraduate laboratory assistant Tytrice Faison (B.S. '22, geology) -- who joined Kaufman's lab after taking a course with him through the Carillon Communities living-learning program -- prepared and analyzed more than 100 shale and carbonate samples taken from the formation. After analyzing the samples, Kaufman, Faison and the rest of the Bakken team deciphered clear layers of sediment representing three key biotic crises known as the Annulata, Dasberg and Hangenberg events, with the last crisis associated with one of the greatest mass extinctions in Earth history.

"We could see anoxic events distinctly marked by black shale and other geochemical deposits, which are likely linked to a series of rapid rises in sea level," Kaufman explained. "We suspect that sea levels may have risen during the pulsed events due to the melting ice sheets around the South Pole at this time."

Higher sea levels would have resulted in the flooding of interior continental margins, or the transitional region between oceanic and continental crusts. In these settings, high levels of nutrients, such as phosphorus and nitrogen, could have triggered algal blooms, which create low-oxygen zones in large bodies of water. These zones in turn would have increased toxic hydrogen sulfide right where most marine animals would have lived. Under those conditions, animals in the oceans and on land around the shoreline would have died during these late Devonian events.

The team's research is not exclusive to global biotic disruptions from hundreds of millions of years ago. Kaufman suggests that their findings are not just applicable to the shallow inland seas of the Devonian Period, but perhaps also to the oceans of today affected by global warming. He compared the ocean's circulatory system to a "conveyor belt" carrying nutrients, oxygen and microorganisms from place to place.

"Cold, salty water develops in the North Atlantic region before it sinks and eventually makes its way to the Indian and Pacific Oceans, cycling around the globe. This oceanic jet stream helps to spread life-sustaining oxygen through the oceans," Kaufman explained. "If that conveyor belt were to be slowed down due to global warming, parts of the ocean might be deprived of oxygen and potentially become euxinic."

The collateral damage caused by global warming might then promote animal migration out of dead zones or put Earth on a path to decreased diversity and increased rates of extinction, he added.

Read more at Science Daily

Olive oil by-product could aid exercise

New research has found that a natural by-product of olive oil production could potentially have antioxidant benefits and support exercise.

The study, led by nutrition researchers at Anglia Ruskin University (ARU) and published in the journal Nutrients, is the first to examine the benefits of natural olive fruit water for recreationally active people.

Olive fruit water is a waste product derived from producing olive oil. Olives contain polyphenols which have antioxidant properties, and a commercially available olive fruit water product, called OliPhenolia, contains a number of phenolic compounds and is particularly rich in hydroxytyrosol.

The first study into its potential benefits for people who exercise involved 29 recreationally active participants who consumed either OliPhenolia or a placebo, matched for taste and appearance, over 16 consecutive days. It found positive effects on several key markers of running performance.

OliPhenolia consumption improved respiratory parameters at the onset of exercise as well as oxygen consumption and running economy at lower levels of intensity (lactate threshold 1).

Respiratory parameters at higher intensity (lactate threshold 2) were largely unaffected, but perceived exertion -- how hard participants thought their body was working -- was improved, as was acute recovery following incremental exercise.

Lead author Dr Justin Roberts, Associate Professor in Health & Exercise Nutrition at Anglia Ruskin University (ARU), said: "For a long time I've been interested in the exercise benefits of polyphenols, such as those derived from cherries and beetroot. To gain similar benefits from olives you would have to consume large quantities daily, which isn't realistic, so we were keen to test this concentrated olive fruit water.

"Like olive oil it contains hydroxytyrosol, but this olive fruit water is a sustainable by-product. It's typically thrown away during the production of olive oil, and we found a company in Italy -- Fattoria La Vialla, a biodynamic farm in Tuscany -- who decided to turn this waste water into a dietary supplement.

"Ours is the first study to investigate the use of this olive fruit water in an exercise setting and we found that 16 days of supplementation could have a positive influence on aerobic exercise, most notably at submaximal levels.

"We found that reduced oxygen cost and improved running economy, as well as improvements in acute recovery, indicate it could potentially benefit those who are undertaking regular aerobic exercise training.

"We now intend to carry out further research at Anglia Ruskin University to corroborate these findings. We are also looking to investigate whether this product can be used for marathon training and recovery, as well as test its effectiveness in suppressing inflammation associated with exercise."

Read more at Science Daily

Mar 8, 2023

ALMA traces history of water in planet formation back to the interstellar medium

Scientists studying a nearby protostar have detected the presence of water in its circumstellar disk. The new observations made with the Atacama Large Millimeter/submillimeter Array (ALMA) mark the first detection of water being inherited into a protoplanetary disk without significant changes to its composition. These results further suggest that the water in our Solar System formed billions of years before the Sun. The new observations are published today in Nature.

V883 Orionis is a protostar located roughly 1,305 light-years from Earth in the constellation Orion. The new observations of this protostar have helped scientists to find a probable link between the water in the interstellar medium and the water in our Solar System by confirming they have similar composition.

"We can think of the path of water through the Universe as a trail. We know what the endpoints look like, which are water on planets and in comets, but we wanted to trace that trail back to the origins of water," said John Tobin, an astronomer at the National Science Foundation's National Radio Astronomy Observatory (NRAO) and the lead author on the new paper. "Before now, we could link the Earth to comets, and protostars to the interstellar medium, but we couldn't link protostars to comets. V883 Ori has changed that, and proven the water molecules in that system and in our Solar System have a similar ratio of deuterium and hydrogen."

Observing water in the circumstellar disks around protostars is difficult because in most systems water is present in the form of ice. When scientists observe protostars, they look for the water snow line, or ice line: the place where water transitions from predominantly ice to gas, a transition that radio astronomy can observe in detail. "If the snow line is located too close to the star, there isn't enough gaseous water to be easily detectable and the dusty disk may block out a lot of the water emission. But if the snow line is located further from the star, there is sufficient gaseous water to be detectable, and that's the case with V883 Ori," said Tobin, who added that the unique state of the protostar is what made this project possible.

V883 Ori's disk is quite massive and is just hot enough that the water in it has turned from ice to gas. That makes this protostar an ideal target for studying the growth and evolution of solar systems at radio wavelengths.

"This observation highlights the superb capabilities of the ALMA instrument in helping astronomers study something vitally important for life on Earth: water," said Joe Pesce, NSF Program Officer for ALMA. "An understanding of the underlying processes important for us on Earth, seen in more distant regions of the galaxy, also benefits our knowledge of how nature works in general, and the processes that had to occur for our Solar System to develop into what we know today."

To connect the water in V883 Ori's protoplanetary disk to that in our own Solar System, the team measured its composition using ALMA's highly sensitive Band 5 (1.6mm) and Band 6 (1.3mm) receivers and found that it remains relatively unchanged between each stage of solar system formation: protostar, protoplanetary disk, and comets. "This means that the water in our Solar System was formed long before the Sun, planets, and comets formed. We already knew that there is plenty of water ice in the interstellar medium. Our results show that this water got directly incorporated into the Solar System during its formation," said Merel van 't Hoff, an astronomer at the University of Michigan and a co-author of the paper. "This is exciting as it suggests that other planetary systems should have received large amounts of water too."

Clarifying the role of water in the development of comets and planetesimals is critical to building an understanding of how our own Solar System developed. Although the Sun is believed to have formed in a dense cluster of stars and V883 Ori is relatively isolated with no nearby stars, the two share one critical thing in common: they were both formed in giant molecular clouds.

"It is known that the bulk of the water in the interstellar medium forms as ice on the surfaces of tiny dust grains in the clouds. When these clouds collapse under their own gravity and form young stars, the water ends up in the disks around them. Eventually, the disks evolve and the icy dust grains coagulate to form a new solar system with planets and comets," said Margot Leemker, an astronomer at Leiden University and a co-author of the paper. "We have shown that water that is produced in the clouds follows this trail virtually unchanged. So, by looking at the water in the V883 Ori disk, we essentially look back in time and see how our own Solar System looked when it was much younger."

Read more at Science Daily

Smoke particles from wildfires can erode the ozone layer

A wildfire can pump smoke up into the stratosphere, where the particles drift for over a year. A new MIT study has found that, while suspended there, these particles can trigger chemical reactions that erode the protective ozone layer shielding the Earth from the sun's damaging ultraviolet radiation.

The study, which will appear in Nature, focuses on the smoke from the "Black Summer" megafire in eastern Australia, which burned from December 2019 into January 2020. The fires -- the country's most devastating on record -- scorched tens of millions of acres and pumped more than 1 million tons of smoke into the atmosphere.

The MIT team identified a new chemical reaction by which smoke particles from the Australian wildfires made ozone depletion worse. By triggering this reaction, the fires likely contributed to a 3-5 percent depletion of total ozone at mid-latitudes in the southern hemisphere, in regions overlying Australia, New Zealand, and parts of Africa and South America.

The researchers' model also indicates the fires had an effect in the polar regions, eating away at the edges of the ozone hole over Antarctica. By late 2020, smoke particles from the Australian wildfires widened the Antarctic ozone hole by 2.5 million square kilometers -- 10 percent of its area compared to the previous year.

It's unclear what long-term effect wildfires will have on ozone recovery. The United Nations recently reported that the ozone hole, and ozone depletion around the world, are on a recovery track, thanks to a sustained international effort to phase out ozone-depleting chemicals. But the MIT study suggests that as long as these chemicals persist in the atmosphere, large fires could spark a reaction that temporarily depletes ozone.

"The Australian fires of 2020 were really a wake-up call for the science community," says Susan Solomon, the Lee and Geraldine Martin Professor of Environmental Studies at MIT and a leading climate scientist who first identified the chemicals responsible for the Antarctic ozone hole. "The effect of wildfires was not previously accounted for in [projections of] ozone recovery. And I think that effect may depend on whether fires become more frequent and intense as the planet warms."

The study is led by Solomon and MIT graduate student Peidong Wang, along with collaborators from the Institute for Environmental and Climate Research in Guangzhou, China, the National Oceanic and Atmospheric Administration, the National Center for Atmospheric Research, and Colorado State University.

Chlorine cascade

The new study expands on a 2022 discovery by Solomon and her colleagues, in which they first identified a chemical link between wildfires and ozone depletion. The researchers found that chlorine-containing compounds, originally emitted by factories in the form of chlorofluorocarbons (CFCs), could react with the surface of fire aerosols. This interaction, they found, set off a chemical cascade that produced chlorine monoxide -- the ultimate ozone-depleting molecule. Their results showed that the Australian wildfires likely depleted ozone through this newly identified chemical reaction.

"But that didn't explain all the changes that were observed in the stratosphere," Solomon says. "There was a whole bunch of chlorine-related chemistry that was totally out of whack."

In the new study, the team took a closer look at the composition of molecules in the stratosphere following the Australian wildfires. They combed through three independent sets of satellite data and observed that in the months following the fires, concentrations of hydrochloric acid dropped significantly at mid-latitudes, while chlorine monoxide spiked.

Hydrochloric acid (HCl) is present in the stratosphere as CFCs break down naturally over time. As long as chlorine is bound in the form of HCl, it doesn't have a chance to destroy ozone. But if HCl breaks apart, the freed chlorine can react with ozone to form chlorine monoxide, the ultimate ozone-depleting molecule.
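
For reference, the textbook chlorine-catalyzed cycle that makes chlorine monoxide so destructive is shown below; this is standard stratospheric chemistry, not a reaction scheme taken from the new paper.

```latex
% Chlorine-catalyzed ozone destruction (textbook stratospheric chemistry):
\begin{aligned}
\mathrm{Cl} + \mathrm{O_3} &\rightarrow \mathrm{ClO} + \mathrm{O_2}\\
\mathrm{ClO} + \mathrm{O} &\rightarrow \mathrm{Cl} + \mathrm{O_2}\\
\text{net:}\quad \mathrm{O_3} + \mathrm{O} &\rightarrow 2\,\mathrm{O_2}
\end{aligned}
```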

In the polar regions, HCl can break apart when it interacts with the surface of cloud particles at frigid temperatures of about 195 Kelvin. However, this reaction was not expected to occur at mid-latitudes, where temperatures are much warmer.

"The fact that HCl at mid-latitudes dropped by this unprecedented amount was to me kind of a danger signal," Solomon says.

She wondered: What if HCl could also interact with smoke particles, at warmer temperatures and in a way that released chlorine to destroy ozone? If such a reaction was possible, it would explain the imbalance of molecules and much of the ozone depletion observed following the Australian wildfires.

Smoky drift

Solomon and her colleagues dug through the chemical literature to see what sort of organic molecules could react with HCl at warmer temperatures to break it apart.

"Lo and behold, I learned that HCl is extremely soluble in a whole broad range of organic species," Solomon says. "It likes to glom on to lots of compounds."

The question then, was whether the Australian wildfires released any of those compounds that could have triggered HCl's breakup and any subsequent depletion of ozone. When the team looked at the composition of smoke particles in the first days after the fires, the picture was anything but clear.

"I looked at that stuff and threw up my hands and thought, there's so much stuff in there, how am I ever going to figure this out?" Solomon recalls. "But then I realized it had actually taken some weeks before you saw the HCl drop, so you really need to look at the data on aged wildfire particles."

When the team expanded their search, they found that smoke particles persisted over months, circulating in the stratosphere at mid-latitudes, in the same regions and times when concentrations of HCl dropped.

"It's the aged smoke particles that really take up a lot of the HCl," Solomon says. "And then you get, amazingly, the same reactions that you get in the ozone hole, but over mid-latitudes, at much warmer temperatures."

When the team incorporated this new chemical reaction into a model of atmospheric chemistry, and simulated the conditions of the Australian wildfires, they observed a 5 percent depletion of ozone throughout the stratosphere at mid-latitudes, and a 10 percent widening of the ozone hole over Antarctica.

The reaction with HCl is likely the main pathway by which wildfires can deplete ozone. But Solomon guesses there may be other chlorine-containing compounds drifting in the stratosphere that wildfires could unlock.

Read more at Science Daily

Hunter-gatherer childhoods may offer clues to improving education and wellbeing in developed countries

Because humans lived as hunter-gatherers for 95% of our evolutionary history, hunter-gatherer societies can help us understand the conditions that children may be psychologically adapted to. Paying greater attention to hunter-gatherer childhoods may also help economically developed countries improve education and wellbeing.

Published today in the Journal of Child Psychology and Psychiatry, a new study by Dr Nikhil Chaudhary, an evolutionary anthropologist at the University of Cambridge, and Dr Annie Swanepoel, a child psychiatrist, calls for new research into child mental health in hunter-gatherer societies. They explore the possibility that some common aspects of hunter-gatherer childhoods could help families in economically developed countries. Eventually, hunter-gatherer behaviours could inform 'experimental intervention trials' in homes, schools and nurseries.

The authors acknowledge that children living in hunter-gatherer societies live in very different environments and circumstances than those in developed countries. They also stress that hunter-gatherer children invariably face many difficulties that are not experienced in developed countries and, therefore, caution that these childhoods should not be idealised.

Drawing on his own observations of the BaYaka people in Congo and the extensive research of anthropologists studying other hunter-gatherer societies, Dr Chaudhary highlights major differences in the ways in which hunter-gatherer children are cared for compared to their peers in developed countries. He stresses that "contemporary hunter-gatherers must not be thought of as 'living fossils', and while their ways of life may offer some clues about our prehistory, they are still very much modern populations each with a unique cultural and demographic history."

Physical contact and attentiveness


Despite the increasing uptake of baby carriers and baby massage in developed countries, levels of physical contact with infants remain far higher in hunter-gatherer societies. In Botswana, for instance, 10- to 20-week-old !Kung infants are in physical contact with someone for around 90% of daylight hours, and almost 100% of crying bouts are responded to, almost always with comforting or nursing -- scolding is extremely rare.

The study points out that this exceptionally attentive childcare is made possible because of the major role played by non-parental caregivers, or 'alloparents', which is far rarer in developed countries.

Non-parental caregivers

In many hunter-gatherer societies, alloparents provide almost half of a child's care. A previous study found that in the DRC, Efe infants have 14 alloparents a day by the time they are 18 weeks old, and are passed between caregivers eight times an hour.

Dr Chaudhary said: "Parents now have much less childcare support from their familial and social networks than would likely have been the case during most of our evolutionary history. Such differences seem likely to create the kind of evolutionary mismatches that could be harmful to both caregivers and children."

"The availability of other caregivers can reduce the negative impacts of stress within the nuclear family, and the risk of maternal depression, which has knock-on effects for child wellbeing and cognitive development."

The study emphasises that alloparenting is a core human adaptation, contradicting 'intensive mothering' narratives which emphasise that mothers should use their maternal instincts to manage childcare alone. Dr Chaudhary and Dr Swanepoel write that 'such narratives can lead to maternal exhaustion and have dangerous consequences'.

Care-giving ratios

The study points out that communal living in hunter-gatherer societies results in a very high ratio of available caregivers to infants/toddlers, which can even exceed 10:1.

This contrasts starkly with the nuclear family unit, and even more so with nursery settings, in developed countries. According to the UK's Department for Education regulations, nurseries require ratios of 1 carer to 3 children aged under 2 years, or 1 carer to 4 children aged 2-3.

Dr Chaudhary said: "Almost all day, hunter-gatherer infants and toddlers have a capable caregiver within a couple of metres of them. From the infant's perspective, that proximity and responsiveness is very different from what is experienced in many nursery settings in the UK."

"If that ratio is stretched even thinner, we need to consider the possibility that this could have impacts on children's wellbeing."

Children providing care and mixed-age active learning

In hunter-gatherer societies, children play a significantly bigger role in providing care to infants and toddlers than is the case in developed countries. In some communities they begin providing some childcare from the age of four and are capable of sensitive caregiving, and it is common to see older, but still pre-adolescent, children looking after infants.

By contrast, the NSPCC in the UK recommends that when leaving pre-adolescent children at home, babysitters should be in their late teens at least.

Dr Chaudhary said: "In developed countries, children are busy with schooling and may have less opportunity to develop caregiving competence. However, we should at least explore the possibility that older siblings could play a greater role in supporting their parents, which might also enhance their own social development."

The study also points out that instructive teaching is rare in hunter-gatherer societies and that infants primarily learn via observation and imitation. From around the age of two, hunter-gatherer children spend large portions of the day in mixed-age (2-16) 'playgroups' without adult supervision. There, they learn from one another, acquiring skills and knowledge collaboratively via highly active play, practice and exploration.

Learning and play are two sides of the same coin, which contrasts with the lesson-time/play-time dichotomy of schooling in the UK and other developed countries.

Dr Chaudhary and Dr Swanepoel note that "Classroom schooling is often at odds with the modes of learning typical of human evolutionary history." The study reiterates that children living in hunter-gatherer societies grow up in very different environments and circumstances from those in developed countries:

"Foraging skills are very different to those required to make a living in market-economies, and classroom teaching is certainly necessary to learn the latter. But children may possess certain psychological learning adaptations that can be practically harnessed in some aspects of their schooling. When peer and active learning can be incorporated, they have been shown to improve motivation and performance, and reduce stress." The authors also highlight that physical activity interventions have been shown to aid performance among students diagnosed with ADHD.

The study calls for more research into children's mental health in hunter-gatherer societies to test whether the hypothesised evolutionary mismatches actually exist. If they do, such insights could then be used to direct experimental intervention trials in developed countries.

Read more at Science Daily

How the brain senses infection

A new study led by researchers at Harvard Medical School illuminates how the brain becomes aware that there is an infection in the body.

Studying mice, the team discovered that a small group of neurons in the airway plays a pivotal role in alerting the brain about a flu infection. They also found signs of a second pathway from the lungs to the brain that becomes active later in the infection.

The study was published March 8 in Nature.

Although most people are sick several times a year, scientific knowledge of how the brain evokes the feeling of sickness has lagged behind research on other bodily states such as hunger and thirst. The paper represents a key first step in understanding the brain-body connection during an infection.

"This study helps us begin to understand a basic mechanism of pathogen detection and how that's related to the nervous system, which until now has been largely mysterious," said senior author Stephen Liberles, professor of cell biology in the Blavatnik Institute at HMS and an investigator at Howard Hughes Medical Institute.

The findings also shed light on how nonsteroidal anti-inflammatory drugs such as ibuprofen and aspirin alleviate influenza symptoms.

If the results can be translated into humans, the work could have important implications for developing more-effective flu therapies.

An infectious state of mind

The Liberles lab is interested in how the brain and body communicate to control physiology. For example, it has previously explored how the brain processes sensory information from internal organs, and how sensory cues can evoke or suppress the sensation of nausea.

In the new paper, the researchers turned their attention to another important type of sickness that the brain controls: sickness from a respiratory infection.

During an infection, Liberles explained, the brain orchestrates symptoms as the body mounts an immune response. These can include broad symptoms such as fever, decreased appetite, and lethargy, as well as specific symptoms such as congestion or coughing for a respiratory illness or vomiting or diarrhea for a gastrointestinal bug.

The team decided to focus on influenza, a respiratory virus that is the source of millions of illnesses and medical visits and causes thousands of deaths in the United States every year.

Through a series of experiments in mice, first author Na-Ryum Bin, HMS research fellow in the Liberles lab, identified a small population of neurons embedded in the glossopharyngeal nerve, which runs from the throat to the brain.

Importantly, he found that these neurons are necessary to signal to the brain that a flu infection is present and have receptors for lipids called prostaglandins. These lipids are made by both mice and humans during an infection, and they are targeted by drugs such as ibuprofen and aspirin.

Cutting the glossopharyngeal nerve, eliminating the neurons, blocking the prostaglandin receptors in those neurons, or treating the mice with ibuprofen similarly reduced influenza symptoms and increased survival.

Together, the findings suggest that these airway neurons detect the prostaglandins made during a flu infection and become a communication conduit from the upper part of the throat to the brain.

"We think that these neurons relay the information that there's a pathogen there and initiate neural circuits that control the sickness response," Liberles said.

The results provide an explanation for how drugs like ibuprofen and aspirin work to reduce flu symptoms -- and suggest that these drugs may even boost survival.

The researchers discovered evidence of another potential sickness pathway, this one traveling from the lungs to the brain. They found that it appears to become active in the second phase of infection as the virus infiltrates deeper into the respiratory system.

This additional pathway doesn't involve prostaglandins, the team was surprised to find. Mice in the second phase of infection didn't respond to ibuprofen.

The findings suggest an opportunity for improving flu treatment if scientists are able to develop drugs that target the additional pathway, the authors said.

A foundation for future research

The study raises a number of questions that Liberles and colleagues are eager to investigate.

One is how well the findings will translate to humans. Although mice and humans share a lot of basic sensory biology, including having a glossopharyngeal nerve, Liberles emphasized that researchers need to conduct further genetic and other experiments to confirm that humans have the same neuron populations and pathways seen in the mouse study.

If the findings can be replicated in humans, they raise the possibility of developing treatments that address both the prostaglandin and nonprostaglandin pathways of flu infection.

"If you can find a way to inhibit both pathways and use them in synergy, that would be incredibly exciting and potentially transformative," Liberles said.

Bin is already delving into the details of the nonprostaglandin pathway, including the neurons involved, with the goal of figuring out how to block it. He also wants to identify the airway cells that produce prostaglandins in the initial pathway and study them in more depth.

Read more at Science Daily

Mar 7, 2023

The planet that could end life on Earth

A terrestrial planet hovering between Mars and Jupiter would be able to push Earth out of the solar system and wipe out life on this planet, according to a UC Riverside experiment.

UCR astrophysicist Stephen Kane explained that his experiment was meant to address two notable gaps in planetary science.

The first is the gap in our solar system between the size of terrestrial and giant gas planets. The largest terrestrial planet is Earth, and the smallest gas giant is Neptune, which is four times wider and 17 times more massive than Earth. There is nothing in between.

"In other star systems there are many planets with masses in that gap. We call them super-Earths," Kane said.

The other gap is in location, relative to the sun, between Mars and Jupiter. "Planetary scientists often wish there was something in between those two planets. It seems like wasted real estate," he said.

These gaps could offer important insights into the architecture of our solar system, and into Earth's evolution. To fill them in, Kane ran dynamic computer simulations of a planet between Mars and Jupiter with a range of different masses, and then observed the effects on the orbits of all other planets.
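
The study does not say which code Kane used, but this kind of experiment is straightforward to sketch with REBOUND, a standard open-source N-body package. The planet mass, semi-major axis, integrator settings and integration time below are illustrative placeholders, not the study's actual configuration.

```python
import rebound

# Minimal sketch: insert a hypothetical super-Earth between Mars and Jupiter
# and monitor Earth's orbital eccentricity. All values are approximate.
sim = rebound.Simulation()
sim.units = ("yr", "AU", "Msun")
sim.integrator = "whfast"
sim.dt = 0.05                        # ~1/20 of Earth's orbital period

sim.add(m=1.0)                       # Sun
sim.add(m=3.0e-6, a=1.0)             # Earth
sim.add(m=3.2e-7, a=1.52)            # Mars
sim.add(m=3.0e-5, a=2.5)             # hypothetical 10-Earth-mass super-Earth
sim.add(m=9.5e-4, a=5.2)             # Jupiter
sim.move_to_com()

earth = sim.particles[1]
for t in range(0, 100_001, 20_000):  # integrate for 100,000 years
    sim.integrate(t)
    print(f"t = {t:>7} yr   Earth eccentricity = {earth.e:.4f}")
```

A destabilized configuration shows up as Earth's eccentricity climbing over time; repeating such runs over a grid of masses and orbits is how a study like this maps out which configurations stay stable.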

The results, published in the Planetary Science Journal, were mostly disastrous for the solar system. "This fictional planet gives a nudge to Jupiter that is just enough to destabilize everything else," Kane said. "Despite many astronomers having wished for this extra planet, it's a good thing we don't have it."

Jupiter is much larger than all the other planets combined; its mass is 318 times that of Earth, so its gravitational influence is profound. If a super-Earth in our solar system, a passing star, or any other celestial object disturbed Jupiter even slightly, all other planets would be profoundly affected.

Depending on the mass and exact location of a super-Earth, its presence could ultimately eject Mercury and Venus as well as Earth from the solar system. It could also destabilize the orbits of Uranus and Neptune, tossing them into outer space as well.

The super-Earth would change the shape of Earth's orbit, making it far less habitable than it is today, if not ending life entirely.

When Kane made the planet's mass smaller and placed it directly between Mars and Jupiter, he saw that it was possible for the planet to remain stable for a long period of time. But nudge it slightly in any direction and, "things would go poorly," he said.

The study has implications for the ability of planets in other star systems to host life. Though Jupiter-like planets, gas giants far from their stars, are found around only about 10% of stars, their presence could decide whether neighboring Earths or super-Earths have stable orbits.

Read more at Science Daily

The Mozart effect myth: Listening to music does not help against epilepsy

Over the past fifty years, there have been remarkable claims about the effects of Wolfgang Amadeus Mozart's music. Reports about alleged symptom-alleviating effects of listening to Mozart's Sonata KV448 in epilepsy attracted a lot of public attention. However, the empirical validity of the underlying scientific evidence has remained unclear. Now, University of Vienna psychologists Sandra Oberleiter and Jakob Pietschnig show in a new study published in the journal Scientific Reports that there is no evidence for a positive effect of Mozart's melody on epilepsy.

In the past, Mozart's music has been associated with numerous ostensibly positive effects on humans, animals, and even microorganisms. For instance, listening to his sonata has been said to increase the intelligence of adults, children, or fetuses in the womb. Even cows were said to produce more milk, and bacteria in sewage treatment plants were said to work better when they heard Mozart's composition.

However, most of these alleged effects have no scientific basis. The origin of these ideas can be traced back to a long-disproven finding of a temporary increase in spatial reasoning test performance among students after listening to the first movement, allegro con spirito, of Mozart's sonata KV448 in D major.

More recently, the Mozart effect experienced a further variation: Some studies reported symptom relief in epilepsy patients after they had listened to KV448. However, a new comprehensive research synthesis by Sandra Oberleiter and Jakob Pietschnig from the University of Vienna, based on all available scientific literature on this topic, showed that there is no reliable evidence for such a beneficial effect of Mozart's music on epilepsy. They found that this alleged Mozart effect can be mainly attributed to selective reporting, small sample sizes, and inadequate research practices in this corpus of literature. "Mozart's music is beautiful, but unfortunately, we cannot expect relief from epilepsy symptoms from it," conclude the researchers.

From Science Daily

Study into global daily air pollution shows almost nowhere on Earth is safe

In a world-first study of daily ambient fine particulate matter (PM2.5) across the globe, Monash University researchers have found that only 0.18% of the global land area and 0.001% of the global population are exposed to levels of PM2.5 -- the world's leading environmental health risk factor -- below the safety levels recommended by the World Health Organisation (WHO).

Importantly, while daily levels fell in Europe and North America in the two decades to 2019, they rose in southern Asia, Australia, New Zealand, Latin America and the Caribbean, with more than 70% of days globally seeing levels above what is safe.

A global lack of air pollution monitoring stations has meant a lack of data on local, national, regional and global PM2.5 exposure. Now this study, led by Professor Yuming Guo from the Monash University School of Public Health and Preventive Medicine and published in the journal Lancet Planetary Health, provides a map of how PM2.5 has changed across the globe over the past two decades.

The research team utilised traditional air quality monitoring observations, satellite-based meteorological and air pollution detectors, and statistical and machine learning methods to more accurately assess PM2.5 concentrations globally, according to Professor Guo.

"In this study, we used an innovative machine learning approach to integrate multiple meteorological and geological information to estimate the global surface-level daily PM2.5 concentrations at a high spatial resolution of approximately 10km ×10km for global grid cells in 2000-2019, focusing on areas above 15 μg/m³ which is considered the safe limit by WHO (The threshold is still arguable)," he said.

The study reveals that annual PM2.5 concentration and high PM2.5 exposed days in Europe and northern America decreased over the two decades of the study -- whereas exposures increased in southern Asia, Australia and New Zealand, and Latin America and the Caribbean.

In addition, the study found that:

  •     Despite a slight decrease in high PM2.5 exposed days globally, by 2019 more than 70% of days still had PM2.5 concentrations higher than 15 μg/m³.
  •     In southern Asia and eastern Asia, more than 90% of days had daily PM2.5 concentrations higher than 15 μg/m³.
  •     Australia and New Zealand had a marked increase in the number of days with high PM2.5 concentrations in 2019.
  •     Globally, the annual average PM2.5 from 2000 to 2019 was 32.8 μg/m³.
  •     The highest PM2.5 concentrations were found in eastern Asia (50.0 μg/m³) and southern Asia (37.2 μg/m³), followed by northern Africa (30.1 μg/m³).
  •     Australia and New Zealand (8.5 μg/m³), other regions in Oceania (12.6 μg/m³), and southern America (15.6 μg/m³) had the lowest annual PM2.5 concentrations.
  •     Based on the new 2021 WHO guideline limit, only 0.18% of the global land area and 0.001% of the global population were exposed to an annual exposure lower than this guideline limit (annual average of 5 μg/m³) in 2019.


According to Professor Guo, unsafe PM2.5 concentrations also showed distinct seasonal patterns. These "included Northeast China and North India during their winter months (December, January, and February), whereas eastern areas in northern America had high PM2.5 in its summer months (June, July, and August)," he said.

"We also recorded relatively high PM2.5 air pollution in August and September in South America and from June to September in sub-Saharan Africa."

Read more at Science Daily

Does more money correlate with greater happiness?

Are people who earn more money happier in daily life? Though it seems like a straightforward question, research had previously returned contradictory findings, leaving uncertainty about its answer.

Foundational work published in 2010 from Princeton University's Daniel Kahneman and Angus Deaton had found that day-to-day happiness rose as annual income increased, but above $75,000 it leveled off and happiness plateaued. In contrast, work published in 2021 from the University of Pennsylvania's Matthew Killingsworth found that happiness rose steadily with income well beyond $75,000, without evidence of a plateau.

To reconcile the differences, the two paired up in what's known as an adversarial collaboration, joining forces with Penn Integrates Knowledge University Professor Barbara Mellers as arbiter. In a new Proceedings of the National Academy of Sciences paper, the trio shows that, on average, larger incomes are associated with ever-increasing levels of happiness. Zoom in, however, and the relationship becomes more complex, revealing that within that overall trend, an unhappy cohort within each income group shows a sharp rise in happiness up to $100,000 annually and then plateaus.

"In the simplest terms, this suggests that for most people larger incomes are associated with greater happiness," says Killingsworth, a senior fellow at Penn's Wharton School and lead paper author. "The exception is people who are financially well-off but unhappy. For instance, if you're rich and miserable, more money won't help. For everyone else, more money was associated with higher happiness to somewhat varying degrees."

Mellers digs into this last notion, noting that emotional well-being and income aren't connected by a single relationship. "The function differs for people with different levels of emotional well-being," she says. Specifically, for the least happy group, happiness rises with income until $100,000, then shows no further increase as income grows. For those in the middle range of emotional well-being, happiness increases linearly with income, and for the happiest group the association actually accelerates above $100,000.

Joining forces

The researchers began this combined effort recognizing that their previous work had drawn different conclusions. Kahneman's 2010 study showed a flattening pattern where Killingsworth's 2021 study did not. As its name suggests, an adversarial collaboration of this type -- a notion originated by Kahneman -- aims to solve scientific disputes or disagreements by bringing together the differing parties, along with a third-party mediator.

Killingsworth, Kahneman, and Mellers focused on a new hypothesis that both a happy majority and an unhappy minority exist. For the former, they surmised, happiness keeps rising as more money comes in; the latter's happiness improves as income rises but only up to a certain income threshold, after which it progresses no further.

To test this new hypothesis, they looked for the flattening pattern in data from Killingsworth's study, which he had collected through an app he created called Track Your Happiness. Several times a day, the app pings participants at random moments, asking a variety of questions including how they feel on a scale from "very good" to "very bad." Averaging each person's happiness reports and income, Killingsworth draws conclusions about how the two variables are linked.

A breakthrough in the new partnership came early on when the researchers realized that the 2010 data, which had revealed the happiness plateau, had actually been measuring unhappiness in particular rather than happiness in general. "It's easiest to understand with an example," Killingsworth says. Imagine a cognitive test for dementia that most healthy people pass easily. While such a test could detect the presence and severity of cognitive dysfunction, it wouldn't reveal much about general intelligence since most healthy people would receive the same perfect score.

"In the same way, the 2010 data showing a plateau in happiness had mostly perfect scores, so it tells us about the trend in the unhappy end of the happiness distribution, rather than the trend of happiness in general. Once you recognize that, the two seemingly contradictory findings aren't necessarily incompatible," Killingsworth says. "And what we found bore out that possibility in an incredibly beautiful way. When we looked at the happiness trend for unhappy people in the 2021 data, we found exactly the same pattern as was found in 2010; happiness rises relatively steeply with income and then plateaus."

"The two findings that seemed utterly contradictory actually result from data that are amazingly consistent," he says.

Implications of this work

Drawing these conclusions would have been challenging had the two research teams not come together, says Mellers, who suggests there's no better way than adversarial collaborations to resolve scientific conflict.

"This kind of collaboration requires far greater self-discipline and precision in thought than the standard procedure," she says. "Collaborating with an adversary -- or even a non-adversary -- is not easy, but both parties are likelier to recognize the limits of their claims." Indeed, that's what happened, leading to a better understanding of the relationship between money and happiness.

And these findings have real-world implications, according to Killingsworth. For one, they could inform thinking about tax rates or how to compensate employees. And, of course, they matter to individuals as they navigate career choices or weigh a larger income against other priorities in life, Killingsworth says.

Read more at Science Daily

Mar 6, 2023

DART impact provided real-time data on evolution of asteroid's debris

When asteroids suffer natural impacts in space, debris flies off from the point of impact. The tail of particles that forms can help determine the physical characteristics of the asteroid. NASA's Double Asteroid Redirection Test (DART) mission in September 2022 gave a team of scientists, including Rahil Makadia, a Ph.D. student in the Department of Aerospace Engineering at the University of Illinois Urbana-Champaign, a unique opportunity: to observe the evolution of an asteroid's ejecta as it happened, for the first time.

"My work on this mission so far has been to study the heliocentric changes to the orbit of Didymos and its smaller moon Dimorphos -- the target of the DART spacecraft," said Makadia. "Even though it hit the secondary, there are still some changes in the entire system's orbit around the sun because the entire system feels the consequences of the impact. The ejecta that escapes the system provides an extra boost in addition to the impact. So, to accurately determine where the system will be in 100 years, you need to know the contribution of the ejecta that escaped the system."

The team observed a 33-minute change in Dimorphos's orbital period after DART's impact. Makadia said that if no ejecta had escaped, the period change would have been less than 33 minutes; because some ejecta escaped the gravitational pull of Dimorphos, the period change is larger than the impact alone would have produced.
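
The ejecta's contribution can be seen in a back-of-the-envelope momentum budget. In the sketch below, beta is the momentum-enhancement factor (beta = 1 means the spacecraft's momentum alone, no ejecta boost); the masses, speeds and beta value are approximate figures from public DART reporting, used only for illustration.

```python
# Rough momentum budget for the DART impact (all values approximate).
m_sc, v_sc = 580.0, 6.1e3      # spacecraft mass (kg) and impact speed (m/s)
M_dim = 4.3e9                  # Dimorphos mass (kg), approximate
P = 11.92 * 3600               # pre-impact orbital period (s)
v_orb = 0.174                  # Dimorphos orbital speed around Didymos (m/s)

for beta in (1.0, 3.6):        # beta ~ 3.6 is in the publicly reported range
    dv = beta * m_sc * v_sc / M_dim     # along-track velocity change
    dP = 3 * P * dv / v_orb             # dP/P = 3 dv/v for a near-circular orbit
    print(f"beta = {beta}: dv = {dv * 1000:.2f} mm/s, period change = {dP / 60:.1f} min")
```

With no ejecta (beta = 1) the change comes out near 10 minutes; a beta of roughly 3.6 is needed to reach the observed ~33 minutes, which is the extra boost Makadia attributes to escaping ejecta.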

Image caption: These three panels capture the breakup of the asteroid Dimorphos when it was deliberately hit by NASA's 1,200-pound Double Asteroid Redirection Test (DART) spacecraft on September 26, 2022. The Hubble Space Telescope had a ringside view of the space demolition derby. The top panel, taken 2 hours after impact, shows an ejecta cone of an estimated 1,000 tons of dust. The center frame shows the dynamic interaction within the asteroid's binary system that starts to distort the cone shape of the ejecta pattern about 17 hours after the impact; the most prominent structures are rotating, pinwheel-shaped features tied to the gravitational pull of the companion asteroid, Didymos. In the bottom frame, Hubble captures the debris being swept back into a comet-like tail by the pressure of sunlight on the tiny dust particles. This stretches out into a debris train in which the lightest particles travel the fastest and farthest from the asteroid. The mystery is compounded when Hubble records the tail splitting in two for a few days.

The study, published in the journal Nature, focused on the Hubble Space Telescope's measurements of the ejecta, from 15 minutes after the impact to 18½ days after it. The images showed in detail how the tail formed and evolved over time.

"After a few days, the primary force acting on these ejecta particles becomes solar radiation pressure," Makadia said. "The photons emitted from the sun exert an acceleration on these small particles, and they evolve into a straight tail in an anti-solar direction.

"There have been cases in which it was determined that a natural impact caused the observed active asteroid. But because this one was very much intended, we could have telescopes pointed at it before and after the impact and study its evolution."

He said they'll use the data about how this ejecta evolves to understand how the entire system's orbit changes as well.

"Now that we have this treasure trove of data, we can make educated guesses about other tails we might observe," Makadia said. "Depending on what kind of particles are in the tail and their sizes, we can figure out how long ago that impact happened. And we'll be able to understand the ejecta that escape the system and change the entire system's heliocentric orbit."

Makadia, who earned his B.S. in 2020 from UIUC, said almost all of his work is computational.

"To calculate where an asteroid will be on a given date, we need to propagate all the possible locations that the asteroid could be at an initial time, not just one nominal solution. That requires a lot of computational power and understanding of how orbits are affected by small forces, like solar radiation pressure as well as gravity from all kinds of sources within the solar system.

"I developed simulations to study the heliocentric changes when I first started working on my Ph.D. to make sure we have a propagator that can impart all these impulses that are coming from the escaping ejecta. Now I'm developing an orbit determination tool so once we do have enough observations, we can extract this information about the heliocentric change to the system."

About the project, Makadia said, "This is 100 percent the most exciting thing in my life. It's absolutely real but so astonishing. Even now, whenever people ask about it, it sounds like I'm talking about a movie plot rather than an actual thing that happened."

Read more at Science Daily

Catalyst purifies herbicide-tainted water and produces hydrogen

Researchers in the Oregon State University College of Science have developed a dual-purpose catalyst that purifies herbicide-tainted water while also producing hydrogen.

The project, which included researchers from the OSU College of Engineering and HP Inc., is important because water pollution is a major global challenge, and hydrogen is a clean, renewable fuel.

Findings of the study, which explored photoactive catalysts, were published today in the journal ACS Catalysis.

"We can combine oxidation and reduction into a single process to achieve an efficient photocatalytic system," OSU's Kyriakos Stylianou said. "Oxidation happens via a photodegradation reaction, and reduction through a hydrogen evolution reaction."

A catalyst is a substance that increases the rate of a chemical reaction without itself undergoing any permanent chemical change.

Photocatalysts are materials that absorb light to reach a higher energy level and can use that energy to break down organic contaminants through oxidation. Among photocatalysts' many applications are self-cleaning coatings for stain- and odor-resistant walls, floors, ceilings and furniture.

Stylianou, assistant professor of chemistry, led the study, which involved titanium dioxide photocatalysts derived from a metal-organic framework, or MOF.

Made up of positively charged metal ions surrounded by organic "linker" molecules, MOFs are crystalline, porous materials with tunable structural properties and nanosized pores. They can be designed with a variety of components that determine the MOF's properties.

Upon MOFs' calcination -- high heating without melting -- semiconducting materials like titanium dioxide can be generated. Titanium dioxide is the most commonly used photocatalyst, and it's found in the minerals anatase, rutile and brookite.

Stylianou and collaborators including Líney Árnadóttir of the OSU College of Engineering and William Stickle of HP discovered that anatase doped with nitrogen and sulfur was the best "two birds, one stone" photocatalyst for simultaneously producing hydrogen and degrading the heavily used herbicide glyphosate.

Glyphosate, also known as N-phosphonomethyl glycine or PMG, has been widely sprayed on agricultural fields over the last 50 years since first appearing on the market under the trade name Roundup.

"Only a small percentage of the total amount of PMG applied is taken up by crops, and the rest reaches the environment," Stylianou said. "That causes concerns regarding the leaching of PMG into soil and groundwater, as well it should -- contaminated water can be detrimental to the health of every living thing on the planet. And herbicides leaching into water channels are a primary cause of water pollution."

Among the many compounds that contain hydrogen, water is the most common. Producing hydrogen by splitting water via photocatalysis is cleaner and more sustainable than the conventional method of deriving it from natural gas via methane-steam reforming, a process that releases carbon dioxide.

Hydrogen serves many scientific and industrial purposes in addition to its energy-related roles. It's used in fuel cells for cars, in the manufacture of many chemicals including ammonia, in the refining of metals and in the production of plastics.

"Water is a rich hydrogen source, and photocatalysis is a way of tapping into the Earth's abundant solar energy for hydrogen production and environmental remediation," Stylianou said. "We are showing that through photocatalysis, it is possible to produce a renewable fuel while removing organic pollutants, or converting them into useful products."

Read more at Science Daily

Heart-healthy lifestyle linked to a longer life, free of chronic health conditions

Two new studies by related research groups have found that adults who live a heart-healthy lifestyle, as measured by the American Heart Association's Life's Essential 8 (LE8) cardiovascular health scoring, tend to live longer lives free of chronic disease. The preliminary studies will be presented at the American Heart Association's Epidemiology, Prevention, Lifestyle & Cardiometabolic Health Scientific Sessions 2023, held in Boston, February 28-March 3, 2023. The meeting offers the latest science on population-based health and wellness and implications for lifestyle and cardiometabolic health.

In June 2022, the American Heart Association updated its metrics for optimal cardiovascular health to include sleep -- Life's Essential 8. The tool measures 4 indicators related to cardiovascular and metabolic health status (blood pressure, cholesterol, blood sugar and body mass index) and 4 behavioral/lifestyle factors (smoking status, physical activity, sleep and diet).

"These two abstracts really give us some nice new insight into how we can understand at different stages across the life course just how important focusing on your cardiovascular health is going to be, particularly using the new American Heart Association Life's Essential 8 metrics," said Donald M. Lloyd-Jones, M.D., Sc.M., FAHA. Lloyd-Jones led the advisory writing group for Life's Essential 8 and is immediate past president of the American Heart Association President and chair of the department of preventive medicine, the Eileen M. Foell Professor of Heart Research and professor of preventive medicine, medicine and pediatrics at Northwestern University's Feinberg School of Medicine in Chicago. "The cardiovascular health construct studied in these two abstracts really does nail what patients are trying to do, which is find the fountain of youth. Yes, live longer, but more importantly, live healthier longer, and extend that healthspan so that you can really enjoy quality in your remaining life years."

Life's Essential 8 And Life Expectancy Free of Cardiovascular Disease, Diabetes, Cancer, And Dementia in Adults

The first study investigated whether levels of cardiovascular health estimated by the Association's Life's Essential 8 metrics were associated with life expectancy free of major chronic disease, including cardiovascular disease, Type 2 diabetes, cancer and dementia.

"Our study looked at the association of Life's Essential 8 and life expectancy free of major chronic disease in adults in the United Kingdom," said lead author Xuan Wang, M.D., Ph.D., a postdoctoral fellow and biostatistician in the department of epidemiology at Tulane University's School of Public Health and Tropical Medicine in New Orleans.

Wang and colleagues analyzed health information for 136,599 adults in the U.K. who did not have cardiovascular disease, Type 2 diabetes, cancer or dementia when they enrolled in the study and as measured by the Life's Essential 8 tool.

"We categorized Life's Essential 8 scores according to the American Heart Association's recommendations, with scores of less than 50 out of 100 being poor cardiovascular health, 50 to less than 80 being intermediate, and 80 and above being ideal," Wang said. Life's Essential 8 scores of 80 and above are defined as "high cardiovascular health" by the Association.

When the researchers compared life expectancy and disease-free years among the groups, they found:

  •     Adults who scored as having ideal cardiovascular health lived substantially longer than those who scored in the poor heart health category. Men and women with ideal cardiovascular health at age 50 had an average 5.2 years and 6.3 years more of total life expectancy, respectively, when compared to the men and women who scored as having poor cardiovascular health.
  •     Adults with ideal cardiovascular health scores lived longer without chronic disease. Disease-free life expectancy accounted for nearly 76% of total life expectancy for men and more than 83% for women with ideal cardiovascular health; in contrast, it was only 64.9% of total life expectancy for men and 69.4% for women with poor cardiovascular health.


"Moreover, we found disparities in disease-free life expectancy due to low socioeconomic status may be offset considerably by maintaining an ideal cardiovascular health score in all adults," Wang said. "Our findings may stimulate interest in individual self-assessment and motivate people to improve their cardiovascular health. These findings support improving population health by promoting adherence to ideal cardiovascular health, which may also narrow health disparities related to socioeconomic status."

The study's limitations were that the researchers only included CVD, diabetes, cancer and dementia in their definition of "disease-free life expectancy"; that information on e-cigarettes was not available in the U.K. Biobank, which may lead to a slight overestimation of the LE8 score in this study; and that participants in the U.K. Biobank are overwhelmingly white, so further studies are needed to confirm whether these results hold among people from diverse racial and ethnic backgrounds who may experience negative social determinants of health throughout their lifetime.

"What's really important is that people maintaining high cardiovascular health into midlife are avoiding those chronic diseases of aging, things like cancer and dementia that we also worry about, not just cardiovascular disease," Lloyd-Jones said. "They're delayed until much later in the lifespan, so people can enjoy the life in their years as well as the years in their life."

Co-authors with Wang are Hao Ma, M.D., Ph.D.; Xiang Li, M.D., Ph.D.; Yoriko Heianza, R.D., Ph.D.; JoAnn E. Manson, M.D., M.P.H., Dr.P.H.; Oscar H. Franco, M.D., Ph.D.; and Lu Qi, M.D., Ph.D. Authors' disclosures are listed in the abstract.

The study was funded by the National Heart, Lung, and Blood Institute and the National Institute of Diabetes and Digestive and Kidney Diseases, which are divisions of the National Institutes of Health; the Fogarty International Center; and Tulane Research Centers of Excellence Awards.

Life's Essential 8 And Life Expectancy Among Adults in the United States

The second study focused on whether the association of Life's Essential 8 with total life expectancy differed by sex or race in U.S. adults.

The researchers analyzed health information, including Life's Essential 8 scores, for more than 23,000 U.S. adults who took part in the National Health and Nutrition Examination Survey (NHANES) from 2005 to 2018.

The analysis found:

  •     Life expectancy at age 50 was an average of an additional 33.4 years for adults with ideal cardiovascular health (LE8 scores of 80 or greater); in comparison, additional life expectancy was 25.3 years for adults with poor cardiovascular health (LE8 scores of less than 50).
  •     Adults with ideal cardiovascular health gained an estimated 8.1 years (7.5 additional years for men and 8.9 for women) of life expectancy at age 50, compared with those in the poor cardiovascular health category.


"We found that more than 40% of the increased life expectancy at age 50 from adhering to ideal cardiovascular health may be explained by the reduced incidence of cardiovascular disease death," said lead author Hao Ma, M.D., Ph.D., a postdoctoral fellow and biostatistician in epidemiology at Tulane University and co-author on Wang's study.

According to Ma, this indicates that maintaining one's cardiovascular health may improve one's lifespan. However, more research needs to be done on the impact of cardiovascular health on lifespan among people from diverse racial and ethnic groups, he said.

The study had several limitations. The researchers did not consider potential changes in cardiovascular health during follow-up because information on the cardiovascular health metrics was only available at baseline. Additionally, the analyses of racial/ethnic groups included only non-Hispanic white adults, non-Hispanic Black adults and people of Mexican heritage, due to the limited sample size for additional racial/ethnic groups.

"What struck me about this abstract particularly was that there's a really big jump going from individuals who have poor cardiovascular health to just intermediate levels of cardiovascular health," Lloyd-Jones said. "Overall, we see this seven-and-a-half-year difference going from poor to high cardiovascular health. That's a really big difference in life expectancy, and I think what it tells us is that we need to try to move people and get them to improve their cardiovascular health in mid-life, because that's really going to have a major influence on their total life expectancy."

Read more at Science Daily