May 21, 2022

Ghostly 'mirror world' might be cause of cosmic controversy

New research suggests that an unseen 'mirror world' of particles, interacting with our world only via gravity, might be the key to solving a major puzzle in cosmology today -- the Hubble constant problem.

The Hubble constant is the rate of expansion of the universe today. Predictions for this rate -- from cosmology's standard model -- are significantly slower than the rate found by our most precise local measurements. Many cosmologists have been trying to resolve this discrepancy by changing our current cosmological model. The challenge is to do so without ruining the agreement between standard model predictions and many other cosmological phenomena, such as the cosmic microwave background. Determining whether such a cosmological scenario exists is the question that researchers, including Francis-Yan Cyr-Racine, assistant professor in the Department of Physics and Astronomy at The University of New Mexico, and Fei Ge and Lloyd Knox at the University of California, Davis, have been trying to answer.

According to NASA, cosmology is the scientific study of the large-scale properties of the universe as a whole. Cosmologists study concepts such as dark matter and dark energy, and whether there is one universe or many, sometimes called a multiverse. Cosmology covers the entire history of the universe, from birth to death, with mysteries and intrigue at every turn.

Now, Cyr-Racine, Ge, and Knox have discovered a previously unnoticed mathematical property of cosmological models that could, in principle, allow for a faster expansion rate while hardly changing the other, most precisely tested predictions of the standard cosmological model. They found that a uniform scaling of the gravitational free-fall rates and the photon-electron scattering rate leaves most dimensionless cosmological observables nearly invariant.
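
As an illustration of what such a rescaling looks like, here is a schematic version in LaTeX-style notation (the scaling factor λ and the angular sound-horizon scale θ_s = r_s/D_A are illustrative symbols, not quoted from the paper):

    \sqrt{G\,\rho_i(z)} \;\to\; \lambda\,\sqrt{G\,\rho_i(z)} \qquad \text{(gravitational free-fall rates, all species } i\text{)}
    \sigma_T\, n_e(z) \;\to\; \lambda\,\sigma_T\, n_e(z) \qquad \text{(photon-electron scattering rate)}
    \Longrightarrow\; H(z) \to \lambda\, H(z), \quad H_0 \to \lambda\, H_0, \qquad \theta_s \equiv r_s / D_A \;\text{unchanged}

Scaling every rate by the same factor raises the inferred value of H0 by that factor while leaving angle-like, dimensionless quantities untouched, which is why the fit to the cosmic microwave background survives.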

"Basically, we point out that a lot of the observations we do in cosmology have an inherent symmetry under rescaling the universe as a whole. This might provide a way to understand why there appears to be a discrepancy between different measurements of the Universe's expansion rate."

The research, titled "Symmetry of Cosmological Observables, a Mirror World Dark Sector, and the Hubble Constant" was published recently in Physical Review Letters.

This result opens a new approach to reconciling cosmic microwave background and large-scale structure observations with high values of the Hubble constant H0: Find a cosmological model in which the scaling transformation can be realized without violating any measurements of quantities not protected by the symmetry. This work has opened a new path toward resolving what has proved to be a challenging problem. Further model building might bring consistency with the two constraints not yet satisfied: the inferred primordial abundances of deuterium and helium.

If the universe is somehow exploiting this symmetry, researchers are led to an extremely interesting conclusion: that there exists a mirror universe very similar to ours but invisible to us except through its gravitational impact on our world. Such a "mirror world" dark sector would allow for an effective scaling of the gravitational free-fall rates while respecting the precisely measured mean photon density today.

"In practice, this scaling symmetry could only be realized by including a mirror world in the model -- a parallel universe with new particles that are all copies of known particles," said Cyr-Racine. "The mirror world idea first arose in the 1990s but has not previously been recognized as a potential solution to the Hubble constant problem.

"This might seem crazy at face value, but such mirror worlds have a large physics literature in a completely different context since they can help solve important problem in particle physics," explains Cyr-Racine. "Our work allows us to link, for the first time, this large literature to an important problem in cosmology."

In addition to searching for missing ingredients in our current cosmological model, researchers are also wondering whether this Hubble constant discrepancy could be caused in part by measurement errors. While it remains a possibility, it is important to note that the discrepancy has become more and more significant as higher quality data have been included in the analyses, suggesting that the data might not be at fault.

"It went from two and a half Sigma, to three, and three and a half to four Sigma. By now, we are pretty much at the five-Sigma level," said Cyr-Racine. "That's the key number which makes this a real problem because you have two measurements of the same thing, which if you have a consistent picture of the universe should just be completely consistent with each other, but they differ by a very statistically significant amount."

Read more at Science Daily

Research brings hope for spinal cord injury treatment

Scientists from the University of Birmingham have shown an existing drug may reduce damage after spinal cord injury, by blocking the inflammatory response in the spinal cord.

Their research, published today in Clinical and Translational Medicine, demonstrates that AZD1236, a drug developed by AstraZeneca, can significantly reduce 'secondary damage' caused by the body's response to spinal cord injury (SCI).

Researchers led by Professor Zubair Ahmed, Professor of Neuroscience and lead for the Neuroscience and Ophthalmology Section at The University's Institute of Inflammation and Ageing, used animal models to demonstrate that AZD1236 can promote significant nerve regeneration, with a dramatic 80% preservation in nerve function following spinal cord compression injury.

Crucially, this translated into an 85% improvement in movement and sensation. These dramatic effects were observed following only three days of treatment with AZD1236, starting within 24 hours post-injury. Within three weeks, the AZD1236 treated animals showed unprecedented recovery, while controls still showed significant deficits at six weeks post-injury.

One of the key drivers of SCI secondary damage is breakdown of the blood-spinal cord barrier (BSCB). This results in oedema (excess fluid build-up around the spinal cord) and triggers an inflammatory response that can ultimately hinder the healing process, and lead to nerve cell death.

AZD1236 is a potent and selective inhibitor of two enzymes, MMP-9 and MMP-12, which are implicated in the inflammatory process.

The researchers demonstrated that AZD1236 halts SCI-induced oedema, and reduces BSCB breakdown and scarring at the site of the injury. They also examined the effect of AZD1236 dosing on MMP-9 and MMP-12 activity in both the bloodstream and cerebrospinal fluid, which surrounds the spinal cord.

Here they demonstrated significant suppression of enzyme activity after both oral dosing and intrathecal dosing (injection into the spinal canal). Oral dosing reduced enzyme activity by 90% in serum and by 69-74% in the cerebrospinal fluid. Unsurprisingly, intrathecal injection delivered higher levels of suppression (88-90%) in the cerebrospinal fluid.

Further studies showed that AZD1236 suppressed the formation of pro-inflammatory cytokines (molecules known to contribute to the development of long-lasting neuropathic pain, which often follows SCI) by 85-95%. AZD1236 was also found to be 82% more effective at alleviating SCI-induced neuropathic pain sensitivity to cold, heat and touch than currently used pain medications such as pregabalin (Lyrica) and gabapentin.

Professor Ahmed commented: "There is currently no reparative drug available for SCI patients; treatments only provide symptomatic relief and do not tackle the underlying molecular mechanisms that cause or contribute to oedema and blood-spinal cord barrier breakdown. This drug has the potential to be a first-in-class treatment against some of the key pathological drivers of SCI and could revolutionise the prospects for recovery of SCI patients."

Read more at Science Daily

May 20, 2022

Lost or extinct? Study finds the existence of more than 500 animal species remains uncertain

An international study provides the first global evaluation of all terrestrial vertebrate species that have not been declared extinct and identifies more than 500 species considered to be 'lost' -- those that haven't been seen by anyone in more than 50 years.

Researchers reviewed information on 32,802 species from the International Union for Conservation of Nature Red List of Threatened Species (IUCN Red List) and identified 562 lost species. Their findings appear in the journal Animal Conservation.

The IUCN Red List defines extinct as 'when there is no reasonable doubt the last individual of a species has died,' which can be challenging to verify. According to Simon Fraser University biodiversity professor and study co-author Arne Mooers, the Red List categorizes 75 of these 562 lost species as 'possibly extinct.' The researchers note the existence of many species with an uncertain conservation status may become increasingly problematic as the extinction crisis worsens and more species go missing.

A total of 311 terrestrial vertebrate species have been declared extinct since 1500 -- meaning that, with 562 species now classed as lost, about 80 per cent more species are considered lost than have been declared extinct.

Reptiles led the way with 257 species considered lost, followed by 137 species of amphibians, 130 species of mammals and 38 species of birds. Most of these lost animals were last seen in megadiverse countries such as Indonesia (69 species), Mexico (33 species) and Brazil (29 species).

While not surprising, this concentration is important, according to researchers. "The fact most of these lost species are found in megadiverse tropical countries is worrying, given such countries are expected to experience the highest numbers of extinctions in the coming decades," says study lead author Tom Martin from the UK's Paignton Zoo.

Mooers, who anchored the study, says: "While theoretical estimates of ongoing 'extinction rates' are fine and good, looking hard for actual species seems better."

Gareth Bennett, an SFU undergraduate student who did much of the data combing, adds: "We hope this simple study will help make these lost species a focus in future searches."

Read more at Science Daily

Satellite monitoring of biodiversity moves within reach

Internationally comparable data on biodiversity is needed to protect threatened ecosystems, restore destroyed habitats and counteract the negative effects of global biodiversity loss. Current biodiversity monitoring, however, is labor-intensive and costly. In addition, many places around the world are difficult to access.

Biodiversity monitoring from space possible via satellite

Anna Schweiger from the Remote Sensing Laboratories at the Department of Geography, University of Zurich (UZH), and Etienne Laliberté from the University of Montréal, have now shown that plant biodiversity across ecosystems ranging from Arctic tundra to tropical forests can be reliably assessed using image spectrometry. "With our study, we hope to contribute to the future detection of changes in species composition of our Earth's ecosystems from space. The goal is to provide evidence-based guidance for policy measures to protect species and mitigate negative consequences of biodiversity loss," says first author Anna Schweiger.

Imaging spectrometers measure the reflectance of light from the visible to the shortwave infrared range of the electromagnetic spectrum. The reflectance of plants is determined by their chemical, anatomical and morphological characteristics, which are important for interactions among plants and with their environment. "Plants with similar traits, as well as closely related species, therefore tend to have similar reflectance spectra," explains Schweiger.

Using reflected light to assess the characteristics of individual plants and plant communities

The current study is a continuation of the researchers' work on spectral diversity metrics. Their indices calculate spectral variation among individual plants within communities, and among communities within a region. The diversity within communities is called alpha-diversity, while the diversity among communities is called beta-diversity.
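
The study's actual indices are not reproduced in this summary, so the Python sketch below only illustrates the general idea under plain assumptions: alpha-diversity as the average spectral distance of pixels from their community's mean spectrum, and beta-diversity as the distance between community mean spectra (function names and the toy data are invented for illustration).

    import numpy as np

    def spectral_alpha(pixels):
        """Mean distance of pixel spectra to the community mean spectrum.
        pixels: array of shape (n_pixels, n_bands)."""
        centroid = pixels.mean(axis=0)
        return np.linalg.norm(pixels - centroid, axis=1).mean()

    def spectral_beta(community_a, community_b):
        """Distance between the mean spectra of two communities."""
        return np.linalg.norm(community_a.mean(axis=0) - community_b.mean(axis=0))

    # Toy example: two communities of 100 pixels, 50 spectral bands each.
    rng = np.random.default_rng(0)
    meadow = rng.normal(0.30, 0.05, size=(100, 50))  # synthetic reflectance spectra
    forest = rng.normal(0.45, 0.02, size=(100, 50))

    print("alpha (meadow):", spectral_alpha(meadow))
    print("alpha (forest):", spectral_alpha(forest))
    print("beta  (meadow vs forest):", spectral_beta(meadow, forest))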

Data for the study came from the National Ecological Observatory Network (NEON). The network uses standardized methods to collect biodiversity and Earth observation data across the United States which are then made publicly available. NEON imaging spectrometer data collected from research flights have a pixel size of 1x1 meter.

Spectral diversity calculations showed that the detection of alpha-diversity depends on plant size. Spectral diversity calculated in forests with closed canopies and large individual trees matched plant diversity determined on the ground better than spectral diversity calculated in open landscapes dominated by small herbaceous plants and grasses. Spectral beta-diversity, however, captured differences in plant community composition across all ecosystems studied based on a spatial resolution of 20x20 meters. This pixel size corresponds to the size of NEON's vegetation inventory plots.

Read more at Science Daily

Insomnia in midlife may manifest as cognitive problems in retirement age

The Helsinki Health Study at the University of Helsinki investigated the development of insomnia symptoms in midlife and their effects on memory, learning ability and concentration after retirement. The follow-up period was 15-17 years.

According to the study, there is a clear connection between long-term insomnia symptoms in midlife and poorer cognitive functioning later in life.

"The findings indicate that severe insomnia symptoms were associated with worse cognitive function among those who were on statutory pension," says Doctoral Researcher Antti Etholén, describing the results of the study.

The study also found that problems with memory, learning ability and concentration increased the longer the insomnia symptoms persisted.

Sleeping well already in middle age

Prior research has shown that there are a number of mechanisms that can explain how sleep can affect cognitive functioning. What makes the recently published study exceptional is the long follow-up period for insomnia symptoms.

Among other things, the study demonstrated that if insomnia symptoms eased over the years, cognitive functioning at retirement age was also better than if the problems persisted.

According to the researchers, long-lasting insomnia symptoms should be considered as risk factors for poor cognitive functioning.

"Based on our findings, early intervention tackling insomnia symptoms, or measures aimed at improving the quality of sleep would be justified," says Professor Tea Lallukka.

There are many ways to improve the quality of sleep, including the regularity of the sleep rhythm, the appropriate temperature and brightness of the sleeping environment, and the optimal timing of physical exercise, coffee consumption and eating.

However, Lallukka believes that intervention studies are still needed to ascertain the effects of measures in support of good sleep.

Read more at Science Daily

Prehistoric feces reveal parasites from feasting at Stonehenge

A new analysis of ancient faeces found at the site of a prehistoric village near Stonehenge has uncovered evidence of the eggs of parasitic worms, suggesting the inhabitants feasted on the internal organs of cattle and fed leftovers to their dogs.

Durrington Walls was a Neolithic settlement situated just 2.8 km from Stonehenge and dating from around 2500 BC, when much of the famous stone monument was constructed. It is believed that the site housed the people who built Stonehenge.

A team of archaeologists led by the University of Cambridge investigated nineteen pieces of ancient faeces, or 'coprolite', found at Durrington Walls and preserved for over 4,500 years. Five of the coprolites (26%) -- one human and four dog -- were found to contain the eggs of parasitic worms.

Researchers say it is the earliest evidence for intestinal parasites in the UK where the host species that produced the faeces has also been identified. The findings are published today in the journal Parasitology.

"This is the first time intestinal parasites have been recovered from Neolithic Britain, and to find them in the environment of Stonehenge is really something," said study lead author Dr Piers Mitchell from Cambridge's Department of Archaeology.

"The type of parasites we find are compatible with previous evidence for winter feasting on animals during the building of Stonehenge," he said.

Four of the coprolites, including the human one, contained the eggs of capillariid worms, identified in part by their lemon shape.

While the many types of capillariid around the world infect a wide range of animals, on the rare occasion that a European species infects humans, the eggs get lodged in the liver and don't appear in stool.

The evidence of capillariid eggs in human faeces indicates that the person had eaten the raw or undercooked lungs or liver from an already infected animal, resulting in the parasite's eggs passing straight through the body.

During excavations of the main 'midden' -- or dung and refuse heap -- at Durrington Walls, archaeologists uncovered pottery and stone tools along with over 38,000 animal bones. Some 90% of the bones were from pigs, with less than 10% from cows. This is also where the partially mineralised faeces used in the study were found.

"As capillariid worms can infect cattle and other ruminants, it seems that cows may have been the most likely source of the parasite eggs," said Mitchell.

Previous isotopic analyses of cow teeth from Durrington Walls suggest that some cattle were herded almost 100km from Devon or Wales to the site for large-scale feasting. Patterns of butchery previously identified on cattle bones from the site suggest beef was primarily chopped for stewing, and bone marrow was extracted.

"Finding the eggs of capillariid worms in both human and dog coprolites indicates that the people had been eating the internal organs of infected animals, and also fed the leftovers to their dogs," said co-author Evilena Anastasiou, who assisted with the research while at Cambridge.

To determine whether the coprolites excavated from the midden were from human or animal faeces, they were analysed for sterols and bile acids at the National Environment Isotope Facility at the University of Bristol.

One of the coprolites belonging to a dog contained the eggs of fish tapeworm, indicating it had previously eaten raw freshwater fish to become infected. However, no other evidence of fish consumption, such as bones, has been found at the site.

"Durrington Walls was occupied on a largely seasonal basis, mainly in winter periods. The dog probably arrived already infected with the parasite," said Dr Piers Mitchell.

"Isotopic studies of cow bones at the site suggests they came from regions across southern Britain, which was likely also true of the people who lived and worked there."

The dates for Durrington Walls match those for stage two of the construction of Stonehenge, when the world-famous 'trilithons' -- two massive vertical stones supporting a third horizontal stone -- were erected, most likely by the seasonal residents of this nearby settlement.

While Durrington Walls was a place of feasting and habitation, as evidenced by the pottery and vast number of animal bones, Stonehenge itself was not, with little found to suggest people lived or ate there en masse.

Prof Mike Parker Pearson from UCL's Institute of Archaeology, who excavated Durrington Walls between 2005 and 2007, added: "This new evidence tells us something new about the people who came here for winter feasts during the construction of Stonehenge."

Read more at Science Daily

May 19, 2022

Astronauts may one day drink water from ancient moon volcanoes

Billions of years ago, a series of volcanic eruptions broke loose on the moon, blanketing hundreds of thousands of square miles of the orb's surface in hot lava. Over the eons, that lava created the dark blotches, or maria, that give the face of the moon its familiar appearance today.

Now, new research from CU Boulder suggests that volcanoes may have left another lasting impact on the lunar surface: sheets of ice that dot the moon's poles and, in some places, could measure dozens or even hundreds of feet thick.

"We envision it as a frost on the moon that built up over time," said Andrew Wilcoski, lead author of the new study and a graduate student in the Department of Astrophysical and Planetary Sciences (APS) and the Laboratory for Atmospheric and Space Physics (LASP) at CU Boulder.

He and his colleagues published their findings this month in The Planetary Science Journal.

The researchers drew on computer simulations, or models, to try to recreate conditions on the moon long before complex life arose on Earth. They discovered that ancient moon volcanoes spewed out huge amounts of water vapor, which then settled onto the surface -- forming stores of ice that may still be hiding in lunar craters. If any humans had been alive at the time, they may even have seen a sliver of that frost near the border between day and night on the moon's surface.

It's a potential bounty for future moon explorers who will need water to drink and process into rocket fuel, said study co-author Paul Hayne.

"It's possible that 5 or 10 meters below the surface, you have big sheets of ice," said Hayne, assistant professor in APS and LASP.

Temporary atmospheres

The new study adds to a growing body of evidence that suggests that the moon may be awash in a lot more water than scientists once believed. In a 2020 study, Hayne and his colleagues estimated that nearly 6,000 square miles of the lunar surface could be capable of trapping and hanging onto ice -- mostly near the moon's north and south poles. Where all that water came from in the first place is unclear.

"There are a lot of potential sources at the moment," Hayne said.

Volcanoes could be a big one. The planetary scientist explained that from 2 to 4 billion years ago, the moon was a chaotic place. Tens of thousands of volcanoes erupted across its surface during this period, generating huge rivers and lakes of lava, not unlike the features you might see in Hawaii today -- only much more immense.

"They dwarf almost all of the eruptions on Earth," Hayne said.

Recent research from scientists at the Lunar and Planetary Institute in Houston shows that these volcanoes likely also ejected towering clouds made up of mostly carbon monoxide and water vapor. These clouds then swirled around the moon, potentially creating thin and short-lived atmospheres.

That got Hayne and Wilcoski wondering: Could that same atmosphere have left ice on the lunar surface, a bit like frost forming on the ground after a chilly fall night?

Forever ice

To find out, the duo alongside Margaret Landis, a research associate at LASP, set out to try to put themselves onto the surface of the moon billions of years ago.

The team used estimates that, at its peak, the moon experienced one eruption every 22,000 years, on average. The researchers then tracked how volcanic gases may have swirled around the moon, escaping into space over time. And, they discovered, conditions may have gotten icy. According to the group's estimates, roughly 41% of the water from volcanoes may have condensed onto the moon as ice.

"The atmospheres escaped over about 1,000 years, so there was plenty of time for ice to form," Wilcoski said.

There may have been so much ice on the moon, in fact, that you could, conceivably, have spotted the sheen of frost and thick, polar ice caps from Earth. The group calculated that about 18 quadrillion pounds of volcanic water could have condensed as ice during that period. That's more water than currently sits in Lake Michigan. And the research hints that much of that lunar water may still be present today.
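
The Lake Michigan comparison can be sanity-checked with rough unit conversions. A back-of-the-envelope Python sketch (the lake volume of roughly 4,900 cubic kilometres is an outside figure used here for illustration, not a number from the study):

    # Compare ~18 quadrillion pounds of volcanic ice with Lake Michigan's water mass.
    ICE_LB = 18e15                               # pounds of condensed volcanic water
    ice_kg = ICE_LB * 0.4536                     # pounds -> kilograms (~8.2e15 kg)

    LAKE_MICHIGAN_KM3 = 4.9e3                    # assumed lake volume, ~4,900 km^3
    lake_kg = LAKE_MICHIGAN_KM3 * 1e9 * 1000.0   # km^3 -> m^3 -> kg of water (~4.9e15 kg)

    print(f"volcanic ice:  {ice_kg:.1e} kg")
    print(f"Lake Michigan: {lake_kg:.1e} kg")
    print("ice exceeds lake:", ice_kg > lake_kg)  # True, as the article states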

Read more at Science Daily

Past events reveal how future warming could harm cold-water corals

How will future warming of the planet impact cold-water corals? A new analysis of ancient evidence from the last major global warming event identifies food and oxygen supply as key environmental factors that influence the vitality of cold-water corals in the North Atlantic Ocean and the Mediterranean Sea. Rodrigo da Costa Portilho-Ramos of the University of Bremen, Germany, and colleagues present these findings in the open-access journal PLOS Biology on May 19th.

Much like tropical corals in shallower waters, cold-water corals serve as crucial "engineers" of deep-sea reefs and mounds that are home to rich, unique ecosystems. As climate change progresses, researchers predict, cold-water corals are likely to face harm from such factors as rising ocean temperatures, decreased food supply, lower oxygen levels, and ocean acidification. However, no extinctions of cold-water coral ecosystems have been documented in real-time, so the precise factors that may determine their fate have been unclear.

To shed new light, Portilho-Ramos and colleagues turned to ancient evidence of past climate change as captured in seafloor sediments. They analyzed sediments collected at or near six sites of cold-water coral ecosystems in the North Atlantic Ocean and the Mediterranean Sea, applying standard techniques to reconstruct ocean conditions and the abundance of the common coral species Lophelia pertusa over the last 20,000 years. This period encompasses Earth's last major global warming event.

The analysis revealed that ancient L. pertusa abundance was most strongly influenced by changes in food supply, delivered either vertically from shallower depths or by lateral water flow along the seafloor. Low oxygen concentration also appeared to be a key stressor for L. pertusa. Meanwhile, changes in ocean temperature and salinity did not appear to be significantly associated with proliferation or disappearance of L. pertusa over time.

These findings suggest that climate change-driven alterations to ocean processes that affect food and oxygen supplies may play key roles in the future health of cold-water coral ecosystems. In some cases, the data suggest, high abundance of food may compensate for low oxygen levels.

Read more at Science Daily

First animals developed complex ecosystems before the Cambrian explosion

Early animals formed complex ecological communities more than 550 million years ago, setting the evolutionary stage for the Cambrian explosion, according to a study by Rebecca Eden, Emily Mitchell, and colleagues at the University of Cambridge, UK, publishing May 17 in the open-access journal PLOS Biology.

The first animals evolved towards the end of the Ediacaran period, around 580 million years ago. However, the fossil record shows that after an initial boom, diversity declined in the run-up to the dramatic burgeoning of biodiversity in the so-called "Cambrian explosion" nearly 40 million years later. Scientists have suggested this drop in diversity is evidence of a mass extinction event roughly 550 million years ago -- possibly caused by an environmental catastrophe -- but previous research has not investigated the structure of these ancient ecological communities.

To evaluate the evidence for an Ediacaran mass extinction, researchers analyzed the metacommunity structure of three fossil assemblages that span the last 32 million years of this geological period (between 575 and 543 million years ago). They used published paleoenvironmental data, such as ocean depth and rock characteristics, to look for metacommunity structure indicative of environmental specialization and interactions between species. The analysis revealed increasingly complex community structure in the later fossil assemblages, suggesting that species were becoming more specialized and engaging in more inter-species interactions towards the end of the Ediacaran era, a trend often seen during ecological succession.

The results point to competitive exclusion, rather than mass extinction, as the cause of the diversity drop in the late Ediacaran period, the authors say. The analysis indicates that the features of ecological and evolutionary dynamics commonly associated with the Cambrian explosion -- such as specialization and niche contraction -- were established by the first animal communities in the late Ediacaran.

Read more at Science Daily

Puzzling features deep in Earth's interior illuminated

New research led by the University of Cambridge is the first to take a detailed image of an unusual pocket of rock at the boundary layer with Earth's core, some three thousand kilometres beneath the surface.

The enigmatic area of rock, which is located almost directly beneath the Hawaiian Islands, is one of several ultra-low velocity zones -- so-called because earthquake waves slow to a crawl as they pass through them.

The research, published today in Nature Communications, is the first to reveal the complex internal variability of one of these pockets in detail, shedding light on the landscape of Earth's deep interior and the processes operating within it.

"Of all Earth's deep interior features, these are the most fascinating and complex. We've now got the first solid evidence to show their internal structure -- it's a real milestone in deep earth seismology," said lead author Zhi Li, PhD student at Cambridge's Department of Earth Sciences.

Earth's interior is layered like an onion: at the centre sits the iron-nickel core, surrounded by a thick layer known as the mantle, and on top of that a thin outer shell -- the crust we live on. Although the mantle is solid rock, it is hot enough to flow extremely slowly. These internal convection currents feed heat to the surface, driving the movement of tectonic plates and fuelling volcanic eruptions.

Scientists use seismic waves from earthquakes to see beneath Earth's surface -- the echoes and shadows of these waves revealing radar-like images of deep interior topography. But until recently, images of the structures at the core-mantle boundary, an area of key interest for studying our planet's internal heat flow, have been grainy and difficult to interpret.

The researchers used the latest numerical modelling methods to reveal kilometre-scale structures at the core-mantle boundary. According to co-author Dr Kuangdai Leng, who developed the methods while at the University of Oxford, "We are really pushing the limits of modern high-performance computing for elastodynamic simulations, taking advantage of wave symmetries unnoticed or unused before." Leng, who is currently based at the Science and Technology Facilities Council, said that this means they can improve the resolution of the images by an order of magnitude compared to previous work.

They observed a 40% reduction in the speed of seismic waves travelling at the base of the ultra-low velocity zone beneath Hawaii. According to the authors, this supports existing proposals that the zone contains much more iron than the surrounding rocks -- meaning it is denser and more sluggish. "It's possible that this iron-rich material is a remnant of ancient rocks from Earth's early history or even that iron might be leaking from the core by an unknown means," said project lead, Dr Sanne Cottaar from Cambridge Earth Sciences.

The new research could also help scientists understand what sits beneath and gives rise to volcanic chains like the Hawaiian Islands. Scientists have started to notice a correlation between the location of the descriptively-named hotspot volcanoes, which include Hawaii and Iceland, and the ultra-low velocity zones at the base of the mantle. The origin of hotspot volcanoes has been widely debated, but the most popular theory suggests that plume-like structures bring hot mantle material all the way from the core-mantle boundary to the surface.

With images of the ultra-low velocity zone beneath Hawaii now in hand, the team can also gather rare physical evidence from what is likely the root of the plume feeding Hawaii. Their observation of dense, iron-rich rock beneath Hawaii would support surface observations. "Basalts erupting from Hawaii have anomalous isotope signatures which could point either to an early-Earth origin or to core leaking; it means some of this dense material piled up at the base must be dragged to the surface," said Cottaar.

More of the core-mantle boundary now needs to be imaged to understand if all surface hotspots have a pocket of dense material at the base. Where and how the core-mantle boundary can be targeted does depend on where earthquakes occur, and where seismometers are installed to record the waves.

The team's observations add to a growing body of evidence that Earth's deep interior is just as variable as its surface. "These low velocity zones are one of the most intricate features we see at extreme depths -- if we expand our search we are likely to see ever-increasing levels of complexity, both structural and chemical, at the core-mantle boundary," said Li.

Read more at Science Daily

May 18, 2022

Physicists explain how type of aurora on Mars is formed

Physicists led by the University of Iowa have learned how a type of aurora on Mars is formed.

In a new study, the physicists studied discrete aurora, a light-in-the-sky display that occurs mostly during the night in the red planet's southern hemisphere. While scientists have known about discrete aurora on Mars -- which also occur on Earth -- they did not know how they formed. That's because Mars does not have a global magnetic field like Earth's, which is a main trigger for aurora, also called the northern and southern lights on our planet.

Instead, the physicists report, discrete aurora on Mars are governed by the interaction between the solar wind -- the constant jet of charged particles from the sun -- and magnetic fields generated by the crust at southern latitudes on Mars. It's the nature of this localized interaction between the solar wind and the crustal magnetic fields that leads to discrete aurora, the scientists find.

"We have the first detailed study looking at how solar wind conditions affect aurora on Mars," says Zachary Girazian, associate research scientist in the Department of Physics and Astronomy and the study's corresponding author. "Our main finding is that inside the strong crustal field region, the aurora occurrence rate depends mostly on the orientation of the solar wind magnetic field, while outside the strong crustal field region, the occurrence rate depends mostly on the solar wind dynamic pressure."

The findings come from more than 200 observations of discrete aurora on Mars by the NASA-led Mars Atmosphere and Volatile EvolutioN (MAVEN) spacecraft. One of the instruments used to make the observations, the Solar Wind Ion Analyzer, is led by Jasper Halekas, associate professor in the Department of Physics and Astronomy and a co-author on the study.

"Now is a very fruitful and exciting time for researching aurora at Mars. The database of discrete aurora observations we have from MAVEN is the first of its kind, allowing us to understand basic features of the aurora for the first time," Girazian says.

Read more at Science Daily

Researchers use galaxy as a 'cosmic telescope' to study heart of the young universe

A unique new instrument, coupled with a powerful telescope and a little help from nature, has given researchers the ability to peer into galactic nurseries at the heart of the young universe.

After the big bang some 13.8 billion years ago, the early universe was filled with enormous clouds of neutral diffuse gas, known as Damped Lyman-α systems, or DLAs. These DLAs served as galactic nurseries, as the gasses within slowly condensed to fuel the formation of stars and galaxies. They can still be observed today, but it isn't easy.

"DLAs are a key to understanding how galaxies form in the universe, but they are typically difficult to observe since the clouds are too diffuse and don't emit any light themselves," says Rongmon Bordoloi, assistant professor of physics at North Carolina State University and corresponding author of the research.

Currently, astrophysicists use quasars -- supermassive black holes that emit light -- as "backlight" to detect the DLA clouds. And while this method does allow researchers to pinpoint DLA locations, the light from the quasars only acts as small skewers through a massive cloud, hampering efforts to measure their total size and mass.

But Bordoloi and John O'Meara, chief scientist at the W.M. Keck Observatory in Kamuela, Hawaii, found a way around the problem by using a gravitationally lensed galaxy and integral field spectroscopy to observe two DLAs -- and the host galaxies within -- that formed around 11 billion years ago, not long after the big bang.

"Gravitationally lensed galaxies refers to galaxies that appear stretched and brightened," Bordoloi says. "This is because there is a gravitationally massive structure in front of the galaxy that bends the light coming from it as it travels toward us. So we end up looking at an extended version of the object -- it's like using a cosmic telescope that increases magnification and gives us better visualization.

"The advantage to this is twofold: One, the background object is extended across the sky and bright, so it is easy to take spectrum readings on different parts of the object. Two, because lensing extends the object, you can probe very small scales. For example, if the object is one light year across, we can study small bits in very high fidelity."

Spectrum readings allow astrophysicists to "see" elements in deep space that are not visible to the naked eye, such as diffuse gaseous DLAs and the potential galaxies within them. Normally, gathering the readings is a long and painstaking process. But the team solved that issue by performing integral field spectroscopy with the Keck Cosmic Web Imager.

Integral field spectroscopy allowed the researchers to obtain a spectrum at every single pixel on the part of the sky it targeted, making spectroscopy of an extended object on the sky very efficient. This innovation combined with the stretched and brightened gravitationally lensed galaxy allowed the team to map out the diffuse DLA gas in the sky at high fidelity. Through this method the researchers were able to determine not only the size of the two DLAs, but also that they both contained host galaxies.
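
Conceptually, an integral field spectrograph delivers a data cube: two spatial axes plus one wavelength axis, so every spatial pixel carries its own spectrum. A minimal numpy sketch of that data structure (array shapes and variable names are illustrative, not taken from the Keck Cosmic Web Imager pipeline):

    import numpy as np

    # Toy IFU data cube: 60 x 60 spatial pixels, 1,000 wavelength channels.
    ny, nx, nwave = 60, 60, 1000
    cube = np.random.default_rng(1).normal(size=(ny, nx, nwave))

    # A spectrum at every spatial pixel: index the two spatial axes.
    spectrum_at_pixel = cube[30, 30, :]                 # shape (1000,)

    # Spectra can also be summed over a small aperture to boost signal,
    # e.g. a 3x3 block of pixels around the same position.
    aperture_spectrum = cube[29:32, 29:32, :].sum(axis=(0, 1))

    print(spectrum_at_pixel.shape, aperture_spectrum.shape)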

"I've waited most of my career for this combination: a telescope and instrument powerful enough, and nature giving us a bit of lucky alignments to study not one but two DLAs in a rich new way," O'Meara says. "It's great to see the science come to fruition."

The DLAs are huge, by the way. With diameters greater than 17.4 kiloparsecs, they're more than two thirds the size of the Milky Way galaxy today. For comparison, 13 billion years ago, a typical galaxy would have a diameter of less than 5 kiloparsecs. A parsec is 3.26 light years, and a kiloparsec is 1,000 parsecs, so it would take light about 56,723 years to travel across each DLA.
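
The light-travel figure follows directly from the conversions given in the text. A quick arithmetic sketch:

    # Light-crossing time of a DLA with a diameter of 17.4 kiloparsecs.
    PC_TO_LY = 3.26                      # light years per parsec, as quoted above
    diameter_kpc = 17.4

    diameter_ly = diameter_kpc * 1000 * PC_TO_LY
    print(f"{diameter_ly:,.0f} light years")  # ~56,700, matching the figure quoted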

"But to me, the most amazing thing about the DLAs we observed is that they aren't unique -- they seem to have similarities in structure, host galaxies were detected in both, and their masses indicate that they contain enough fuel for the next generation of star formation," Bordoloi says. "With this new technology at our disposal, we are going to be able to dig deeper into how stars formed in the early universe."

Read more at Science Daily

Keeping buildings cooler with a wood-based foam

Summertime is almost here, a time when many people try to beat the heat. But running air conditioners constantly can be expensive and wasteful. Now, researchers reporting in ACS' Nano Letters have designed a lightweight foam made from wood-based cellulose nanocrystals that reflects sunlight, emits absorbed heat and is thermally insulating. They suggest that the material could reduce buildings' cooling energy needs by more than a third.

Although scientists have developed cooling materials, they have disadvantages. Some materials that passively release absorbed heat let a lot of heat through to buildings under the direct, midday sun of the summer months. And other materials that reflect sunlight don't work well in hot, humid or cloudy weather. So, Yu Fu, Kai Zhang and colleagues wanted to develop a robust material that could reflect sunlight, passively release heat and keep wayward heat from passing through.

To generate a cooling material, the researchers connected cellulose nanocrystals together with a silane bridge, before freezing and freeze-drying the material under a vacuum. This process vertically aligned the nanocrystals, making a white, lightweight foam, which reflected 96% of visible light and emitted 92% of absorbed infrared radiation. When placed over an aluminum foil-lined box sitting outdoors at noon, the material kept the temperature inside the box 16 degrees F cooler than outside of it. Also, the material kept the inside of the box 13 degrees F cooler when the air was humid. As the cellulose-based foam was compressed, its cooling ability decreased, revealing tunable cooling properties. The team calculated that placing the foam on the roof and exterior walls of a building could reduce its cooling energy needs by an average of 35.4%. Because the wood-based cellulose foam's performance can be tuned depending on weather conditions, the researchers say that the technology could be applied in a wide range of environments.

Read more at Science Daily

How the brain changes during depression treatment

For the first time, researchers have shown what happens to the brain when a person receives a depression treatment known as repetitive transcranial magnetic stimulation (rTMS). The results were published today in the American Journal of Psychiatry.

rTMS is a depression treatment typically used when other approaches -- such as medications -- haven't been effective for a patient. It is estimated that approximately 40 per cent of people with major depression do not respond to antidepressants.

During an rTMS session, a device containing an electromagnetic coil is placed against a patient's scalp. The device then painlessly delivers a magnetic pulse that stimulates nerve cells in a region of the brain involved in mood control -- called the dorsolateral pre-frontal cortex.

Although proven to be effective, the mechanisms behind how rTMS affects the brain have not been well understood.

"When we first started this research, the question we were asking was very simple: we wanted to know what happens to the brain when rTMS treatment is being delivered," says Dr. Fidel Vila-Rodriguez, an assistant professor in UBC's department of psychiatry and researcher at the Djavad Mowafaghian Centre for Brain Health (DMCBH).

To answer this question, Dr. Vila-Rodriguez and his team delivered one round of rTMS to patients while they were inside a magnetic resonance imaging (MRI) scanner. Since the MRI can measure brain activity, the researchers were able to see in real time what changes were happening in the brain.

The team found that by stimulating the dorsolateral pre-frontal cortex, several other regions of the brain were also activated. These other regions are involved in multiple functions -- from managing emotional responses to memory and motor control.

The participants then underwent another four weeks of rTMS treatment and the team assessed whether the activated regions were associated with patients having fewer symptoms of depression when their treatment ended.

"We found that regions of the brain that were activated during the concurrent rTMS-fMRI were significantly related to good outcomes," says Dr. Vila-Rodriguez.

With this new map of how rTMS stimulates different areas of the brain, Dr. Vila-Rodriguez hopes the findings could be used to determine how well a patient is responding to rTMS treatments.

"By demonstrating this principle and identifying regions of the brain that are activated by rTMS, we can now try to understand whether this pattern can be used as a biomarker," he says.

Dr. Vila-Rodriguez is now exploring how rTMS can be used to treat a range of neuropsychiatric disorders. He has received funding through the Djavad Mowafaghian Centre for Brain Health Alzheimer's Disease Research Competition to look at rTMS as a way to enhance memory in patients who are showing early signs of Alzheimer's disease. He also received a grant from the Canadian Institutes of Health Research (CIHR) to study whether the rTMS brain activation patterns can be detected by changes in heart rate.

Dr. Vila-Rodriguez says this type of research will hopefully encourage more widespread adoption and accessibility of rTMS treatments across the country. Despite being approved by Health Canada 20 years ago, rTMS is still not widely available. In British Columbia, there are some private clinics that offer rTMS, but it is not covered by the provincial health plan.

Read more at Science Daily

May 17, 2022

Extraterrestrial stone brings first supernova clues to Earth

New chemistry 'forensics' indicate that the stone named Hypatia from the Egyptian desert could be the first tangible evidence found on Earth of a supernova type Ia explosion. These rare supernovas are some of the most energetic events in the universe.

This is the conclusion from a new study published in the journal Icarus, by Jan Kramers, Georgy Belyanin and Hartmut Winkler of the University of Johannesburg, and others.

Since 2013, Belyanin and Kramers have discovered a series of highly unusual chemistry clues in a small fragment of the Hypatia Stone.

In the new research, they eliminate 'cosmic suspects' for the origin of the stone in a painstaking process. They have pieced together a timeline stretching back to the early stages of the formation of Earth, our Sun and the other planets in our solar system.

A cosmic timeline

Their hypothesis about Hypatia's origin starts with a star: A red giant star collapsed into a white dwarf star. The collapse would have happened inside a gigantic dust cloud, also called a nebula.

That white dwarf found itself in a binary system with a second star. The white dwarf star eventually 'ate' the other star. At some point the 'hungry' white dwarf exploded as a supernova type Ia inside the dust cloud.

After cooling, the gas atoms which remained of the supernova Ia started sticking to the particles of the dust cloud.

"In a sense we could say, we have 'caught' a supernova Ia explosion 'in the act', because the gas atoms from the explosion were caught in the surrounding dust cloud, which eventually formed Hypatia's parent body," says Kramers.

A huge 'bubble' of this supernova dust-and-gas-atoms mix never interacted with other dust clouds.

Millions of years would pass, and eventually the 'bubble' would slowly become solid, in a 'cosmic dust bunny' kind of way. Hypatia's 'parent body' would become a solid rock some time in the early stages of formation of our solar system.

This process probably happened in a cold, uneventful outer part of our solar system -- in the Oort cloud or in the Kuiper belt.

At some point, Hypatia's parent rock started hurtling towards Earth. The heat of entry into earth's atmosphere, combined with the pressure of impact in the Great Sand Sea in south-western Egypt, created micro-diamonds and shattered the parent rock.

The Hypatia stone picked up in the desert must be one of many fragments of the original impactor.

"If this hypothesis is correct, the Hypatia stone would be the first tangible evidence on Earth of a supernova type Ia explosion. Perhaps equally important, it shows that an individual anomalous 'parcel' of dust from outer space could actually be incorporated in the solar nebula that our solar system was formed from, without being fully mixed in," says Kramers.

"This goes against the conventional view that dust which our solar system was formed from, was thoroughly mixed."

Three million volts for a tiny sample

To piece together the timeline of how Hypatia may have formed, the researchers used several techniques to analyse the strange stone.

In 2013, a study of the argon isotopes showed the rock was not formed on earth. It had to be extraterrestrial. A 2015 study of noble gases in the fragment indicated that it may not be from any known type of meteorite or comet.

In 2018 the UJ team published various analyses, which included the discovery of a mineral, nickel phosphide, not previously found in any object in our solar system.

At that stage Hypatia was proving difficult to analyse further. The trace metals Kramers and Belyanin were looking for couldn't really be 'seen in detail' with the equipment they had. They needed a more powerful instrument that would not destroy the tiny sample.

Kramers started analysing a dataset that Belyanin had created a few years before.

In 2015, Belyanin had done a series of analyses on a proton beam at the iThemba Labs in Somerset West. At the time, Dr Wojciech Przybylowicz kept the three-million Volt machine humming along.

In search of a pattern

"Rather than exploring all the incredible anomalies Hypatia presents, we wanted to explore if there is an underlying unity. We wanted to see if there is some kind of consistent chemical pattern in the stone" says Kramers.

Belyanin carefully selected 17 targets on the tiny sample for analysis. All were chosen to be well away from the earthly minerals that had formed in the cracks of the original rock after its impact in the desert.

"We identified 15 different elements in Hypatia with much greater precision and accuracy, with the proton microprobe. This gave us the chemical 'ingredients' we needed, so Jan could start the next process of analysing all the data," says Belyanin.

Proton beam also rules out solar system

The first big new clue from the proton beam analyses was the surprisingly low level of silicon in the Hypatia stone targets. Silicon, along with chromium and manganese, was present at less than 1% of what would be expected for something formed within our inner solar system.

Further, high iron, high sulphur, high phosphorus, high copper and high vanadium were conspicuous and anomalous, adds Kramers.

"We found a consistent pattern of trace element abundances that is completely different from anything in the solar system, primitive or evolved. Objects in the asteroid belt and meteors don't match this either. So next we looked outside the solar system," says Kramers.

Not from our neighbourhood

Then Kramers compared the Hypatia element concentration pattern with what one would expect to see in the dust between stars in our arm of the Milky Way galaxy.

"We looked to see if the pattern we get from average interstellar dust in our arm of the Milky Way galaxy fits what we see in Hypatia. Again, there was no similarity at all," adds Kramers.

At this point, the proton beam data had also ruled out four 'suspects' of where Hypatia could have formed.

Hypatia did not form on earth, was not part of any known type of comet or meteorite, did not form from average inner solar system dust, and not from average interstellar dust either.

Not a red giant

The next simplest possible explanation for the element concentration pattern in Hypatia would be a red giant star. Red giant stars are common in the universe.

But the proton beam data ruled out mass outflow from a red giant star too: Hypatia had too much iron, too little silicon and too low concentrations of elements heavier than iron.

Nor a supernova Type II

The next 'suspect' to consider was a supernova type II. Supernovas of type II cook up a lot of iron. They are also a relatively common type of supernova.

Again, the proton beam data for Hypatia ruled out a promising suspect with 'chemistry forensics'. A supernova type II was highly unlikely as the source of strange minerals like nickel phosphide in the pebble. There was also too much iron in Hypatia compared to silicon and calcium.

It was time to closely examine the predicted chemistry of one of the most dramatic explosions in the universe.

Heavy metal factory

A rarer kind of supernova also makes a lot of iron. Supernovas of the type Ia only happen once or twice per galaxy per century. But they manufacture most of the iron (Fe) in the universe. Most of the steel on earth was once the element iron created by Ia supernovas.

Also, established science says that some Ia supernovas leave very distinctive 'forensic chemistry' clues behind. This is because of the way some Ia supernovas are set up.

First, a red giant star at the end of its life collapses into a very dense white dwarf star. White dwarf stars are usually incredibly stable for very long periods and most unlikely to explode. However, there are exceptions to this.

A white dwarf star could start 'pulling' matter off another star in a binary system. One can say the white dwarf star 'eats up' its companion star. Eventually the white dwarf gets so heavy, hot and unstable, it explodes in a supernova Ia.

The nuclear fusion during the supernova Ia explosion should create highly unusual element concentration patterns, accepted scientific theoretical models predict.

Also, the white dwarf star that explodes in a supernova Ia is not just blown to bits, but literally blown to atoms. The supernova Ia matter is delivered into space as gas atoms.

In an extensive literature search of star data and model results, the team could not identify any similar or better chemical fit for the Hypatia stone than a specific set of supernova Ia models.

Forensic elements evidence

"All supernova Ia data and theoretical models show much higher proportions of iron compared to silicon and calcium than supernova II models," says Kramers.

"In this respect, the proton beam laboratory data on Hypatia fit to supernova Ia data and models."

Altogether, eight of the 15 elements analysed conform to the predicted ranges of proportions relative to iron. Those are the elements silicon, sulphur, calcium, titanium, vanadium, chromium, manganese, iron and nickel.

Not all 15 of the analysed elements in Hypatia fit the predictions, though. In six of the 15 elements, proportions were between 10 and 100 times higher than the ranges predicted by theoretical models for supernovas of type Ia. These are the elements aluminium, phosphorus, chlorine, potassium, copper and zinc.

"Since a white dwarf star is formed from a dying red giant, Hypatia could have inherited these element proportions for the six elements from a red giant star. This phenomenon has been observed in white dwarf stars in other research," adds Kramers.

If this hypothesis is correct, the Hypatia stone would be the first tangible evidence on Earth of a supernova type Ia explosion, one of the most energetic events in the universe.

Read more at Science Daily

Deep ocean warming as climate changes

Much of the "excess heat" stored in the subtropical North Atlantic is in the deep ocean (below 700m), new research suggests.

Oceans have absorbed about 90% of warming caused by humans. The study found that in the subtropical North Atlantic (25°N), 62% of the warming from 1850-2018 is held in the deep ocean.

The researchers -- from the University of Exeter and the University of Brest -- estimate that the deep ocean will warm by a further 0.2°C in the next 50 years.

Ocean warming can have a range of consequences including sea-level rise, changing ecosystems, currents and chemistry, and deoxygenation.

"As our planet warms, it's vital to understand how the excess heat taken up by the ocean is redistributed in the ocean interior all the way from the surface to the bottom, and it is important to take into account the deep ocean to assess the growth of Earth's 'energy imbalance'," said Dr Marie-José Messias, from the University of Exeter.

"As well as finding that the deep ocean is holding much of this excess heat, our research shows how ocean currents redistribute heat to different regions.

"We found that this redistribution was a key driver of warming in the North Atlantic."

The researchers studied the system of currents known as the Atlantic Meridional Overturning Circulation (AMOC).

AMOC works like a conveyor belt, carrying warm water from the tropics north -- where colder, dense water sinks into the deep ocean and spreads slowly south.

The findings highlight the importance of heat being transferred by the AMOC from one region to another.

Dr Messias said excess heat from the Southern Hemisphere oceans is becoming important in the North Atlantic -- now accounting for about a quarter of excess heat.

Read more at Science Daily

Chimpanzees combine calls to form numerous vocal sequences

Compared to the complex use of human language, the way animals communicate with each other appears quite simple. How our language evolved from such a simple system remains unclear. Researchers from the Max Planck Institutes for Evolutionary Anthropology (MPI-EVA) and for Cognitive and Brain Sciences (MPI-CBS) in Leipzig, Germany, and the CNRS Institute for Cognitive Sciences in Bron, Lyon, France, recorded thousands of vocalisations from wild chimpanzees in Taï, Ivory Coast. They found that the animals produced hundreds of different vocal sequences containing up to ten different call types. The order of calls in these sequences followed some rules, and calls were associated with each other in a structured manner. The researchers will now investigate whether this structure may constitute a step towards human syntax and whether chimpanzees use these sequences to communicate a wider range of meanings in their complex social environment.

Humans are the only species on Earth known to use language. We do this by combining sounds to form words and words to form hierarchically structured sentences. The question of where this extraordinary capacity originates remains to be answered. In order to retrace the evolutionary origins of human language, researchers often use a comparative approach -- they compare the vocal production of other animals, in particular of primates, to that of humans. In contrast to humans, non-human primates often use single calls -- referred to as call types -- and rarely combine them with each other to form vocal sequences.

Consequently, vocal communication in non-human primates seems much less complex than human communication. However, human language complexity does not arise from the number of sounds we use when we speak, which is typically below 50 different sounds in most languages, but from the way we combine sounds in a structured manner to form words and hierarchically combine these words to form sentences to express an infinite number of meanings. In fact, non-human primates also use up to 38 different calls to communicate, but they rarely combine them with each other. However, since these call combinations have so far not been analysed in great detail, we may not have a full picture of the structure and diversity of vocal sequences produced by non-human primates.

Researchers recorded thousands of vocalisations

Researchers at MPI-EVA and MPI-CBS in Leipzig and from the Institute of Cognitive Sciences at the CNRS in Bron, Lyon, France, recorded thousands of vocalisations produced by the members of three groups of wild chimpanzees in the Taï National Park in Ivory Coast. They identified 12 different call types and assessed how chimpanzees combine them to form vocal sequences. "Observing animals in their natural social and ecological environment reveals a previously undiscovered complexity in the ways they communicate," says first author Cédric Girard-Buttoz. "Syntax is a hallmark of human language and in order to elucidate the origin of this human ability it is crucial to understand how non-human primate vocalisations are structured," adds Emiliano Zaccarella, another lead author of the study.

The study shows that chimpanzees communicate with each other using hundreds of different sequences, combining up to ten call types across the whole repertoire. This is the first documentation of such a diversity of vocal production in non-human primates. Furthermore, the researchers show that calls -- in combination with specific other calls -- predictably occurred in certain positions in the sequence, following adjacency rules. These adjacency rules applied also to sequences with three call types.
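
As a rough illustration of what detecting such adjacency rules can involve (this is not the authors' analysis pipeline), the short Python sketch below tallies how often each ordered pair of call types occurs in adjacent positions across a set of sequences; the call names and sequences are invented placeholders, not the study's data.

```python
from collections import Counter

# Invented call-type sequences standing in for the thousands of recorded
# chimpanzee utterances (the study identified 12 call types; the labels
# used here are placeholders, not the real call names).
sequences = [
    ["hoo", "pant", "scream"],
    ["hoo", "pant", "bark"],
    ["grunt", "hoo", "pant"],
    ["pant", "scream"],
]

# Count how often each ordered pair of call types occurs side by side.
bigrams = Counter(
    (first, second)
    for seq in sequences
    for first, second in zip(seq, seq[1:])
)

# A strongly asymmetric count (A followed by B far more often than B
# followed by A) is the kind of positional regularity described above
# as an adjacency rule.
for (first, second), n in bigrams.most_common():
    print(f"{first} -> {second}: {n} (reverse: {bigrams[(second, first)]})")
```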

"Our findings highlight a vocal communication system in chimpanzees that is much more complex and structured than previously thought," says co-author Tatiana Bortolato who recorded the vocalisations in the forest. "This is the first study in a larger project. By studying the rich complexity of the vocal sequences of wild chimpanzees, a socially complex species like humans, we expect to bring fresh insight into understanding where we come from and how our unique language evolved," Catherine Crockford, senior author on the study, points out.

Read more at Science Daily

Boost in nerve-growth protein helps explain why running supports brain health

Exercise increases levels of a chemical involved in brain cell growth, which bolsters the release of the "feel good" hormone dopamine, a new study shows. Dopamine is known to play a key role in movement, motivation, and learning.

Experts have long understood that regular running raises dopamine activity in the brain and may protect nerve cells from damage. In addition, past research has tied exercise-driven boosts in the dopamine-triggering chemical called brain-derived neurotrophic factor (BDNF) and in dopamine levels to improvements in learning and memory. However, the precise way these three factors interact has until now remained unclear.

Led by researchers at NYU Grossman School of Medicine, the investigation showed that mice running on a wheel for 30 days had a 40% increase in dopamine release in the dorsal striatum, the part of the brain involved in movement, compared to levels in mice that did not exercise. The runners also showed a nearly 60% increase in BDNF levels compared to their non-running counterparts. Notably, dopamine release remained elevated even after a week of rest. Additionally, when BDNF levels were artificially reduced, running did not lead to additional dopamine release.

"Our findings suggest that BDNF plays a key role in the long-lasting changes that occur in the brain as a result of running," says study lead author and neurobiologist Guendalina Bastioli, PhD. "Not only do these results help explain why exercise makes you move, think, and feel better, they also show that these benefits continue even if you do not work out every day," adds Bastioli, a postdoctoral fellow in the Department of Neuroscience at NYU Langone Health.

While researchers have previously measured dopamine activity during running, the new investigation provides insight into the longer-term behavior of the hormone and its effects on the brain well after exercise stops, according to Bastioli. The report was published online May 16 in the Journal of Neuroscience.

For the investigation, researchers provided dozens of male mice with unlimited access to either a freely rotating wheel or a locked wheel that could not move. After one month, the team measured dopamine release and BDNF levels in brain slices. They repeated this same process on a new group of rodents, some of which had been genetically modified to produce half as much BDNF as regular mice.

The study authors note that patients with Parkinson's disease and other movement disorders are often treated with drugs that mimic dopamine's effects on motor neurons. However, the mechanism behind dopamine's role in this protective benefit of exercise had not been thoroughly explored.

"Our results help us understand why exercise alleviates the symptoms of Parkinson's disease, as well as those of neuropsychiatric disorders such as depression," says study senior author and neuroscientist Margaret Rice, PhD. "Now that we know why physical activity helps, we can explore it as a means of augmenting or even replacing the use of dopamine-enhancing drugs in these patients."

Rice, a professor in the Departments of Neurosurgery and Neuroscience and Physiology at NYU Langone, cautions that while the preliminary findings in rodents were promising, future studies in humans will be required to fully understand the role of BDNF and dopamine in Parkinson's disease.

She adds that the study team next plans to investigate the relationship between exercise and these chemicals in female mice, which notably run more frequently than males. In addition, the researchers intend to directly examine whether active mice indeed have improved motor skills compared with those with limited physical activity.

Read more at Science Daily

May 16, 2022

Cutting air pollution emissions would save 50,000 US lives, $600 billion each year

Eliminating air pollution emissions from energy-related activities in the United States would prevent more than 50,000 premature deaths each year and provide more than $600 billion in benefits each year from avoided illness and death, according to a new study by University of Wisconsin-Madison researchers.

Published today in the journal GeoHealth, the study reports the health benefits of removing dangerous fine particulates released into the air by electricity generation, transportation, industrial activities and building functions like heating and cooking -- also major sources of carbon dioxide emissions that cause climate change, since they predominantly rely on burning fossil fuels like coal, oil, and natural gas.

"Our work provides a sense of the scale of the air quality health benefits that could accompany deep decarbonization of the U.S. energy system," says Nick Mailloux, lead author of the study and a graduate student at the Center for Sustainability and the Global Environment in UW-Madison's Nelson Institute for Environmental Studies. "Shifting to clean energy sources can provide enormous benefit for public health in the near term while mitigating climate change in the longer term."

Working with scientists specializing in air quality and public health, Mailloux used a model from the U.S. Environmental Protection Agency to determine the health benefits from a complete reduction in emissions of fine particulate matter and of sulfur dioxide and nitrogen oxides. These compounds can form particulate matter once released into the atmosphere.

These pollutants contribute to health problems such as heart disease, stroke, chronic obstructive pulmonary disease, lung cancer and lower respiratory infections that can dramatically shorten lifespans. Doing away with these pollutants would save about 53,200 lives each year in the US, providing about $608 billion in benefits from avoided healthcare costs and loss of life, according to the researchers' analysis.

The researchers also studied the health effects if regions of the country were to act independently to reduce emissions instead of as part of a concerted nationwide effort. The effects can differ widely in different parts of the US, in part because of regional variations in energy use and population.

The Southwest, a region comprising Arizona, California and Nevada, would retain 95 percent of the benefits if it moved alone to eliminate fine particle emissions.

"In the Mountain region, though, most of the benefit of emissions removal is felt somewhere else," Mailloux says. "Just 32 percent of the benefit remains in states in the Mountain region. This is partly because there are large population centers downwind of the Mountain region that would also benefit."

Every region of the country sees more benefit from nationwide action than from acting on its own to reduce emissions.

"The Great Plains, for example, gets more than twice as much benefit from nationwide efforts as it does from acting alone," says Mailloux. "The more that states and regions can coordinate their emissions reductions efforts, the greater the benefit they can provide to us all."

The researchers hope that by describing the near-term payoffs on top of the threats of more distant climate impacts, the new study motivates more action on climate change.

Read more at Science Daily

Amazon deforestation threatens newly discovered fish species in Brazil

Smithsonian's National Museum of Natural History researcher Murilo Pastana and his colleagues have discovered and described two new species of Amazonian fish -- one with striking red-orange fins and the other so small it is technically considered a miniature fish species -- in a paper published today, May 16, in the Zoological Journal of the Linnean Society.

Both species inhabit waters located at the bleeding edge of human encroachment into the Amazon rainforest roughly 25 miles north of the Brazilian city of Apuí. Pastana and his co-authors, Willian Ohara with the Federal University of Rondônia and Priscila Camelier with the Federal University of Bahia, said that ongoing deforestation in the region places these roughly inch-long fish, part of a group known colloquially as the South American darters, in imminent danger of extinction. In particular, the more colorful of the two species, Poecilocharax callipterus, is at risk because its known range is limited to a single stream comprising roughly 1.5 square miles of habitat.

"It was exciting to find new species," Pastana said. "But in the field, we saw the forest on fire, logging trucks carrying out huge trees, and cleared patches turned into cattle pasture. This made us feel a lot of urgency to document these species and publish this paper as quickly as possible."

As a Brazilian-born scientist, Pastana is passionate about preserving the country's biological heritage, and his hope is that naming and describing these species might motivate the Brazilian government to protect and conserve these newly discovered, endangered fishes.

The small subfamily to which these previously unknown fish species belong is also highly desirable in the aquarium hobbyist market. Pastana, whose work is supported by the Smithsonian's Sara E. and Bruce B. Collette Postdoctoral Fellowship in Systematic Ichthyology, said that the exotic aquarium fish trade could pose yet another threat to these two new species even as scientists are first formally identifying them and learning of their existence.

The expeditions that uncovered these new freshwater species took place between 2015 and 2016. Pastana said the broad goal of these forays into the Brazilian Amazon was to search out the still unknown biological treasures of the many waterways in the Madeira River Basin, the richest river basin in the world in terms of fish biodiversity according to a 2019 estimate.

"We went to sample places that have never been visited by scientists," Pastana said. "This area is really important because this is one of the frontiers where deforestation is moving north -- the border between new cities and native forest."

The Apuí region where these scientific surveys took place sits at number two on a recent list of Brazilian municipalities with the highest deforestation rates. Ironically, the same roads that facilitate the region's accelerating loss of habitat also facilitated access to formerly unreachable streams, ponds and tributaries for Pastana and his colleagues.

So, in 2015 and 2016 Pastana and others camped along a road called AM-174 and collected fish using nets, traps and other methods. All the specimens were photographed, cataloged and preserved for further study back at the Museum of Zoology, University of São Paulo.

One of these specimens has vivid red-orange fins and a distinctive dark spot just in front of its tail. This fish stood out immediately as a new species, Pastana said. The fish, which has now been named P. callipterus, inhabits the margins of what scientists call a black water stream, so named because its waters are stained the color of coffee by tannins leached from fallen leaves. Males of the species have even more intense coloration and sport dorsal fins that can exceed half their body length, which averages just over an inch. Despite targeted efforts to seek out this species in the surrounding area on the 2016 return trip, Pastana and his colleagues were only able to find P. callipterus in the stream in which it was first discovered.

Researchers encountered the second new species documented on these field expeditions among tangles of tree roots protruding from the banks of muddy-watered streams -- distinct from the relatively translucent, if darkly stained, black water streams. Given the scientific name P. rhizophilus for its love (phil) of roots (rhiz), this species is amber yellow, with males possessing dark streaks in their dorsal and anal fins. But perhaps the most distinctive quality of this new species is that it is so small that scientists consider it to be miniature, a designation given to any fish that is less than about an inch long when mature, Pastana said. He added that lab study revealed that in these three-quarter-inch-long fish, parts of the skeleton that are typically bone are instead made of cartilage.

Genetic investigations confirmed the evolutionary relationship of these two new closely related species and their relatives, bringing the total number of species in their small sub-family (Crenuchinae) to five. This is the first addition of a new species to the group in 57 years.

Read more at Science Daily

The European drought event from 2018 to 2020 was the most intense in over 250 years

These were days, months and years that many will come to remember: the drought from 2018 to 2020. An international team of researchers led by scientists from the Helmholtz Centre for Environmental Research (UFZ) has succeeded in categorizing the historical dimensions of this event. Based on their findings, no drought covering such a large area for an extended period and coinciding with warmer temperatures has occurred in Europe since the middle of the 18th century. The years from 2018 to 2020 thus represent a new benchmark for droughts. Because events of this kind are likely to occur more frequently in the future, the scientists urgently recommend the development and implementation of suitable, regionally adapted drought prevention measures.

Withered meadows and fields, dry stream beds, dead forests, and reduced power plant outputs -- the drought years of 2018, 2019 and 2020 were exceptional and had substantial impacts on nature and the economy. Previously, it was not clear how these years should be classified in historical terms. Now we know: "The 2018 to 2020 drought sets a new benchmark for droughts in Europe," says Dr. Oldrich Rakovec, UFZ modeller and lead author of the article published in the Earth's Future journal of the American Geophysical Union. The scientists documented this with a large compilation of data and modelling techniques, which allowed them to reconstruct historical droughts back to 1766 and compare their extents with the drought of 2018 to 2020.

The drought from 2018 to 2020 thus affected approximately one third of the land area of Europe, especially central European countries such as Germany, France and the Czech Republic. "No other drought event over the last 250 years had such a large spatial extent as this one," explains Oldrich Rakovec. The total duration of the drought event in Europe was also unusually long, starting in April 2018 and not ending until December 2020: 33 months. Only the drought between 1857 and 1860 lasted slightly longer, for a total of 35 months. What's more: the drought from 2018 to 2020 also continued in 2021 and 2022 in deeper soils (i.e., up to 2 m below the ground surface). "Although 2021 was wetter and supplied much needed water in the upper soil important for sustaining agriculture activities, the moisture did not penetrate to greater depths," says the UFZ modeller.

The average drought duration was also unusually long in the 50 x 50 km grid cells into which the scientists subdivided Europe for their modelling. Because a drought event develops dynamically in space and time (i.e., it starts at one point, continues to develop and finally ends somewhere else), its mean duration differs from its total duration. In this case, the 2018-2020 event exhibited a mean drought duration of 12 months.

In the past, only the drought event from 1857 to 1860 lasted longer, with a mean duration of 13 months. The scientists define drought as a period in which the soil-water content in the top 2 m of soil falls below the level reached only 20 percent of the time over the 250-year record. To reconstruct these historical droughts, the scientists used the mHM hydrologic model developed at the UFZ. Among other things, this environmental model can be used to estimate soil moisture content based on past temperature and precipitation records.
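
A minimal sketch of that percentile-based definition, assuming a monthly soil-moisture series (the numbers below are random placeholders, not mHM output): a month counts as drought when its value falls below the 20th-percentile threshold of the full record.

```python
import numpy as np

# Placeholder monthly soil-moisture values for the top 2 m of soil over a
# 250-year record; in the study these come from the mHM hydrologic model.
rng = np.random.default_rng(0)
soil_moisture = rng.normal(loc=0.30, scale=0.05, size=250 * 12)

# Drought threshold: the level reached only 20 percent of the time.
threshold = np.percentile(soil_moisture, 20)

# Boolean mask marking months that count as drought under this definition.
drought_months = soil_moisture < threshold

print(f"threshold: {threshold:.3f}")
print(f"drought months: {drought_months.sum()} of {drought_months.size}")
```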

The rise in air temperature also reached a historical record during the 2018-2020 drought event, with an anomaly of 2.8 degrees Celsius above the long-term average over the past 250 years. "The droughts in the past were colder than recent droughts in which the average temperature hardly changed," says Dr. Rohini Kumar, UFZ modeller and co-author of the article. The effects of a drought event become significantly more severe if, in addition to the precipitation deficit (approximately 20 percent for major drought events in past centuries), warmer conditions prevail. This combined effect results in greater evaporation losses, leading to declining soil-water levels. The scientists also examined the consequences of the lack of water for agriculture during this drought event. They compared average annual crop yields for wheat, grain maize and barley between 2018 and 2020 with those between 1961 and 2021. The results indicate that harvests were significantly reduced in countries affected primarily by the 2018-2020 drought. For example, grain maize production decreased by between 20 and 40 percent in the Benelux countries, Germany and France; wheat yields fell by up to 17.5 percent in Germany; and barley yields fell by 10 percent across nearly all of Europe.

Read more at Science Daily

Electronic skin: Physicist develops multisensory hybrid material

The "smart skin" developed by Anna Maria Coclite is very similar to human skin. It senses pressure, humidity and temperature simultaneously and produces electronic signals. More sensitive robots or more intelligent prostheses are thus conceivable.

The skin is the largest sensory organ and at the same time the protective coat of the human being. It "feels" several sensory inputs at the same time and reports information about humidity, temperature and pressure to the brain. For Anna Maria Coclite, a material with such multisensory properties is "a kind of 'holy grail' in the technology of intelligent artificial materials. In particular, robotics and smart prosthetics would benefit from a better integrated, more precise sensing system similar to human skin." The ERC grant winner and researcher at the Institute of Solid State Physics at TU Graz has succeeded in developing the three-in-one hybrid material "smart skin" for the next generation of artificial, electronic skin using a novel process. The result of this pioneering research has now been published in the journal Advanced Materials Technologies.

As delicate as a fingertip

For almost six years, the team worked on the development of smart skin as part of Coclite's ERC project Smart Core. With 2,000 individual sensors per square millimetre, the hybrid material is even more sensitive than a human fingertip. Each of these sensors consists of a unique combination of materials: a smart polymer in the form of a hydrogel inside and a shell of piezoelectric zinc oxide. Coclite explains: "The hydrogel can absorb water and thus expands upon changes in humidity and temperature. In doing so, it exerts pressure on the piezoelectric zinc oxide, which responds to this and all other mechanical stresses with an electrical signal." The result is a wafer-thin material that reacts simultaneously to force, moisture and temperature with extremely high spatial resolution and emits corresponding electronic signals. "The first artificial skin samples are six micrometres thin, or 0.006 millimetres. But it could be even thinner," says Anna Maria Coclite. In comparison, the human epidermis is 0.03 to 2 millimetres thick. Human skin perceives objects from a size of about one square millimetre. The smart skin has a resolution roughly a thousand times finer and can register objects that are too small for human skin to detect (such as microorganisms).
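
A quick back-of-the-envelope check on those figures (purely illustrative, using only the numbers quoted above): 2,000 sensors per square millimetre means each sensor covers about 0.0005 mm², on the order of a thousand times smaller than the roughly 1 mm² area that human skin can resolve.

```python
# Back-of-the-envelope comparison using the figures quoted in the article.
sensors_per_mm2 = 2000          # sensor density of the smart-skin material
human_resolution_mm2 = 1.0      # human skin resolves features of about 1 mm^2

area_per_sensor_mm2 = 1.0 / sensors_per_mm2
ratio = human_resolution_mm2 / area_per_sensor_mm2

print(f"area per sensor: {area_per_sensor_mm2} mm^2")          # 0.0005 mm^2
print(f"finer than human skin by a factor of ~{ratio:.0f}")    # ~2000, i.e.
# on the order of the "thousand times" figure quoted in the article.
```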

Material processing at the nanoscale

The individual sensor layers are very thin and at the same time equipped with sensor elements covering the entire surface. This was made possible by a process unique worldwide, in which the researchers combined three known methods from physical chemistry for the first time: chemical vapour deposition for the hydrogel material, atomic layer deposition for the zinc oxide and nanoimprint lithography for the polymer template. The lithographic preparation of the polymer template was the responsibility of the research group "Hybrid electronics and structuring" headed by Barbara Stadlober. The group is part of Joanneum Research's Materials Institute based in Weiz.

Several fields of application are now opening up for the skin-like hybrid material. In healthcare, for example, the sensor material could independently detect microorganisms and report them accordingly. Also conceivable are prostheses that give the wearer information about temperature or humidity, or robots that can perceive their environment more sensitively. On the path to application, smart skin has a decisive advantage: the sensory nanorods -- the "smart core" of the material -- are produced using a vapour-based manufacturing process. This process is already well established in production plants for integrated circuits, for example. The production of smart skin can thus be easily scaled and implemented in existing production lines.

Read more at Science Daily

May 15, 2022

Novel biomaterial prevents rejection of transplants for type 1 diabetes

In type 1 diabetes, an autoimmune response attacks the pancreas's insulin-producing beta cells, leading to marked fluctuations in blood sugar levels. Lifelong daily insulin treatments are standard for patients, but replacing lost beta cells through transplants of islets, a group of cells in the pancreas, represents an attractive option. This strategy requires that patients take lifelong immunosuppressive drugs to prevent rejection, however. To address this shortcoming, a team at Massachusetts General Hospital (MGH) and Harvard Medical School collaborated with researchers at the Georgia Institute of Technology and the University of Missouri to develop a novel biomaterial that, when mixed with islets, allows islets to survive after transplant without the need for long-term immunosuppression.

In a preclinical study conducted at MGH and published in Science Advances, the researchers tested the biomaterial -- which includes a novel protein called SA-FasL that promotes immune tolerance and is tethered to the surface of microgel beads -- in a nonhuman primate model of type 1 diabetes. The material was mixed with islets and then transplanted to a bioengineered pouch formed by the omentum -- a fold of fatty tissue that hangs from the stomach and covers the intestines. After transplantation, animals received a single anti-rejection drug (rapamycin) for three months.

"Our strategy to create a local immune-privileged environment allowed islets to survive without long-term immunosuppression and achieved robust blood glucose control in all diabetic nonhuman primates during a six-month study period," says lead author Ji Lei, MD, MBA, an associate immunologist at MGH and an assistant professor of Surgery at Harvard Medical School. "We believe that our approach allows the transplants to survive and control diabetes for much longer than six months without anti-rejection drugs because surgical removal of the transplanted tissue at the end of the study resulted in all animals promptly returning to a diabetic state."

Lei, who is also director of the Human Islet/Cell Processing Special Service cGMP Facility at MGH, notes that transplanting islets to the omentum has several advantages over the current clinical approach of transplanting to the liver. "Unlike the liver, the omentum is a non-vital organ allowing its removal should undesired complications be encountered," he explains. "Thus, the omentum is a safer location for transplants to treat diabetes and may be particularly well suited for stem-cell-derived beta cells and bio-engineered cells."

Co-corresponding author James F. Markmann, MD, PhD, chief of the Division of Transplant Surgery and director of Clinical Operations at the Transplant Center at MGH, stresses that the non-human primate study is a highly relevant pre-clinical animal model. "This localized immunomodulatory strategy succeeded without long-term immunosuppression and shows great potential for application to type 1 diabetes patients," he says.

Read more at Science Daily