May 28, 2022

AI reveals unsuspected math underlying search for exoplanets

Artificial intelligence (AI) algorithms trained on real astronomical observations now outperform astronomers in sifting through massive amounts of data to find new exploding stars, identify new types of galaxies and detect the mergers of massive stars, accelerating the rate of new discovery in the world's oldest science.

But AI, also called machine learning, can reveal something deeper, University of California, Berkeley, astronomers found: unsuspected connections hidden in the complex mathematics arising from general relativity -- in particular, how that theory is applied to finding new planets around other stars.

In a paper appearing this week in the journal Nature Astronomy, the researchers describe how an AI algorithm developed to more quickly detect exoplanets when such planetary systems pass in front of a background star and briefly brighten it -- a process called gravitational microlensing -- revealed that the decades-old theories now used to explain these observations are woefully incomplete.

In 1936, Albert Einstein himself used his new theory of general relativity to show how the light from a distant star can be bent by the gravity of a foreground star, not only brightening it as seen from Earth, but often splitting it into several points of light or distorting it into a ring, now called an Einstein ring. This is similar to the way a hand lens can focus and intensify light from the sun.

But when the foreground object is a star with a planet, the brightening over time -- the light curve -- is more complicated. What's more, there are often multiple planetary orbits that can explain a given light curve equally well -- so-called degeneracies. That's where humans simplified the math and missed the bigger picture.

The AI algorithm, however, pointed to a mathematical way to unify the two major kinds of degeneracy in interpreting what telescopes detect during microlensing, showing that the two "theories" are really special cases of a broader theory that, the researchers admit, is likely still incomplete.

"A machine learning inference algorithm we previously developed led us to discover something new and fundamental about the equations that govern the general relativistic effect of light- bending by two massive bodies," Joshua Bloom wrote in a blog post last year when he uploaded the paper to a preprint server, arXiv. Bloom is a UC Berkeley professor of astronomy and chair of the department.

He compared the discovery by UC Berkeley graduate student Keming Zhang to connections that Google's AI team, DeepMind, recently made between two different areas of mathematics. Taken together, these examples show that AI systems can reveal fundamental associations that humans miss.

"I argue that they constitute one of the first, if not the first time that AI has been used to directly yield new theoretical insight in math and astronomy," Bloom said. "Just as Steve Jobs suggested computers could be the bicycles of the mind, we've been seeking an AI framework to serve as an intellectual rocket ship for scientists."

"This is kind of a milestone in AI and machine learning," emphasized co-author Scott Gaudi, a professor of astronomy at The Ohio State University and one of the pioneers of using gravitational microlensing to discover exoplanets. "Keming's machine learning algorithm uncovered this degeneracy that had been missed by experts in the field toiling with data for decades. This is suggestive of how research is going to go in the future when it is aided by machine learning, which is really exciting."

Discovering exoplanets with microlensing

More than 5,000 exoplanets, or extrasolar planets, have been discovered around stars in the Milky Way, though few have actually been seen through a telescope -- they are too dim. Most have been detected because they create a Doppler wobble in the motions of their host stars or because they slightly dim the light from the host star when they cross in front of it -- transits that were the focus of NASA's Kepler mission. Little more than 100 have been discovered by a third technique, microlensing.

One of the main goals of NASA's Nancy Grace Roman Space Telescope, scheduled to launch by 2027, is to discover thousands more exoplanets via microlensing. The technique has an advantage over the Doppler and transit techniques in that it can detect lower-mass planets, including those the size of Earth, that are far from their stars, at a distance equivalent to that of Jupiter or Saturn in our solar system.

Bloom, Zhang and their colleagues set out two years ago to develop an AI algorithm to analyze microlensing data faster to determine the stellar and planetary masses of these planetary systems and the distances the planets are orbiting from their stars. Such an algorithm would speed analysis of the likely hundreds of thousands of events the Roman telescope will detect in order to find the 1% or fewer that are caused by exoplanetary systems.

One problem astronomers encounter, however, is that the observed signal can be ambiguous. When a lone foreground star passes in front of a background star, the brightness of the background star rises smoothly to a peak and then drops symmetrically to its original brightness. It's easy to understand mathematically and observationally.
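
For readers who want to see the math, the smooth single-lens brightening described above follows a simple analytic form. The sketch below is a minimal illustration in Python, with made-up parameter values rather than numbers from the study, of the standard point-source, point-lens magnification curve.

```python
import numpy as np

def single_lens_magnification(t, t0, tE, u0):
    """Point-source, point-lens microlensing magnification.

    t  : observation times (days)
    t0 : time of peak magnification (days)
    tE : Einstein-radius crossing time (days)
    u0 : impact parameter in units of the Einstein radius
    """
    u = np.sqrt(u0**2 + ((t - t0) / tE)**2)        # lens-source separation over time
    return (u**2 + 2) / (u * np.sqrt(u**2 + 4))

# Illustrative values only (not from the paper): a 25-day event peaking at t0 = 0.
t = np.linspace(-60, 60, 500)
A = single_lens_magnification(t, t0=0.0, tE=25.0, u0=0.1)
print(f"peak magnification: {A.max():.1f}")        # rises smoothly, peaks, falls symmetrically
```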

But if the foreground star has a planet, the planet creates a separate brightness peak within the peak caused by the star. When trying to reconstruct the orbital configuration of the exoplanet that produced the signal, general relativity often allows two or more so-called degenerate solutions, all of which can explain the observations.

To date, astronomers have generally dealt with these degeneracies in simplistic and artificially distinct ways, Gaudi said. If the distant starlight passes close to the foreground star, the observations could be interpreted either as a wide or a close orbit for the planet -- an ambiguity astronomers can often resolve with other data. A second type of degeneracy occurs when the background starlight passes close to the planet. In this case, however, the two different solutions for the planetary orbit are generally only slightly different.

According to Gaudi, these two simplifications of two-body gravitational microlensing are usually sufficient to determine the true masses and orbital distances. In fact, in a paper published last year, Zhang, Bloom, Gaudi and two other UC Berkeley co-authors, astronomy professor Jessica Lu and graduate student Casey Lam, described a new AI algorithm that does not rely on knowledge of these interpretations at all. The algorithm greatly accelerates analysis of microlensing observations, providing results in milliseconds, rather than days, and drastically reducing the computer crunching.

Zhang then tested the new AI algorithm on microlensing light curves from hundreds of possible orbital configurations of star and exoplanet and noticed something unusual: There were other ambiguities that the two interpretations did not account for. He concluded that the commonly used interpretations of microlensing were, in fact, just special cases of a broader theory that explains the full variety of ambiguities in microlensing events.

"The two previous theories of degeneracy deal with cases where the background star appears to pass close to the foreground star or the foreground planet," Zhang said. "The AI algorithm showed us hundreds of examples from not only these two cases, but also situations where the star doesn't pass close to either the star or planet and cannot be explained by either previous theory. That was key to us proposing the new unifying theory."

Gaudi was skeptical, at first, but came around after Zhang produced many examples where the previous two theories did not fit observations and the new theory did. Zhang actually looked at the data from two dozen previous papers that reported the discovery of exoplanets through microlensing and found that, in all cases, the new theory fit the data better than the previous theories.

"People were seeing these microlensing events, which actually were exhibiting this new degeneracy but just didn't realize it," Gaudi said. "It was really just the machine learning looking at thousands of events where it became impossible to miss."

Zhang and Gaudi have submitted a new paper that rigorously describes the new mathematics based on general relativity and explores the theory in microlensing situations where more than one exoplanet orbits a star.

The new theory technically makes interpretation of microlensing observations more ambiguous, since there are more degenerate solutions to describe the observations. But the theory also demonstrates clearly that observing the same microlensing event from two perspectives -- from Earth and from the orbit of the Roman Space Telescope, for example -- will make it easier to settle on the correct orbits and masses. That is what astronomers currently plan to do, Gaudi said.

Read more at Science Daily

How anesthetics affect brain functions

Modern anesthesia is one of the most important medical achievements. Whereas before, patients had to suffer hellish agonies during every operation, today anesthesia enables completely painless procedures. One feels nothing and can remember nothing afterwards. It is already known from electroencephalography (EEG) studies on patients that during anesthesia the brain is put into a deep sleep-like state in which periods of rhythmic electrical activity alternate with periods of complete inactivity. This state is called burst-suppression. Until now, it was unclear where exactly this state happens in the brain and which brain areas are involved.

However, this question is important to better understand the phenomenon and thus how the brain functions under anesthesia. Researchers from the Functional Imaging Unit at the German Primate Center (DPZ) -- Leibniz Institute for Primate Research in Göttingen have used functional magnetic resonance imaging (fMRI) to study the precise spatial distribution of synchronously working brain regions in anesthetized humans, long-tailed macaques, common marmosets and rats. They were able to show for the first time that the areas where burst-suppression is evident differ significantly in primates and rodents. While in rats large parts of the cerebral cortex synchronously show the burst-suppression pattern, in primates individual sensory regions, such as the visual cortex, are excluded from it.

"Our brain can be thought of as a full soccer stadium when we are awake," explains Nikoloz Sirmpilatze, a scientist in the Functional Imaging Unit and lead author of the study. "Our active neurons are like tens of thousands of spectators all talking at once. Under anesthesia, however, neuronal activity is synchronized. You can measure this activity using EEG as uniform waves, as if all the spectators in the stadium were singing the same song. In deep anesthesia, this song is repeatedly interrupted by periods of silence. This is called burst-suppression. The deeper the anesthesia, the shorter the phases of uniform activity, the bursts, and the longer the periodically recurring inactive phases, the so-called suppressions."

The phenomenon is caused by many different anesthetics, some of which vary in their mechanisms of action. And burst-suppression is also detectable in coma patients. However, it is not known whether this condition is a protective reaction of the brain or a sign of impaired functioning. It has also been unclear where in the brain burst-suppression occurs and which brain areas are involved, as localization by EEG alone is not possible.

To answer this question, Nikoloz Sirmpilatze and the researcher team used the imaging technique of fMRI. The method makes blood flow changes in the brain visible. The increased activity of neurons in a particular area of the brain leads to an increase in metabolism, followed by an increased blood and oxygen supply at this location, which is ultimately visible in the fMRI image.

In the first part of the study, the researchers established a system to evaluate fMRI data in humans, monkeys and rodents in a standardized manner using the same method. To do this, they used simultaneously measured EEG and fMRI data from anesthetized patients that had been generated in a previously conducted study at the Technical University of Munich. "We first looked to see whether the burst-suppression detected in the EEG was also visible in the fMRI data and whether it showed a certain pattern," says Nikoloz Sirmpilatze. "Based on that, we developed a new algorithm that allowed detecting burst-suppression events in the experimental animals using fMRI, without additional EEG measurement."
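
The fMRI-only detector itself is not spelled out in the article; as a rough illustration of the general idea -- flagging quiet "suppression" stretches that alternate with active "bursts" in a time course -- the following sketch thresholds a smoothed signal envelope. The function name, thresholding rule and parameters are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def suppression_segments(signal, fs, threshold, min_duration=1.0):
    """Flag candidate suppression periods: stretches where the smoothed
    signal envelope stays below `threshold` for at least `min_duration` seconds.

    signal : 1-D activity trace (e.g., an fMRI time course)
    fs     : sampling rate in Hz
    """
    window = max(1, int(fs))                       # ~1 s moving-average window
    envelope = np.convolve(np.abs(signal), np.ones(window) / window, mode="same")
    quiet = envelope < threshold
    segments, start = [], None
    for i, q in enumerate(quiet):
        if q and start is None:
            start = i
        elif not q and start is not None:
            if (i - start) / fs >= min_duration:
                segments.append((start / fs, i / fs))
            start = None
    if start is not None and (len(quiet) - start) / fs >= min_duration:
        segments.append((start / fs, len(quiet) / fs))
    return segments   # list of (start_s, end_s) suppression intervals
```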

The researchers then performed fMRI measurements in anesthetized long-tailed macaques, common marmosets and rats. In all animals, they were able to detect and precisely localize burst-suppression as a function of anesthetic concentration. The spatial distribution of burst-suppression showed that in both humans and monkey species, certain sensory areas, such as the visual cortex, were excluded from it. In contrast, in the rats, the entire cerebral cortex was affected by burst-suppression.

"At the moment, we can only speculate about the reasons," says Nikoloz Sirmpilatze, who was awarded the German Primate Center's 2021 PhD Thesis Award for his work. "Primates orient themselves mainly through their sense of sight. Therefore, the visual cortex is a highly specialized region that differs from other brain areas by special cell types and structures. In rats, this is not the case. In future studies, we will investigate what exactly happens in these regions during anesthesia to ultimately understand why burst-suppression is not detectable there with fMRI."

Read more at Science Daily

May 27, 2022

Supermassive black holes inside of dying galaxies detected in early universe

An international team of astronomers used a database combining observations from the best telescopes in the world, including the Subaru Telescope, to detect the signal from the active supermassive black holes of dying galaxies in the early Universe. The appearance of these active supermassive black holes correlates with changes in the host galaxy, suggesting that a black hole could have far reaching effects on the evolution of its host galaxy.

The Milky Way Galaxy where we live includes stars of various ages, including stars still forming. But in some other galaxies, known as elliptical galaxies, all of the stars are old and about the same age. This indicates that early in their histories elliptical galaxies had a period of prolific star formation that suddenly ended. Why this star formation ceased in some galaxies but not others is not well understood. One possibility is that a supermassive black hole disrupts the gas in some galaxies, creating an environment unsuitable for star formation.

To test this theory, astronomers look at distant galaxies. Due to the finite speed of light, it takes time for light to travel across the void of space. The light we see from an object 10 billion light-years away had to travel for 10 billion years to reach Earth. Thus the light we see today shows us what the galaxy looked like when the light left that galaxy 10 billion years ago. So looking at distant galaxies is like looking back in time. But the intervening distance also means that distant galaxies look fainter, making study difficult.

To overcome these difficulties an international team led by Kei Ito at SOKENDAI in Japan used the Cosmic Evolution Survey (COSMOS) to sample galaxies 9.5-12.5 billion light-years away. COSMOS combines data taken by world leading telescopes, including the Atacama Large Millimeter/submillimeter Array (ALMA) and the Subaru Telescope. COSMOS includes radio wave, infrared light, visible light, and x-ray data.

The team first used optical and infrared data to identify two groups of galaxies: those with ongoing star formation and those where star formation has stopped. The signal-to-noise ratio of the x-ray and radio wave data was too low to identify individual galaxies. So the team combined the data for different galaxies to produce higher signal-to-noise ratio images of "average" galaxies. In the averaged images, the team confirmed both x-ray and radio emissions for the galaxies without star formation. This is the first time such emissions have been detected for distant galaxies more than 10 billion light-years away. Furthermore, the results show that the x-ray and radio emissions are too strong to be explained by the stars in the galaxy alone, indicating the presence of an active supermassive black hole. This black hole activity signal is weaker for galaxies where star formation is ongoing.
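
The stacking step works because averaging many independent noisy images suppresses the noise by roughly the square root of the number of images. Below is a minimal toy sketch of that idea, with invented numbers; it is not the COSMOS pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy illustration: a faint point source at a known position, invisible in any
# single noisy cutout, emerges after stacking because averaging N independent
# images shrinks the noise by roughly sqrt(N).
n_galaxies, size, flux, noise_sigma = 1000, 21, 1.0, 5.0
signal = np.zeros((size, size))
signal[size // 2, size // 2] = flux                     # faint central source

cutouts = signal + rng.normal(0.0, noise_sigma, (n_galaxies, size, size))
stack = cutouts.mean(axis=0)

# Noise level estimated from every pixel except the source position.
background = np.delete(stack.ravel(), (size // 2) * size + size // 2)
print(f"single-image SNR ~ {flux / noise_sigma:.2f}")
print(f"stacked SNR      ~ {stack[size // 2, size // 2] / background.std():.1f}")
```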

Read more at Science Daily

Climate change reveals unique artifacts in melting ice patches

One day more than 3000 years ago, someone lost a shoe at the place we today call Langfonne in the Jotunheimen mountains. The shoe is 28 cm long, which roughly corresponds to a modern size 36 or 37. The owner probably considered the shoe to be lost for good, but on 17 September 2007 it was found again -- virtually intact.

Sometime around 2000 BCE, a red-wing thrush died at Skirådalskollen in the Dovrefjell mountain range. Its small body quickly became buried under an ice patch. Upon emerging again 4,000 years later, its internal organs were still intact.

In recent years, hundreds of such discoveries have been made in ice patches, revealing traces of hunting, trapping, traffic, animals and plant life -- small, frozen moments of the past.

Exceptional discoveries every year


Norway has soil that is consistently quite acidic, which means that organic material from the past is poorly preserved in the soil. Glaciers often move -- and crush -- what they hide below the surface. Ice patches, on the other hand, are relatively stable and therefore create exceptional conditions for preserving organic material.

"Objects and remains of animals and human activity have been found that we didn't even know existed. They include everything from horse tack and clothing to arrows with tips made of shells, wooden shafts and feathers. Not a year goes by without surprising finds that shift the boundaries of our understanding," says Birgitte Skar, an archaeologist and associate professor at the NTNU (the Norwegian University of Science and Technology) University Museum. She is one of the researchers behind a new report (in Norwegian with an English summary) that summarizes the state of knowledge in Norway's glacial archaeology.

The report describes a variety of fabulous findings but also paints a gloomy picture.

Only a few ice patches containing potential discoveries have been investigated systematically over time, and they have hardly been studied at all in northern Norway.

Short-term financing results in a lack of continuity in monitoring and securing artefacts from the ice patches. Some research has been done on finds, but it barely scratches the surface. All the while, all this knowledge is melting away at record speed.

The most recent surveys from the Norwegian Water Resources and Energy Directorate (NVE) show that 364 square kilometres of Norwegian snow patches and glaciers have melted away since 2006.

Monitoring programme is overdue

"A survey based on satellite images taken in 2020 shows that more than 40 per cent of 10 selected ice patches with known finds have melted away. These figures suggest a significant threat for preserving discoveries from the ice, not to mention the ice as a climate archive," says Skar.

"The time is ripe for establishing a national monitoring programme using remote sensing and systematically securing archaeological finds and biological remains from ice patches. We should also use this programme to collect glaciological data from different parts of the country, since the ice patches can provide detailed data on how the climate has evolved over the last 7500 years," she said.

Unimaginable possibilities

The oldest find that has emerged from the ice in Norway is a 6100-year-old arrow shaft. Like the shoe, it was also found at Langfonne in the Jotunheimen mountain range.

Finds from here and several other places indicate that these areas were in continuous use as hunting grounds for as long as the ice has been there. This means that they offer an unparalleled archaeological source of information.

"We're beginning to assess whether the ice in some places might have survived the warm period following the last ice age, which would mean that the bottom layer of the ice could be remnants from the ice sheet from that period. This possibility offers unprecedented opportunities to trace climate history and activity on these hunting grounds even further back in time," Skar says.

"We have to remember that the oldest population group in Norway descended from reindeer hunters who hunted in Northern Europe and Southern Scandinavia close to the edge of the ice sheet, in the later part of the ice age. In other words, these are people who would have known how to hunt large cloven-hoofed animals and would understand the animals' behaviour patterns," Skar adds.

Reindeer seek out ice patches during hot and buggy summer weather, and the Sami population has also used these areas for a wide range of purposes, including calf marking, milking and separating the animals. However, the Sami use of inland ice has hardly been surveyed.

"The Sami uses would probably expand the known range of uses and significance of the snow patches. Obtaining information from these tradition bearers is urgent," says Skar.

Mummified birds and animals

Human activity through the millennia is not the only story revealed by the ice patch finds. Animal and plant remains also provide new insights into the ice as an ecosystem, such as reindeer bones from 4200 years ago that still contain intact bone marrow, as well as several whole mummified mammals and birds.

According to Jørgen Rosvold, the findings are often very well preserved and can provide genetic information about several species far back in time. They can show how species have responded to climate change and human disturbances in the past.

Rosvold was also involved in the report. He is a biologist and assistant research director at the Norwegian Institute for Nature Research (NINA). He explains that ice is one of the world's least studied and understood ecosystems, so we know very little about ice as a habitat.

"Our finds show that the ice in the mountains has provided important habitats for many mountain species for thousands of years through to the present day. The fauna finds also provide background information for the archaeological finds, for example by showing which species people might have hunted on the snow patches," says Rosvold.

Read more at Science Daily

'Fuel of evolution' more abundant than previously thought in wild animals

The raw material for evolution is much more abundant in wild animals than we previously believed, according to new research from The Australian National University (ANU).

Darwinian evolution is the process by which natural selection results in genetic changes in traits that favour the survival and reproduction of individuals. The rate at which evolution occurs depends crucially on genetic differences between individuals.

Led by Dr Timothée Bonnet from ANU, an international research team wanted to know how much of this genetic difference, or "fuel of evolution," exists in wild animal populations. The answer: two to four times more than previously thought.

According to Dr Bonnet, the process of evolution that Darwin described was an incredibly slow one.

"However, since Darwin, researchers have identified many examples of Darwinian evolution occurring in just a few years," Dr Bonnet said.

"A common example of fast evolution is the peppered moth, which prior to the industrial revolution in the UK was predominantly white. With pollution leaving black soot on trees and buildings, black moths had a survival advantage because it was harder for birds to spot them.

"Because moth colour determined survival probability and was due to genetic differences, the populations in England quickly became dominated by black moths."

The study is the first time the speed of evolution has been systematically evaluated on a large scale, rather than on an ad hoc basis. The team of 40 researchers from 27 scientific institutions used studies of 19 populations of wild animals from around the world. These included superb fairy-wrens in Australia, spotted hyenas in Tanzania, song sparrows in Canada and red deer in Scotland.

"We needed to know when each individual was born, who they mated with, how many offspring they had, and when they died. Each of these studies ran for an average of 30 years, providing the team with an incredible 2.6 million hours of field data," Dr Bonnet said.

"We combined this with genetic information on each animal studied to estimate the extent of genetic differences in their ability to reproduce, in each population.

After three years of trawling through reams of data, Dr Bonnet and the team were able to quantify how much species change occurred due to genetic changes caused by natural selection.
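
The study itself used pedigree-based quantitative-genetic ("animal") models; purely as a toy illustration of the underlying idea -- that the additive genetic variance in relative fitness gauges the "fuel of evolution" -- the following sketch estimates heritability of fitness from a simulated parent-offspring regression. All values are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy parent-offspring regression (not the study's pedigree-based models):
# the slope of offspring fitness on midparent fitness approximates narrow-sense
# heritability, and heritability times the variance in relative fitness gauges
# the additive genetic variance in fitness.
n, va, ve = 2000, 0.2, 0.8                       # assumed additive / environmental variances
g_mum = rng.normal(0.0, np.sqrt(va), n)
g_dad = rng.normal(0.0, np.sqrt(va), n)
g_kid = 0.5 * (g_mum + g_dad) + rng.normal(0.0, np.sqrt(va / 2), n)  # Mendelian sampling

def fitness(g):
    """Relative fitness = baseline + genetic value + environmental noise."""
    return 1.0 + g + rng.normal(0.0, np.sqrt(ve), n)

midparent = 0.5 * (fitness(g_mum) + fitness(g_dad))
offspring = fitness(g_kid)
h2 = np.polyfit(midparent, offspring, 1)[0]      # regression slope ~ heritability
print(f"estimated heritability of fitness ~ {h2:.2f} (simulated truth {va / (va + ve):.2f})")
```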

"The method gives us a way to measure the potential speed of current evolution in response to natural selection across all traits in a population. This is something we have not been able to do with previous methods, so being able to see so much potential change came as a surprise to the team," Dr Bonnet said.

Professor Loeske Kruuk, also from ANU and now based at the University of Edinburgh in the United Kingdom, said: "This has been a remarkable team effort that was feasible because researchers from around the world were happy to share their data in a large collaboration.

"It also shows the value of long-term studies with detailed monitoring of animal life histories for helping us understand the process of evolution in the wild."

However, the researchers warn it's too early to tell whether the actual rate of evolution is getting quicker over time.

"Whether species are adapting faster than before, we don't know, because we don't have a baseline. We just know that the recent potential, the amount of 'fuel', has been higher than expected, but not necessarily higher than before," Dr Bonnet said.

According to the researchers, their findings also have implications for predictions of species' adaptability to environmental change.

"This research has shown us that evolution cannot be discounted as a process which allows species to persist in response to environmental change," Dr Bonnet said.

Dr Bonnet said that with climate change predicted to accelerate, there is no guarantee that these populations will be able to keep up.

Read more at Science Daily

Ancient viral elements embedded in human genome not from fossil retrovirus

Using a next generation sequencing analysis to examine human endogenous retrovirus (HERV) integration sites, researchers from Kumamoto University, the National Institute of Genetics (Japan), and the University of Michigan (USA) have discovered that these ancient retroviruses can undergo retrotransposition (DNA sequence insertion with RNA mediation) into iPS cells. The team believes that their discovery places a spotlight on a possible risk that HERVs pose when using iPS cells in regenerative medicine.

The study of ancient retroviruses embedded in our genome requires knowledge about our coexistence with viral threats throughout history. We know that HERVs occupy approximately 8% of the human genome and accumulate mutations and deletions over long periods. HERVs are also expressed in early embryos and play several physiological roles in human development. For example, HERV-W and HERV-FRD Env proteins are important for placental formation, and HERV-K is thought to protect host cells from exogenous retrovirus infection. However, uncontrolled HERV-K expression is also thought to be associated with a range of diseases, including cancers and neurological disorders, but the details of these associations are not well understood in humans.

Since no one has yet discovered replication-competent HERVs in our genome, it is thought that they derive from an extinct (fossil) virus. In their current work, the research team from Japan and the US discovered that HERV-K is expressed in SOX2-expressing cells, such as those in early embryos, cancer stem cells and iPS cells. They also found that some HERV-K are newly integrated into the host genome in the absence of Env, the viral envelope glycoprotein. This integration was dependent on reverse transcriptase, integrase and protease, so the researchers hypothesized that the HERV-K embedded in our genome is not actually from a fossil virus, but can still move within the genome through reverse transcription and synthesis of proviral DNA. Interestingly, when the researchers compared the HERV-K integration sites between iPS and fibroblast cells from the same donor, they found new HERV-K integration sites in iPS cells. However, the new integration sites were rarely preserved and disappeared during long-term culturing. HERV-K is likely integrated into the genome at random, so the possibility remains that cells carrying new HERV-K retrotranspositions preferentially survive depending on the integration site.
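
As a rough picture of what comparing integration sites involves, the sketch below treats each site as a (chromosome, position) coordinate and reports iPS-cell sites that have no nearby counterpart in the fibroblast cells from the same donor. The matching window and coordinates are hypothetical; this is not the authors' sequencing pipeline.

```python
# Toy comparison of integration-site lists: sites are (chromosome, position)
# pairs; an iPS-cell site counts as "new" if no fibroblast site from the same
# donor lies within `window` base pairs of it.
def new_integration_sites(ips_sites, fibro_sites, window=500):
    by_chrom = {}
    for chrom, pos in fibro_sites:
        by_chrom.setdefault(chrom, []).append(pos)
    novel = []
    for chrom, pos in ips_sites:
        known = by_chrom.get(chrom, [])
        if not any(abs(pos - p) <= window for p in known):
            novel.append((chrom, pos))
    return novel

# Hypothetical coordinates for illustration only.
fibro = [("chr1", 1_200_345), ("chr7", 55_210_998)]
ips = [("chr1", 1_200_400), ("chr7", 55_210_998), ("chr12", 8_765_432)]
print(new_integration_sites(ips, fibro))   # -> [('chr12', 8765432)]
```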

The movement of HERV-K on the genome might cause cancer and neurological diseases by altering the gene expression profile. The researchers believe that the risk of HERV-K transposition is low in iPS cells but suggest that monitoring HERV-K integration sites should be seriously considered to improve the safety of regenerative medicine using iPS cells.

Read more at Science Daily

May 26, 2022

New discovery about distant galaxies: Stars are more massive than we thought

A team of University of Copenhagen astrophysicists has arrived at a major result regarding star populations beyond the Milky Way. The result could change our understanding of a wide range of astronomical phenomena, including the formation of black holes, supernovae and why galaxies die.

For as long as humans have studied the heavens, how stars look in distant galaxies has been a mystery. In a study published today in The Astrophysical Journal, a team of researchers at the University of Copenhagen's Niels Bohr Institute is doing away with previous understandings of stars beyond our own galaxy.

Since 1955, it has been assumed that the composition of stars in the universe's other galaxies is similar to that of the hundreds of billions of stars within our own -- a mixture of massive, medium mass and low mass stars. But with the help of observations from 140,000 galaxies across the universe and a wide range of advanced models, the team has tested whether the same distribution of stars apparent in the Milky Way applies elsewhere. The answer is no. Stars in distant galaxies are typically more massive than those in our "local neighborhood." The finding has a major impact on what we think we know about the universe.

"The mass of stars tells us astronomers a lot. If you change mass, you also change the number of supernovae and black holes that arise out of massive stars. As such, our result means that we'll have to revise many of the things we once presumed, because distant galaxies look quite different from our own," says Albert Sneppen, a graduate student at the Niels Bohr Institute and first author of the study.

Analyzed light from 140,000 galaxies

For more than fifty years, researchers assumed that the size and mass of stars in other galaxies were similar to those in our own, for the simple reason that they were unable to observe them through a telescope as they could the stars of our own galaxy.

Distant galaxies are billions of light-years away. As a result, only light from their most powerful stars ever reaches Earth. This has been a headache for researchers around the world for years, as they could never accurately clarify how stars in other galaxies were distributed, an uncertainty that forced them to believe that they were distributed much like the stars in our Milky Way.
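
The "brightest stars only" problem can be made concrete with a toy calculation. Assuming a Salpeter-like power-law initial mass function -- an illustrative assumption, not the mass distribution fitted in the paper -- the sketch below shows how a tiny fraction of massive stars produces nearly all of a galaxy's light.

```python
import numpy as np

rng = np.random.default_rng(2)

# Draw stellar masses from a Salpeter-like power-law IMF, dN/dM ~ M^-2.35,
# between 0.1 and 100 solar masses, via inverse-transform sampling.
alpha, m_lo, m_hi, n = 2.35, 0.1, 100.0, 1_000_000
u = rng.random(n)
k = 1.0 - alpha
masses = (m_lo**k + u * (m_hi**k - m_lo**k)) ** (1.0 / k)

bright = masses > 8.0          # rough stand-in for "stars luminous enough to see"
lum = masses**3.5              # crude main-sequence mass-luminosity scaling
print(f"stars above 8 solar masses: {bright.mean():.3%} of the population,")
print(f"but they emit {lum[bright].sum() / lum.sum():.0%} of the total light")
```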

"We've only been able to see the tip of the iceberg and known for a long time that expecting other galaxies to look like our own was not a particularly good assumption to make. However, no one has ever been able to prove that other galaxies form different populations of stars. This study has allowed us to do just that, which may open the door for a deeper understanding of galaxy formation and evolution," says Associate Professor Charles Steinhardt, a co-author of the study.

In the study, the researchers analyzed light from 140,000 galaxies using the COSMOS catalog, a large international database of more than one million observations of light from other galaxies. These galaxies are distributed from the nearest to farthest reaches of the universe, from which light has traveled a full twelve billion years before being observable on Earth.

Massive galaxies die first

According to the researchers, the new discovery will have a wide range of implications. For example, it remains unresolved why galaxies die and stop forming new stars. The new result suggests that this might be explained by a simple trend.

"Now that we are better able to decode the mass of stars, we can see a new pattern; the least massive galaxies continue to form stars, while the more massive galaxies stop birthing new stars,. This suggests a remarkably universal trend in the death of galaxies," concludes Albert Sneppen.

The research was conducted at the Cosmic Dawn Center (DAWN), an international basic research center for astronomy supported by the Danish National Research Foundation. DAWN is a collaboration between the Niels Bohr Institute at the University of Copenhagen and DTU Space at the Technical University of Denmark.

The center is dedicated to understanding when and how the first galaxies, stars and black holes formed and evolved in the early universe, through observations using the largest telescopes along with theoretical work and simulations.

Read more at Science Daily

Hot-blooded T. rex and cold-blooded Stegosaurus: Chemical clues reveal dinosaur metabolisms

For decades, paleontologists have debated whether dinosaurs were warm-blooded, like modern mammals and birds, or cold-blooded, like modern reptiles. Knowing whether dinosaurs were warm- or cold-blooded could give us hints about how active they were and what their everyday lives were like, but the methods to determine their warm- or cold-bloodedness -- how quickly their metabolisms could turn oxygen into energy -- were inconclusive. But in a new paper in Nature, scientists are unveiling a new method for studying dinosaurs' metabolic rates, using clues in their bones that indicated how much the individual animals breathed in their last hour of life.

"This is really exciting for us as paleontologists -- the question of whether dinosaurs were warm- or cold-blooded is one of the oldest questions in paleontology, and now we think we have a consensus, that most dinosaurs were warm-blooded," says Jasmina Wiemann, the paper's lead author and a postdoctoral researcher at the California Institute of Technology.

"The new proxy developed by Jasmina Wiemann allows us to directly infer metabolism in extinct organisms, something that we were only dreaming about just a few years ago. We also found different metabolic rates characterizing different groups, which was previously suggested based on other methods, but never directly tested," says Matteo Fabbri, a postdoctoral researcher at the Field Museum in Chicago and one of the study's authors.

People sometimes talk about metabolism in terms of how easy it is for someone to stay in shape, but at its core, "metabolism is how effectively we convert the oxygen that we breathe into chemical energy that fuels our body," says Wiemann, who is affiliated with Yale University and the Natural History Museum of Los Angeles County.

Animals with a high metabolic rate are endothermic, or warm-blooded; warm-blooded animals like birds and mammals take in lots of oxygen and have to burn a lot of calories in order to maintain their body temperature and stay active. Cold-blooded, or ectothermic, animals like reptiles breathe less and eat less. Their lifestyle is less energetically expensive than a hot-blooded animal's, but it comes at a price: cold-blooded animals are reliant on the outside world to keep their bodies at the right temperature to function (like a lizard basking in the sun), and they tend to be less active than warm-blooded creatures.

With birds being warm-blooded and reptiles being cold-blooded, dinosaurs were caught in the middle of a debate. Birds are the only dinosaurs that survived the mass extinction at the end of the Cretaceous, but dinosaurs (and by extension, birds) are technically reptiles -- outside of birds, their closest living relatives are crocodiles and alligators. So would that make dinosaurs warm-blooded, or cold-blooded?

Scientists have tried to glean dinosaurs' metabolic rates from chemical and osteohistological analyses of their bones. "In the past, people have looked at dinosaur bones with isotope geochemistry that basically works like a paleo-thermometer," says Wiemann -- researchers examine the minerals in a fossil and determine what temperatures those minerals would form in. "It's a really cool approach and it was really revolutionary when it came out, and it continues to provide very exciting insights into the physiology of extinct animals. But we've realized that we don't really understand yet how fossilization processes change the isotope signals that we pick up, so it is hard to unambiguously compare the data from fossils to modern animals."

Another method for studying metabolism is growth rate. "If you look at a cross section of dinosaur bone tissue, you can see a series of lines, like tree rings, that correspond to years of growth," says Fabbri. "You can count the lines of growth and the space between them to see how fast the dinosaur grew. The limitation lies in how you translate growth rate estimates into metabolism: growing faster or slower can have more to do with the animal's stage in life than with its metabolism, like how we grow faster when we're young and slower when we're older."

The new method proposed by Wiemann, Fabbri, and their colleagues doesn't look at the minerals present in bone or how quickly the dinosaur grew. Instead, they look at one of the most basic hallmarks of metabolism: oxygen use. When animals breathe, side products form that react with proteins, sugars, and lipids, leaving behind molecular "waste." This waste is extremely stable and water-insoluble, so it's preserved during the fossilization process. It leaves behind a record of how much oxygen a dinosaur was breathing in, and thus, its metabolic rate.

The researchers looked for these bits of molecular waste in dark-colored fossil femurs, because those dark colors indicate that lots of organic matter are preserved. They examined the fossils using Raman and Fourier-transform infrared spectroscopy -- "these methods work like laser microscopes, we can basically quantify the abundance of these molecular markers that tell us about the metabolic rate," says Wiemann. "It is a particularly attractive method to paleontologists, because it is non-destructive."

The team analyzed the femurs of 55 different groups of animals, including dinosaurs, their flying cousins the pterosaurs, their more distant marine relatives the plesiosaurs, and modern birds, mammals, and lizards. They compared the amount of breathing-related molecular byproducts with the known metabolic rates of the living animals and used those data to infer the metabolic rates of the extinct ones.
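
In spirit, the final step is a calibration problem: fit the relationship between marker abundance and metabolic rate in living animals, then apply that relationship to fossils. The sketch below shows the idea with invented numbers and a simple power-law fit; it is not the statistical model used in the paper.

```python
import numpy as np

# Toy calibration: relate a spectroscopic marker abundance to known metabolic
# rates in living animals, then predict rates for fossil measurements.
# All numbers are made up for illustration.
living_marker = np.array([0.8, 1.1, 1.5, 2.0, 2.6, 3.1])           # relative band intensity
living_metabolism = np.array([4.0, 6.5, 10.0, 16.0, 27.0, 38.0])   # e.g. mL O2 / g / h

# Fit a power law by regressing log(metabolic rate) on log(marker abundance).
slope, intercept = np.polyfit(np.log(living_marker), np.log(living_metabolism), 1)

def predict_metabolism(marker):
    return np.exp(intercept + slope * np.log(marker))

fossil_markers = np.array([0.9, 2.8])        # hypothetical dinosaur femur measurements
print(predict_metabolism(fossil_markers))    # inferred metabolic rates for the fossils
```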

The team found that dinosaurs' metabolic rates were generally high. There are two big groups of dinosaurs, the saurischians and the ornithischians -- lizard hips and bird hips. The bird-hipped dinosaurs, like Triceratops and Stegosaurus, had low metabolic rates comparable to those of cold-blooded modern animals. The lizard-hipped dinosaurs, including theropods and the sauropods -- the two-legged, more bird-like predatory dinosaurs like Velociraptor and T. rex and the giant, long-necked herbivores like Brachiosaurus -- were warm- or even hot-blooded. The researchers were surprised to find that some of these dinosaurs weren't just warm-blooded -- they had metabolic rates comparable to modern birds, much higher than mammals. These results complement previous independent observations that hinted at such trends but could not provide direct evidence, because of the lack of a direct proxy to infer metabolism.

These findings, the researchers say, can give us fundamentally new insights into what dinosaurs' lives were like.

"Dinosaurs with lower metabolic rates would have been, to some extent, dependent on external temperatures," says Wiemann. "Lizards and turtles sit in the sun and bask, and we may have to consider similar 'behavioral' thermoregulation in ornithischians with exceptionally low metabolic rates. Cold-blooded dinosaurs also might have had to migrate to warmer climates during the cold season, and climate may have been a selective factor for where some of these dinosaurs could live."

On the other hand, she says, the hot-blooded dinosaurs would have been more active and would have needed to eat a lot. "The hot-blooded giant sauropods were herbivores, and it would take a lot of plant matter to feed this metabolic system. They had very efficient digestive systems, and since they were so big, it probably was more of a problem for them to cool down than to heat up." Meanwhile, the theropod dinosaurs -- the group that contains birds -- developed high metabolisms even before some of their members evolved flight.

"Reconstructing the biology and physiology of extinct animals is one of the hardest things to do in paleontology. This new study adds a fundamental piece of the puzzle in understanding the evolution of physiology in deep time and complements previous proxies used to investigate these questions. We can now infer body temperature through isotopes, growth strategies through osteohistology, and metabolic rates through chemical proxies," says Fabbri.

In addition to giving us insights into what dinosaurs were like, this study also helps us better understand the world around us today. Dinosaurs, with the exception of birds, died out in a mass extinction 65 million years ago when an asteroid struck the Earth. "Having a high metabolic rate has generally been suggested as one of the key advantages when it comes to surviving mass extinctions and successfully radiating afterwards," says Wiemann -- some scientists have proposed that birds survived while the non-avian dinosaurs died because of the birds' increased metabolic capacity. But this study, Wiemann says, helps to show that this isn't true: many dinosaurs with bird-like, exceptional metabolic capacities went extinct.

Read more at Science Daily

Archaeologists reveal pre-Hispanic cities in Bolivia with laser technology

More than 20 years ago, Dr. Heiko Prümers from the German Archaeological Institute and Prof. Dr. Carla Jaimes Betancourt from the University of Bonn, at that time a student in La Paz, began archaeological excavations on two "mounds" near the village of Casarabe in Bolivia. The Mojos Plains lie on the southwestern fringe of the Amazon region. Even though the savannah plain, which floods for several months a year during the rainy season, does not encourage permanent settlement, there are still many visible traces of the time before Spanish colonization at the beginning of the 16th century. Besides the "mounds," these traces include mainly causeways and canals that often run for kilometers in a dead straight line across the savannahs.

"This indicated a relatively dense settlement in pre-Hispanic times. Our goal was to conduct basic research and trace the settlements and life there," says Heiko Prümers. In earlier studies, the researchers already found that the Casarabe culture -- named after the nearby village -- dates to the period between 500 and 1400 AD and, according to current knowledge, extended over a region of around 16,000 square kilometers. The "mounds" turned out to be eroded pyramid stumps and platform buildings.

Initial conventional surveys revealed a terraced core area, a ditch-wall enclosing the site, and canals. In addition, it became apparent that some of these pre-Hispanic settlements were enormous in size. "However, the dense vegetation under which these settlements were located prevented us from seeing the structural details of the monumental mounds and their surroundings," says Carla Jaimes Betancourt from the Department for the Anthropology of the Americas at the University of Bonn.

LIDAR technology used in the Amazon for the first time

To find out more, the researchers used the airborne laser technology LIDAR (Light Detection and Ranging) for the first time in the Amazon region. This involves surveying the terrain with a laser scanner attached to a helicopter, small aircraft or drone that transmits around 1.5 million laser pulses per second. In a subsequent evaluation step, the vegetation is digitally removed creating a digital model of the earth's surface, which can also be displayed as a 3D image. "The first results were excellent and showed how effective the technology was even in dense rainforest. From that moment on, the desire arose to map the large settlements of the Casarabe culture using LIDAR technology," says study leader Dr. Heiko Prümers.
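
A crude stand-in for the "digitally removing the vegetation" step is to grid the point cloud and keep only the lowest return in each cell; real bare-earth classifiers are considerably more sophisticated. The sketch below, with synthetic data, illustrates the idea and is not ArcTron3's processing chain.

```python
import numpy as np

def lowest_return_dem(points, cell_size):
    """Very crude 'bare-earth' model (illustration only): grid the LIDAR point
    cloud and keep the lowest elevation in each cell, which roughly strips
    vegetation returns and leaves the ground surface.

    points : (N, 3) array of x, y, z coordinates in metres
    """
    x, y, z = points.T
    ix = ((x - x.min()) / cell_size).astype(int)
    iy = ((y - y.min()) / cell_size).astype(int)
    dem = np.full((ix.max() + 1, iy.max() + 1), np.nan)
    for i, j, elev in zip(ix, iy, z):
        if np.isnan(dem[i, j]) or elev < dem[i, j]:
            dem[i, j] = elev
    return dem   # NaN marks cells with no returns

# Hypothetical 1 m grid over a synthetic cloud with ground and canopy hits.
rng = np.random.default_rng(3)
ground = np.c_[rng.uniform(0, 50, 5000), rng.uniform(0, 50, 5000), rng.normal(100, 0.2, 5000)]
canopy = ground + np.c_[np.zeros((5000, 2)), rng.uniform(5, 25, 5000)]
dem = lowest_return_dem(np.vstack([ground, canopy]), cell_size=1.0)
print(np.nanmean(dem))   # ~100 m: the canopy has been digitally removed
```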

For the current study, in 2019 the team together with Prof. Dr. José Iriarte and Mark Robinson from the University of Exeter, mapped a total of 200 square kilometers of the Casarabe cultural area. The evaluation done by the company ArcTron3 held a surprise. What came to light were two remarkably large sites of 147 hectares and 315 hectares in a dense four-tiered settlement system. "With a north-south extension of 1.5 kilometers and an east-west extension of about one kilometer, the largest site found so far is as large as Bonn was in the 17th century," says co-author Prof. Dr. Carla Jaimes Betancourt.

It is not yet possible to estimate how many people lived there. "However, the layout of the settlement itself tells us that planners and many active hands were at work here," says Heiko Prümers. Modifications made to the settlement, for example the expansion of the rampart-ditch system, also speak to a reasonable increase in population. "For the first time, we can refer to pre-Hispanic urbanism in the Amazon and show the map of the Cotoca site, the largest settlement of the Casarabe culture known to us so far," Prümers emphasizes. In other parts of the world similar agrarian cities with low population densities had already been found.

LIDAR shows anthropogenically altered landscape


LIDAR mapping reveals the architecture of the settlements' large squares: stepped platforms topped by U-shaped structures, rectangular platform mounds, and conical pyramids up to 22 meters high. Causeway-like paths and canals connect the individual settlements and indicate a tight social fabric. At least one other settlement can be found within five kilometers of each of the settlements known today. "So the entire region was densely settled, a pattern that overturns all previous ideas," says Carla Jaimes Betancourt, who is a member of the Transdisciplinary Research Area "Present Pasts" at the University of Bonn.

The researchers emphasize that for all the euphoria about the site mappings and the possibilities they offer for reinterpreting the settlements in their geographic setting, the real archaeological work is just beginning. The goal for the future, they say, is to understand how these large regional centers functioned.

Read more at Science Daily

Scientists identify how the brain links memories

Our brains rarely record single memories -- instead, they store memories in groups so that the recollection of one significant memory triggers the recall of others connected by time. As we age, however, our brains gradually lose this ability to link related memories.

Now UCLA researchers have discovered a key molecular mechanism behind memory linking. They've also identified a way to restore this brain function in middle-aged mice -- and an FDA-approved drug that achieves the same thing.

Published in Nature, the findings suggest a new method for strengthening human memory in middle age and a possible early intervention for dementia.

"Our memories are a huge part of who we are," explained Alcino Silva, a distinguished professor of neurobiology and psychiatry at the David Geffen School of Medicine at UCLA. "The ability to link related experiences teaches how to stay safe and operate successfully in the world."

A bit of Biology 101: cells are studded with receptors. To enter a cell, a molecule must latch onto its matching receptor, which operates like a doorknob to provide access inside.

The UCLA team focused on a gene called CCR5 that encodes the CCR5 receptor -- the same one that HIV hitches a ride on to infect brain cells and cause memory loss in AIDS patients.

Silva's lab demonstrated in earlier research that CCR5 expression reduced memory recall.

In the current study, Silva and his colleagues discovered a central mechanism underlying mice's ability to link their memories of two different cages. A tiny microscope opened a window into the animals' brains, enabling the scientists to observe neurons firing and creating new memories.

Boosting CCR5 gene expression in the brains of middle-aged mice interfered with memory linking. The animals forgot the connection between the two cages.

When the scientists deleted the CCR5 gene in the animals, the mice were able to link memories that normal mice could not.

Silva had previously studied the drug, maraviroc, which the U.S. Food and Drug Administration approved in 2007 for the treatment of HIV infection. His lab discovered that maraviroc also suppressed CCR5 in the brains of mice.

"When we gave maraviroc to older mice, the drug duplicated the effect of genetically deleting CCR5 from their DNA," said Silva, a member of the UCLA Brain Research Institute. "The older animals were able to link memories again."

The finding suggests that maraviroc could be used off-label to help restore middle-aged memory loss, as well as reverse the cognitive deficits caused by HIV infection.

"Our next step will be to organize a clinical trial to test maraviroc's influence on early memory loss with the goal of early intervention," said Silva. "Once we fully understand how memory declines, we possess the potential to slow down the process."

Which raises the question: why does the brain need a gene that interferes with its ability to link memories?

Read more at Science Daily

May 25, 2022

Hubble reaches new milestone in mystery of universe's expansion rate

Completing a nearly 30-year marathon, NASA's Hubble Space Telescope has calibrated more than 40 "milepost markers" of space and time to help scientists precisely measure the expansion rate of the universe -- a quest with a plot twist.

Pursuit of the universe's expansion rate began in the 1920s with measurements by astronomers Edwin P. Hubble and Georges Lemaître. In 1998, this led to the discovery of "dark energy," a mysterious repulsive force accelerating the universe's expansion. In recent years, thanks to data from Hubble and other telescopes, astronomers found another twist: a discrepancy between the expansion rate as measured in the local universe compared to independent observations from right after the big bang, which predict a different expansion value.

The cause of this discrepancy remains a mystery. But Hubble data, encompassing a variety of cosmic objects that serve as distance markers, support the idea that something weird is going on, possibly involving brand new physics.

"You are getting the most precise measure of the expansion rate for the universe from the gold standard of telescopes and cosmic mile markers," said Nobel Laureate Adam Riess of the Space Telescope Science Institute (STScI) and the Johns Hopkins University in Baltimore, Maryland.

Riess leads a scientific collaboration investigating the universe's expansion rate called SH0ES, which stands for Supernova, H0, for the Equation of State of Dark Energy. "This is what the Hubble Space Telescope was built to do, using the best techniques we know to do it. This is likely Hubble's magnum opus, because it would take another 30 years of Hubble's life to even double this sample size," Riess said.

Riess's team's paper, to be published in the Special Focus issue of The Astrophysical Journal, reports on completing the biggest and likely last major update on the Hubble constant. The new results more than double the prior sample of cosmic distance markers. His team also reanalyzed all of the prior data, with the whole dataset now including over 1,000 Hubble orbits.

When NASA conceived of a large space telescope in the 1970s, one of the primary justifications for the expense and extraordinary technical effort was to be able to resolve Cepheids, stars that brighten and dim periodically, seen inside our Milky Way and external galaxies. Cepheids have long been the gold standard of cosmic mile markers since their utility was discovered by astronomer Henrietta Swan Leavitt in 1912. To calculate much greater distances, astronomers use exploding stars called Type Ia supernovae.

Combined, these objects built a "cosmic distance ladder" across the universe and are essential to measuring the expansion rate of the universe, called the Hubble constant after Edwin Hubble. That value is critical to estimating the age of the universe and provides a basic test of our understanding of the universe.
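
The first rung of that ladder can be written down compactly: a Leavitt-law period-luminosity relation gives a Cepheid's absolute magnitude, and the distance modulus converts the difference between apparent and absolute magnitude into a distance. The coefficients in the sketch below are approximate textbook values used for illustration, not the SH0ES calibration.

```python
import numpy as np

def cepheid_distance_mpc(period_days, apparent_mag):
    """Distance to a Cepheid from its pulsation period and apparent brightness.

    The period-luminosity coefficients are approximate V-band values used
    purely for illustration.
    """
    absolute_mag = -2.43 * (np.log10(period_days) - 1.0) - 4.05   # Leavitt law (assumed)
    mu = apparent_mag - absolute_mag                              # distance modulus
    return 10 ** (mu / 5.0 + 1.0) / 1e6                           # parsecs -> megaparsecs

# A hypothetical 30-day Cepheid observed at apparent magnitude 26:
print(f"{cepheid_distance_mpc(30.0, 26.0):.1f} Mpc")
```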

Starting right after Hubble's launch in 1990, the first set of observations of Cepheid stars to refine the Hubble constant was undertaken by two teams: the HST Key Project led by Wendy Freedman, Robert Kennicutt, Jeremy Mould, and Marc Aaronson, and another by Allan Sandage and collaborators, that used Cepheids as milepost markers to refine the distance measurement to nearby galaxies. By the early 2000s the teams declared "mission accomplished" by reaching an accuracy of 10 percent for the Hubble constant, 72 plus or minus 8 kilometers per second per megaparsec.

In 2005 and again in 2009, the addition of powerful new cameras onboard the Hubble telescope launched "Generation 2" of the Hubble constant research as teams set out to refine the value to an accuracy of just one percent. This was inaugurated by the SH0ES program. Several teams of astronomers using Hubble, including SH0ES, have converged on a Hubble constant value of 73 plus or minus 1 kilometer per second per megaparsec. While other approaches have been used to investigate the Hubble constant question, different teams have come up with values close to the same number.

The SH0ES team includes long-time leaders Dr. Wenlong Yuan of Johns Hopkins University, Dr. Lucas Macri of Texas A&M University, Dr. Stefano Casertano of STScI, and Dr. Dan Scolnic of Duke University. The project was designed to bracket the universe by matching the precision of the Hubble constant inferred from studying the cosmic microwave background radiation leftover from the dawn of the universe.

"The Hubble constant is a very special number. It can be used to thread a needle from the past to the present for an end-to-end test of our understanding of the universe. This took a phenomenal amount of detailed work," said Dr. Licia Verde, a cosmologist at ICREA and the ICC-University of Barcelona, speaking about the SH0ES team's work.

The team measured 42 of the supernova milepost markers with Hubble. Because they are seen exploding at a rate of about one per year, Hubble has, for all practical purposes, logged as many supernovae as possible for measuring the universe's expansion. Riess said, "We have a complete sample of all the supernovae accessible to the Hubble telescope seen in the last 40 years." Like the lyrics from the song "Kansas City," from the Broadway musical Oklahoma, Hubble has "gone about as fur as it c'n go!"

Weird Physics?


The expansion rate of the universe was predicted to be slower than what Hubble actually sees. By combining the Standard Cosmological Model of the Universe and measurements by the European Space Agency's Planck mission (which observed the relic cosmic microwave background from 13.8 billion years ago), astronomers predict a lower value for the Hubble constant: 67.5 plus or minus 0.5 kilometers per second per megaparsec, compared to the SH0ES team's estimate of 73.

Given the large Hubble sample size, there is only a one-in-a-million chance astronomers are wrong due to an unlucky draw, said Riess; such odds are a common threshold for taking a problem seriously in physics. This finding is unraveling what was becoming a nice and tidy picture of the universe's dynamical evolution. Astronomers are at a loss to explain the disconnect between the expansion rate of the local universe and that of the primeval universe, but the answer might involve additional physics of the universe.
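
As a back-of-envelope check (not the teams' full statistical analysis), treating the two quoted error bars as independent Gaussian uncertainties shows how a roughly five-sigma difference corresponds to odds near one in a million:

```python
# Back-of-envelope only: how discrepant are 73 +/- 1 and 67.5 +/- 0.5
# km/s/Mpc if the quoted errors are treated as independent Gaussians?
from math import sqrt, erfc

h0_local, err_local = 73.0, 1.0   # local (SH0ES-style) measurement
h0_cmb, err_cmb = 67.5, 0.5       # Planck + Standard Model prediction

sigma = (h0_local - h0_cmb) / sqrt(err_local**2 + err_cmb**2)
p_chance = erfc(sigma / sqrt(2))  # two-sided Gaussian tail probability

print(f"{sigma:.1f} sigma, chance probability ~ {p_chance:.1e}")
# ~4.9 sigma, roughly a one-in-a-million chance
```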

Such confounding findings have made life more exciting for cosmologists like Riess. Thirty years ago they started out to measure the Hubble constant to benchmark the universe, but now it has become something even more interesting. "Actually, I don't care what the expansion value is specifically, but I like to use it to learn about the universe," Riess added.

Read more at Science Daily

High air pollution from fracking in Ohio county

Some residents of Belmont County in eastern Ohio have long suffered from headaches, fatigue, nausea and burning sensations in their throats and noses. They suspected these symptoms were the result of air pollution from fracking facilities that dominate the area, but regulators dismissed and downplayed their concerns.

With the technical assistance of volunteer scientists at Columbia University's Lamont-Doherty Earth Observatory, MIT and the American Geophysical Union's Thriving Earth Exchange, local advocacy groups set up their own network of low-cost sensors. They found that the region's three EPA sensors were not providing an accurate picture: the community network revealed concerning levels of air pollution, as well as correlations between local pollution spikes and health impacts.

The results are published today in the journal Environmental Research Letters.

Nestled in an Appalachian valley, Belmont has been booming with new infrastructure to extract and process natural gas. Fracking is known to emit pollutants including particulate matter and volatile organic compounds such as benzene, toluene and ethylbenzene, which have been linked to respiratory and cardiovascular health problems. Lung and bronchus cancer have become the leading cause of cancer deaths in Ohio. A 2017 Yale Public Health analysis confirmed the need for additional monitoring and regulation for chemicals associated with unconventional oil and gas development.

Concerned about the fumes in certain areas of the community and the lack of information and transparency, two activist groups, Concerned Ohio River Residents and the Freshwater Accountability Project, wanted to set up a high-density monitoring network. After submitting their proposal to the Thriving Earth Exchange, which enables collaborations between community groups and volunteer scientists, they were paired with Garima Raheja, a PhD candidate who studies air pollution at Lamont-Doherty.

"We realized that the Thriving Earth Exchange program would give us valuable aid to validate the complaints we often receive from those living near pollution sources in a way that would provide credible and actionable data to improve air quality in the region," said Lea Harper, managing director of Freshwater Accountability Project.

With advice from Raheja and other scientists, the community members bought 60 low-cost sensors to monitor particulate matter and volatile organic compounds in the air. Then they identified areas of highest concern, and recruited residents to install and maintain the sensors in backyards, churches and schools in those areas.

The new study presents the first two years of data from the sensor network. The team found that many sites frequently experienced days when air pollution exceeded levels recommended by the World Health Organization. For example, in the city of Martins Ferry, where a sensor took measurements for 336 days, it measured unsafe levels of air pollution on 50 of those days.

"It is kind of wild," said Raheja, "considering that it's generally a clean area. I think any number of days above WHO guidelines is really concerning for an area like this."

She sees a clear link to the area's fossil fuel development. "If there wasn't fracking in this area, there would be no reason for bad air pollution. It's not an urban area. There's not a lot of cars or rush hour or anything like that which usually causes air pollution."

The study compares the daily averages collected from the citizen sensors with the EPA's three nearby sensors. The correlation between the two was low -- less than 55 percent.

"It just goes to show that the EPA monitors might be getting broad trends correctly, like annual or seasonal amounts," said Raheja. "But in terms of daily averages, which is what affects human health, the EPA sensors are not always capturing the heterogeneous exposure that people in this area experience."

That's because the EPA sensors are too few and too widely spaced to capture a detailed picture of the air pollution levels, she said. EPA relies on high-grade monitors that cost hundreds of thousands of dollars apiece, which helps explain why the network is so sparse. In contrast, the citizen scientists' sensors cost only a few hundred dollars each, so they were able to set up a denser network.
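
A minimal sketch of the two comparisons described above -- correlating daily averages against a nearby EPA monitor and counting days above a WHO-style 24-hour PM2.5 guideline -- assuming a hypothetical CSV of daily averages (the file and column names are illustrative, not the study's data):

```python
# Minimal sketch, not the study's actual code. Assumes a hypothetical
# CSV of daily averages with columns: date, citizen_pm25, epa_pm25.
import pandas as pd

WHO_PM25_DAILY = 15.0  # micrograms per cubic meter (2021 WHO 24-hour guideline)

df = pd.read_csv("daily_averages.csv", parse_dates=["date"])

# Pearson correlation between the low-cost sensor and the EPA monitor
correlation = df["citizen_pm25"].corr(df["epa_pm25"])

# Number of days the low-cost sensor exceeded the WHO-style guideline
exceedance_days = (df["citizen_pm25"] > WHO_PM25_DAILY).sum()

print(f"Correlation with EPA monitor: {correlation:.2f}")
print(f"Days above WHO guideline: {exceedance_days} of {len(df)}")
```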

In another aspect of the study, residents picked up air pollution spikes on their monitors and wanted to know where they came from. So the volunteer scientists helped to model local wind patterns to key in on which fracking facilities could be responsible for spikes in specific sensors on specific days.

"There are a lot of different sources in the area, and sometimes community activists have to pick which battles to fight first," said Raheja. So far, residents say they are particularly concerned about the area's Williams Compressor Station and the Dominion Compressor Station.

The data have allowed community leaders to submit targeted public records requests about these operations and their compliance with air quality standards, the paper notes. Information from the air quality sensors also has helped residents know when to close their windows, wear masks or update indoor air purification systems.

Community members also saw correlations between air pollution spikes and their headaches and nausea. For example, some noticed bad smells and more severe symptoms in mid-December 2020; the air pollution data from the same period shows several spikes in emissions.

The paper quotes community member Kevin Young. "Before, [there] was no one to help us. None of the Ohio regulators would come to witness the extreme air pollution events that made my wife and me very sick." He added, "Now that we have data to substantiate the harmful amounts of the air pollutants, it seems the regulators are taking us more seriously."

The paper notes that the data offered a shared language that community members could use to articulate their complaints to the EPA, Ohio Department of Natural Resources, and the Ohio Department of Health. Regulators are starting to take notice; local activist Jill Hunkler was invited to testify in April 2021 before the U.S. House of Representatives Subcommittee on the Environment.

The scientists and community groups hope to continue working together. They are currently applying for grants to scale up their sensor network, and networking with other concerned community groups, some as far away as Louisiana's infamous Cancer Alley, who want to learn more about how to get started on similar programs.

Read more at Science Daily

Horses and pigs sense harsh speaking tones

How we speak matters to animals. Horses, pigs and wild horses can distinguish between negative and positive sounds from their fellow species and near relatives, as well as from human speech, according to new research in behavioral biology at the University of Copenhagen. The study provides insight into the history of emotional development and opens up interesting perspectives with regard to animal welfare.

The idea of horse whisperers -- those with a talent for communicating with horses -- may bring a chuckle to many. But according to new research from the University of Copenhagen and ETH Zurich, there may be something to their whispering skills. In an international collaboration with researchers Anne-Laure Maigrot and Edna Hillmann, behavioral biologist Elodie Briefer of the University of Copenhagen's Department of Biology investigated whether a range of animals can distinguish between positively and negatively charged sounds.

"The results showed that domesticated pigs and horses, as well as Asian wild horses, can tell the difference, both when the sounds come from their own species and near relatives, as well as from human voices," explains Elodie Briefer. Pigs were studied along with boar, their wild relatives. Just as in the case of the two related horse species, the pigs clearly reacted to how the sounds of their counterparts were emotionally charged. In fact, to the same extent as when it came to sounds of their own kind.

The animals even showed the ability to distinguish between positively and negatively charged human voices. While their reactions were more subdued, all but the wild boars reacted differently when exposed to human speech charged with either positive or negative emotion.

Human gibberish

The researchers played recordings of animal sounds and human voices from hidden speakers.

To avoid having the domesticated animals react to specific words, positive and negative human speech was performed by a professional voice actor in a kind of gibberish without any meaningful phrases.

The animals' behavioral reactions were recorded in a number of categories used in previous studies -- everything from their ear position to their movement or lack thereof.

On this basis, the researchers concluded that how we speak matters to animals.

"Our results show that these animals are affected by the emotions we charge our voices with when we speak to or are around them. They react more strongly -- generally faster -- when they are met with a negatively charged voice, compared to having a positively charged voice played to them first. In certain situations, they even seem to mirror the emotion to which they are exposed" says Elodie Briefer.

Do animals have an emotional life?

Part of the aim of the study was to investigate the possibility of "emotional contagion" in animals -- a kind of mirroring of emotion, in which an emotion expressed by one individual is taken on by another. In behavioral biology, this type of reaction is seen as the first step toward empathy.

"Should future research projects clearly demonstrate that these animals mirror emotions, as this study suggests, it will be very interesting in relation to the history of the development of emotions and the extent to which animals have an emotional life and level of consciousness," says Elodie Briefer.

The study did not detect clear evidence of "emotional contagion," but an interesting result lay in the order in which the sounds were delivered. Sequences in which the negative sound was played first triggered stronger reactions in all but the wild boars. This included human speech.

According to Elodie Briefer, this suggests that the way we talk around animals and the way we talk to animals may have an impact on their well-being.

"It means that our voices have a direct impact on the emotional state of animals, which is very interesting from an animal welfare perspective," she says.

This knowledge doesn't just raise ethical questions about how we perceive animals -- and vice versa; it can also be used as a concrete means of improving animals' daily lives, if those who work with them are familiar with it.

"When the animals reacted strongly to hearing negatively charged speech first, the same is also true in the reverse. That is, if animals are initially spoken to in a more positive, friendly voice, when met by people, they should react less. They may become calmer and more relaxed," explains Elodie Briefer.

The next step for the University of Copenhagen researcher is to turn the question around: she and her colleagues are now looking into how well we humans are able to understand animal sounds of emotion.

How the researchers did it
 

  • The animals in the experiment were either privately owned (horses), from a research station (pigs) or living in zoos in Switzerland and France (wild Przewalski's horses and wild boars).
  • The researchers used animal sounds with a previously established emotion valence.
  • The animal sounds and human voices were played to the animals from hidden speakers.
  • Doing so required high sound quality to ensure that the natural frequencies the animals hear best were reproduced.
  • The sounds were played in sequences: either a positively or a negatively charged sound first, then a pause, and then sounds with the reverse valence, i.e. the opposite emotion.
  • The reactions were recorded on video, which the researchers could subsequently use to observe and record the animals' reactions.

Three theses can explain the animal reactions

The researchers worked with three theories about the conditions they expected to influence the animals' reactions in the experiment:

Phylogeny
 

  • According to this theory, animals with a common evolutionary ancestry may be able to perceive and interpret each other's sounds by virtue of their shared biology.


Domestication
 

  • Close contact with humans, over a long period of time, may have increased the ability to interpret human emotions.
  • Animals that are good at picking up human emotions might have been preferred for breeding.


Familiarity
 

  • Based on learning: the specific animals in the study may have developed a better understanding of the humans and fellow animals they were in close contact with where they were housed.


The conclusion is as follows: among the horse species, the phylogeny thesis best explained the behavior, while the behavior of the pig species best fit the domestication hypothesis.

Read more at Science Daily

A family of termites has been traversing the world's oceans for millions of years

Termites are a type of cockroach that split from other cockroaches around 150 million years ago and evolved to live socially in colonies. Today, there are many different kinds of termites. Some form large colonies with millions of individuals, which tend to live in connected tunnels in the soil. Others, including most species known as drywood termites, form much smaller colonies of less than 5000 individuals, and live primarily in wood.

Researchers from the Evolutionary Genomics Unit at the Okinawa Institute of Science and Technology Graduate University (OIST), alongside a network of collaborators from across the world, have mapped out the natural history of drywood termites -- the second largest family of termites -- and revealed a number of oceanic voyages that accelerated the evolution of their diversity. The research, published in Molecular Biology and Evolution, shines light on where termites originated and how and when they spread across the globe. It also confirms that some species have, in recent centuries, hitched a ride with humans to reach far-flung islands.

"Drywood termites, or Kalotermitidae, are often thought of as primitive because they split from other termites quite early, around 100 million years ago, and because they appear to form smaller colonies," said Dr. Aleš Buček, OIST Postdoctoral Researcher and lead author of the study. "But very little is actually known about this family."

Dr. Buček went on to explain that, before this study, there was very little molecular data on the family, and the little that was known about the relationships between the different species was based on their appearance. Previous research had focused on one genus within the family that contains common pest species, often found in houses.

To gain overarching knowledge, the researchers collected hundreds of drywood termite samples from around the world over a timespan of three decades. From this collection, they selected about 120 species, some of which were represented by multiple samples collected in different locations. This represented over a quarter of Kalotermitidae diversity. Most of these samples were brought to OIST where the DNA was isolated and sequenced.

By comparing the genetic sequences from the different species, the researchers constructed an extensive family tree of the drywood termites.
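
As a schematic of that kind of analysis -- the actual study used far more sophisticated phylogenomic methods and real genetic data -- a simple distance-based tree can be sketched from aligned sequences with Biopython:

```python
# Schematic only: the sequences below are made up, and the study used
# far richer methods. This just illustrates inferring a family tree
# from sequence similarity (neighbor-joining on a distance matrix).
from Bio.Align import MultipleSeqAlignment
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor
from Bio import Phylo

alignment = MultipleSeqAlignment([
    SeqRecord(Seq("ACTGCTAGCTAG"), id="species_A"),
    SeqRecord(Seq("ACTGCTAGCTGG"), id="species_B"),
    SeqRecord(Seq("ACTACTAGCAAG"), id="species_C"),
])

distances = DistanceCalculator("identity").get_distance(alignment)
tree = DistanceTreeConstructor().nj(distances)  # neighbor-joining tree
Phylo.draw_ascii(tree)
```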

They found that drywood termites have made more oceanic voyages than any other family of termites. They have crossed oceans at least 40 times in the past 50 million years, travelling as far as from South America to Africa, which, over a timescale of millions of years, resulted in the diversification of new drywood termite species in the newly colonized places.

"They're very good at getting across oceans," said Dr. Buček. "Their homes are made of wood so can act as tiny ships."

The researchers found that most of the genera originated in South America and dispersed from there. It takes millions of years for one species to split into several after such a move. The research also confirmed that more recent dispersals have largely been mediated by humans.

Furthermore, this study has cast doubt on the common assumption that drywood termites have a primitive lifestyle. Among the oldest lineages in the family, there are termite species that do not have a primitive lifestyle. In fact, they can form large colonies across multiple pieces of wood that are connected by tunnels underground.

Read more at Science Daily

May 24, 2022

Planets of binary stars as possible homes for alien life

Nearly half of Sun-size stars are binary. According to University of Copenhagen research, planetary systems around binary stars may be very different from those around single stars. This points to new targets in the search for extraterrestrial life forms.

Since the only known planet with life, the Earth, orbits the Sun, planetary systems around stars of similar size are obvious targets for astronomers trying to locate extraterrestrial life. Nearly every second star in that category is a binary star. A new result from research at the University of Copenhagen indicates that planetary systems form in a very different way around binary stars than around single stars such as the Sun.

"The result is exciting since the search for extraterrestrial life will be equipped with several new, extremely powerful instruments within the coming years. This enhances the significance of understanding how planets are formed around different types of stars. Such results may pinpoint places which would be especially interesting to probe for the existence of life," says Professor Jes Kristian Jørgensen, Niels Bohr Institute, University of Copenhagen, heading the project.

The results from the project, which also includes astronomers from Taiwan and the USA, are published in the journal Nature.

Bursts shape the planetary system

The new discovery has been made based on observations made by the ALMA telescopes in Chile of a young binary star about 1,000 lightyears from Earth. The binary star system, NGC 1333-IRAS2A, is surrounded by a disc consisting of gas and dust. The observations can only provide researchers with a snapshot from a point in the evolution of the binary star system. However, the team has complemented the observations with computer simulations reaching both backwards and forwards in time.

"The observations allow us to zoom in on the stars and study how dust and gas move towards the disc. The simulations will tell us which physics are at play, and how the stars have evolved up till the snapshot we observe, and their future evolution," explains Postdoc Rajika L. Kuruwita, Niels Bohr Institute, second author of the Nature article.

Notably, the movement of gas and dust does not follow a continuous pattern. At some points in time -- typically for relatively short periods of ten to one hundred years every thousand years -- the movement becomes very strong. The binary star becomes ten to one hundred times brighter, until it returns to its regular state.

Presumably, the cyclic pattern can be explained by the duality of the binary star. The two stars encircle each other, and at given intervals their joint gravity will affect the surrounding gas and dust disc in a way which causes huge amounts of material to fall towards the star.

"The falling material will trigger a significant heating. The heat will make the star much brighter than usual," says Rajika L. Kuruwita, adding:

"These bursts will tear the gas and dust disc apart. While the disc will build up again, the bursts may still influence the structure of the later planetary system."

Comets carry building blocks for life

The observed stellar system is still too young for planets to have formed. The team hopes to obtain more observing time at ALMA, allowing them to investigate the formation of planetary systems.

Not only planets but also comets will be in focus:

"Comets are likely to play a key role in creating possibilities for life to evolve. Comets often have a high content of ice with presence of organic molecules. It can well be imagined that the organic molecules are preserved in comets during epochs where a planet is barren, and that later comet impacts will introduce the molecules to the planet's surface," says Jes Kristian Jørgensen.

Understanding the role of the bursts is important in this context:

"The heating caused by the bursts will trigger evaporation of dust grains and the ice surrounding them. This may alter the chemical composition of the material from which planets are formed."

Thus, chemistry is a part of the research scope:

"The wavelengths covered by ALMA allow us to see quite complex organic molecules, so molecules with 9-12 atoms and containing carbon. Such molecules can be building blocks for more complex molecules which are key to life as we know it. For example, amino acids which have been fund in comets."

Powerful tools join the search for life in space

ALMA (Atacama Large Millimeter/submillimeter Array) is not a single instrument but 66 telescopes operating in coordination. This allows for a much better resolution than could have been obtained by a single telescope.
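
A rough illustration of why coordinating many dishes helps: diffraction-limited resolution scales as wavelength divided by aperture, and for an interferometer the effective aperture is the longest baseline between dishes. The numbers below are approximate and illustrative, not official ALMA specifications:

```python
# Rough illustration of interferometer resolution vs. a single dish.
# Angular resolution ~ 1.22 * wavelength / aperture; for an array the
# effective aperture is the longest baseline. Numbers are illustrative.
import math

def resolution_arcsec(wavelength_m: float, aperture_m: float) -> float:
    """Diffraction-limited angular resolution in arcseconds."""
    return math.degrees(1.22 * wavelength_m / aperture_m) * 3600

wavelength = 1.3e-3        # ~1.3 mm, a typical millimeter-wave band
single_dish = 12.0         # one ~12 m dish
long_baseline = 16_000.0   # dishes spread over ~16 km

print(f"Single dish: {resolution_arcsec(wavelength, single_dish):.1f} arcsec")
print(f"Array:       {resolution_arcsec(wavelength, long_baseline):.4f} arcsec")
```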

Very soon the new James Webb Space Telescope (JWST) will join the search for extraterrestrial life. Near the end of the decade, JWST will be complemented by the ELT (Extremely Large Telescope) and the extremely powerful SKA (Square Kilometre Array), both planned to begin observing in 2027. The ELT, with its 39-meter mirror, will be the biggest optical telescope in the world and will be poised to observe the atmospheric conditions of exoplanets (planets outside the Solar System, ed.). The SKA will consist of thousands of telescopes in South Africa and Australia working in coordination and will operate at longer wavelengths than ALMA.

Read more at Science Daily

Alcohol may be more risky to the heart than previously thought

Levels of alcohol consumption currently considered safe by some countries are linked with development of heart failure, according to research presented at Heart Failure 2022, a scientific congress of the European Society of Cardiology (ESC).1

"This study adds to the body of evidence that a more cautious approach to alcohol consumption is needed," said study author Dr. Bethany Wong of St. Vincent's University Hospital, Dublin, Ireland. "To minimise the risk of alcohol causing harm to the heart, if you don't drink, don't start. If you do drink, limit your weekly consumption to less than one bottle of wine or less than three-and-a-half 500 ml cans of 4.5% beer."

According to the World Health Organization, the European Union is the heaviest-drinking region in the world.2 While it is well recognised that long-term heavy alcohol use can cause a type of heart failure called alcoholic cardiomyopathy,3 evidence from Asian populations suggests that lower amounts may also be detrimental.4,5 "As there are genetic and environmental differences between Asian and European populations this study investigated if there was a similar relationship between alcohol and cardiac changes in Europeans at risk of heart failure or with pre-heart failure," said Dr. Wong. "The mainstay of treatment for this group is management of risk factors such as alcohol, so knowledge about safe levels is crucial."

This was a secondary analysis of the STOP-HF trial.6 The study included 744 adults over 40 years of age either at risk of developing heart failure due to risk factors (e.g. high blood pressure, diabetes, obesity) or with pre-heart failure (risk factors and heart abnormalities but no symptoms).7 The average age was 66.5 years and 53% were women. The study excluded former drinkers and heart failure patients with symptoms (e.g. shortness of breath, tiredness, reduced ability to exercise, swollen ankles). Heart function was measured with echocardiography at baseline and follow up.

The study used the Irish definition of one standard drink (i.e. one unit), which is 10 grams of alcohol.8 Participants were categorised according to their weekly alcohol intake: none; low (less than seven units; up to one 750 ml bottle of 12.5% wine or three-and-a-half 500 ml cans of 4.5% beer); moderate (7-14 units; up to two bottles of 12.5% wine or seven 500 mL cans of 4.5% beer); high (above 14 units; more than two bottles of 12.5% wine or seven 500 ml cans of 4.5% beer).
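
A minimal sketch of that categorization, assuming the Irish 10-gram unit; the cut-offs follow the article, and the helper functions are illustrative rather than taken from the study:

```python
# Minimal sketch of the intake categories described above. One Irish
# standard drink (unit) = 10 grams of pure alcohol; cut-offs follow the
# article. Function names are illustrative, not from the study.
def units_per_week(grams_of_alcohol_per_week: float) -> float:
    """Convert grams of pure alcohol per week into Irish standard units."""
    return grams_of_alcohol_per_week / 10.0

def intake_category(units: float) -> str:
    if units == 0:
        return "none"
    if units < 7:
        return "low"       # up to ~1 bottle of 12.5% wine or 3.5 cans of 4.5% beer
    if units <= 14:
        return "moderate"  # up to ~2 bottles of wine or 7 cans of beer
    return "high"          # above 14 units

print(intake_category(units_per_week(50)))   # 5 units  -> "low"
print(intake_category(units_per_week(180)))  # 18 units -> "high"
```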

The researchers analysed the association between alcohol use and heart health over a median of 5.4 years. The results were reported separately for the at-risk and pre-heart failure groups. In the at-risk group, worsening heart health was defined as progression to pre-heart failure or to symptomatic heart failure. For the pre-heart failure group, worsening heart health was defined as deterioration in the squeezing or relaxation functions of the heart or progression to symptomatic heart failure. The analyses were adjusted for factors that can affect heart structure including age, gender, obesity, high blood pressure, diabetes, and vascular disease.

A total of 201 (27%) patients reported no alcohol usage, while 356 (48%) were low users and 187 (25%) had moderate or high intake. Compared to the low intake group, those with moderate or high use were younger, more likely to be male, and had a higher body mass index.

In the pre-heart failure group, compared with no alcohol use, moderate or high intake was associated with a 4.5-fold increased risk of worsening heart health. The relationship was also observed when moderate and high levels were analysed separately. In the at-risk group, there was no association between moderate or high alcohol use with progression to pre-heart failure or to symptomatic heart failure. No protective associations were found for low alcohol intake.

Read more at Science Daily

Skydiving salamanders live in world's tallest trees

Salamanders that live their entire lives in the crowns of the world's tallest trees, California's coast redwoods, have evolved a behavior well-adapted to the dangers of falling from high places: the ability to parachute, glide and maneuver in mid-air.

Flying squirrels, not to mention numerous species of gliding frogs, geckos, and ants and other insects, are known to use similar aerial maneuvers when jumping from tree to tree or when falling, so as to remain in the trees and avoid landing on the ground.

Similarly, the researchers suspect that this salamander's skydiving skills are a way to steer back to a tree it's fallen or jumped from, the better to avoid terrestrial predators.

"While they're parachuting, they have an exquisite amount of maneuverable control," said Christian Brown, a doctoral candidate at the University of South Florida (USF) in Tampa and first author of a paper about these behaviors. "They are able to turn. They are able to flip themselves over if they go upside down. They're able to maintain that skydiving posture and kind of pump their tail up and down to make horizontal maneuvers. The level of control is just impressive."

The aerial dexterity of the so-called wandering salamander (Aneides vagrans) was revealed by high-speed video footage taken in a wind tunnel at the University of California, Berkeley, where the salamanders were nudged off a perch into an upward moving column of air simulating free fall.

"What struck me when I first saw the videos is that they (the salamanders) are so smooth -- there's no discontinuity or noise in their motions, they're just totally surfing in the air," said Robert Dudley, UC Berkeley professor of integrative biology and an expert on animal flight. "That, to me, implies that this behavior is something deeply embedded in their motor response, that it (falling) must happen at reasonably high frequencies so as to effect selection on this behavior. And it's not just passive parachuting, they're not just skydiving downwards. They're also clearly doing the lateral motion, as well, which is what we would call gliding."

The behavior is all the more surprising because the salamanders, aside from having slightly larger foot pads, look no different from other salamanders that aren't aerially maneuverable. They have no skin flaps, for example, that would tip you off to their parachuting ability.

"Wandering salamanders have big feet, they have long legs, they have active tails. All of these things lend themselves to aerial behaviors. But everybody just assumed that was for climbing, because that's what they use those features for when we're looking at them," Brown said. "So, it's not really a dedicated aerodynamic control surface, but it functions as both. It helps them climb, and it seems to help them parachute and glide, as well."

Among the questions the researchers hope to answer in future research are how salamanders manage to parachute and maneuver without obvious anatomical adaptations to gliding and whether many other animals with similar aerial skills have never been noticed before.

"Salamanders are sluggish, you don't think of them as having particularly fast reflexes. It's life in the slow lane. And flight control is all about rapid response to dynamic visual cues and being able to target and orient and change your body position," Dudley said. "So, it's just kind of odd. How often can this be happening, anyway, and how would we know?"

Life in the canopy

Using the wind tunnel, Brown and UC Berkeley graduate student Erik Sathe compared the gliding and parachuting behavior of A. vagrans -- adults are about 4 inches (10 centimeters) from snout to tip of tail -- with the abilities of three other salamander species native to Northern California, each with varying degrees of arboreality -- that is, the propensity to climb or live in trees. The wandering salamander, which probably spends its entire life in a single tree, moving up and down but never touching the ground, was the most proficient skydiver. A related species, the so-called arboreal salamander, A. lugubris, which lives in shorter trees, such as oaks, was nearly as effective at parachuting and gliding.

Two of the least arboreal salamanders -- Ensatina eschscholtzii, a forest floor-dwelling salamander, and A. flavipunctatus, the speckled black salamander, which occasionally climbs trees -- essentially flailed ineffectively for the few seconds they were airborne in the wind tunnel. All four species are plethodontid, or lungless, salamanders, the largest family of salamanders and mostly found in the Western Hemisphere.

"The two least arboreal species flail around a lot. We call it ineffective, undulating motion because they don't glide, they don't move horizontally, they just kind of hover in the wind tunnel freaking out," Brown said. "The two most arboreal species never actually flailed."

Brown encountered these salamanders while working in California's Humboldt and Del Norte counties with nonprofit and university conservation groups that mark and track the animals that live in the redwood canopy, primarily in old growth forest some 150 feet off the ground. Using ropes and ascenders, the biologists regularly climb the redwoods -- the tallest of which rise to a height of 380 feet -- to capture and mark wandering salamanders. Over the past 20 years, as part of a project led by James Campbell-Spickler, now director of the Sequoia Park Zoo in Eureka, the researchers discovered that most of their marked salamanders could be found in the same tree year after year, although at different heights. They live primarily in fern mats growing in the duff, the decaying vegetable matter that collects in the junctions of large branches. Brown said that few marked wandering salamanders from the redwood canopy have been found on the ground, and most of those were found dead.

Brown noticed, when picking them up to mark them, that the salamanders were quick to leap out of his hands. Even a light tap on a branch or a shadow passing nearby was enough to make them jump from the redwood canopy. Given their location high above the forest floor, their nonchalant leaps into thin air were surprising.

"They jump, and before they've even finished toeing off, they've got their forelimbs splayed out, and they're ready to go," he said. "So, the jump and the parachute are very closely tied together. They assume the position immediately."

When Brown approached Dudley, who has studied such behavior in other animals, Dudley invited him to bring some of the salamanders into his wind tunnel to record their behavior. Using a high-speed video camera shooting at 400 frames per second, Brown and Sathe filmed the salamanders for as long as they floated on the column of air, sometimes up to 10 seconds.

They then analyzed the frames to determine the animals' midair posture and to deduce how they used their legs, bodies and tails to maneuver. They typically fell at a steep angle, only 5 degrees from vertical, but based on the distances between branches in the crowns of redwoods, this would usually be sufficient for them to reach a branch or trunk before they hit the ground. Parachuting reduced their free-fall speed by about 10%.
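
A back-of-envelope illustration of what a 5-degree-from-vertical descent buys: the horizontal drift is simply the fall height times the tangent of the glide angle. The fall height used below is an arbitrary example, not a measurement from the study:

```python
# Back-of-envelope geometry for the quoted 5-degree glide angle.
# Horizontal drift = fall_height * tan(angle from vertical).
# The 30 m fall height is an illustrative choice, not study data.
import math

glide_angle_deg = 5.0   # descent path, measured from vertical
fall_height_m = 30.0    # hypothetical drop within a redwood crown

horizontal_drift = fall_height_m * math.tan(math.radians(glide_angle_deg))
print(f"Horizontal drift over a {fall_height_m:.0f} m fall: {horizontal_drift:.1f} m")
# ~2.6 m -- potentially enough to intercept a nearby branch or the trunk
```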

Brown suspects that their aerial skills evolved to deal with falls, but have become part of their behavioral repertoire and perhaps their default method of descent. He and USF undergraduate Jessalyn Aretz found, for example, that walking downward was much harder for the salamander than walking on a horizontal branch or up a trunk.

"That suggests that when they're wandering, they're likely walking on flat surfaces, or they're walking upward. And when they run out of habitat, as the upper canopy becomes drier and drier, and there's nothing else for them up there, they could just drop back down to those better habitats," he said. "Why walk back down? You're already probably exhausted. You've burned all your energy, you're a little 5 gram salamander, and you've just climbed the tallest tree on Earth. You're not going to turn around and walk down -- you're going to take the gravity elevator."

Brown sees A. vagrans as another poster child for old growth forests, akin to the spotted owl, because it is found primarily in the crowns of the tallest and oldest redwoods, though also in Douglas fir and Sitka spruce.

"This salamander is a poster child for the part of the redwoods that was almost completely lost to logging -- the canopy world. It is not there in these new-growth forests created by logging companies," he said. "Perhaps it would help not just efforts in conserving redwoods, but restoring redwoods, so that we could actually get canopy ecosystems. Restoring redwoods to the point of fern mats, to the point of salamanders in the canopy -- that would be a new bar for conservation."

Read more at Science Daily

New research may explain unexpected effects of common painkillers

Non-steroidal anti-inflammatory drugs (NSAIDs) such as ibuprofen and aspirin are widely used to treat pain and inflammation. But even at similar doses, different NSAIDs can have unexpected and unexplained effects on many diseases, including heart disease and cancer.

Now, a new Yale-led study has uncovered a previously unknown process by which some NSAIDs affect the body. The finding may explain why similar NSAIDs produce a range of clinical outcomes and could inform how the drugs are used in the future.

The study was published May 23 in the journal Immunity.

Until now, the anti-inflammatory effects of NSAIDs were believed to arise solely through the inhibition of certain enzymes. But this mechanism does not account for many clinical outcomes that vary across the family of drugs. For example, some NSAIDs prevent heart disease while others cause it, some NSAIDs have been linked to decreased incidence of colorectal cancer, and various NSAIDs can have a wide range of effects on asthma.

Now, using cell cultures and mice, Yale researchers have uncovered a distinct mechanism by which a subset of NSAIDs reduce inflammation. And that mechanism may help explain some of these curious effects.

The research showed that only some NSAIDs -- including indomethacin, which is used to treat arthritis and gout, and ibuprofen -- also activate a protein called nuclear factor erythroid 2-related factor 2, or NRF2, which, among its many actions, triggers anti-inflammatory processes in the body.

"It's interesting and exciting that NSAIDs have a different mode of action than what was known previously," said Anna Eisenstein, an instructor at the Yale School of Medicine and lead author of the study. "And because people use NSAIDs so frequently, it's important we know what they're doing in the body."

The research team can't say for sure that NSAIDs' unexpected effects are due to NRF2 -- that will require more research. "But I think these findings are suggestive of that," Eisenstein said.

Eisenstein is now looking into some of the drugs' dermatological effects -- causing rashes, exacerbating hives, and worsening allergies -- and whether they are mediated by NRF2.

This discovery still needs to be confirmed in humans, the researchers note. But if it is, the findings could have impacts on how inflammation is treated and how NSAIDs are used.

For instance, several clinical trials are evaluating whether NRF2-activating drugs are effective in treating inflammatory diseases like Alzheimer's disease, asthma, and various cancers; this research could inform the potential and limitations of those drugs. Additionally, NSAIDs might be more effectively prescribed going forward, with NRF2-activating NSAIDs and non-NRF2-activating NSAIDs applied to the diseases they're most likely to treat.

The findings may also point to entirely new applications for NSAIDs, said Eisenstein.

NRF2 controls a large number of genes involved in a wide range of processes, including metabolism, immune response, and inflammation. And the protein has been implicated in aging, longevity, and cellular stress reduction.

Read more at Science Daily

May 22, 2022

Unraveling a perplexing explosive process that occurs throughout the universe

Mysterious fast radio bursts release as much energy in one second as the Sun pours out in a year and are among the most puzzling phenomena in the universe. Now researchers at Princeton University, the U.S. Department of Energy's (DOE) Princeton Plasma Physics Laboratory (PPPL) and the SLAC National Accelerator Laboratory have simulated and proposed a cost-effective experiment to produce and observe the early stages of this process in a way once thought to be impossible with existing technology.

The extraordinary bursts are produced in space by celestial bodies such as magnetars (magnet + star) -- neutron, or collapsed, stars enclosed in extreme magnetic fields. These fields are so strong that they turn the vacuum in space into an exotic plasma composed of matter and antimatter in the form of pairs of negatively charged electrons and positively charged positrons, according to quantum electrodynamic (QED) theory. Emissions from these pairs are believed to be responsible for the powerful fast radio bursts.

Pair plasma

The matter-antimatter plasma, called "pair plasma," stands in contrast to the usual plasma that fuels fusion reactions and makes up 99% of the visible universe. This plasma consists of matter only in the form of electrons and vastly higher-mass atomic nuclei, or ions. The electron-positron plasmas are composed of equal mass but oppositely charged particles that are subject to annihilation and creation. Such plasmas can exhibit quite different collective behavior.

"Our laboratory simulation is a small-scale analog of a magnetar environment," said physicist Kenan Qu of the Princeton Department of Astrophysical Sciences. "This allows us to analyze QED pair plasmas," said Qu, first author of a study showcased in Physics of Plasmas as a Scilight, or science highlight, and also first author of a paper in Physical Review Letters that the present paper expands on.

"Rather than simulating a strong magnetic field, we use a strong laser," Qu said. "It converts energy into pair plasma through what are called QED cascades. The pair plasma then shifts the laser pulse to a higher frequency," he said. "The exciting result demonstrates the prospects for creating and observing QED pair plasma in laboratories and enabling experiments to verify theories about fast radio bursts."

Laboratory-produced pair plasmas have previously been created, noted physicist Nat Fisch, a professor of astrophysical sciences at Princeton University and associate director for academic affairs at PPPL, who serves as principal investigator for this research. "And we think we know what laws govern their collective behavior," Fisch said. "But until we actually produce a pair plasma in the laboratory that exhibits collective phenomena that we can probe, we cannot be absolutely sure of that.

Collective behavior

"The problem is that collective behavior in pair plasmas is notoriously hard to observe," he added. "Thus, a major step for us was to think of this as a joint production-observation problem, recognizing that a great method of observation relaxes the conditions on what must be produced and in turn leads us to a more practicable user facility."

The unique simulation the paper proposes creates high-density QED pair plasma by colliding the laser with a dense electron beam travelling near the speed of light. This approach is cost-efficient when compared with the commonly proposed method of colliding ultra-strong lasers to produce the QED cascades. The approach also slows the movement of plasma particles, thereby allowing stronger collective effects.

"No lasers are strong enough to achieve this today and building them could cost billions of dollars," Qu said. "Our approach strongly supports using an electron beam accelerator and a moderately strong laser to achieve QED pair plasma. The implication of our study is that supporting this approach could save a lot of money."

Currently underway are preparations for testing the simulation with a new round of laser and electron experiments at SLAC. "In a sense what we are doing here is the starting point of the cascade that produces radio bursts," said Sebastian Meuren, a SLAC researcher and former postdoctoral visiting fellow at Princeton University who coauthored the two papers with Qu and Fisch.

Evolving experiment


"If we could observe something like a radio burst in the laboratory that would be extremely exciting," Meuren said. "But the first part is just to observe the scattering of the electron beams and once we do that we'll improve the laser intensity to get to higher densities to actually see the electron-positron pairs. The idea is that our experiment will evolve over the next two years or so."

The overall goal of this research is understanding how bodies like magnetars create pair plasma and what new physics associated with fast radio bursts are brought about, Qu said. "These are the central questions we are interested in."

Read more at Science Daily