Mar 19, 2022

Astronomers closer to unlocking origin of mysterious fast radio bursts

Nearly 15 years after the discovery of fast radio bursts (FRBs), the origin of the millisecond-long, deep-space cosmic explosions remains a mystery.

That may soon change, thanks to the work of an international team of scientists -- including UNLV astrophysicist Bing Zhang -- which tracked hundreds of the bursts from five different sources and found clues in FRB polarization patterns that may reveal their origin. The team's findings were reported in the March 17 issue of the journal Science.

FRBs produce electromagnetic radio waves, which are essentially oscillations of electric and magnetic fields in space and time. The direction of the oscillating electric field is described as the direction of polarization. By analyzing the polarization of FRBs observed from various sources as a function of frequency, scientists revealed similarities in repeating FRBs that point to a complex environment near the source of the bursts.

"This is a major step towards understanding the physical origin of FRBs," said Zhang, a UNLV distinguished professor of astrophysics who coauthored the paper and contributed to the theoretical interpretation of the phenomena.

To make the connection between the bursts, an international research team, led by Yi Feng and Di Li of the National Astronomical Observatories of the Chinese Academy of Sciences, analyzed the polarization properties of five repeating FRB sources using the massive Five-hundred-meter Aperture Spherical radio Telescope (FAST) and the Robert C. Byrd Green Bank Telescope (GBT). Since FRBs were first discovered in 2007, astronomers worldwide have turned to powerful radio telescopes like FAST and GBT to trace the bursts and to look for clues on where they come from and how they're produced.

Though still considered mysterious, the source of most FRBs is widely believed to be magnetars: incredibly dense, city-sized neutron stars that possess the strongest magnetic fields in the universe. Emission from magnetars typically has nearly 100% polarization. Conversely, in many astrophysical sources that involve hot, randomized plasmas, such as the Sun and other stars, the observed emission is unpolarized because the oscillating electric fields have random orientations.

That's where the cosmic detective work kicks in.

In a study the team originally published last year in Nature, FAST detected 1,652 pulses from the active repeater FRB 121102. Even though bursts from this source had been found to be highly polarized by other telescopes observing at higher frequencies -- consistent with magnetars -- none of the bursts detected by FAST in its frequency band were polarized, despite FAST being the largest single-dish radio telescope in the world.

"We were very puzzled by the lack of polarization," said Feng, first author on the newly released Science paper. "Later, when we systematically looked into other repeating FRBs with other telescopes in different frequency bands -- particularly those higher than FAST's -- a unified picture emerged."

According to Zhang, the unified picture is that every repeating FRB source is surrounded by a highly magnetized dense plasma. This plasma rotates the polarization angle of the radio waves by an amount that varies with frequency, and the received waves arrive along multiple paths because they are scattered by the plasma.

When the team accounted for just a single adjustable parameter, Zhang says, the multiple observations revealed a systematic frequency evolution, namely depolarization toward lower frequencies.

"Such a simple explanation, with only one free parameter, could represent a major step toward a physical understanding of the origin of repeating FRBs," he says.
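The one-parameter picture described above can be sketched numerically. In a standard multipath (Burn-style) depolarization model, scattering in the magnetized plasma spreads the Faraday rotation measure by an amount sigma_RM, and the fractional linear polarization falls off toward longer wavelengths as p = p0·exp(−2·σ_RM²·λ⁴). This is a minimal illustration of that behavior, assuming that model; the σ_RM value below is illustrative, not taken from the paper:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def frac_pol(freq_hz, sigma_rm, p0=1.0):
    """Fractional linear polarization under multipath RM scattering
    (Burn-style depolarization): p = p0 * exp(-2 * sigma_RM^2 * lambda^4).
    sigma_rm is in rad/m^2; freq_hz in Hz."""
    lam = C / freq_hz  # wavelength in metres
    return p0 * math.exp(-2.0 * sigma_rm**2 * lam**4)

# Illustrative RM scatter (rad/m^2); the best-fit values per source
# are in the Science paper and are not reproduced here.
sigma_rm = 30.0

for ghz in (1.25, 2.5, 5.0):  # roughly FAST's L-band up to higher bands
    p = frac_pol(ghz * 1e9, sigma_rm)
    print(f"{ghz:.2f} GHz: polarization fraction = {p:.3f}")
```

With a single σ_RM, the same source comes out almost fully depolarized near FAST's band but strongly polarized a few GHz higher, matching the frequency trend the team reports.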

Di Li, a corresponding author of the study, agrees that the analysis could represent a corner piece in completing the cosmic puzzle of FRBs. "For example, the extremely active FRBs could be a distinct population," he says. "Alternatively, we're starting to see the evolutionary trend in FRBs, with more active sources in more complex environments being younger explosions."

Read more at Science Daily

Scientists identify neurons in the brain that drive competition and social behavior within groups

New research in mice has identified neurons in the brain that influence competitive interactions between individuals and that play a critical role in shaping the social behavior of groups. Published in Nature by a team led by investigators at Massachusetts General Hospital (MGH), the findings will be useful not only for scientists interested in human interactions but also for those who study neurocognitive conditions such as autism spectrum disorder and schizophrenia that are characterized by altered social behavior.

"Social interactions in humans and animals occur most commonly in large groups, and these group interactions play a prominent role in sociology, ecology, psychology, economics and political science," says lead author S. William Li, an MD/PhD student at MGH. "What processes in the brain drive the complex dynamic behavior of social groups remains poorly understood, in part because most neuroscience research thus far has focused on the behaviors of pairs of individuals interacting alone. Here, we were able to study the behavior of groups by developing a paradigm in which large cohorts of mice were wirelessly tracked across thousands of unique competitive group interactions."

Li and his colleagues found that the animals' social ranking in the group was closely linked to the results of competition, and by examining recordings from neurons in the brains of mice in real time, the team discovered that neurons in the anterior cingulate region of the brain store this social ranking information to inform upcoming decisions.

"Collectively, these neurons held remarkably detailed representations of the group's behavior and their dynamics as the animals competed together for food, in addition to information about the resources available and the outcome of their past interactions," explains senior author Ziv M. Williams, MD, a neurosurgical oncologist at MGH. "Together, these neurons could even predict the animal's own future success well before competition onset, meaning that they likely drove the animals' competitive behavior based on whom they interacted with."

Manipulating the activity of these neurons, on the other hand, could artificially increase or decrease an animal's competitive effort and therefore control their ability to successfully compete against others. "In other words, we could tune up and down the animal's competitive drive and do so selectively without affecting other aspects of their behavior such as simple speed or motivation," says Williams.

The findings indicate that competitive success is not simply a product of an animal's physical fitness or strength, but rather, is strongly influenced by signals in the brain that affect competitive drive. "These unique neurons are able to integrate information about the individual's environment, social group settings, and reward resources to calculate how to best behave under specific conditions," says Li.

In addition to providing insights into group behavior and competition in different sociologic or economic situations and other settings, identifying the neurons that control these characteristics may help scientists design experiments to better understand scenarios in which the brain is wired differently. "Many conditions manifest in aberrant social behavior that spans many dimensions, including one's ability to understand social norms and to display actions that may fit the dynamical structure of social groups," says Williams. "Developing an understanding of group behavior and competition holds relevance to these neurocognitive disorders, but until now, how this happens in the brain has largely remained unexplored."

Read more at Science Daily

Mar 18, 2022

Gravitational wave mirror experiments can evolve into quantum entities

Quantum physical experiments exploring the motion of macroscopic or heavy bodies under gravitational forces require protection from any environmental noise and highly efficient sensing.

An ideal system is a highly reflecting mirror whose motion is sensed by monochromatic light, which is photoelectrically detected with high quantum efficiency. A quantum optomechanical experiment is achieved if the quantum uncertainties of light and mirror motion influence each other, ultimately leading to the observation of entanglement between optical and motional degrees of freedom.

In AVS Quantum Science, co-published by AIP Publishing and AVS, researchers from Hamburg University in Germany review research on gravitational wave detectors as a historical example of quantum technologies and examine the fundamental research on the connection between quantum physics and gravity. Gravitational wave astronomy requires unprecedented sensitivities for measuring the tiny space-time oscillations at audio-band frequencies and below.

The team examined recent gravitational wave experiments, showing it is possible to shield large objects, such as a 40-kilogram quartz glass mirror reflecting 200 kilowatts of laser light, from strong influences from the thermal and seismic environment to allow them to evolve as one quantum object.

"The mirror perceives only the light, and the light only the mirror. The environment is basically not there for the two of them," said author Roman Schnabel. "Their joint evolution is described by the Schrödinger equation."

This decoupling from the environment, which is central to all quantum technologies, including the quantum computer, enables measurement sensitivities that would otherwise be impossible.

The researchers' review intersects with Nobel laureate Roger Penrose's work on exploring the quantum behavior of massive objects. Penrose sought to better understand the connection between quantum physics and gravity, which remains an open question.

Penrose thought of an experiment in which light would be coupled to a mechanical device via radiation pressure. In their review, the researchers show that while these very fundamental questions in physics remain unresolved, the highly shielded coupling of massive devices that reflect laser light is beginning to improve sensor technology.

Going forward, researchers will likely explore further decoupling gravitational wave detectors from influences of the environment.

Read more at Science Daily

Wildfires devastate the land they burn, and they are also warming the planet

The 2021 wildfire season broke records globally, leaving land charred from California to Siberia. The risk of fire is growing, and a report published by the UN last month warned that wildfires are on track to increase 50% by 2050. These fires destroy homes, plant life, and animals as they burn, but the risk doesn't stop there. In the journal One Earth on March 18, researchers detail how the brown carbon released by burning biomass in the northern hemisphere is accelerating warming in the Arctic and warn that this could lead to even more wildfires in the future.

Blazing wildfires are accompanied by vast plumes of brown smoke, made up of particles of brown carbon suspended in the air. This smoke poses health hazards and can even block out the summer sun. Researchers suspected that it might also be contributing to global warming.

In 2017, the Chinese icebreaker vessel Xue Long headed for the Arctic Ocean to examine which aerosols were floating around in the pristine Arctic air and identify their sources. The scientists on the vessel were particularly curious about how brown carbon released by wildfires was affecting the climate and how its warming effects compared to those of denser black carbon from high-temperature fossil fuel burning, the second most powerful warming agent after carbon dioxide.

Their results showed that brown carbon was contributing to warming more than previously thought. "To our surprise, observational analyses and numerical simulations show that the warming effect of brown carbon aerosols over the Arctic is up to about 30% of that of black carbon," says senior author Pingqing Fu, an atmospheric chemist at Tianjin University.

In the last 50 years, the Arctic has been warming at a rate three times that of the rest of the planet, and it appears that wildfires are helping to drive this discrepancy. The researchers found that brown carbon from burning biomass was responsible for at least twice as much warming as brown carbon from fossil fuel burning.

Like black carbon and carbon dioxide, brown carbon warms the planet by absorbing solar radiation. Since warming temperatures have been linked to the rise in wildfires in recent years, this leads to a positive feedback loop. "The increase in brown carbon aerosols will lead to global or regional warming, which increases the probability and frequency of wildfires," says Fu. "Increased wildfire events will emit more brown carbon aerosols, further heating the earth, thus making wildfires more frequent."

For future research, Fu and his colleagues plan to investigate how wildfires are changing aerosol composition from sources other than brown carbon. Specifically, they are interested in the effect of fires on bioaerosols, which originate from plants and animals and can contain living organisms, including pathogens. In the meantime, Fu urges that attention be focused on wildfire mitigation. "Our findings highlight just how important it is to control wildfires," he says.

Read more at Science Daily

Extended napping in seniors may signal dementia

Daytime napping among older people is a normal part of aging -- but it may also foreshadow Alzheimer's disease and other dementias. And once dementia or its usual precursor, mild cognitive impairment, is diagnosed, the frequency and/or duration of napping accelerates rapidly, according to a new study.

The study, led by UC San Francisco and Harvard Medical School together with Brigham and Women's Hospital, its teaching affiliate, departs from the theory that daytime napping in older people serves merely to compensate for poor nighttime sleep. Instead, it points to work by other UCSF researchers suggesting that dementia may affect the wake-promoting neurons in key areas of the brain, the researchers state in their paper publishing March 17, 2022, in Alzheimer's and Dementia: The Journal of the Alzheimer's Association.

"We found the association between excessive daytime napping and dementia remained after adjusting for nighttime quantity and quality of sleep," said co-senior author Yue Leng, MD, PhD, of the UCSF Department of Psychiatry and Behavioral Sciences.

"This suggested that the role of daytime napping is important itself and is independent of nighttime sleep," said Leng, who partnered with Kun Hu, PhD, of Harvard Medical School, in senior-authoring the paper.

Watch-Like Devices, Annual Evaluations Used to Measure Naps, Cognition

In the study, the researchers tracked data from 1,401 seniors, who had been followed for up to 14 years by the Rush Memory and Aging Project at the Rush Alzheimer's Disease Center in Chicago. The participants, whose average age was 81 and of whom approximately three-quarters were female, wore a watch-like device that tracked mobility. Each prolonged period of non-activity from 9 a.m. to 7 p.m. was interpreted as a nap.

Each year, the device was worn continuously for up to 14 days, and once a year each participant underwent a battery of neuropsychological tests to evaluate cognition. At the start of the study, 75.7% of participants had no cognitive impairment, while 19.5% had mild cognitive impairment and 4.1% had Alzheimer's disease.

For participants who did not develop cognitive impairment, daily daytime napping increased by an average of 11 minutes per year. The rate of increase doubled to 24 minutes per year after a diagnosis of mild cognitive impairment, and nearly tripled to 68 minutes per year after a diagnosis of Alzheimer's disease.

When the researchers looked at the 24% of participants who had normal cognition at the start of the study but developed Alzheimer's six years later, and compared them with those whose cognition remained stable, they found differences in napping habits. Participants who napped more than an hour a day had a 40% higher risk of developing Alzheimer's than those who napped less than an hour a day; and participants who napped at least once a day had a 40% higher risk of developing Alzheimer's than those who napped less than once a day.

The research confirms the results of a 2019 study, of which Leng was the first author, that found older men who napped two hours a day had higher odds of developing cognitive impairment than those who napped less than 30 minutes a day. The current study builds on these findings by evaluating both daytime napping and cognition each year, hence addressing directionality, Leng notes.

Loss of Wake-Promoting Neurons May Account for Longer Naps

According to the researchers, increase in napping may be explained by a further 2019 study, by other UCSF researchers, comparing the postmortem brains of people with Alzheimer's disease to those without cognitive impairment. Those with Alzheimer's disease were found to have fewer wake-promoting neurons in three brain regions. These neuronal changes appear to be linked to tau tangles -- a hallmark of Alzheimer's, characterized by increased activity of enzymes causing the protein to misfold and clump.

"It is plausible that our observed associations of excessive daytime napping at baseline, and increased risk for Alzheimer's disease during follow-up, may reflect the effect of Alzheimer's disease pathology at preclinical stages," the authors noted.

The study shows for the first time that napping and Alzheimer's disease "seem to be driving each other's changes in a bi-directional way," said Leng, who is also affiliated with the UCSF Weill Institute for Neurosciences. "I don't think we have enough evidence to draw conclusions about a causal relationship, that it's the napping itself that caused cognitive aging, but excessive daytime napping might be a signal of accelerated aging or cognitive aging process," she said.

Read more at Science Daily

Monkeys play to reduce group tension

New research has discovered that adult howler monkeys use play to avoid conflict and reduce group tension, with levels of play increasing when they are faced with scarce resources.

The study, carried out by a team of researchers from Spain, Brazil and the UK, and published in the journal Animal Behaviour, focuses on the activity of two subspecies of howler monkey: the Mexican howler (Alouatta palliata mexicana) and the golden-mantled howler (Alouatta palliata palliata).

The researchers examined how play varies with age, and they measured the amount of time adults play with other adults and with juvenile monkeys within their groups.

Howler monkey play involves individuals hanging from their tails and making facial expressions and signals, such as shaking their heads. However, play is an energy-costly activity for howler monkeys, who generally have an inactive lifestyle due to their mainly leaf-based diet.

By studying seven different groups of howler monkeys in the rainforests of Mexico and Costa Rica, the researchers found that the amount of adult play is linked to the number of potential playmates, increasing in line with the size of the group. Adults spend more time playing with other adults, rather than juveniles, and adult females spend more time engaged in play than adult males.

Crucially, the researchers found that play amongst adults increases in line with time spent foraging on fruit. Howler monkeys typically eat leaves, and fruit is a highly prized resource that generates competition amongst the monkeys.

Howler monkeys do not have a fixed social hierarchy within their groups to navigate competition and conflict, and they do not engage in collective grooming, which is used by some primates for group cohesiveness and tension reduction. Instead, the study authors believe play has a key role in helping howler monkeys regulate relationships within their social group and avoid conflict.

Co-author Dr Jacob Dunn, Associate Professor in Evolutionary Biology at Anglia Ruskin University (ARU), said: "Despite its appearance and our own perception of what play means, play is not always associated with frivolity or education. Instead, we think it fulfils an important function in howler monkey society by reducing tension when there is competition over scarce resources.

"We found that levels of play are at their highest when howler monkeys are feeding on fruit -- which is a valuable and defendable resource -- and female adults play more than males. This is striking, as females would be more vulnerable to food competition than males. Howler monkeys are a particularly energy-conservative species, and we would have assumed females would have played less, as they are also constrained by the energy requirements of reproduction."

Lead author Dr Norberto Asensio, of University of the Basque Country, said: "One theory for the positive effect of fruit consumption on play is that a fruit-based diet simply provides the howler monkeys with more energy compared to their typical diet of leaves."

Read more at Science Daily

Mar 17, 2022

Moon's orbit proposed as a gravitational wave detector

Researchers from the UAB, IFAE and University College London propose using the variations in distance between the Earth and the Moon, which can be measured with a precision of less than a centimeter, as a new gravitational wave detector within a frequency range that current devices cannot detect. The research, which could pave the way for the detection of signals from the early universe, was published recently in Physical Review Letters.

Gravitational waves, predicted by Albert Einstein at the start of the 20th century and detected for the first time in 2015, are the new messengers of the most violent processes taking place in the universe. The gravitational wave detectors scan different frequency ranges, similar to moving a dial when tuning into a radio station. Nevertheless, there are frequencies that are impossible to cover with current devices and which may harbour signals that are fundamental to understanding the cosmos. One particular example can be seen in microhertz waves, which could have been produced at the dawn of our universe, and are practically invisible to even the most advanced technology available today.

In an article recently published in the journal Physical Review Letters, researchers Diego Blas from the Department of Physics at the Universitat Autònoma de Barcelona (UAB) and the Institut de Física d'Altes Energies (IFAE), and Alexander Jenkins from University College London (UCL), point out that a natural gravitational wave detector exists in our immediate environment: the Earth-Moon system. The gravitational waves constantly hitting this system generate tiny deviations in the Moon's orbit. Although these deviations are minute, Blas and Jenkins take advantage of the fact that the Moon's exact position is known with an error of at most one centimeter, thanks to lasers sent from different observatories that are continuously reflected by mirrors left on the Moon's surface by the Apollo missions and others. This incredible precision -- about one part in a billion -- is what may allow a small disturbance caused by ancient gravitational waves to be detected. The Moon's orbit lasts approximately 28 days, which makes the system particularly sensitive at microhertz frequencies, the range the researchers are interested in.

Similarly, they propose using other binary systems in the universe as gravitational wave detectors. This is the case for binary pulsar systems distributed throughout the galaxy, in which the pulsar's radiation beam allows the orbits of these stars to be determined with incredible precision (about one part in a million). Given that these orbits last approximately 20 days, passing gravitational waves in the microhertz frequency range affect them particularly strongly. Blas and Jenkins concluded that these systems could also be potential detectors of these types of gravitational waves.
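The frequency matching above is easy to check: an orbit of period P responds best to gravitational waves near f = 1/P. A quick back-of-the-envelope calculation, using the round-number periods quoted in the article:

```python
DAY = 86_400.0  # seconds per day

def orbital_freq_uhz(period_days):
    """Orbital frequency in microhertz for a given orbital period in days."""
    return 1e6 / (period_days * DAY)

moon = orbital_freq_uhz(28.0)  # Earth-Moon system
psr = orbital_freq_uhz(20.0)   # typical binary-pulsar orbit from the article

print(f"Moon orbit:    {moon:.2f} microhertz")
print(f"Pulsar binary: {psr:.2f} microhertz")
```

Both periods land squarely in the microhertz band that, as the article notes, current gravitational wave detectors cannot cover.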

With these "natural detectors" in the microhertz frequency range, Blas and Jenkins were able to propose a new way of studying gravitational waves emitted by the distant universe -- specifically, those produced by possible phase transitions during highly energetic epochs of the early universe, a feature common to many models.

Read more at Science Daily

The oxidation of volcanoes -- a magma opus

A new, Yale-led study unlocks the science behind a key ingredient -- namely oxygen -- in some of the world's most violent volcanoes.

The research offers a new model for understanding the oxidation state of arc magmas, the lavas that form some volcanoes, such as the one that erupted dramatically in Tonga earlier this year.

The plume from Tonga's underwater volcanic eruption on Jan. 15 rose 36 miles into the air. Ash from the volcano reached the mesosphere, Earth's third layer of atmosphere.

"These eruptions occur in volcanic arcs, such as the Aleutian island chain, which are well known in the circum-Pacific region and produce the world's most explosive volcanic eruptions," said Jay Ague, the Henry Barnard Davis Memorial Professor of Earth & Planetary Sciences at Yale.

Ague is first author of the new study, published in the journal Nature Geoscience. Ague is also curator-in-charge of mineralogy and meteoritics for the Yale Peabody Museum of Natural History.

Scientists have long known that arc magmas have a higher oxidation state than rocks in most of Earth's mantle, the thick rocky layer between the crust and the core. This is surprising, they say, because arc magmas form in the mantle. There has been no consensus on the origins of the oxidizing signature.

Ague and his colleagues say the process begins with a layer of sediment that covers tectonic plates beneath the ocean floor. Tectonic plates are large slabs of rock that jockey for position in the Earth's crust and upper mantle.

The sediment covering these ocean plates is largely made up of weathered materials shed from continents or produced as a result of seafloor hydrothermal vent activity. Giant tube worms and other exotic sea creatures commonly thrive near these vents. But regardless of origin, the sediments covering oceanic plates are often highly oxidized.

Tectonic plates are constantly in motion, moving at about the rate that fingernails grow. Oceanic plates are generated at mid-ocean ridges and sink sharply into Earth's interior -- in a process called subduction.

That's where things get interesting for arc volcanism, Ague said.

When an ocean plate subducts, Ague explained, it heats up, is compressed, and begins to dehydrate. This metamorphism produces hot, water-rich fluids that rise toward the surface.

As these materials move upward through the oxidized sediment layer on top of slabs, the fluids themselves become oxidized -- setting the stage for an arc magma.

"As the fluids continue to rise they leave the slab behind and enter Earth's mantle," Ague said. "There, the fluids drive mantle melting, producing oxidized magmas that ascend and can ultimately erupt as lava from volcanoes."

Beyond the dramatic effects of volcanic eruptions, the oxidized character of arc magmas is also geologically significant, Ague said. Oxidation is critical for making certain kinds of ore deposits, particularly copper and gold, such as those found in western South America.

Also, the injection of highly-oxidized, sulfur-bearing gases into the atmosphere after an eruption can lead to transient global cooling of the troposphere, the lowest level of Earth's atmosphere.

"This was the case with the 1991 eruption of Mount Pinatubo in the Philippines," Ague said. "It also occurred in a number of famous historical cases, such as Mount Tambora in Indonesia in 1815. That was the most powerful volcanic eruption in human history and led to the so-called 'Year Without a Summer' in 1816."

Read more at Science Daily

Effects of ancient carbon releases suggest possible scenarios for future climate

A massive release of greenhouse gases, likely triggered by volcanic activity, caused a period of extreme global warming known as the Paleocene-Eocene Thermal Maximum (PETM) about 56 million years ago. A new study now confirms that the PETM was preceded by a smaller episode of warming and ocean acidification caused by a shorter burst of carbon emissions.

The new findings, published March 16 in Science Advances, indicate that the amount of carbon released into the atmosphere during this precursor event was about the same as the current cumulative carbon emissions from the burning of fossil fuels and other human activities. As a result, the short-lived precursor event represents what might happen if current emissions can be shut down quickly, while the much more extreme global warming of the PETM shows the consequences of continuing to release carbon into the atmosphere at the current rate.

"It was a short-lived burp of carbon equivalent to what we've already released from anthropogenic emissions," said coauthor James Zachos, professor of Earth and planetary sciences and Ida Benson Lynn Chair of Ocean Health at UC Santa Cruz. "If we turned off emissions today, that carbon would eventually get mixed into the deep sea and its signal would disappear, because the deep-sea reservoir is so huge."

This process would take hundreds of years -- a long time by human standards, but short compared to the tens of thousands of years it took for Earth's climate system to recover from the more extreme PETM.

The new findings are based on an analysis of marine sediments that were deposited in shallow waters along the U.S. Atlantic coast and are now part of the Atlantic Coastal Plain. At the time of the PETM, sea levels were higher, and much of Maryland, Delaware, and New Jersey were under water. The U.S. Geological Survey (USGS) has drilled sediment cores from this region which the researchers used for the study.

The PETM is marked in marine sediments by a major shift in carbon isotope composition and other evidence of dramatic changes in ocean chemistry as a result of the ocean absorbing large amounts of carbon dioxide from the atmosphere. The marine sediments contain the microscopic shells of tiny sea creatures called foraminifera that lived in the surface waters of the ocean. The chemical composition of these shells records the environmental conditions in which they formed and reveals evidence of warmer surface water temperatures and ocean acidification.

First author Tali Babila began the study as a postdoctoral fellow working with Zachos at UC Santa Cruz and is now at the University of Southampton, U.K. Novel analytical methods developed at Southampton enabled the researchers to analyze the boron isotope composition of individual foraminifera to reconstruct a detailed record of ocean acidification. This was part of a suite of geochemical analyses they used to reconstruct environmental changes during the precursor event and the main PETM.

"Previously, thousands of foraminifera fossil shells were needed for boron isotope measurement. Now we are able to analyze a single shell that's only the size of a grain of sand," Babila said.

Evidence of a precursor warming event had been identified previously in sediments from the continental section in the Bighorn Basin in Wyoming and a few other sites. Whether it was a global signal remained unclear, however, as it was absent from deep-sea sediment cores. Zachos said this makes sense because sedimentation rates in the deep ocean are slow, and the signal from a short-lived event would be lost due to mixing of sediments by bottom-dwelling marine life.

"The best hope for seeing the signal would be in shallow marine basins where sedimentation rates are higher," he said. "The problem there is that deposition is episodic and erosion is more likely. So there's not a high likelihood of capturing it."

The USGS and others have drilled numerous sediment cores (or sections) along the Atlantic Coastal Plain. The researchers found that the PETM is present in all of those sections, and several also capture the precursor event. Two sections from Maryland (at South Dover Bridge and Cambridge-Dover Airport) are the focus of the new study.

"Here we have the full signal, and a couple of other locations capture part of it. We believe it's the same event they found in the Bighorn Basin," Zachos said.

Based on their analyses, the team concluded that the precursor signal in the Maryland sections represents a global event that probably lasted for a few centuries, or possibly several millennia at most.

The two carbon pulses -- the short-lived precursor and the much larger and more prolonged carbon emissions that drove the PETM -- led to profoundly different mechanisms and time scales for the recovery of the Earth's carbon cycle and climate system. The carbon absorbed by the surface waters during the precursor event got mixed into the deep ocean within a thousand years or so. The carbon emissions during the PETM, however, exceeded the buffering capacity of the ocean, and removal of the excess carbon depended on much slower processes such as the weathering of silicate rocks over tens of thousands of years.

Zachos noted that there are important differences between Earth's climate system today and during the Paleocene -- notably the presence of polar ice sheets today, which increase the sensitivity of the climate to greenhouse warming.

Read more at Science Daily

Cheaper, more efficient ways to capture carbon

University of Colorado Boulder researchers have developed a new tool that could lead to more efficient and cheaper technologies for capturing heat-trapping gases from the atmosphere and converting them into beneficial substances, like fuel or building materials. Such carbon capture technology may be needed at scale in order to limit global warming this century to 2.7 degrees F (1.5 Celsius) above pre-industrial temperatures and fend off catastrophic impacts of global climate change.

The scientists describe their technique in a paper published this month in the journal iScience.

The method predicts how strong the bond will be between carbon dioxide and the molecule that traps it, known as a binder. This electrochemical diagnosis can be easily applied to any molecule that is chemically inclined to bind with carbon dioxide, allowing researchers to identify suitable molecular candidates with which to capture carbon dioxide from everyday air.

"The Holy Grail, if you will, is to try to inch toward being able to use binders that can grab carbon dioxide from the air [around us], not just concentrated sources," said Oana Luca, co-author of the new study and assistant professor of chemistry. "Determining the strength of binders allows us to figure out whether the binding will be strong or weak, and identify candidates for future study for direct carbon capture from dilute sources."

The goal of carbon capture and storage technology is to remove carbon dioxide from the atmosphere and store it safely for hundreds or thousands of years. But while it has been in use in the U.S. since the 1970s, it currently captures and stores a mere 0.1% of global carbon emissions annually. To help meet carbon emissions goals laid out by the IPCC, carbon capture and storage would have to rapidly increase in scale by 2050.

Current industrial facilities around the world rely on capturing carbon dioxide from a concentrated source, such as emissions from power plants. While these methods can bind a lot of carbon dioxide quickly and efficiently using large amounts of certain chemical binders, they are also extraordinarily energy intensive.

This method also is quite expensive at scale to take carbon dioxide and turn it into something else useful, such as carbonates, an ingredient in cement, or formaldehyde or methanol, which can be used as a fuel, according to Luca, fellow-elect of the Renewable and Sustainable Energy Institute (RASEI).

Using electrochemical methods instead, such as those detailed in the new CU Boulder-led study, would free carbon capture facilities from being tied to concentrated sources, allowing them to exist almost anywhere.

Being able to easily estimate the strength of chemical bonds also enables researchers to screen for which binders will be best suited -- and offer a cheaper alternative to traditional methods -- for capturing and converting carbon into materials or fuel, according to Haley Petersen, co-lead author on the study and a graduate student in chemistry.

Creating chemical bonds

The science of chemistry is based on a few basic facts: One, molecules are made of atoms; and two, atoms are orbited by electrons. When atoms bond with other atoms, they form molecules. And when atoms share electrons with other atoms, they form what is called a covalent bond.

Using electricity, the researchers can activate these bonds by using an electrode to deliver an electron to a molecule. When they did this to an imidazolium molecule in this study, a hydrogen atom was removed, leaving an open site on a carbon atom where another molecule -- such as carbon dioxide -- can bond.

However, carbon dioxide (CO2) is the kind of molecule that doesn't typically like to create new bonds.

"It's generally unreactive, and in order to react with it, you also have to bend it," said Luca. "So we're in a chemical space that hasn't really been probed before, for CO2 capture."

The researchers' method examines how well a whole family of carbenes (a specific type of molecule containing a neutral carbon atom), which they can generate electrochemically, can bind CO2.

Read more at Science Daily

A gene could prevent Parkinson's disease

Parkinson's disease is a neurodegenerative disorder characterized by the destruction of a specific population of neurons: the dopaminergic neurons. The degeneration of these neurons prevents the transmission of signals controlling specific muscle movements and leads to tremors, involuntary muscle contractions or balance problems characteristic of this pathology. A team from the University of Geneva (UNIGE) has investigated the destruction of these dopaminergic neurons using the fruit fly as a study model. The scientists identified a key protein in flies, and also in mice, which plays a protective role against this disease and could be a new therapeutic target. This work can be read in the journal Nature Communications.

Apart from rare forms involving a single gene, most Parkinson's cases result from an interaction between multiple genetic and environmental risk factors. However, a common element in the onset of the disease is a dysfunction of mitochondria in dopaminergic neurons. These small factories within cells are responsible for energy production, but also for activating the cell's self-destruct mechanisms when damaged.

The laboratory of Emi Nagoshi, Professor in the Department of Genetics and Evolution at the UNIGE Faculty of Science, uses the fruit fly, or Drosophila, to study the mechanisms of dopaminergic neuron degeneration. Her group is particularly interested in the Fer2 gene, whose human homolog encodes a protein that controls the expression of many other genes and whose mutation might lead to Parkinson's disease via mechanisms that are not yet well understood.

In a previous study, this scientific team demonstrated that a mutation in the Fer2 gene causes Parkinson's-like deficiencies in flies, including a delay in the initiation of movement. They had also observed defects in the shape of the mitochondria of dopaminergic neurons, similar to those observed in Parkinson's patients.

Protecting neurons

Since the absence of Fer2 causes Parkinson's disease-like conditions, the researchers tested whether -- on the contrary -- an increase in the amount of Fer2 in the cells could have a protective effect. When flies are exposed to free radicals, their cells undergo oxidative stress which leads to the degradation of dopaminergic neurons. However, the scientists were able to observe that oxidative stress no longer has any deleterious effect on the flies if they overproduce Fer2, confirming the hypothesis of its protective role.

"We have also identified the genes regulated by Fer2 and these are mainly involved in mitochondrial functions. This key protein therefore seems to play a crucial role against the degeneration of dopaminergic neurons in flies by controlling not only the structure of mitochondria but also their functions," explains Federico Miozzo, researcher in the Department of Genetics and Evolution and first author of the study.

Read more at Science Daily

Mar 16, 2022

Look! Up in the sky! Is it a planet? Nope, just a star

The first worlds beyond our solar system were discovered three decades ago. Since then, close to 5,000 exoplanets have been confirmed in our galaxy. Astronomers have detected another 5,000 planetary candidates -- objects that might be planets but have yet to be confirmed. Now, the list of planets has shrunk by at least three.

In a study appearing in the Astronomical Journal, MIT astronomers report that three, and potentially four, planets that were originally discovered by NASA's Kepler Space Telescope are in fact misclassified. Instead, these suspected planets are likely small stars.

The team used updated measurements of planet-hosting stars to double-check the size of the planets, and identified three that are simply too big to be planets. With new and better estimates of stellar properties, the researchers found that the three objects, which are known as Kepler-854b, Kepler-840b, and Kepler-699b, are now estimated to be between two and four times the size of Jupiter.

"Most exoplanets are Jupiter-sized or much smaller. Twice [the size of] Jupiter is already suspicious. Larger than that cannot be a planet, which is what we found," says the study's first author Prajwal Niraula, a graduate student in MIT's Department of Earth, Atmospheric, and Planetary Sciences.

A fourth planet, Kepler-747b, is about 1.8 times Jupiter's size, which is comparable to the very largest confirmed planets. But Kepler-747b is relatively far from its star, and the amount of light it receives is too small to sustain a planet of its size. Kepler-747b's planetary status, the team concludes, is suspect but not entirely implausible.

"Overall, this study makes the current list of planets more complete," says study author Avi Shporer, a research scientist at MIT's Kavli Institute for Astrophysics and Space Research. "People rely on this list to study the population of planets as a whole. If you use a sample with a few interlopers, your results may be inaccurate. So, it's important that the list of planets is not contaminated."

The study's co-authors also include Ian Wong, NASA Postdoctoral Program Fellow at NASA Goddard Space Flight Center, and MIT Assistant Professor Julien de Wit.

Stellar updates

Rooting out planetary imposters was not the team's initial goal. Niraula originally intended to look for systems with signs of tidal distortion.

"If you have two objects close to each other, the gravitational pull of one will cause the other to be egg-shaped, or ellipsoidal, which gives you an idea of how massive the companion is," Niraula explains. "So you could determine whether it's a star-star or star-planet system, just based on that tidal pull."

When combing through the Kepler catalog, he came upon a signal from Kepler-854b that appeared too large to be true.

"Suddenly we had a system where we saw this ellipsoidal signal which was huge, and pretty immediately we knew this could not be from a planet," Shporer says. "Then we thought, something doesn't add up."

The team then took a second look at both the star and the planetary candidate. As with all Kepler-detected planets, Kepler-854b was spotted through a transit detection -- a periodic dip in starlight that signals a possible planet passing in front of its star. The depth of that dip represents the ratio between the size of the planet and that of its star. Astronomers can calculate the planet's size based on what they know of the star's size. But as Kepler-854b was discovered in 2016, its size was based on stellar estimates that were less precise than they are today.
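The size calculation described here follows from the transit-depth relation, depth ≈ (Rp/Rs)², so a planet's inferred radius scales linearly with the assumed stellar radius. A minimal sketch in Python (the function name and the example numbers are hypothetical, chosen only to illustrate the scaling, not Kepler-854's actual values):

```python
import math

R_SUN_IN_R_JUP = 9.95  # approximate solar radius expressed in Jupiter radii

def planet_radius(transit_depth, stellar_radius_rsun):
    """Infer a companion's radius (in Jupiter radii) from transit depth ~ (Rp/Rs)^2."""
    rp_over_rs = math.sqrt(transit_depth)
    return rp_over_rs * stellar_radius_rsun * R_SUN_IN_R_JUP

# The same 1% transit depth implies very different companions depending on the
# adopted stellar radius (illustrative numbers):
r_old = planet_radius(0.01, 1.0)  # Sun-sized star: about 1 Jupiter radius
r_new = planet_radius(0.01, 3.0)  # star revised 3x larger: 3x larger companion
```

Because the inferred radius scales directly with the stellar radius, an upward revision of the star's size inflates the companion proportionally, turning a plausible planet into an impossibly large one.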

Currently, the most accurate measurements of stars come from the European Space Agency's Gaia mission, a space-based observatory that is designed to precisely measure and map the properties and paths of stars in the Milky Way. In 2016, Gaia's measurements of Kepler-854 were not yet available. Given the stellar information that was available, the object seemed to be a plausible-sized planet. But Niraula found that with Gaia's improved estimates, Kepler-854b turned out to be much larger, at three times the size of Jupiter.

"There's no way the universe can make a planet of that size," Shporer says. "It just doesn't exist."

Tiny corrections

The team confirmed that Kepler-854b was a planetary "false positive" -- not a planet at all, but instead, a small star orbiting a larger host star. Then they wondered: Could there be more?

Niraula searched through the Kepler catalog's more than 2,000 planets, this time for significant updates to the size of stars provided by Gaia. He ultimately discovered three stars whose sizes significantly changed based on Gaia's improved measurements. From these estimates, the team recalculated the size of the planets orbiting each star, and found them to be about two to four times Jupiter's size.

"That was a very big flag," Niraula says. "We now have three objects that are now not planets, and the fourth is likely not a planet."

Going forward, the team anticipates that there won't be many more such corrections to existing exoplanet catalogs.

Read more at Science Daily

Ancient ice reveals scores of gigantic volcanic eruptions

Ice cores drilled in Antarctica and Greenland have revealed gigantic volcanic eruptions during the last ice age. Sixty-nine of these were larger than any eruption in modern history. According to the University of Copenhagen physicists behind the research, these eruptions can teach us about our planet's sensitivity to climate change.

For many people, the mention of a volcanic eruption conjures up doomsday scenarios that include deafening explosions, dark ash billowing into the stratosphere and gloopy lava burying everything in its path as panicked humans run for their lives. While such an eruption could theoretically happen tomorrow, we have had to make do with disaster films and books when it comes to truly massive volcanic eruptions in the modern era.

"We haven't experienced any of history's largest volcanic eruptions. We can see that now. Eyjafjellajökull, which paralysed European air traffic in 2010, pales in comparison to the eruptions we identified further back in time. Many of these were larger than any eruption over the last 2,500 years," says Associate Professor Anders Svensson of the University of Copenhagen's Niels Bohr Institute.

By comparing ice cores drilled in Antarctica and Greenland, he and his fellow researchers managed to estimate the quantity and intensity of volcanic eruptions over the last 60,000 years. Estimates of volcanic eruptions more than 2,500 years ago have been associated with great uncertainty and a lack of precision, until now.

Sixty-nine eruptions larger than Mount Tambora

Eighty-five of the volcanic eruptions identified by the researchers were large global eruptions. Sixty-nine of these are estimated to be larger than the 1815 eruption of Mount Tambora in Indonesia -- the largest volcanic eruption in recorded human history. So much sulfuric acid was ejected into the stratosphere by the Tambora eruption that it blocked sunlight and caused global cooling in the years that followed. The eruption also caused tsunamis, drought, famine and at least 80,000 deaths.

"To reconstruct ancient volcanic eruptions, ice cores offer a few advantages over other methods. Whenever a really large eruption occurs, sulfuric acid is ejected into the upper atmosphere, which is then distributed globally -- including onto Greenland and Antarctica. We can estimate the size of an eruption by looking at the amount of sulfuric acid that has fallen," explains Anders Svensson.

In a previous study, the researchers managed to synchronize ice cores from Antarctica and Greenland -- i.e., to date the respective core layers on the same time scale. By doing so, they were able to compare sulfur residues in ice and deduce when sulfuric acid spread to both poles after globally significant eruptions.

When will it happen again?

"The new 60,000-year timeline of volcanic eruptions supplies us with better statistics than ever before. Now we can see that many more of these great eruptions occurred during the prehistoric Ice Age than in modern times. Because large eruptions are relatively rare, a long timeline is needed to know when they occur. That is what we now have," says Anders Svensson.

One may be left wondering when the next of these massive eruptions will occur. But Svensson isn't ready to make any concrete predictions:

"Three eruptions of the largest known category occurred during the entire period we studied, so-called VEI-8 eruptions (see fact box). So, we can expect more at some point, but we just don't know if that will be in a hundred or a few thousand years. Tambora sized eruptions appears to erupt once or twice every thousand years, so the wait for that may be shorter."

How was climate affected?

When powerful enough, volcanic eruptions can affect global climate, typically causing a 5-10 year period of cooling. As such, there is great interest in mapping the major eruptions of the past -- as they can help us look into the future.

"Ice cores contain information about temperatures before and after the eruptions, which allows us to calculate the effect on climate. As large eruptions tell us a lot about how sensitive our planet is to changes in the climate system, they can be useful for climate predictions," explains Anders Svensson.

Read more at Science Daily

How inland and coastal waterways influence climate

"Streams to the river, river to the sea." If only it were that simple.

Most global carbon-budgeting efforts assume a linear flow of water from the land to the sea, which ignores the complex interplay between streams, rivers, lakes, groundwater, estuaries, mangroves and more. A study co-led by climate scientist Laure Resplandy, an assistant professor of geosciences and the High Meadows Environmental Institute (HMEI) at Princeton University, details how carbon is stored and transported through the intricacy of inland and coastal waterways. Published in the current issue of the journal Nature, the work has significant implications for enforcing the carbon calculations that are part of international climate accords.

Terrestrial and marine ecosystems have a powerful influence on climate by regulating the level of atmospheric carbon dioxide (CO2). These ecosystems, however, are often viewed as disconnected from each other, which ignores the transfer of carbon from land to the open ocean through a complex network of water bodies -- the continuum of streams, rivers, estuaries and other bodies carrying water from land to the sea.

In a detailed analysis, the team of researchers from Belgium, the United States and France discovered that this land-to-ocean aquatic continuum (LOAC) carries a substantial amount of carbon of anthropogenic (e.g., fossil-fuel) origin. Thus, the carbon removed from the atmosphere by terrestrial ecosystems is not all stored locally, as is commonly assumed, which has implications for global agreements that require countries to report their carbon inventories. The researchers also found that the land-to-ocean carbon transfer of natural origin was larger than previously thought, with far-reaching implications for the assessment of the anthropogenic CO2 uptake by the ocean and the land.

"The complexity of the LOAC, which includes rivers, groundwater, lakes, reservoirs, estuaries, tidal marshes, mangroves, seagrasses, and waters above continental shelves, has made it challenging to assess its influence on the global carbon cycle," said Pierre Regnier, a professor at the University of Brussels who co-led the study with Resplandy.

Because of that complexity, important global carbon-budgeting efforts, such as those of the U.N. Intergovernmental Panel on Climate Change and the Global Carbon Project, typically assume a direct "pipeline" transfer of carbon from river mouths to the open ocean. Another common assumption is that all the transported carbon is natural, neglecting the impacts of human perturbations on this aquatic continuum, such as damming and the decimation of coastal vegetation.

In this study, the researchers synthesized more than 100 individual studies of the various components of the continuum. From this synthesis, LOAC carbon budgets were developed for two time periods: the pre-industrial period and the present day. Their results confirm the well-known pre-industrial carbon "loop" in which carbon is taken up from the atmosphere by terrestrial ecosystems, transferred by rivers to the ocean, and then outgassed back to the atmosphere.

"We find the amount of carbon carried by this natural land-to-ocean loop, 0.65 billion tons per year, is roughly 50% greater than previously thought," Resplandy said.

Furthermore, this loop is composed of two smaller loops: one that transfers carbon from terrestrial ecosystems to inland waters, and another from coastal vegetation (so-called "blue carbon ecosystems") to the open ocean.

"A larger pre-industrial land-to-ocean carbon transport implies that the ocean uptake of anthropogenic CO2 previously inferred from observations was underestimated," Resplandy said.

"The flip side is that the land uptake of anthropogenic CO2 was overestimated," added Regnier.

The study demonstrates that anthropogenic carbon carried by rivers is either outgassed back to the atmosphere or eventually stored in aquatic sediments and the open ocean.

Philippe Ciais, a research director at the Laboratoire des Sciences du Climat et de l'Environnement and a co-author of the study explained: "This new view of the anthropogenic CO2 budget may have a silver lining because sediments and the ocean offer arguably more stable repositories than terrestrial biomass and soil carbon, which are vulnerable to droughts, fires and land-use change."

Read more at Science Daily

How the brain encodes social rank and 'winning mindset'

If you're reaching for the last piece of pizza at a party and see another hand going for it at the same time, your next move probably depends both on how you feel and whom the hand belongs to. Your little sister -- you might go ahead and grab the pizza. Your boss -- you're probably more likely to step back and give up the slice. But if you're hungry and feeling particularly confident, you might go for it.

Now, Salk researchers have made inroads into understanding how the mammalian brain encodes social rank and uses this information to shape behaviors -- such as whether to fight for that last pizza slice. In mice engaged in a competition, the team discovered, patterns of brain activity differ depending on the social rank of the opposing animal. Moreover, the scientists could use brain readouts to accurately predict which animal would win a food reward -- the victor was not always the more socially dominant animal, but the one more engaged in a "winning mindset." The findings were published in Nature on March 16, 2022.

"Most social species organize themselves into hierarchies that guide each individual's behavior," says senior author Kay Tye, professor in Salk's Systems Neurobiology Laboratory and Howard Hughes Medical Institute Investigator. "Understanding how the brain mediates this may help us understand the interplay between social rank, isolation, and psychiatric diseases, such as depression, anxiety, or even substance abuse."

Researchers already knew that an area of the brain called the medial prefrontal cortex (mPFC) was responsible for representing social rank in mammals; alterations to a mouse's mPFC change an animal's dominance behavior. But it was unknown how the mPFC represented this information and which neurons (if any) were involved in altering dominance behavior.

In the new study, Tye and her team let groups of four mice share a cage, allowing a social hierarchy to naturally develop -- some animals became more dominant and others more subordinate. Then, the researchers selected pairs of cohabitating mice to compete for food rewards in a "round robin" tournament structure.

To capture the brain activity of the animals, as well as slight, difficult-to-measure differences in their behavior as they competed, the researchers spearheaded several new technologies. They used new wireless devices to record brain activity in free-roaming animals and developed a multi-animal artificial intelligence tracking tool to follow the movements of the mice over time, even when two animals looked identical. Finally, they turned to new modeling approaches to analyze the data.

As soon as the mice were paired up, the scientists discovered, the activity of neurons in their mPFC could predict -- with 90 percent certainty -- the rank of their opponent.

"We expected that the animals might only signal rank when they heard a beep to start the competition," says co-first author Nancy Padilla-Coreano, an assistant professor at the University of Florida, who carried out the work while she was a postdoctoral fellow at Salk. "But it turns out that animals are walking around with this representation of social rank in their brain all the time."

When the researchers next asked whether the activity of the mPFC neurons was associated with behavior, they found something surprising. The brain activity patterns were linked with slight changes in behavior, such as how fast a mouse moved, and they also could predict -- a full 30 seconds before the competition started -- which mouse would win the food reward.

While the more dominant mouse was usually predicted to win, sometimes the model accurately predicted that the subordinate animal would win. The model, the team says, was capturing competitive success, or what some people might call a "winning mindset."

Just as you might sometimes be in a more competitive mood and be more likely to snatch that pizza slice before your boss, a subordinate mouse might be in a more "winning mindset" than a more dominant animal and end up winning.

The areas of the mPFC associated with social rank and competitive success are adjacent to one another, the researchers discovered, and highly connected. Signals on social rank, they say, impact the state of the brain involved in competitive success. In other words, a subordinate animal's confidence and "winning mindset" may partially diminish when faced with the alpha mouse.

"This is the first time we've been able to capture these internal states that connect social rank to behavior," says Kanha Batra, a graduate student in the Tye lab and co-first author of the paper. "At any timepoint, we could predict an animal's next move from brain activity using these internal states."

The researchers also showed that changes in brain activity occurred when the animals were in competition versus when they were collecting rewards alone. However, social rank of the animals' living group could still be decoded from the brain activity even when animals were alone.

"This is all further evidence to suggest that we are in different brain states when we are with others compared to when we're alone," says Tye, holder of the Wylie Vale Chair. "Regardless of who you're with, if you're aware of other people around you, your brain is using different neurons."

Read more at Science Daily

Mar 15, 2022

Combing the cosmos: New color catalog aids hunt for life on frozen worlds

Aided by microbes found in the subarctic conditions of Canada's Hudson Bay, an international team of scientists has created the first color catalog of icy planet surface signatures to uncover the existence of life in the cosmos.

As ground-based and space telescopes get larger and can probe the atmosphere of rocky exoplanets, astronomers need a color-coded guide to compare them and their moons to vibrant, tinted biological microbes on Earth, which may dominate frozen worlds that circle different stars.

But researchers need to know what microbes that live in frigid places on Earth look like before they can spot them elsewhere.

The study, "Color Catalogue of Life in Ice: Surface Biosignatures on Icy Worlds," published in the journal Astrobiology, provides this toolkit. Researchers from Cornell University, Portugal's Instituto Superior de Agronomia and Técnico and Canada's Université Laval in Quebec were involved in the study.

"On Earth, vibrant, biological colors in the Arctic represent signatures of life in small, frozen niches," said lead author Lígia F. Coelho, an astrobiologist and doctoral student at Técnico. She grew and measured this frigid, colorful biota at the Carl Sagan Institute at Cornell (CSI).

Coelho collected 80 microorganisms from ice and water at Kuujjuarapik, Quebec, working across the frozen Hudson Bay, obtaining ice cores and drilling holes in the ice to take water samples. She acquired samples at the mouth of the Great Whale River in February 2019.

"When searching for life in the cosmos, microbes in these frozen plains of the Arctic give us crucial insight of what to look for on cold new worlds," said Lisa Kaltenegger, a senior author on the paper, professor of astronomy at Cornell and director of the Carl Sagan Institute. Kaltenegger explained that this icy microbial life is well-adapted to the harsh radiation bombardment of space -- which can be the norm on distant exoplanets under a red sun.

"We are assembling the tools to search for life in the universe, so as not to miss it, taking all of Earth's vibrant biosphere into account -- even those in the breathtaking chilled places of our Pale Blue Dot," Kaltenegger said.

From Science Daily

Scientists show large impact of controlling humidity on greenhouse gas emissions

Greenhouse gas emissions from air conditioners are expected to climb as economic growth drives efforts to control both temperature and humidity, according to an analysis by scientists from the National Renewable Energy Laboratory and Xerox PARC.

The research, which explores the environmental impact of controlling humidity, appears in the journal Joule as "Humidity's impact on greenhouse gas emissions from air conditioning." While the energy used to power air conditioners has clear implications on greenhouse gas emissions, the impact from removing moisture from the air has escaped in-depth study until now. The researchers showed that controlling humidity is responsible for roughly half of the energy-related emissions, with the other half due to controlling temperature.

"It's a challenging problem that people haven't solved since air conditioners became commonplaces more than a half-century ago," said Jason Woods, an NREL senior research engineer and co-author of the new study. His co-authors from NREL are Nelson James, Eric Kozubal, and Eric Bonnema. The collaborators from Xerox PARC, an R&D company working on ways to remove humidity more efficiently from the air, are Kristin Brief, Liz Voeller, and Jessy Rivest.

The researchers pointed out the increasing need to cool the air is both a cause and an effect of climate change.

Even a small amount of moisture in the air can cause people to feel uncomfortable and even damage buildings in the form of mold and mildew. Furthermore, controlling indoor humidity through commercially available air conditioning technologies impacts the environment in three ways: 1) they consume a considerable amount of electricity, 2) they use and leak CFC-based refrigerants with a global warming potential 2,000 times that of carbon dioxide, and 3) the manufacturing and delivery of these systems also release greenhouse gases.

The researchers calculated that air conditioning is responsible for the equivalent of 1,950 million tons of carbon dioxide released annually, or 3.94% of global greenhouse gas emissions. Of that figure, 531 million tons comes from energy expended to control the temperature and 599 million tons from removing humidity. The balance of the 1,950 million tons comes from leakage of global-warming-causing refrigerants and from emissions during the manufacturing and transport of the air conditioning equipment. Managing humidity with air conditioners thus contributes more to climate change than controlling temperature does. The problem is expected to worsen as consumers in more countries -- particularly in India, China, and Indonesia -- rapidly install many more air conditioners.
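As a quick arithmetic check, the figures quoted above can be reconciled as follows (all values in million metric tons of CO2-equivalent per year, taken from the study as reported here):

```python
# Reconciling the air-conditioning emissions breakdown quoted in the study
# (million metric tons of CO2-equivalent per year).
total = 1950        # total annual AC-related emissions
temperature = 531   # from energy used to control temperature
humidity = 599      # from energy used to remove moisture

energy_related = temperature + humidity
other = total - energy_related  # refrigerant leakage + manufacturing/transport

print(f"energy-related: {energy_related} Mt ({energy_related / total:.0%} of total)")
print(f"humidity share of energy-related: {humidity / energy_related:.0%}")
print(f"leakage + manufacturing/transport: {other} Mt")
```

The humidity term is indeed "roughly half" (about 53%) of the energy-related emissions, with the remaining roughly 820 million tons attributable to refrigerants and equipment life-cycle emissions.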

"It's a good and a bad thing," Woods said. "It's good that more people can benefit from improved comfort, but it also means a lot more energy is used, and carbon emissions are increased."

To calculate the emissions to manage both temperature and humidity, the researchers divided the globe into a fine grid measuring 1 degree of latitude by 1 degree of longitude. Within each grid cell, the following characteristics were considered: population, gross domestic product, estimated air conditioner ownership per capita, carbon intensity of the grid, and hourly weather. They ran nearly 27,000 simulations across the globe for representative commercial and residential buildings.

Climate change is affecting ambient temperatures and humidity around the globe, making it warmer and more humid. As part of the study, the researchers considered the impact of the changing climate on air conditioner energy use by 2050. For example, the study projects air conditioner energy use to increase by 14% in the hottest climate studied (Chennai, India) and by 41% in the mildest (Milan, Italy) by 2050. The increase in global humidity is projected to have a larger impact on emissions than the increase in global temperatures.

"We've already made the existing, century-old technology nearly as efficient as possible," Woods said. "To get a transformational change in efficiency, we need to look at different approaches without the limitations of the existing one."

Existing vapor compression technology is optimized to cool our buildings using a "vapor compression cycle." This cycle uses harmful refrigerants to cool air down low enough to wring out its moisture, often over-cooling the air and wasting energy. Improving the vapor compression cycle is reaching practical and theoretical limits, thus pointing to a need to leap-frog to an entirely new way to cool and dehumidify buildings. New technologies that split this cooling and humidity control problem into two processes show potential to improve efficiency by 40% or more. One such technology space is the use of liquid desiccant-based cooling cycles, such as the liquid desiccant air conditioning technologies that NREL is currently developing with partners including Emerson and Blue Frontier.

Read more at Science Daily

Anyone can be trained to be creative

Researchers have developed a new method for training people to be creative, one that shows promise of succeeding far better than current ways of sparking innovation.

This new method, based on narrative theory, helps people be creative in the way children and artists are: By making up stories that imagine alternative worlds, shift perspective and generate unexpected actions.

The narrative method works by recognizing that we're all creative, said Angus Fletcher, who developed the method and is a professor of English and a member of The Ohio State University's Project Narrative.

"We as a society radically undervalue the creativity of kids and many others because we are obsessed with the idea that some people are more creative than others," Fletcher said.

"But the reality is that we're just not training creativity in the right way."

Fletcher and Mike Benveniste, also of Project Narrative, discussed the narrative method of training creativity in a just-published article in the Annals of the New York Academy of Sciences.

The two researchers successfully used the narrative approach to train members of the U.S. Army's Command and General Staff College. Fletcher wrote a publicly available training guide based on his methods that was tailored to officers and advanced enlisted personnel.

They have also worked with the University of Chicago Booth School of Business, the Ohio State College of Engineering and several Fortune 50 companies to teach creativity to their staffs and students.

The current foundation of creativity training is the technique known as divergent thinking, which has been in use since the 1950s. It is a "computational approach" to creativity that treats the brain as a logic machine, Fletcher said.

It works through exercises designed to, among other things, expand working memory, foster analogical thinking and promote problem-solving.

But divergent thinking hasn't delivered the results that many hoped for, Fletcher said. A major issue is that its computational approach relies on data and information about the problems and successes of the past.

"What it can't do is help prepare people for new challenges that we know little about today. It can't come up with truly original actions," Fletcher said. "But the human brain's narrative machinery can."

The narrative method of training for creativity uses many of the techniques that writers use to create stories. One is to develop new worlds in your mind. For example, employees at a company might be asked to think about their most unusual customer -- then imagine a world in which all their customers were like that. How would that change their business? What would they have to do to survive?

Another technique is perspective-shifting. An executive at a company might be asked to answer a problem by thinking like another member of their team.

The point of using these techniques and others like them is not that the scenarios you dream up will actually happen, Fletcher said.

"Creativity isn't about guessing the future correctly. It's about making yourself open to imagining radically different possibilities," he said.

"When you do that, you can respond more quickly and nimbly to the changes that do occur."

Fletcher noted that the narrative approach of training creativity through telling stories resembles how young children are creative -- and research shows that young children are more imaginatively creative than adults.

But the ability of children to perform creative tasks drops after four or five years of schooling, according to studies. That's when children begin intensive logical, semantic and memory training.

The narrative approach to creativity can help people unlock the creativity they may have stopped using as they progressed through school, Fletcher said.

One advantage for organizations that train employees to be creative is that they no longer need to strive to hire "creative people," he said.

"Trying to hire creative people causes problems because the people that leaders identify as creative are almost always people just like themselves. So it promotes conformity instead of originality," Fletcher said.

"It's better to hire a diverse group of people and then train them to be creative. That creates a culture that recognizes that there are already creative people in your organization that you aren't taking advantage of."

While this narrative method of creativity training has already been received positively, Fletcher and his colleagues have started a more formal evaluation. They are conducting randomized controlled trials of the creativity curriculum on more than 600 U.S. Army majors who are part of the Command and General Staff College.

They are also continuing to work with new organizations, such as the Worthington Local School District in Ohio.

"Teaching creativity is one of the most useful things you can do in the world, because it is just coming up with new solutions to solve problems," he said.

Fletcher said this new method of training creativity "could only have come from Ohio State's Project Narrative.

"Project Narrative is all about how stories work in the brain. It is the foundation that helped us put together this new way of thinking about and training for creativity," he said.

Read more at Science Daily

Cell fusion ‘awakens’ regenerative potential of human retina

Fusing human retinal cells with adult stem cells could be a potential therapeutic strategy to treat retinal damage and visual impairment, according to the findings of a new study published in the journal eBioMedicine. The hybrid cells act by awakening the regenerative potential of human retinal tissue, previously thought to be the preserve of cold-blooded vertebrates.

Cell fusion events -- the combination of two different cells into one single entity -- are known to be a possible mechanism contributing to tissue regeneration. Though rare in humans, the phenomenon has been consistently detected in the liver, brain, and gastrointestinal tract.

A team led by ICREA Research Professor Pia Cosma at the Centre for Genomic Regulation (CRG) in Barcelona and funded by Fundació "la Caixa" has now found that cell fusion events also take place in the human retina.

The researchers tested whether cell fusion could produce hybrid cells able to differentiate into neurons, which would show potential for tissue regeneration. The team fused Müller glia, cells that play a secondary but important role in maintaining the structure and function of the retina, with adult stem cells derived from human adipose tissue or bone marrow.

"We were able to carry out cell fusion in vitro, creating hybrid cells. Importantly, the process was more efficient in the presence of a chemical signal transmitted from the retina in response to damage, resulting in rates of hybridisation increasing twofold. This gave us an important clue for the role of cell fusion in the retina," says Sergi Bonilla, postdoctoral researcher at the CRG at the time of publication and first author of the study.

The hybrid cells were injected into a growing retinal organoid, a model that closely resembles the function of the human retina. The researchers found that the hybrid cells successfully engrafted into the tissue and differentiated into cells that closely resemble ganglion cells, a type of neuron essential for vision.

"Our findings are important because they show that the Müller glia in the human retina have the potential to regenerate neurons," says Pia Cosma. "Salamanders and fish can repair damage caused to the retina thanks to their Müller glia, which differentiate into neurons that rescue or replace damaged neurons. Mammalian Müller glia have lost this regenerative capacity, which means retinal damage or degradation can lead to visual impairment for life. Our findings bring us one step closer to recovering this ability."

The authors caution that much work remains to be done before the development of any potential treatments. One of the next steps is understanding why hybrid cells -- with four complete sets of chromosomes -- don't result in chromosomal instability and cancer development. The authors of the study believe the retina may have a mechanism regulating chromosome segregation similar to the liver, which contains tetraploid cells that act as a genetic reservoir, undergoing mitosis in response to stress and injury.

Read more at Science Daily

Mar 14, 2022

Large, long-lived, and entirely molten magma chambers once existed in Earth’s crust

An international group of researchers led by geologists from Wits University in Johannesburg has come up with multiple lines of evidence indicating that the Bushveld Complex in South Africa functioned as a "big magma tank" in the ancient Earth's crust. This research was published as a paper in Scientific Reports.

Professor Rais Latypov from the School of Geosciences at Wits University says "While re-examining thin-sections of Bushveld chromitites, we noticed a very puzzling observation: chromite often occurs as individual grains that are seemingly 'suspended' within matrix minerals. This observation leads us to a critical question: why have the chromite grains failed to sink towards the chamber floor despite being much denser than the host melt?"

To answer this question, the researchers studied chromitite in three dimensions (3D) using high-resolution X-ray computed tomography and revealed that nearly all chromite grains are closely interconnected to form a single continuous 3D framework. "This gave us an answer to the above question: chromite grains are not able to settle freely towards the chamber floor simply because they are all bound together in self-supporting 3D frameworks attached to the chamber floor," says Dr Sofya Chistyakova from the School of Geosciences at Wits University.

Only one process can produce such 3D frameworks of chromite crystals: in situ nucleation and growth, in which new chromite grains nucleate and grow on pre-existing grains directly at the chamber floor. This happens from a parental melt that is saturated in chromite as the only crystallising phase.

"This logically brought us to a long-known Cr mass balance issue -- normal basaltic melts contain only a very small amount of Cr, so that the formation of a thick chromitite layer requires extraction of Cr from a very large volume of liquid that must be present as a thick melt layer in the chamber. Simple mass balance calculations indicate that a 1 metre thick layer of chromitite will require a magma column 2 km to 4 km thick," says Latypov.
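The mass-balance argument can be sketched numerically. The parameter values below (densities, Cr contents, extraction efficiency) are illustrative assumptions, not figures from the paper; they are chosen only to show how a kilometres-thick melt column falls out of the arithmetic:

```python
# Illustrative Cr mass balance for a 1 m chromitite layer.
# All parameter values are assumptions for illustration, not from the paper.
chromitite_thickness_m = 1.0   # thickness of the chromitite layer
rho_chromitite = 4300          # kg/m^3, assumed bulk density of chromitite
cr_in_chromitite = 0.32        # assumed Cr mass fraction (~45 wt% Cr2O3)
rho_melt = 2700                # kg/m^3, assumed basaltic melt density
cr_in_melt = 350e-6            # assumed 350 ppm Cr in the parental melt
extraction_efficiency = 0.5    # assumed fraction of dissolved Cr extracted

# Cr mass per square metre of floor locked into the chromitite layer,
# divided by the extractable Cr per cubic metre of melt, gives the column.
cr_mass_per_m2 = chromitite_thickness_m * rho_chromitite * cr_in_chromitite
column_m = cr_mass_per_m2 / (rho_melt * cr_in_melt * extraction_efficiency)
print(f"required melt column: ~{column_m / 1000:.1f} km")
```

With these assumed values the required column comes out at roughly 3 km, squarely within the 2-4 km range quoted by the authors; the exact figure shifts with the assumed melt Cr content and extraction efficiency.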

Read more at Science Daily

Precipitation trends determine how often droughts and heat waves will occur together

The fact that global warming will increase temperatures over land masses, increasing the frequency of droughts and heat waves, is a certainty -- as is the fact that climate change will alter the average amount of precipitation on land. However, it has remained unclear until now under what conditions both extreme events will occur together, known as 'compound hot-dry events'. Researchers at the Helmholtz Centre for Environmental Research (UFZ) have defined these events as summers in which the average temperature was higher than in 90 percent of the summers between 1950 and 1980, and precipitation was simultaneously lower than in 90 percent of those years.
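The compound-event criterion above can be sketched as a simple percentile check: a summer counts as hot-dry when its mean temperature exceeds the 90th percentile and its precipitation falls below the 10th percentile of the baseline period. The data below are synthetic and purely illustrative:

```python
import random

# Synthetic summer-mean temperatures (°C) and precipitation totals (mm).
random.seed(0)
years = 1000
temp = [random.gauss(20, 2) for _ in range(years)]
precip = [random.gauss(300, 60) for _ in range(years)]

def percentile(values, q):
    """Nearest-rank percentile of a list (q in 0..100)."""
    s = sorted(values)
    idx = min(len(s) - 1, round(q / 100 * (len(s) - 1)))
    return s[idx]

# Thresholds from a 31-year "baseline", mirroring the 1950-1980 window.
t90 = percentile(temp[:31], 90)
p10 = percentile(precip[:31], 10)

# A summer is compound hot-dry when both thresholds are breached at once.
compound = [t > t90 and p < p10 for t, p in zip(temp, precip)]
freq = sum(compound) / years
print(f"compound hot-dry frequency: {freq:.1%}")
```

Because the synthetic temperature and precipitation here are independent, the joint frequency stays near the 1 percent one would expect by chance; in the real climate the two extremes are correlated, which is exactly why the compound frequency is so much higher.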

"In the past, periods of drought and heat waves were often considered separately; there is, however, a strong correlation between the two events, which can be seen in the extremes experienced in 2003 and 2018 in Europe. The negative consequences of these compound extremes are often greater than with one single extreme," says UFZ climate researcher Dr Jakob Zscheischler, last author of the study. Until now, however, it was not known what the future simultaneous occurrence of these extremes depends on -- the uncertainties in the occurrences estimated via routinely used climate model simulations were too large to draw robust conclusions.

The researchers have now used a novel model ensemble, comprising seven climate models, to reduce and better understand these uncertainties. Each model simulation was carried out up to 100 times in order to account for natural climate variability. They examined the historical period between 1950 and 1980, comparing the results with those of a potential future climate that is two degrees warmer than preindustrial conditions. "The advantage of these multiple simulations is that we have a much larger volume of data than with conventional model ensembles, enabling us to better estimate compound extremes," explains Dr Emanuele Bevacqua, first author and climate researcher at the UFZ. The researchers were able to confirm the previous assumption that the average frequency of compound hot-dry events will increase with global warming: while the frequency was 3 percent between 1950 and 1980, which statistically is an occurrence every 33 years, in a climate that is two degrees warmer, this figure will be around 12 percent. This would be a fourfold increase compared to the historical period studied.
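The quoted frequencies translate into return periods with simple arithmetic:

```python
# Converting the compound hot-dry frequencies quoted above into return periods.
hist_freq = 0.03    # compound hot-dry summers, 1950-1980
future_freq = 0.12  # in a climate two degrees warmer than preindustrial

print(f"return period, historical: {1 / hist_freq:.0f} years")  # ~33 years
print(f"return period, +2 °C: {1 / future_freq:.0f} years")
print(f"increase: {future_freq / hist_freq:.0f}x")
```

A 3 percent annual frequency is one event roughly every 33 years; at 12 percent the return period shrinks to roughly 8 years, the fourfold increase the study reports.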

The climate researchers were also able to determine from the simulations that the frequency of compound hot-dry events in the future will be determined not by temperature trends, but by precipitation trends. The reason for this is that, even with a moderate warming of two degrees, local temperature increase will be so great that in the future, every drought anywhere in the world will be accompanied by a heat wave, regardless of the exact number of degrees by which the temperature increases locally. The uncertainty in the warming leads to an uncertainty in the prediction of compound hot-dry event frequencies of only 1.5 percent. This discounts temperature as a decisive factor for uncertainty. For precipitation, however, the researchers calculated an uncertainty of up to 48 percent. "This demonstrates that local precipitation trends determine whether periods of drought and heat waves will occur simultaneously," explains Emanuele Bevacqua. For Central Europe, for example, this implies that in the case of a 'wet storyline' with increasing precipitation, concurrent droughts and heat waves will occur on average every ten years, whereas in the case of a 'dry storyline' with decreasing precipitation, they will occur at least every four years. For Central North America, these events would be expected every nine years ('wet storyline') and six years ('dry storyline'). These regional storylines for precipitation trends can be used as a basis for decisions on adaptation, for example to evaluate best- and worst-case scenarios.

However, even if we know that precipitation trends are decisive for the occurrence of concurrent droughts and heat waves, it is still difficult to predict them any more reliably: "Climate change may shift the distribution of precipitation in certain regions. The pattern of precipitation depends on atmospheric circulation, which determines regional weather dynamics through numerous interactions over large parts of the globe," says Emanuele Bevacqua. Since the dynamic of many of these processes is not yet fully understood, it is difficult to reduce these uncertainties any further. 

Read more at Science Daily

Ice sheet retreat and forest expansion turned ancient subtropical drylands into oases

As human-caused greenhouse gas emissions continue to rise beyond limits for what our species has experienced, researchers are looking to a mystery in the past to answer questions about what may lie ahead.

This work, published today in Nature Communications by an international team of scientists, is part of a project called the 2nd Pliocene Model Intercomparison Project, or PlioMIP2.

The team focused on the climate of the Pliocene, over 3 million years ago, the last time Earth saw atmospheric CO2 concentrations above 400 ppm, similar to today's. The Pliocene prompts a long-standing question, says UConn Department of Geosciences researcher and lead author Ran Feng: despite the similarity to the present day, why were dry areas like the Sahel in Africa and Northern China much wetter and greener in the Pliocene than they are today?

The Pliocene was warmer than present-day conditions by 2 to 3°C, and everything we know about the physics of the climate system suggests the Pliocene should have been drier in the subtropics, says co-author Tripti Bhattacharya, Thonis Family Professor of Earth and Environmental Sciences at Syracuse University.

"Our paper was motivated by a desire to understand this apparent discrepancy and see whether there are processes that can account for wetter Pliocene subtropics," Bhattacharya says.

The answer, the researchers found, is more complex than simply looking at CO2.

Evidence from the geologic record -- which includes a wide variety of sedimentary and paleobotanical indicators of past climate -- shows that the Sahel and subtropical Eurasian regions were once home to lusher landscapes with drastically different hydroclimates. Along with proxy data, the team utilized a suite of the latest state-of-the-art model simulations to identify the factors responsible for subtropical rainfall changes in the Pliocene.

Previous studies suggested that the Pliocene discrepancy could only be explained by some mechanism unaccounted for in the models. However, to their surprise, the researchers found that current generation models perform well at simulating wet conditions on Pliocene subtropical continents.

"We discovered the hydroclimate in dry areas like the Sahel and subtropical East Asia gets much wetter when we prescribed vegetation and ice sheet changes in the Pliocene simulations," says Feng.

Feng explains this work is providing a new perspective when studying hydrological cycle responses to CO2 changes: long-term changes in terrestrial conditions like the shifting range of the biomes and the ice sheets are important.

"Continental greening and ice sheet retreat have profound impacts on the surface temperature through lowering the surface albedo -- the ability of the Earth's surface to reflect sunlight back to space -- and a profound effect on the hydrological cycle through allowing for greater evaporation and altering paths of moisture transport. In the long run, there's a much bigger change in the hydrological cycle, compared to what we are anticipating today," says Feng. "Currently, few of these changes are considered when predicting climate conditions for the next 10 years, or next 50 years."

This is cause for concern, says Feng, because changes in the Earth system's hydrological cycle will mean places already receiving excessive amounts of summer rainfall such as Southeastern Asia, Northern India, and West Africa, are going to see even more summer rainfall as continental greening increases and the ice sheets continue to recede.

Additionally, this work redefines the way we see the Pliocene climate, says Bhattacharya. "The other nice takeaway is that the Pliocene does not really challenge our fundamental understanding of the physics of climate. Our study suggests that we do not need exotic physical mechanisms to explain the Pliocene. Rather, we can explain regional patterns of change in aridity by including earth system feedbacks in models and considering the relationship between earth system sensitivity and rainfall changes. This ultimately increases our confidence that models do a good job at simulating the past and can be trusted to provide reliable projections of future climate."

Read more at Science Daily

Endless forms most beautiful: Why evolution favors symmetry

From sunflowers to starfish, symmetry appears everywhere in biology. This isn't just true for body plans -- the molecular machines keeping our cells alive are also strikingly symmetric. But why? Does evolution have a built-in preference for symmetry?

An international team of researchers believe so, and have combined ideas from biology, computer science and mathematics to explain why. As they report in PNAS, symmetric and other simple structures emerge so commonly because evolution has an overwhelming preference for simple "algorithms" -- that is, simple instruction sets or recipes for producing a given structure.

"Imagine having to tell a friend how to tile a floor using as few words as possible," says Iain Johnston, a professor at the University of Bergen and author on the study. "You wouldn't say: put diamonds here, long rectangles here, wide rectangles here. You'd say something like: put square tiles everywhere. And that simple, easy recipe gives a highly symmetric outcome."

The team used computational modeling to explore how this preference comes about in biology. They showed that many more possible genomes describe simple algorithms than more complex ones. As evolution searches over possible genomes, simple algorithms are more likely to be discovered -- as are, in turn, the more symmetric structures that they produce. The scientists then connected this evolutionary picture to a deep result from the theoretical discipline of algorithmic information theory.

"These intuitions can be formalized in the field of algorithmic information theory, which provides quantitative predictions for the bias towards descriptive simplicity," says Ard Louis, professor at the University of Oxford and corresponding author on the study.

The study's key theoretical idea can be illustrated by a twist on a famous thought experiment in evolutionary biology, which pictures a room full of monkeys trying to write a book by typing randomly on a keyboard. Imagine the monkeys are instead trying to write a recipe. Each is far more likely to randomly hit the letters required to spell out a short, simple recipe than a long, complicated one. If we then follow any recipes the monkeys have produced -- our metaphor for producing biological structures from genetic information -- we will produce simple outcomes much more often than complicated ones.
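The monkeys-and-recipes intuition can be made quantitative: on a keyboard with K equally likely keys, one specific string of length L appears with probability (1/K)^L, so each extra character costs a factor of K. A minimal sketch (keyboard size and recipe lengths are illustrative assumptions):

```python
# Probability of randomly typing one specific string of a given length
# on a keyboard with K equally likely keys: (1/K)**L.
K = 26  # assumed: letter keys only, uniformly random presses

def p_exact(length, keys=K):
    """Probability of hitting one specific string of this length."""
    return (1 / keys) ** length

# Illustrative recipe lengths, in characters.
short_recipe, long_recipe = 5, 20
ratio = p_exact(short_recipe) / p_exact(long_recipe)
print(f"a {short_recipe}-char recipe is {ratio:.2e} times more likely "
      f"than a {long_recipe}-char one")
```

The short recipe wins by a factor of 26^15, about 10^21 -- which is the exponential bias towards short descriptions that algorithmic information theory formalizes, and that the study links to the prevalence of simple, symmetric structures.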

Read more at Science Daily