Apr 16, 2022

Giant stars undergo dramatic weight loss program

Astronomers at the University of Sydney have found a slimmer type of red giant star for the first time. These stars have undergone dramatic weight loss, possibly due to the presence of a greedy neighbour. Published in Nature Astronomy, the discovery is an important step toward understanding the life of stars in the Milky Way -- our closest stellar neighbours.

There are millions of 'red giant' stars found in our galaxy. These cool and luminous objects are what our Sun will become in four billion years. For some time, astronomers have predicted the existence of slimmer red giants. After finding a smattering of them, the University of Sydney team can finally confirm their existence.

"It's like finding Waldo," said lead author, PhD candidate Mr Yaguang Li from the University of Sydney. "We were extremely lucky to find about 40 slimmer red giants, hidden in a sea of normal ones. The slimmer red giants are either smaller in size or less massive than normal red giants."

How and why did they slim down? Most stars in the sky are in binary systems -- two stars that are gravitationally bound to each other. When the stars in close binaries expand, as stars do as they age, some material can reach the gravitational sphere of their companion and be sucked away. "In the case of relatively tiny red giants, we think a companion could possibly be present," Mr Li said.

An intragalactic treasure hunt

The team analysed archival data from NASA's Kepler space telescope. From 2009 to 2013, the telescope continuously recorded brightness variations on tens of thousands of red giants. Using this incredibly accurate and large dataset, the team conducted a thorough census of this stellar population, providing the groundwork for spotting any outliers.

Two types of unusual stars were revealed: very low-mass red giants, and underluminous (dimmer) red giants.

The very low-mass stars weigh only 0.5 to 0.7 solar masses -- around half the weight of our Sun. If the very low-mass stars had not somehow shed mass, their masses would indicate they were older than the age of the Universe -- an impossibility.

"So, when we first obtained the masses of these stars, we thought there was something wrong with the measurement," Mr Li said. "But it turns out there wasn't."
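The "older than the Universe" reasoning can be sketched with a rough textbook scaling for main-sequence lifetime, t ~ 10 Gyr x (M/M_sun)^-2.5 (an approximation for illustration only, not the authors' actual asteroseismic modelling):

```python
# Rough main-sequence lifetime scaling: t ~ 10 Gyr * (M / M_sun)**-2.5.
# A textbook approximation for illustration, not the study's modelling.

AGE_OF_UNIVERSE_GYR = 13.8

def ms_lifetime_gyr(mass_solar):
    """Approximate main-sequence lifetime in Gyr for a star of given mass."""
    return 10.0 * mass_solar ** -2.5

for mass in (0.5, 0.7, 1.0):
    t = ms_lifetime_gyr(mass)
    verdict = "older than the Universe" if t > AGE_OF_UNIVERSE_GYR else "plausible"
    print(f"M = {mass} M_sun: ~{t:.0f} Gyr to reach the red giant stage ({verdict})")
```

At 0.5 to 0.7 solar masses the implied lifetimes comfortably exceed the 13.8-billion-year age of the Universe, so such a star can only be a red giant today if it lost mass in some other way.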

The underluminous stars, on the other hand, have normal masses, ranging from 0.8 to 2.0 solar masses. "However, they are much less 'giant' than we expect," said study co-author, Dr Simon Murphy from the University of Southern Queensland. "They've slimmed down somewhat and because they're smaller, they're also fainter, hence 'underluminous' compared to normal red giants."

Only seven such underluminous stars were found, and the authors suspect many more are hiding in the sample. "The problem is that most of them are very good at blending in. It was a real treasure hunt to find them," Dr Murphy said.

These unusual data points could not be explained by simple expectations from stellar evolution. This led the researchers to conclude that another mechanism must be at work, forcing these stars to undergo dramatic weight loss: theft of mass by nearby stars.

Stellar population census

The researchers relied on asteroseismology -- the study of stellar vibrations -- to determine the properties of the red giants.

Traditional methods of studying a star are limited to its surface properties, for example, surface temperature and luminosity. By contrast, asteroseismology, which uses sound waves, probes beneath the surface. "The waves penetrate the stellar interior, giving us rich information on another dimension," Mr Li said.

The researchers could precisely determine stars' evolutionary stages, masses, and sizes with this method. And when they looked at the distributions of these properties, they immediately noticed something unusual: some stars had tiny masses or sizes.
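The mass and size determinations rest on global seismic observables. A minimal sketch using the widely adopted asteroseismic scaling relations (the solar reference values and example inputs below are illustrative assumptions; the study's actual analysis is more sophisticated):

```python
# Widely used asteroseismic scaling relations (a sketch, not the paper's
# pipeline). Solar reference values are approximate.
NU_MAX_SUN = 3090.0   # muHz, frequency of maximum oscillation power
DELTA_NU_SUN = 135.1  # muHz, large frequency separation
TEFF_SUN = 5772.0     # K, effective temperature

def seismic_mass_radius(nu_max, delta_nu, teff):
    """Return (mass, radius) in solar units from global seismic observables."""
    m = ((nu_max / NU_MAX_SUN) ** 3
         * (delta_nu / DELTA_NU_SUN) ** -4
         * (teff / TEFF_SUN) ** 1.5)
    r = ((nu_max / NU_MAX_SUN)
         * (delta_nu / DELTA_NU_SUN) ** -2
         * (teff / TEFF_SUN) ** 0.5)
    return m, r

# Illustrative red-giant-like observables (hypothetical, not from the paper):
m, r = seismic_mass_radius(nu_max=30.0, delta_nu=4.0, teff=4800.0)
print(f"M ~ {m:.2f} M_sun, R ~ {r:.1f} R_sun")
```

An unusually low mass or radius from these relations is exactly the kind of outlier the census was designed to expose.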

Read more at Science Daily

A key brain region for substance use disorders now has a searchable atlas of distinct cell populations

In a work of systematic biology that advances the field, University of Alabama at Birmingham researchers have identified 16 distinct cell populations in a complex area of the midbrain called the ventral tegmental area, or VTA.

The VTA is important for its role in the dopamine neurotransmission involved in reward-directed behavior. Substance use disorders involve dysregulation of these reward circuits, leading to repeated drug-seeking despite adverse consequences. These include more than 100,000 drug overdose deaths in the United States in the most recent year. The VTA also has a role in several other neuropsychiatric disorders.

Thus, expanding knowledge of its function is a start to explaining the mechanisms behind substance use disorders involving drugs like cocaine, alcohol, opioids and nicotine, or psychiatric disorders like schizophrenia and attention deficit hyperactivity disorder, or ADHD.

Dopamine is one of the neurotransmitters used by the brain as chemical messengers to send signals between nerve cells. While decades of research have focused on dopaminergic neurotransmission in the VTA, there is also substantial evidence for the importance of two other neurotransmitters acting in the VTA in reward-related behaviors -- GABA and glutamate. There is also evidence for "combinatorial" neurons that can potentially synthesize and release multiple neurotransmitters. These findings suggest an additional layer of complexity in VTA cellular and synaptic function.

Systematic biology is the science of classification, and it usually refers to the classification of organisms with regard to their natural relationships. The UAB VTA study classifies cell populations to extend and deepen previous work on the different cell types in the VTA, to provide a starting point for deciphering the relationships among these cells and their broad connections to other areas of the brain. The research, published in Cell Reports, was led by co-first authors Robert A. Phillips III and Jennifer J. Tuscher, Ph.D., and corresponding author Jeremy J. Day, Ph.D.

The 16 distinct cell populations were identified by differences in gene expression after single-nucleus RNA sequencing of 21,600 cells from the rat VTA, creating a searchable online atlas of the VTA. The rat is the prime model for reward and substance use studies. This unbiased approach -- in contrast to previous studies that selected some subsets of cells for RNA sequencing -- was used to create the largest and most comprehensive single-cell transcriptomic analysis focused exclusively on the composition and molecular architecture of the VTA.
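Conceptually, "distinct cell populations" fall out of clustering cells whose gene-expression profiles are similar. A toy sketch using a minimal k-means on made-up two-gene profiles (purely illustrative; the study's actual single-nucleus RNA-seq pipeline and clustering method differ):

```python
import random

random.seed(0)

# Toy data: each "cell" is an expression vector for two marker genes
# (e.g. a dopaminergic vs. a GABAergic marker -- purely illustrative).
cells = (
    [(random.gauss(8, 1), random.gauss(1, 1)) for _ in range(50)]    # type A
    + [(random.gauss(1, 1), random.gauss(8, 1)) for _ in range(50)]  # type B
)

def kmeans(points, k, iters=20):
    """Minimal k-means: returns (centroids, labels)."""
    centroids = random.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # Assign each cell to its nearest centroid.
        for i, p in enumerate(points):
            labels[i] = min(
                range(k),
                key=lambda c: (p[0] - centroids[c][0]) ** 2
                + (p[1] - centroids[c][1]) ** 2,
            )
        # Move each centroid to the mean of its assigned cells.
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:
                centroids[c] = (
                    sum(p[0] for p in members) / len(members),
                    sum(p[1] for p in members) / len(members),
                )
    return centroids, labels

centroids, labels = kmeans(cells, k=2)
print("cluster sizes:", sorted(labels.count(c) for c in range(2)))
```

Real single-cell workflows operate on thousands of genes and use graph-based clustering, but the principle is the same: cells sharing an expression signature group into a population.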

Though it was well known that the VTA is composed of heterogeneous cell types, the UAB atlas expands those studies in several key ways.

"For example, previous single-cell sequencing studies were conducted exclusively in the mouse brain and have relied primarily on sequencing a subset of fluorescence-activated cell sorting-isolated midbrain dopaminergic populations, rather than sampling all VTA cell types," Day said. "Notably, our sequencing dataset focuses exclusively on VTA sub-regions, unlike other studies that have focused on pooled cells from the mouse substantia nigra and VTA or a subset of fluorescently tagged cells from general midbrain regions."

The 16 distinct cell populations include classic dopaminergic neurons, three subsets of glutamatergic neurons and three subsets of GABAergic neurons, as well as nine other cell types, including astrocytes and glial cells.

After sub-clustering neuronal cells, the UAB researchers also identified four sub-clusters that may represent neurons capable of combinatorial neurotransmitter release. They also identified selective gene markers for classically defined dopamine neurons and for the combinatorial neurons. A selective marker allows viral targeting of distinct VTA subclasses for functional studies.

The researchers also examined sub-clusters for opioid neuropeptides and their receptors, and identified pan-neuronal increased expression for risk genes associated with schizophrenia and "smoking initiation," as well as enrichment of ADHD risk genes in two glutamatergic neuronal populations.

Read more at Science Daily

Apr 15, 2022

Nova outbursts are apparently a source for cosmic rays

Light on, light off -- this is how one could describe the behavior of the nova that goes by the name RS Ophiuchi (RS Oph). Every 15 years or so, a dramatic explosion occurs in the constellation of the Serpent Bearer. The birthplace of a nova is a system in which two very different stars live in a parasitic relationship: a white dwarf, a small, burned-out and tremendously dense star -- a teaspoon of its matter weighs about 1 ton -- orbits a red giant, an old star nearing the end of its life.

The dying giant star feeds the white dwarf with matter, shedding its outer hydrogen layer as the gas flows onto its nearby companion. This flow of matter continues until the white dwarf "overeats": the temperature and pressure in the newly gained stellar shells become too great, and the shells are flung away in a gigantic thermonuclear explosion. The dwarf star remains intact, and the cycle begins again -- until the spectacle repeats itself.

Explosion in the high-energy range

It had been speculated that such explosions involve high energies. The two MAGIC telescopes recorded gamma rays with energies of 250 gigaelectronvolts (GeV), among the highest ever measured in a nova. By comparison, this radiation is about a hundred billion times more energetic than visible light.
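The "hundred billion times" comparison is easy to check, assuming a typical visible photon carries roughly 2-3 electronvolts:

```python
# Sanity-check the energy comparison between the MAGIC gamma rays and
# visible light. The visible-photon energy is an assumed typical value.
GAMMA_RAY_EV = 250e9   # 250 GeV, as recorded by the MAGIC telescopes
VISIBLE_EV = 2.5       # ~green light, an illustrative choice

ratio = GAMMA_RAY_EV / VISIBLE_EV
print(f"{ratio:.0e}")  # 1e+11, i.e. about a hundred billion
```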

MAGIC was able to make its observations following initial alerts from other instruments measuring at different wavelengths. "The spectacular eruption of the RS Ophiuchi shows that the MAGIC telescopes' fast response really pays off: It takes them no more than 30 seconds to move to a new target," said David Green, a scientist at the Max Planck Institute for Physics and one of the authors of the paper.

Accelerated protons as a part of cosmic rays

After the explosion, several shock fronts propagated through the stellar wind from the red giant and the interstellar medium surrounding the binary system. These shock waves work like a giant power plant in which particles are accelerated to near the speed of light. The combined measurements suggest that the gamma rays emanate from energetic protons, the nuclei of hydrogen atoms.

"This also makes nova outbursts a source of cosmic rays," explains David Green. "However, they tend to play the role of local heroes -- meaning they only contribute to the cosmic rays in their close neighborhood. The big players for cosmic rays are supernova remnants. The shock fronts created by those stellar explosions are far more violent than those of novae."

Read more at Science Daily

Methane from waste should not be wasted: Exploring landfill ecosystems

Each year, humans across the globe produce billions of tons of solid waste. Roughly 70% of this refuse ends up deposited in landfills, where it slowly decays. Yet what may seem an inert accumulation of useless debris is, in reality, a complex ecosystem teeming with microbial activity. Vast communities of microorganisms feed on the waste, converting it into byproducts -- primarily carbon dioxide (CO2) and methane.

While most landfill methane is captured and flared away, researchers hope instead to make use of this resource, which can be converted into fuels or electricity, or used to heat homes (see below).

In a new study published in the journal Applied and Environmental Microbiology, lead author Mark Reynolds, along with his Arizona State University and industry colleagues, explores these microbial communities flourishing in leachate, the liquid that percolates through solid waste in a landfill. They find that the composition and behavior of specific microbes found in arid landfills, like those in Arizona, are distinct from similar communities in more subtropical or temperate climates. Microbial composition also differs depending on the age of the landfill deposits.

The project was carried out at the Salt River Landfill located in Scottsdale, near ASU's Tempe campus. The facility receives about 1,600 tons of municipal solid waste daily.

Solid waste: a breakdown


The study explores ecosystem-level microbial composition in leachate. Diverse environmental conditions seemingly affect the microbial niches that are compartmentalized across the landfill's 143 acres.

"I think of a landfill as like a big carbon buffet to these microorganisms," says Reynolds, a researcher in the Biodesign Swette Center for Environmental Biotechnology. "Our trash is mostly paper-heavy and it's really rich in cellulose and hemicellulose. These are readily degradable under anaerobic (oxygen-free) conditions."

The capture and use of gases produced in landfills can help reduce hazards associated with landfill emissions, and prevent methane from escaping into the atmosphere. Further, energy projects associated with the capture and processing of landfill gas can generate revenue and create jobs in the community.

By better understanding the behavior of these methane-producing microorganisms, researchers hope to improve the capture of this vital resource and possibly limit the escape of methane and CO2 -- two potent greenhouse gases and leading contributors to climate change -- into the atmosphere.

"We're diving into ecological theory to try to get to the source of what might be driving the organizational patterns of the methane-producing organisms," Reynolds says. The study's multifaceted analysis indicates that temperature and dissolved solids are the two key parameters governing their abundance and diversification. This is good news, because these data are routinely captured at landfill sites, typically on a monthly basis, and can provide accurate diagnostics -- telltale indicators of broad trends in overall methane production.

From garbage to fuel

Municipal solid waste landfills accounted for over 15% of methane emissions in 2019, representing the third largest source of global methane emissions. As the study notes, emissions of methane from landfills amount to the equivalent of a billion tons of CO2, or roughly the greenhouse emissions produced by nearly 22 million cars driven for a year.

Typically, most of the methane released by microorganisms in a landfill is captured as biogas and subsequently flared off, converting it to CO2. Although this method limits the climate-damaging effects of the methane itself, it is a short-term and inadequate solution to the problem of greenhouse gas emission from landfills.

In addition to its adverse effect on the climate, the lost methane represents a missed opportunity to capture this valuable resource. The study estimates that approximately one-fifth of the nation's landfills would be suitable for such capture and processing, if economic and other hurdles can be overcome.

Currently, microorganisms degrading municipal solid waste generate landfill gas consisting of roughly 50% methane and 50% CO2. By understanding the subtle workings of these microorganisms -- particularly, methanogenic Archaea, which are the real workhorses in the methane production cycle -- researchers hope to boost methane output.

The increased methane can be harvested and used to create electricity, carbon neutral fuels or to heat homes. The latter option is particularly attractive as no further processing of the methane would be required. Alternately, modifying microbial communities could potentially be used to limit methane output, where mitigation is desired.
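As a back-of-the-envelope illustration of why the methane is worth harvesting, one can estimate the electricity obtainable per cubic metre of landfill gas. The heating value and engine efficiency below are assumed typical figures, not numbers from the study:

```python
# Back-of-the-envelope estimate with illustrative assumptions (not study
# figures): electricity a landfill-gas engine could extract per cubic
# metre of biogas that is roughly half methane, half CO2.
CH4_FRACTION = 0.50          # from the ~50/50 composition described above
CH4_LHV_MJ_PER_M3 = 35.8     # lower heating value of methane near STP, assumed
ENGINE_EFFICIENCY = 0.35     # typical gas-engine electrical efficiency, assumed

energy_mj = CH4_FRACTION * CH4_LHV_MJ_PER_M3 * ENGINE_EFFICIENCY
kwh = energy_mj / 3.6        # 1 kWh = 3.6 MJ
print(f"~{kwh:.1f} kWh of electricity per m^3 of landfill gas")
```

Under these assumptions the yield is on the order of 1.5-2 kWh per cubic metre, which is why boosting the methane fraction of the gas is attractive.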

On the hunt for archaea

Landfills provide an ideal setting for the detailed study of Archaea, which are notoriously challenging to culture in the laboratory. Roughly 80% of archaeal diversity remains unexplored. "Our labs are really interested in the methanogens because the metabolism they carry out in wetlands -- which makes wetlands the largest source of methane -- and in the human gastrointestinal tract is the same one they carry out in landfills," Reynolds says.

Because the methanogens are primitive, single-celled organisms, they can make use of plant matter, food matter or paper products equally well. While the study found methane concentrations at the arid landfill site similar to those at other landfills, different communities of methanogens are doing the heavy lifting. The study demonstrates that microbial behavior also depends on the age of the solid waste deposited: younger waste is higher in temperature than older waste and degrades according to different regimes. Aridity has also been shown to greatly affect solid waste breakdown over time.

Read more at Science Daily

Decoding a direct dialog between the gut microbiota and the brain

Gut microbiota by-products circulate in the bloodstream, regulating host physiological processes including immunity, metabolism and brain functions. Scientists from the Institut Pasteur (a partner research organization of Université Paris Cité), Inserm and the CNRS have discovered that hypothalamic neurons in an animal model directly detect variations in bacterial activity and adapt appetite and body temperature accordingly. These findings demonstrate that a direct dialog occurs between the gut microbiota and the brain, a discovery that could lead to new therapeutic approaches for tackling metabolic disorders such as diabetes and obesity. The findings are due to be published in Science on April 15, 2022.

The gut is the body's largest reservoir of bacteria. A growing body of evidence reveals the degree of interdependence between hosts and their gut microbiota, and emphasizes the importance of the gut-brain axis. At the Institut Pasteur, neurobiologists from the Perception and Memory Unit (Institut Pasteur/CNRS), immunobiologists from the Microenvironment and Immunity Unit (Institut Pasteur/Inserm), and microbiologists from the Biology and Genetics of the Bacterial Cell Wall Unit (Institut Pasteur/CNRS/Inserm) have shared their expertise to investigate how bacteria in the gut directly control the activity of particular neurons in the brain.

The scientists focused on the NOD2 (nucleotide oligomerization domain) receptor, which is found mainly inside immune cells. This receptor detects the presence of muropeptides, the building blocks of the bacterial cell wall. Moreover, it has previously been established that variants of the gene coding for the NOD2 receptor are associated with digestive disorders, including Crohn's disease, as well as neurological diseases and mood disorders. However, these data were insufficient to demonstrate a direct relationship between neuronal activity in the brain and bacterial activity in the gut. That relationship is what the consortium of scientists revealed in the new study.

Using brain imaging techniques, the scientists initially observed that the NOD2 receptor in mice is expressed by neurons in different regions of the brain, and in particular, in a region known as the hypothalamus. They subsequently discovered that these neurons' electrical activity is suppressed when they come into contact with bacterial muropeptides from the gut. "Muropeptides in the gut, blood and brain are considered to be markers of bacterial proliferation," explains Ivo G. Boneca, Head of the Biology and Genetics of the Bacterial Cell Wall Unit at the Institut Pasteur (CNRS/Inserm). Conversely, if the NOD2 receptor is absent, these neurons are no longer suppressed by muropeptides. Consequently, the brain loses control of food intake and body temperature. The mice gain weight and are more susceptible to developing type 2 diabetes, particularly in older females.

In this study, the scientists have demonstrated the astonishing fact that neurons perceive bacterial muropeptides directly, a task long thought to be assigned primarily to immune cells. "It is extraordinary to discover that bacterial fragments act directly on a brain center as strategic as the hypothalamus, which is known to manage vital functions such as body temperature, reproduction, hunger and thirst," comments Pierre-Marie Lledo, CNRS scientist and Head of the Institut Pasteur's Perception and Memory Unit.

The neurons thus appear to detect bacterial activity (proliferation and death) as a direct gauge of the impact of food intake on the intestinal ecosystem. "Excessive intake of a specific food may stimulate the disproportionate growth of certain bacteria or pathogens, thus jeopardizing intestinal balance," says Gérard Eberl, Head of the Microenvironment and Immunity Unit at the Institut Pasteur (Inserm).

Read more at Science Daily

Human fetuses evolved to slow shoulder growth for easier delivery

Why do human mothers have a much harder time giving birth compared to our evolutionary cousins, the chimpanzees and macaques?

The culprits are a big head and wide shoulders. But a subtle developmental adaptation has made all the difference for safer births.

"The question is actually two-fold, " says study author Naoki Morimoto of Kyoto University. "What also makes childbirth difficult for women is the relatively narrow pelvis."

Morimoto's team discovered two central aspects of the female human skeletal anatomy that deserve attention when discussing the evolution of childbirth.

The first comes with its own set of points: the growth of human shoulders slows down just before birth and speeds up thereafter; this phenomenon, in turn, alleviates the problem of shoulder dystocia, where the shoulders interfere with safe passage of the fetus through the birth canal.

"It is important to note that the second point reconciles the incompatibility of wide shoulders with the narrow birth canal. The shoulders show an 'intelligent' modification in fetal development," notes lead author PhD candidate Mikaze Kawada.

What makes the human skeletal makeup 'human' in terms of the head and shoulders is their size relative to the pelvis. Our highly developed brains have resulted in large heads, while our wide shoulders contribute to bipedal stability and the ability to throw objects far.

On the other hand, the need to make walking more efficient reduced the size of the pelvis as our ancestors trod farther and more frequently.

Morimoto and his team used computed tomography to obtain cross-sectional representations of the clavicle in humans, chimpanzees, and Japanese macaques from fetal to adult samples.

The team then looked at different shoulder-width to birth-risk correlations between humans and the two other primates. Chimpanzees have proportionally large shoulders and yet, like macaques, fewer shoulder-related birth complications. Since chimpanzees move about less frequently on two feet, their pelvis -- and therefore their birth canal -- is larger than that of their human counterparts.

"We surmise that the wide shoulders, relative to the pelvis of our ancestors, emerged simultaneously with the narrower pelvis as we became fully bipedal," says Morimoto, "but before the brain evolved to today's size."

Read more at Science Daily

Apr 14, 2022

Hubble sheds light on origins of supermassive black holes

Astronomers have identified a rapidly growing black hole in the early universe that is considered a crucial "missing link" between young star-forming galaxies and the first supermassive black holes. They used data from NASA's Hubble Space Telescope to make this discovery.

Until now, the monster, nicknamed GNz7q, had been lurking unnoticed in one of the best-studied areas of the night sky, the Great Observatories Origins Deep Survey-North (GOODS-North) field.

Archival Hubble data from Hubble's Advanced Camera for Surveys helped the team determine that GNz7q existed just 750 million years after the big bang. The team obtained evidence that GNz7q is a newly formed black hole. Hubble found a compact source of ultraviolet (UV) and infrared light. This couldn't be caused by emission from galaxies, but is consistent with the radiation expected from materials that are falling onto a black hole.

Rapidly growing black holes in dusty, early star-forming galaxies are predicted by theories and computer simulations, but had not been observed until now.

"Our analysis suggests that GNz7q is the first example of a rapidly growing black hole in the dusty core of a starburst galaxy at an epoch close to the earliest supermassive black hole known in the universe," explained Seiji Fujimoto, an astronomer at the Niels Bohr Institute of the University of Copenhagen and lead author of the Nature paper describing this discovery. "The object's properties across the electromagnetic spectrum are in excellent agreement with predictions from theoretical simulations."

One of the outstanding mysteries in astronomy today is: How did supermassive black holes, weighing millions to billions of times the mass of the Sun, get to be so huge so fast?

Current theories predict that supermassive black holes begin their lives in the dust-shrouded cores of vigorously star-forming "starburst" galaxies before expelling the surrounding gas and dust and emerging as extremely luminous quasars. While extremely rare, both these dusty starburst galaxies and luminous quasars have been detected in the early universe.

The team believes that GNz7q could be a missing link between these two classes of objects. GNz7q displays aspects of both a dusty starburst galaxy and a quasar, with the quasar light showing the dust-reddened color. GNz7q also lacks various features usually observed in typical, very luminous quasars (corresponding to the emission from the accretion disk of the supermassive black hole), which is most likely explained by the central black hole in GNz7q still being in a young, less massive phase. These properties match the young, transition-phase quasars predicted in simulations but never identified at redshifts as high as those of the very luminous quasars so far found, up to a redshift of 7.6.

"GNz7q provides a direct connection between these two rare populations and provides a new avenue toward understanding the rapid growth of supermassive black holes in the early days of the universe," continued Fujimoto. "Our discovery provides an example of precursors to the supermassive black holes we observe at later epochs."

While other interpretations of the team's data cannot be completely ruled out, the observed properties of GNz7q are in strong agreement with theoretical predictions. GNz7q's host galaxy is forming stars at the rate of 1,600 solar masses per year, and GNz7q itself appears bright at UV wavelengths but very faint at X-ray wavelengths.

Generally, the accretion disk of a massive black hole should be very bright in both UV and X-ray light. But in this case, although the team detected UV light with Hubble, the X-ray light remained invisible even in one of the deepest X-ray datasets. These results suggest that the core of the accretion disk, where X-rays originate, is still obscured, while the outer part of the disk, where UV light originates, is becoming unobscured. The interpretation is that GNz7q is a rapidly growing black hole still obscured by the dusty core of its star-forming host galaxy.

"GNz7q is a unique discovery that was found just at the center of a famous, well-studied sky field -- it shows that big discoveries can often be hidden just in front of you," commented Gabriel Brammer, another astronomer from the Niels Bohr Institute of the University of Copenhagen and a member of the team behind this result. "It's unlikely that discovering GNz7q within the relatively small GOODS-North survey area was just 'dumb luck,' but rather that the prevalence of such sources may in fact be significantly higher than previously thought."

Finding GNz7q hiding in plain sight was only possible thanks to the uniquely detailed, multiwavelength datasets available for GOODS-North. Without this richness of data GNz7q would have been easy to overlook, as it lacks the distinguishing features usually used to identify quasars in the early universe. The team now hopes to systematically search for similar objects using dedicated high-resolution surveys and to take advantage of the NASA James Webb Space Telescope's spectroscopic instruments to study objects such as GNz7q in unprecedented detail.

"Fully characterizing these objects and probing their evolution and underlying physics in much greater detail will become possible with the James Webb Space Telescope," concluded Fujimoto. "Once in regular operation, Webb will have the power to decisively determine how common these rapidly growing black holes truly are."

Read more at Science Daily

Diverse life forms may have evolved earlier than previously thought

Diverse microbial life existed on Earth at least 3.75 billion years ago, suggests a new study led by UCL researchers that challenges the conventional view of when life began.

For the study, published in Science Advances, the research team analysed a fist-sized rock from Quebec, Canada, estimated to be between 3.75 and 4.28 billion years old. In an earlier Nature paper (see below) the team found tiny filaments, knobs and tubes in the rock which appeared to have been made by bacteria.

However, not all scientists agreed that these structures -- dating about 300 million years earlier than what is more commonly accepted as the first sign of ancient life -- were of biological origin.

Now, after extensive further analysis of the rock, the team have discovered a much larger and more complex structure -- a stem with parallel branches on one side that is nearly a centimetre long -- as well as hundreds of distorted spheres, or ellipsoids, alongside the tubes and filaments.

The researchers say that, while some of the structures could conceivably have been created through chance chemical reactions, the "tree-like" stem with parallel branches was most likely biological in origin, as no comparable structure created through chemistry alone is known.

The team also provide evidence of how the bacteria got their energy in different ways. They found mineralised chemical by-products in the rock that are consistent with ancient microbes living off iron, sulphur and possibly also carbon dioxide and light through a form of photosynthesis not involving oxygen.

These new findings, according to the researchers, suggest that a variety of microbial life may have existed on primordial Earth, potentially as little as 300 million years after the planet formed.

Lead author Dr Dominic Papineau (UCL Earth Sciences, UCL London Centre for Nanotechnology, Centre for Planetary Sciences and China University of Geosciences) said: "Using many different lines of evidence, our study strongly suggests a number of different types of bacteria existed on Earth between 3.75 and 4.28 billion years ago."

"This means life could have begun as little as 300 million years after Earth formed. In geological terms, this is quick -- about one spin of the Sun around the galaxy."

"These findings have implications for the possibility of extraterrestrial life. If life is relatively quick to emerge, given the right conditions, this increases the chance that life exists on other planets."

For the study, the researchers examined rocks from Quebec's Nuvvuagittuq Supracrustal Belt (NSB) that Dr Papineau collected in 2008. The NSB, once a chunk of seafloor, contains some of the oldest sedimentary rocks known on Earth, thought to have been laid down near a system of hydrothermal vents, where cracks on the seafloor let through iron-rich waters heated by magma.

The research team sliced the rock into sections about as thick as paper (100 microns) in order to closely observe the tiny fossil-like structures, which are made of haematite, a form of iron oxide or rust, and encased in quartz. These slices of rock, cut with a diamond-encrusted saw, were more than twice as thick as earlier sections the researchers had cut, allowing the team to see larger haematite structures in them.

They compared the structures and compositions to more recent fossils as well as to iron-oxidising bacteria located near hydrothermal vent systems today. They found modern-day equivalents to the twisting filaments, parallel branching structures and distorted spheres (irregular ellipsoids), for instance close to the Loihi undersea volcano near Hawaii, as well as other vent systems in the Arctic and Indian oceans.

As well as analysing the rock specimens under various optical and Raman microscopes (which measure the scattering of light), the research team also digitally recreated sections of the rock using a supercomputer that processed thousands of images from two high resolution imaging techniques. The first technique was micro-CT, or microtomography, which uses X-rays to look at the haematite inside the rocks. The second was focused ion beam, which shaves away minuscule -- 200 nanometre-thick -- slices of rock, with an integrated electron microscope taking an image between each slice.

Both techniques produced stacks of images used to create 3D models of different targets. The 3D models then allowed the researchers to confirm the haematite filaments were wavy and twisted, and contained organic carbon, which are characteristics shared with modern-day iron-eating microbes.

In their analysis, the team concluded that the haematite structures could not have been created through the squeezing and heating of the rock (metamorphism) over billions of years, pointing out that the structures appeared to be better preserved in finer quartz (less affected by metamorphism) than in the coarser quartz (which has undergone more metamorphism).

The researchers also looked at the levels of rare earth elements in the fossil-laden rock, finding that they had the same levels as other ancient rock specimens. This confirmed that the seafloor deposits were as old as the surrounding volcanic rocks, and not younger imposter infiltrations as some have proposed.

Prior to this discovery, the oldest fossils previously reported were found in Western Australia and dated at 3.46 billion years old, although some scientists have also contested their status as fossils, arguing they are non-biological in origin.

Read more at Science Daily

Research reveals human-driven changes to distinctive foraging patterns in North Pacific Ocean

The first large-scale study of its kind has uncovered more than 4,000 years' worth of distinctive foraging behaviour in a species once driven to the brink of extinction.

An international team of researchers, led by the University of Leicester, identified long-term patterns in the behaviour of the short-tailed albatross (Phoebastria albatrus) in the North Pacific Ocean by studying isotopes found in archaeological and museum-archived samples of the bird, dating as far back as 2300 BCE.

Their findings, published today (Thursday) in Communications Biology, show long-term patterns in foraging behaviour for the short-tailed albatross for the first time -- and demonstrate how individual birds foraged the same hyper-localised sites for thousands of years in spite of the species' huge potential foraging range across thousands of miles of Pacific coastline and open ocean.

But this behaviour, a demonstration of long-term individual foraging site fidelity (LT-IFSF), can pose significant risks for animals that specialise in areas that may be impacted by human activity.

The short-tailed albatross was brought to the brink of extinction by feather hunters between the 1880s and 1930s and though careful conservation has resulted in exponential population growth in recent decades, this trend of LT-IFSF has not been observed in the last century.

Dr Eric Guiry is Lecturer in Biomolecular Archaeology at the University of Leicester and corresponding author for the study, which focused on two locations close to Yuquot, Canada, and compared findings to sites in the USA, Russia and Japan. He said:

"Understanding migratory behaviour is critical for global biodiversity restoration because it helps identify vulnerable regions for environmental protection.

"Although evidence for the extent and depth of LT-IFSF across other species is still emerging, the extreme distances and time scale of the behaviour seen here indicates that this foraging strategy may be a fundamental, density-driven adaptation that could become widespread again as recovering animal populations reach pre-industrial levels."

The research team from Leicester, the Land of Maquinna Cultural Society (Canada), Vrije Universiteit Brussel (Belgium) and Simon Fraser University (Canada) were able to track this foraging behaviour by examining stable carbon and nitrogen isotope compositions in samples of bone collagen.

In contrast to most other tissues, such as muscle or feather, which turn over on a scale of days, weeks, or months, isotopic compositions from bone collagen, which remodels slowly over the entire lifespan of an individual, reflect an average of the foods consumed over the last several years of an individual's life.

This provides a unique perspective for exploring lifetime trends in animal diet and migration behaviour.

By mapping these biological markers against known isotopic baselines across the species' foraging range, linked to factors such as sea surface temperature and CO2 concentrations, the researchers were able to build up a picture of the short-tailed albatross' migratory and foraging behaviour over hundreds of generations.

But, crucially, as this behaviour is no longer observed among these birds, their findings show this hyper-specialised foraging in specific locations disappeared after the birds were hunted to near extinction in the 1880s, when only a handful of birds remained. Dr Guiry continued:

"We think this behaviour could be driven by competition among birds, meaning that, as the population recovers, we could see it re-emerge. This kind of information is important because it provides advance warning that monitoring for this remarkable behaviour, which can make the birds more vulnerable to human impacts, may need close attention.

"One of the most exciting findings, however, is actually quite a positive note. Our data also indicate that Indigenous communities at Yuquot were harvesting these birds with little impact on their population for thousands of years."

Physics models better define what makes pasta al dente

Achieving the perfect al dente texture for a pasta noodle can be tough. Noodles can take different times to fully cook, and different recipes call for different amounts of salt to be added. To boot, sometimes noodles will stick to each other or the saucepan.

In Physics of Fluids, by AIP Publishing, researchers from the United States examined how pasta swells, softens, and becomes sticky as it takes up water. They combined measurements of pasta parameters -- such as expansion, bending rigidity, and water content -- with a set of governing equations to form a theoretical model for the swelling dynamics of starch materials.

Author Sameh Tawfick, from the University of Illinois at Urbana-Champaign, said exploring the properties of noodles was a straightforward pivot from the lab's main work of studying the fluid structure interaction of very flexible and deformable fibers, hairs, and elastic structures.

"Over the last few years, we joked about how pasta noodle adhesion is very related to our work," he said. "We then realized that, specifically, the mechanical texture of noodles changes as a function of cooking, and our analysis can demonstrate a relation between adhesion, mechanical texture, and doneness."

When the pandemic hit, the idea gained traction, and students and postdocs started working on it at home and in the lab.

The team observed how the noodles come together when lifted from a plate by a fork. This provided them with a grounding in how water-driven hygroscopic swelling affects pasta's texture.

As pasta cooked, the relative rate of the noodle's increase in girth exceeded the rate of lengthening by a ratio of 3.5 to 1 until it reached the firm texture of al dente, before becoming uniformly soft and overcooked.
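That 3.5:1 anisotropy is easy to illustrate numerically. The sketch below takes only the ratio from the study; the starting dimensions of the strand are made up:

```python
def swell(d0, l0, length_growth, ratio=3.5):
    """Anisotropic swelling: the fractional growth in diameter is
    `ratio` times the fractional growth in length (3.5 per the study).
    d0, l0 are the dry diameter and length in mm (hypothetical values)."""
    d = d0 * (1 + ratio * length_growth)
    l = l0 * (1 + length_growth)
    return d, l

# A strand that lengthens by 10% fattens by 35%:
d, l = swell(d0=1.6, l0=260.0, length_growth=0.10)
print(round(d, 2), round(l, 1))  # -> 2.16 286.0
```

The aspect ratio of the noodle therefore drops steadily during cooking, which is part of why texture changes so quickly near the al dente point.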

As pasta is pulled from liquid, the liquid surface energy creates a meniscus that sticks noodles to one another, balancing the elastic resistance from bending the noodles and aided by adhesion energy from the surface tension of the liquid.

The degree to which a noodle was cooked was directly related to the length of the portion that adhered to its neighbors.

"What surprised us the most is that the addition of salt to the boiling water completely changes the cooking time," Tawfick said. "So, depending on how much salt is added to the boiling water, the time to reach al dente can be very different."

Apr 13, 2022

Study explores effects of extended spaceflight on brain

Scientists from the U.S., Europe and Russia are part of a team releasing the results of a large collaborative study involving the effects of long duration spaceflight on the brain. It appears in the Proceedings of the National Academy of Sciences.

The researchers found that while all of the astronauts and cosmonauts they studied had a similar level of cerebrospinal fluid buildup in the brain, along with reduced space between the brain and the surrounding membrane at the top of the head, there was a noteworthy difference when it came to the Americans. They had more enlargement in the perivascular spaces in the brain, passages that serve as a cleaning system during sleep. That's something the researchers say warrants further investigation.

Donna Roberts, M.D., a neuroradiologist at the Medical University of South Carolina who helped lead the study, said a challenge when it comes to exploring the effects of spaceflight has been that there aren't many people in the U.S. who have traveled to space. Combining information about NASA astronauts with that of Russian cosmonauts and astronauts from the European Space Agency gave the study depth.

"By putting all our data together, we have a larger subject number. That's important when you do this type of study. When you're looking for statistical significance, you need to have larger numbers of subjects."

The study focused on 24 Americans, 13 Russians and a small, unspecified number of astronauts from the ESA. It used MRI scans of their brains before and after six months on the International Space Station to evaluate changes in the perivascular spaces.

Lead researcher Floris Wuyts, Ph.D., a professor at the University of Antwerp in Belgium, put the scope of the project in perspective. "I think it is one of the largest studies on space data, and for sure, one of the very few studies with NASA, ESA and Roscosmos data. It comprises data of almost 10% of all people who went into space." Roscosmos is the Russian space corporation.

Fellow researcher and neuroscientist Giuseppe Barisano, M.D., Ph.D., who works at the University of Southern California, said they looked for differences between the crews. "And in this analysis, we found an increased volume of fluid-filled channels in the brain after spaceflight that was more prominent in the NASA crew than in the Roscosmos crew."

Roberts explained what that might mean. "An important implication of our findings is that the volume of fluid-filled channels in the brain of astronauts is linked to the development of the spaceflight-associated neuro-ocular syndrome, a syndrome characterized by vision changes and whose mechanisms are still not completely clear."

But space physiologist Elena Tomilovskaya, Ph.D., of the Russian Academy of Sciences, said further study is needed to determine if there are clinical implications for future flights. "We need to understand how specific microgravity-countermeasure usage, exercise regimes, diet and other factors may play a role in the differences we found between crews."

Roberts agreed. "It is important not to speculate about pathology or brain health problems at this time. The observed effects are very small, but there are significant changes when we compare the post-flight scans with the preflight scans," she said.

The idea for the large study came about as the scientists gathered at annual meetings held by NASA and ESA. "Independently, we had previously reported similar changes in space crews at post-flight brain MRI, including enlargement of the cerebral ventricles. We discussed our findings and realized how valuable it would be to perform a joint analysis of our data. I would like to point out that Dr. Wuyts, in particular, was instrumental in organizing our group, which met regularly for two years to carry out this analysis," Roberts said. "I believe it highlights the importance of international cooperation in understanding the effects of long-term spaceflight on the human body. In fact, we believe international cooperation in space medicine research is essential to ensure the safety of our crews as we return to the Moon and on to Mars."

Early human habitats linked to past climate shifts

A study published in Nature by an international team of scientists provides clear evidence for a link between astronomically-driven climate change and human evolution.

By combining the most extensive database of well-dated fossil remains and archeological artefacts with an unprecedented new supercomputer model simulating earth's climate history of the past 2 million years, the team of experts in climate modeling, anthropology and ecology was able to determine under which environmental conditions archaic humans likely lived.

The impact of climate change on human evolution has long been suspected, but has been difficult to demonstrate due to the paucity of climate records near human fossil-bearing sites. To bypass this problem, the team instead investigated what the climate in their computer simulation was like at the times and places humans lived, according to the archeological record. This revealed the preferred environmental conditions of different groups of hominins. From there, the team looked for all the places and times those conditions occurred in the model, creating time-evolving maps of potential hominin habitats.

"Even though different groups of archaic humans preferred different climatic environments, their habitats all responded to climate shifts caused by astronomical changes in earth's axis wobble, tilt, and orbital eccentricity with timescales ranging from 21 to 400 thousand years," said Axel Timmermann, lead author of the study and Director of the IBS Center for Climate Physics (ICCP) at Pusan National University in South Korea.

To test the robustness of the link between climate and human habitats, the scientists repeated their analysis, but with ages of the fossils shuffled like a deck of cards. If the past evolution of climatic variables did not impact where and when humans lived, then both methods would result in the same habitats. However, the researchers found significant differences in the habitat patterns for the three most recent hominin groups (Homo sapiens, Homo neanderthalensis and Homo heidelbergensis) when using the shuffled and the realistic fossil ages. "This result implies that at least during the past 500 thousand years the real sequence of past climate change, including glacial cycles, played a central role in determining where different hominin groups lived and where their remains have been found," said Prof. Timmermann.
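The shuffling procedure described above is a classic permutation test: scramble the fossil ages, recompute a habitat-match statistic, and see how often chance pairings do as well as the real record. A minimal sketch, in which `habitat_match` is a made-up stand-in for the paper's actual habitat measure:

```python
import random

def habitat_match(ages, sites, suitability):
    """Toy statistic: mean climate suitability at each fossil's
    (age, site) pairing. The real study compared simulated climate
    at each fossil's time and place; this is only a stand-in."""
    return sum(suitability[(a, s)] for a, s in zip(ages, sites)) / len(ages)

def shuffle_test(ages, sites, suitability, n=2000, seed=1):
    """Shuffle fossil ages like a deck of cards and count how often a
    random age-site pairing matches climate as well as the real one."""
    rng = random.Random(seed)
    observed = habitat_match(ages, sites, suitability)
    shuffled, hits = list(ages), 0
    for _ in range(n):
        rng.shuffle(shuffled)
        if habitat_match(shuffled, sites, suitability) >= observed:
            hits += 1
    return observed, hits / n  # a small fraction means the ages carry real signal
```

If the true ages line up with favourable climate far more often than the shuffled ones do, the pairing is unlikely to be chance, which is the logic behind the paper's claim that habitats tracked the real sequence of climate change.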

"The next question we set out to address was whether the habitats of the different human species overlapped in space and time. Past contact zones provide crucial information on potential species successions and admixture," said Prof. Pasquale Raia from the Università di Napoli Federico II, Naples, Italy, who together with his research team compiled the dataset of human fossils and archeological artefacts used in this study. From the contact zone analysis, the researchers then derived a hominin family tree, according to which Neanderthals and likely Denisovans derived from the Eurasian clade of Homo heidelbergensis around 500-400 thousand years ago, whereas Homo sapiens' roots can be traced back to Southern African populations of late Homo heidelbergensis around 300 thousand years ago.

"Our climate-based reconstruction of hominin lineages is quite similar to recent estimates obtained from either genetic data or the analysis of morphological differences in human fossils, which increases our confidence in the results," remarks Dr. Jiaoyang Ruan, co-author of the study and postdoctoral research fellow at the IBS Center for Climate Physics.

The new study was made possible by using one of South Korea's fastest supercomputers named Aleph. Located at the headquarters of the Institute for Basic Science in Daejeon, Aleph ran non-stop for over 6 months to complete the longest comprehensive climate model simulation to date. "The model generated 500 Terabytes of data, enough to fill up several hundred hard disks," said Dr. Kyung-Sook Yun, a researcher at the IBS Center for Climate Physics who conducted the experiments. "It is the first continuous simulation with a state-of-the-art climate model that covers earth's environmental history of the last 2 million years, representing climate responses to the waxing and waning of ice-sheets, changes in past greenhouse gas concentrations, as well as the marked transition in the frequency of glacial cycles around 1 million years ago," adds Dr. Yun.

"So far, the paleoanthropological community has not utilized the full potential of such continuous paleoclimate model simulations. Our study clearly illustrates the value of well-validated climate models to address fundamental questions on our human origins," says Prof. Christoph Zollikofer from the University of Zurich, Switzerland and co-author of the study.

Going beyond the question of early human habitats, and times and places of human species' origins, the research team further addressed how humans may have adapted to varying food resources over the past 2 million years. "When we looked at the data for the five major hominin groups, we discovered an interesting pattern. Early African hominins around 2-1 million years ago preferred stable climatic conditions. This constrained them to relatively narrow habitable corridors. Following a major climatic transition about 800 thousand years ago, a group known under the umbrella term Homo heidelbergensis adapted to a much wider range of available food resources, which enabled them to become global wanderers, reaching remote regions in Europe and eastern Asia," said Elke Zeller, PhD student at Pusan National University and co-author of the study.

What do you see when you listen to music?

Are we all imagining the same thing when we listen to music, or are our experiences hopelessly subjective? In other words, is music a truly universal language?

To investigate those questions, an international team of researchers (including a classical pianist, a rock drummer and a concert bassist) asked hundreds of people what stories they imagined when listening to instrumental music. The results appeared recently in the Proceedings of the National Academy of Sciences.

The researchers, led by Princeton's Elizabeth Margulis and Devin McAuley of Michigan State University, discovered that listeners in Michigan and Arkansas imagined very similar scenes, while listeners in China envisioned completely different stories.

"These results paint a more complex picture of music's power," said Margulis, a professor of music who uses theoretical, behavioral and neuroimaging methodologies to investigate the dynamic experience of listeners. "Music can generate remarkably similar stories in listeners' minds, but the degree to which these imagined narratives are shared depends on the degree to which culture is shared across listeners."

The 622 participants came from three regions across two continents: two suburban college towns in middle America -- one in Arkansas and the other in Michigan -- and a group from Dimen, a village in rural China where the primary language is Dong, a tonal language not related to Mandarin, and where the residents have little access to Western media.

All three groups of listeners -- in Arkansas, Michigan and Dimen -- heard the same 32 musical stimuli: 60-second snippets of instrumental music, half from Western music and half from Chinese music, all without lyrics. After each musical excerpt, they provided free-response descriptions of the stories they envisioned while they listened.

The results were striking. Listeners in Arkansas and Michigan described very similar stories, often using the same words, while the Dimen listeners envisioned stories that were similar to each other but very different from those of American listeners.

For example, a musical passage identified only as W9 brought to mind a sunrise over a forest, with animals waking and birds chirping for American listeners, while those in Dimen pictured a man blowing a leaf on a mountain, singing a song to his beloved. For musical passage C16, Arkansas and Michigan listeners described a cowboy, sitting alone in the desert sun, looking out over an empty town; participants in Dimen imagined a man in ancient times sorrowfully contemplating the loss of his beloved.

Quantifying similarities between free-response stories required huge amounts of natural language data processing. The tools and strategies that they developed will be useful in future studies, said Margulis, who is also the director of Princeton's Music Cognition lab. "Being able to map out these semantic overlaps, using tools from natural language processing, is exciting and very promising for future studies that, like this one, straddle the border between the humanities and the sciences."

"It's amazing," said co-author Benjamin Kubit, a drummer and a postdoctoral research associate previously in the Princeton Neuroscience Institute and now in the Department of Music. "You can take two random people who grew up in a similar environment, have them listen to a song they haven't heard before, ask them to imagine a narrative, and you'll find similarities. However, if those two people don't share a culture or geographical location, you won't see that same kind of similarity in experience. So while we imagine music can bring people together, the opposite can also be true -- it can distinguish between sets of people with a different background or culture."

Though the researchers had carefully ensured that the pieces they chose had never appeared in a movie soundtrack or any other setting that would prescribe visuals, the same music sparked very similar visuals in hundreds of listeners -- unless they had grown up in a different cultural context.

"It's stunning to me that some of these visceral, hard-to-articulate, imagined responses we have to music can actually be widely shared," said Margulis. "There's something about that that's really puzzling and compelling, especially because the way we encounter music in 2022 is often solitary, over headphones. But it turns out, it's still a shared experience, almost like a shared dream. I find it really surprising and fascinating -- with the caveat, of course, that it's not universally shared, but depends on a common set of cultural experiences."

Newborns’ brains already organized into functional networks

Right from birth, human brains are organized into networks that support mental functions such as vision and attention, a new study shows.

Previous studies had shown that adults have seven such functional networks in the brain. This study, the first to take a fine-grained, whole-brain approach in newborns, found five of those networks are operating at birth.

Crucially, the study also found individual variability in those networks in newborns, which may have implications for how genetics affects behavior in adults.

"For centuries, humans have wondered about what makes them unique and the role of genetic programming versus our lifetime of experience," said Zeynep Saygin, senior author of the study and assistant professor of psychology at The Ohio State University.

"Our study shows variability in the brain at birth that may be related to some of the behavioral differences we see in adults."

The study, published recently in the journal NeuroImage, was led by M. Fiona Molloy, a psychology graduate student at Ohio State.

The researchers analyzed fMRI scans of the brains of 267 newborns, most less than a week old, who were part of the Developing Human Connectome Project. All infants were scanned for 15 minutes while they were asleep.

The study involved analysis of the smallest bits of brain possible with MRI -- called voxels or volumetric pixels -- to see how the signals of each voxel were related to other voxels in the brain.

"Even when we're sleeping, the brain is active and different parts are communicating with each other," Saygin said.

"We identify networks by finding which parts of the brain show similar patterns of activity at the same time -- for example when one area activates, the other does too. They are talking to each other."
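The principle Saygin describes -- regions belong to the same network when their activity rises and falls together -- can be sketched with plain correlation. The toy version below uses made-up time courses and a simple threshold; the study's actual parcellation methods are far more elaborate:

```python
import math
from itertools import combinations

def pearson(x, y):
    """Pearson correlation between two equal-length time courses."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy) if sx and sy else 0.0

def find_networks(series, threshold=0.8):
    """Group voxels into 'networks': connected components of the graph
    whose edges link voxels with strongly correlated activity."""
    n = len(series)
    adj = {i: set() for i in range(n)}
    for i, j in combinations(range(n), 2):
        if pearson(series[i], series[j]) > threshold:
            adj[i].add(j)
            adj[j].add(i)
    seen, networks = set(), []
    for i in range(n):
        if i in seen:
            continue
        stack, component = [i], []
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            component.append(v)
            stack.extend(adj[v] - seen)
        networks.append(sorted(component))
    return networks
```

Four toy voxels in which the first two fire together and the last two fire together would come out as two separate networks.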

Findings showed five networks in newborns that resembled those found in adults: the visual, default, sensorimotor, ventral attention and high-level vision networks.

Adults have two additional networks not found in the brains of newborns: the control and limbic networks. These are both involved with higher-level functions, Saygin explained.

The control network allows adults to make plans to meet goals. The limbic network is involved in emotional regulation.

"Babies have little cognitive control and emotional regulation, so it is not surprising that these networks aren't developed," Saygin said.

"But one possibility would have been that they are set up at birth and just need to be honed. That's not what we found, though. Those networks are not there at all yet and must develop through experience."

The researchers also examined individual differences in the brain networks of the newborns studied. Results showed that the ventral attention network showed the most variability in the newborns. This is the network involved in directing attention to important stimuli encountered in the world, especially something that may be unexpected.

"Our results suggest that the ventral attention network is a stable source of individual variability that exists at birth and perhaps persists through the lifetime," she said.

In adults, this individual variability in network organization has been linked to behavior and different disorders.

"We see individual differences in network organization as early as birth, and it could be interesting to see if these differences predict behavior or risk of psychological disorders later in life," Molloy said.

In another analysis, the researchers used tissue samples of human brains available through the Allen Human Brain Atlas to explore how differences in the brain networks in the newborns may be tied to differences in gene expression -- the process of turning on or activating genes.

They found multiple genes from the brain tissue samples that may have led to the specific brain organizations they found in individual newborns in the study.

"This might uncover a potential genetic basis for why we're seeing these differences in the networks of newborns in our study," she said.

Apr 12, 2022

4 billion-year-old relic from early solar system heading our way

An enormous comet -- approximately 80 miles across, more than twice the width of Rhode Island -- is heading our way at 22,000 miles per hour from the edge of the solar system. Fortunately, it will never get closer than 1 billion miles from the sun -- slightly farther out than the orbit of Saturn -- and that closest approach will be in 2031.

Comets, among the oldest objects in the solar system, are icy bodies that were unceremoniously tossed out of the solar system in a gravitational pinball game among the massive outer planets, said David Jewitt. The UCLA professor of planetary science and astronomy co-authored a new study of the comet in the Astrophysical Journal Letters. The evicted comets took up residence in the Oort cloud, a vast reservoir of far-flung comets encircling the solar system out to many billions of miles into deep space, he said.

A typical comet's spectacular multimillion-mile-long tail, which makes it look like a skyrocket, belies the fact that the source at the heart of the fireworks is a solid nucleus of ice mixed with dust -- essentially a dirty snowball. This huge one, called Comet C/2014 UN271 and discovered by astronomers Pedro Bernardinelli and Gary Bernstein, could be as large as 85 miles across.

"This comet is literally the tip of the iceberg for many thousands of comets that are too faint to see in the more distant parts of the solar system," Jewitt said. "We've always suspected this comet had to be big because it is so bright at such a large distance. Now we confirm it is."

This comet has the largest nucleus astronomers have ever seen on a comet. Jewitt and his colleagues determined its size using NASA's Hubble Space Telescope. The nucleus is about 50 times larger than those of most known comets, and its mass is estimated at 500 trillion tons, a hundred thousand times greater than the mass of a typical comet found much closer to the sun.

"This is an amazing object, given how active it is when it's still so far from the sun," said lead author Man-To Hui, who earned his doctorate from UCLA in 2019 and is now with the Macau University of Science and Technology in Taipa, Macau. "We guessed the comet might be pretty big, but we needed the best data to confirm this."

So the researchers used Hubble to take five photos of the comet on Jan. 8, 2022, and incorporated radio observations of the comet into their analysis.

The comet is now less than 2 billion miles from the sun and in a few million years will loop back to its nesting ground in the Oort cloud, Jewitt said.

Comet C/2014 UN271 was first serendipitously observed in 2010, when it was 3 billion miles from the sun. Since then, it has been intensively studied by ground and space-based telescopes.

The challenge in measuring this comet was distinguishing the solid nucleus from the huge dusty coma -- the cloud of dust and gas -- enveloping it. The comet is currently too far away for its nucleus to be visually resolved by Hubble. Instead, the Hubble data show a bright spike of light at the nucleus' location. Hui and his colleagues next made a computer model of the surrounding coma and adjusted it to fit the Hubble images. Then, they subtracted the glow of the coma, leaving behind the nucleus.
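The fit-and-subtract step can be illustrated in one dimension. The sketch below assumes the coma brightness falls off as A/r (a standard idealisation for a steady coma) and fits the amplitude A to the outer pixels, where the point-like nucleus contributes nothing; the team's actual analysis fitted a full 2D coma model to the Hubble images:

```python
def nucleus_flux(profile, coma_fit_start=5):
    """Estimate a point-source nucleus hidden under a 1/r coma.
    `profile` maps integer radius r (in pixels, r >= 1) to brightness.
    The coma amplitude A is least-squares fitted on the outer pixels,
    then the coma model A/r is subtracted at the innermost pixel."""
    outer = [r for r in sorted(profile) if r >= coma_fit_start]
    A = sum(profile[r] / r for r in outer) / sum(1.0 / r ** 2 for r in outer)
    return profile[1] - A  # coma model A/r evaluated at r = 1

# Synthetic check: a coma of amplitude 100 plus a nucleus
# adding 40 units of brightness at the central pixel.
profile = {r: 100.0 / r for r in range(1, 20)}
profile[1] += 40.0
print(round(nucleus_flux(profile), 6))  # -> 40.0
```

The residual left at the centre after the coma is removed is the nucleus signal, which is what the team then compared against the ALMA radio measurements.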

Hui and his team compared the brightness of the nucleus to earlier radio observations from the Atacama Large Millimeter/submillimeter Array, or ALMA, in Chile. The new Hubble measurements are close to the earlier size estimates from ALMA, but convincingly suggest a darker nucleus surface than previously thought.

"It's big and it's blacker than coal," Jewitt said.

The comet has been falling toward the sun for well over 1 million years. The Oort cloud is thought to be the nesting ground for trillions of comets. Jewitt thinks the Oort cloud extends from a few hundred times the distance between the sun and the Earth to at least a quarter of the way out to the distance of the nearest stars to our sun, in the Alpha Centauri system.

The Oort cloud's comets were tossed out of the solar system billions of years ago by the gravitation of the massive outer planets, according to Jewitt. The far-flung comets travel back toward the sun and planets only if their orbits are disturbed by the gravitational tug of a passing star, the professor said.

Study sheds new light on the origin of civilization

New research from the University of Warwick, the Hebrew University of Jerusalem, Reichman University, Universitat Pompeu Fabra and the Barcelona School of Economics challenges the conventional theory that the transition from foraging to farming drove the development of complex, hierarchical societies by creating agricultural surplus in areas of fertile land.

In The Origin of the State: Land Productivity or Appropriability?, published in the April issue of the Journal of Political Economy, Professors Joram Mayshar, Omer Moav and Luigi Pascali show that high land productivity on its own does not lead to the development of tax-levying states.

It is the adoption of cereal crops that is the key factor for the emergence of hierarchy.

The authors theorise that this is because cereals must be harvested and stored in accessible locations, making them easier to appropriate as tax than root crops, which remain in the ground and are less storable.

The researchers demonstrate a causal effect of cereal cultivation on the emergence of hierarchy using empirical evidence drawn from multiple data sets spanning several millennia, and find no similar effect for land productivity.

Professor Mayshar said: "A theory linking land productivity and surplus to the emergence of hierarchy has developed over a few centuries and became conventional in thousands of books and articles. We show, both theoretically and empirically, that this theory is flawed."

Underpinning the study, Mayshar, Moav and Pascali developed and examined a large number of data sets including the level of hierarchical complexity in society; the geographic distribution of wild relatives of domesticated plants; and land suitability for various crops to explore why in some regions, despite thousands of years of successful farming, well-functioning states did not emerge, while states that could tax and provide protection to lives and property emerged elsewhere.

Professor Pascali said: "Using these novel data, we were able to show that complex hierarchies, like complex chiefdoms and states, arose in areas in which cereal crops, which are easy to tax and to expropriate, were de facto the only available crops. Paradoxically, the most productive lands, those in which not only cereals but also roots and tubers were available and productive, did not experience the same political developments."

They also employed the natural experiment of the Columbian Exchange, the interchange of crops between the New World and the Old World in the late 15th century which radically changed land productivity and the productivity advantage of cereals over roots and tubers in most countries in the world.

Professor Pascali said: "Constructing these new data sets, investigating case studies, and developing the theory and empirical strategy took us nearly a decade of hard work. We are very pleased to see that the paper is finally printed in a journal with the standing of the JPE."

Professor Moav said: "Following the transition from foraging to farming, hierarchical societies and, eventually, tax-levying states have emerged. These states played a crucial role in economic development by providing protection, law and order, which eventually enabled industrialization and the unprecedented welfare enjoyed today in many countries."

"The conventional theory is that this disparity is due to differences in land productivity. The conventional argument is that food surplus must be produced before a state can tax farmers' crops, and therefore that high land productivity plays the key role."

Professor Mayshar added: "We challenge the conventional productivity theory, contending that it was not an increase in food production that led to complex hierarchies and states, but rather the transition to reliance on appropriable cereal grains that facilitate taxation by the emerging elite. When it became possible to appropriate crops, a taxing elite emerged, and this led to the state."

Read more at Science Daily

Critical benefits of snowpack for winter wheat are diminishing

University of Minnesota scientists are partnering with a global team to study the complex effects of climate change on winter crops.

Warming winters may sound like a welcome change for some farmers because the change in temperature could reduce freezing stress on plants and create more ideal conditions for growing overwinter cash crops and winter cover crops. However, when looking at climate change from a cross-seasonal perspective and accounting for declining snowpack, researchers are finding that the whole picture isn't so sunny.

Reduced snow may result in greater exposure of winter crops to freezing and could mean greater risk of agricultural drought.

In a new study published in Nature Climate Change, Zhenong Jin, Ph.D., an assistant professor in the Department of Bioproducts and Biosystems Engineering at the University of Minnesota, led an international team in researching the implications that could be associated with warmer winters and declining snowpack, using winter wheat (the largest winter crop in the U.S.) as an example.

"Although the implications of changes in snow for agricultural irrigation are beginning to be understood, the consequences of such for predominantly rainfed winter crops such as winter wheat remain largely unknown. There might be risks for being overoptimistic about growing overwinter crops under climate change," said Jin.

Researchers used panel regression, a powerful statistical method to analyze repeated observations over time, to attribute the interannual variability of winter wheat yield to multiple interactive environmental factors. These factors included cold season freezing degree days, growing degree days, rainfall and snowfall during the growing season and snow cover fraction during frozen days.
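
As a toy numpy illustration of that panel-regression idea -- entirely synthetic data and invented coefficients, not the study's model, covariates or data -- one can demean each "county" series to strip out time-invariant fixed effects, then regress yield anomalies on freezing degree days and a freezing-times-snow-cover interaction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical panel: 50 "counties" observed over 21 seasons (1999-2019)
n_units, n_years = 50, 21
fdd  = rng.gamma(2.0, 50.0, (n_units, n_years))   # freezing degree days
snow = rng.uniform(0.0, 1.0, (n_units, n_years))  # snow cover fraction
unit_effect = rng.normal(0.0, 5.0, (n_units, 1))  # fixed county effects

# Invented "truth": freezing hurts yield; snow cover insulates against it
y = (unit_effect - 0.02 * fdd + 0.015 * fdd * snow
     + rng.normal(0.0, 0.5, (n_units, n_years)))

def demean(a):
    """Within transformation: subtracting each unit's own mean removes its
    fixed effect, leaving only within-unit (interannual) variation."""
    return a - a.mean(axis=1, keepdims=True)

X = np.column_stack([demean(fdd).ravel(), demean(fdd * snow).ravel()])
beta, *_ = np.linalg.lstsq(X, demean(y).ravel(), rcond=None)
# beta[0] recovers ~ -0.02 (freezing damage),
# beta[1] recovers ~ +0.015 (the insulating snow interaction)
```

A positive interaction coefficient is the panel analogue of the study's headline finding: with more snow cover, the same freezing stress causes less yield loss.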

The researchers found:

  • From 1999 to 2019, snow cover insulation reduced yield losses from freezing stress by 22%.
  • Projections show that future reductions in snow cover could offset up to one-third of the yield benefit from reduced frost.

"Our study highlighted the potential freezing risk in winters with decreased snow cover, especially when seedlings were exposed to comparatively warmer conditions that caused loss of winter-hardiness, which can cause significant yield losses of winter crops," said Peng Zhu, Ph.D., a researcher at the Climate and Environment Sciences Laboratory of the Pierre Simon Laplace Institute, who co-led this study.

This research will help inform breeders as they consider the complex tradeoffs among warming, reduced snowpack and occasional freezing threats when developing climate-smart cultivars.

These results also highlight the necessity of improving the representation of snow associated processes in crop models to better evaluate climate change effects and adaptation potential in cropping systems.

"It is worth noting that in some cropping systems freezing stress is appreciated, since it helps farmers control pests and diseases and snow is even removed or at least made more compact by farmers to increase the freezing of the soil," said Jin. "When data becomes available, future studies might also need to account for the influence of snow on pests and diseases to comprehensively understand what future changes in snowpack mean for the cropping system."

Read more at Science Daily

Bacteria generate electricity from methane

Generating power while purifying the environment of greenhouse gases should be achievable using bacteria. In a new publication, microbiologists from Radboud University have demonstrated that it is possible to make methane-consuming bacteria generate power in the lab. The study will be published in Frontiers in Microbiology on April 12.

The bacteria, Candidatus Methanoperedens, use methane to grow and naturally occur in fresh water such as ditches and lakes. In the Netherlands, the bacteria mostly thrive in locations where the surface and groundwater are contaminated with nitrogen, as they require nitrate to break down methane.

The researchers initially wanted to know more about the conversion processes occurring in the microorganism. In addition, they were also curious whether it would be possible to use it to generate power. "This could be very useful for the energy sector," says microbiologist and author Cornelia Welte. "In the current biogas installations, methane is produced by microorganisms and subsequently burnt, which drives a turbine, thus generating power. Less than half of the biogas is converted into power, and this is the maximum achievable capacity. We want to evaluate whether we can do better using microorganisms."

A kind of battery

Fellow microbiologists from Nijmegen have previously shown that it is possible to generate power using anammox bacteria that use ammonium during the process instead of methane. "The process in these bacteria is basically the same," says microbiologist Heleen Ouboter. "We create a kind of battery with two terminals, where one of these is a biological terminal and the other one is a chemical terminal. We grow the bacteria on one of the electrodes, to which the bacteria donate electrons resulting from the conversion of methane."

Through this approach, the researchers managed to convert 31 percent of the methane into electricity, but they are aiming for higher efficiencies. "We will continue focusing on improving the system," Welte says.

From Science Daily

Apr 11, 2022

Neptune is cooler than we thought: Study reveals unexpected changes in atmospheric temperatures

New research led by space scientists at the University of Leicester has revealed how temperatures in Neptune's atmosphere have unexpectedly fluctuated over the past two decades.

The study, published today (Monday) in Planetary Science Journal, used observations in thermal-infrared wavelengths beyond the visible light spectrum, effectively sensing heat emitted from the planet's atmosphere.

An international team of researchers, including scientists from Leicester and NASA's Jet Propulsion Laboratory (JPL), combined all existing thermal infrared images of Neptune gathered from multiple observatories over almost two decades. These include the European Southern Observatory's Very Large Telescope and Gemini South telescope in Chile, together with the Subaru Telescope, Keck Telescope, and the Gemini North telescope, all in Hawai'i, and spectra from NASA's Spitzer Space Telescope.

By analysing the data, the researchers were able to reveal a more complete picture of trends in Neptune's temperatures than ever before.

But to the researchers' surprise, these collective datasets show a decline in Neptune's thermal brightness since reliable thermal imaging began in 2003, indicating that globally-averaged temperatures in Neptune's stratosphere -- the layer of the atmosphere just above its active weather layer -- have dropped by roughly 8 degrees Celsius (14 degrees Fahrenheit) between 2003 and 2018.

Dr Michael Roman, Postdoctoral Research Associate at the University of Leicester and lead author on the paper, said:

"This change was unexpected. Since we have been observing Neptune during its early southern summer, we would expect temperatures to be slowly growing warmer, not colder."

Neptune has an axial tilt, and so it experiences seasons, just like Earth. However, given its great distance from the Sun, Neptune takes over 165 years to complete an orbit, and so its seasons change slowly, lasting over 40 Earth-years each.

Dr Glenn Orton, Senior Research Scientist at JPL and co-author on the study, noted:

"Our data cover less than half of a Neptune season, so no one was expecting to see large and rapid changes."

Yet, at Neptune's south pole, the data reveal a different and surprisingly dramatic change. A combination of observations from Gemini North in 2019 and Subaru in 2020 reveal that Neptune's polar stratosphere warmed by roughly 11°C (~20°F) between 2018 and 2020, reversing the previous globally-averaged cooling trend. Such polar warming has never been observed on Neptune before.

The cause of these unexpected stratospheric temperature changes is currently unknown, and the results challenge scientists' understanding of Neptune's atmospheric variability.

Dr Roman continued:

"Temperature variations may be related to seasonal changes in Neptune's atmospheric chemistry, which can alter how effectively the atmosphere cools.

"But random variability in weather patterns or even a response to the 11-year solar activity cycle may also have an effect."

The 11-year solar cycle (marked by periodic variation in the Sun's activity and sunspots) has been previously suggested to affect Neptune's visible brightness, and the new study reveals a possible, but tentative, correlation between the solar activity, stratospheric temperatures, and the number of bright clouds seen on Neptune.

Follow-up observations of the temperature and cloud patterns are needed to further assess any possible connection in the years ahead.

Answers to these mysteries and more will come from the James Webb Space Telescope (JWST), which is set to observe both ice giants, Uranus and Neptune, later this year.

Leigh Fletcher, Professor of Planetary Science at the University of Leicester, will lead such observations with allocated time of JWST's suite of instruments. Professor Fletcher, also a co-author on this study, said:

"The exquisite sensitivity of the space telescope's mid-infrared instrument, MIRI, will provide unprecedented new maps of the chemistry and temperatures in Neptune's atmosphere, helping to better identify the nature of these recent changes."

Read more at Science Daily

Certain personality traits associated with cognitive functioning late in life

People who are organized, with high levels of self-discipline, may be less likely to develop mild cognitive impairment as they age, while people who are moody or emotionally unstable are more likely to experience cognitive decline late in life, according to research published by the American Psychological Association.

The research, published in the Journal of Personality and Social Psychology, focused on the role three of the so-called "Big Five" personality traits (conscientiousness, neuroticism and extraversion) play in cognitive functioning later in life.

"Personality traits reflect relatively enduring patterns of thinking and behaving, which may cumulatively affect engagement in healthy and unhealthy behaviors and thought patterns across the lifespan," said lead author Tomiko Yoneda, PhD, of the University of Victoria. "The accumulation of lifelong experiences may then contribute to susceptibility of particular diseases or disorders, such as mild cognitive impairment, or contribute to individual differences in the ability to withstand age-related neurological changes."

Individuals who score high in conscientiousness tend to be responsible, organized, hard-working and goal-directed. Those who score high on neuroticism have low emotional stability and have a tendency toward mood swings, anxiety, depression, self-doubt and other negative feelings. Extraverts draw energy from being around others and directing their energies toward people and the outside world. They tend to be enthusiastic, gregarious, talkative and assertive, according to Yoneda.

To better understand the relationship between personality traits and cognitive impairment later in life, researchers analyzed data from 1,954 participants in the Rush Memory and Aging Project, a longitudinal study of older adults living in the greater Chicago metropolitan region and northeastern Illinois. Participants without a formal diagnosis of dementia were recruited from retirement communities, church groups, and subsidized senior housing facilities beginning in 1997 and continuing to the present. Participants received a personality assessment and agreed to annual assessments of their cognitive abilities. The study included participants who had received at least two annual cognitive assessments or one assessment prior to death.

Participants who scored either high on conscientiousness or low in neuroticism were significantly less likely to progress from normal cognition to mild cognitive impairment over the course of the study.

"Scoring approximately six more points on a conscientiousness scale ranging from 0 to 48 was associated with a 22% decreased risk of transitioning from normal cognitive functioning to mild cognitive impairment," said Yoneda. "Additionally, scoring approximately seven more points on a neuroticism scale of 0 to 48 was associated with a 12% increased risk of transition."
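
Assuming, as a simplification on our part, that the reported risks compound multiplicatively per scale point (the study's survival model may parameterize this differently), those figures work out to roughly a 4% lower risk per conscientiousness point and about 1.6% higher risk per neuroticism point:

```python
# Figures from the study: 22% lower risk of progressing to mild cognitive
# impairment per ~6 conscientiousness points; 12% higher risk per ~7
# neuroticism points. Per-point rates under a multiplicative assumption:
per_point_consc = (1 - 0.22) ** (1 / 6)  # ≈ 0.959, ~4% lower per point
per_point_neur = (1 + 0.12) ** (1 / 7)   # ≈ 1.016, ~1.6% higher per point
print(per_point_consc, per_point_neur)
```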

Researchers found no association between extraversion and ultimate development of mild cognitive impairment, but they did find that participants who scored high on extraversion -- along with those who scored either high on conscientiousness or low in neuroticism -- tended to maintain normal cognitive functioning longer than others.

For example, 80-year-old participants who were high in conscientiousness were estimated to live nearly two years longer without cognitive impairment compared with individuals who were low in conscientiousness. Participants high in extraversion were estimated to maintain healthy cognition for approximately a year longer. In contrast, high neuroticism was associated with at least one less year of healthy cognitive functioning, highlighting the harms associated with the long-term experience of perceived stress and emotional instability, according to Yoneda.

Additionally, individuals lower in neuroticism and higher in extraversion were more likely to recover to normal cognitive function after receiving a previous diagnosis of mild cognitive impairment, suggesting that these traits may be protective even after an individual starts to progress to dementia. In the case of extraversion, this finding may be indicative of the benefits of social interaction for improving cognitive outcomes, according to Yoneda.

Read more at Science Daily

Converting solar energy to electricity on demand

The researchers behind an energy system that makes it possible to capture solar energy, store it for up to eighteen years and release it when and where it is needed have now taken the system a step further. After previously demonstrating how the energy can be extracted as heat, they have now succeeded in getting the system to produce electricity, by connecting it to a thermoelectric generator. Eventually, the research -- developed at Chalmers University of Technology, Sweden -- could lead to self-charging electronics using stored solar energy on demand.

"This is a radically new way of generating electricity from solar energy. It means that we can use solar energy to produce electricity regardless of weather, time of day, season, or geographical location. It is a closed system that can operate without causing carbon dioxide emissions," says research leader Kasper Moth-Poulsen, Professor at the Department of Chemistry and Chemical Engineering at Chalmers.

The new technology is based on the solar energy system MOST -- Molecular Solar Thermal Energy Storage Systems, developed at Chalmers University of Technology. Very simply, the technology is based on a specially designed molecule that changes shape when it comes into contact with sunlight. The research has already attracted great interest worldwide when presented at earlier stages.

The new study, published in Cell Reports Physical Science and carried out in collaboration with researchers in Shanghai, takes the solar energy system a step further, detailing how it can be combined with a compact thermoelectric generator to convert solar energy into electricity.

Ultra-thin chip converts heat into electricity

The Swedish researchers sent their specially designed molecule, loaded with solar energy, to colleagues Tao Li and Zhiyu Hu at Shanghai Jiao Tong University, where the energy was released and converted into electricity using the generator they developed there. Essentially, Swedish sunshine was sent to the other side of the world and converted into electricity in China.

"The generator is an ultra-thin chip that could be integrated into electronics such as headphones, smart watches and telephones. So far, we have only generated small amounts of electricity, but the new results show that the concept really works. It looks very promising," says researcher Zhihang Wang from Chalmers University of Technology.

Fossil free, emissions free

The research has great potential for renewable and emissions-free energy production. But a lot of research and development remains before we will be able to charge our technical gadgets or heat our homes with the system's stored solar energy.

"Together with the various research groups included in the project, we are now working to streamline the system. The amount of electricity or heat it can extract needs to be increased. Even if the energy system is based on simple basic materials, it needs to be adapted to be sufficiently cost-effective to produce, and thus possible to launch more broadly," says Kasper Moth-Poulsen.

More about the MOST technology

Molecular Solar Thermal Energy Storage Systems (MOST) is a closed energy system based on a specially designed molecule of carbon, hydrogen and nitrogen, which, when hit by sunlight, changes shape into an energy-rich isomer -- a molecule made up of the same atoms arranged in a different way. The isomer can then be stored in liquid form for later use when needed, such as at night or in winter. The researchers have refined the system to the point that it is now possible to store the energy for up to 18 years. A specially designed catalyst releases the saved energy as heat while returning the molecule to its original shape, so it can then be reused in the heating system. Now, in combination with a micrometer-thin thermoelectric generator, the energy system can also generate electricity to order.

Read more at Science Daily

Children think farm animals deserve same treatment as pets

Children differ dramatically from adults in their moral views on animals, new research shows.

University of Exeter researchers asked children aged 9-11 about the moral status and treatment of farm animals (pigs), pets (dogs) and people.

Unlike adults, children say farm animals should be treated the same as people and pets, and think eating animals is less morally acceptable than adults do.

The findings suggest that "speciesism" -- a moral hierarchy that gives different value to different animals -- is learned during adolescence.

"Humans' relationship with animals is full of ethical double standards," said Dr Luke McGuire, from the University of Exeter.

"Some animals are beloved household companions, while others are kept in factory farms for economic benefit.

"Judgements seem to largely depend on the species of the animal in question: dogs are our friends, pigs are food."

The research team -- including the University of Oxford -- surveyed 479 people, all living in England, from three age groups: 9-11, 18-21 and 29-59.

The two adult groups had relatively similar views -- suggesting attitudes to animals typically change between the ages of 11 and 18.

"Something seems to happen in adolescence, where that early love for animals becomes more complicated and we develop more speciesism," said Dr McGuire.

"It's important to note that even adults in our study thought eating meat was less morally acceptable than eating animal products like milk.

"So aversion to animals -- including farm animals -- being harmed does not disappear entirely."

The study also found that, as people age, they are more likely to classify farm animals as "food" rather than "pets" -- while children were equally likely to consider pigs to fall into either of these categories.

While adjusting attitudes is a natural part of growing up, Dr McGuire said the "moral intelligence of children" is also valuable.

"If we want people to move towards more plant-based diets for environmental reasons, we have to disrupt the current system somewhere," he said.

Read more at Science Daily

SARS-CoV-2: Neutralization of BA.1 and BA.2 by therapeutic monoclonal antibodies

The SARS-CoV-2 Omicron BA.1 sublineage has been supplanted in many countries by the BA.2 sublineage. Although Omicron is responsible for less severe forms in the general population, immunocompromised people are still at higher risk of developing severe forms of COVID-19. Several monoclonal antibodies are currently available in clinical practice as a preventive treatment for these patients. Scientists from the Institut Pasteur, the CNRS, the Vaccine Research Institute (VRI), in collaboration with Orléans Regional Hospital, the Paris Public Hospital Network (AP-HP), KU Leuven (the Catholic University of Leuven) and Université Paris Cité, studied the sensitivity of Omicron BA.1 and BA.2 to nine monoclonal antibodies, some of which are used in pre-exposure prophylaxis in immunocompromised individuals. The scientists showed a loss of neutralizing activity against BA.1 and BA.2 in people treated with two antibody cocktails (Ronapreve® or Evusheld®). These findings were published in Nature Medicine on March 23, 2022.

The Omicron sublineage BA.2 has become increasingly common and is now dominant in several countries, including France. Scientists from the Institut Pasteur's Virus and Immunity Unit (a joint research unit with the CNRS) and the VRI began by studying the sensitivity of the Omicron BA.1 and BA.2 sublineages to therapeutic monoclonal antibodies in a cell culture system. This step involved isolating an infectious BA.2 strain in collaboration with the Rega Institute at KU Leuven. They then examined the efficacy of pre-exposure prophylaxis in immunocompromised individuals at risk of developing severe COVID-19. The scientists first described the in vitro sensitivity of BA.2 to nine therapeutic antibodies, as compared to the Delta variant and Omicron BA.1. They went on to examine the clinical implications of these observations by measuring the neutralizing activity of the antibodies in sera from 29 individuals who had been treated with Ronapreve® (a cocktail of two antibodies developed by Roche/Regeneron) and/or Evusheld® (a cocktail of two antibodies developed by AstraZeneca).

The scientists compared the ability of the patients' sera to neutralize BA.1 and BA.2 between 3 and 30 days after treatment. The results of the study show that therapeutic sensitivity varies depending on the Omicron sublineage.

"We show that the antibodies and corresponding sera are inactive or only weakly active against BA.1, but more active against BA.2. As compared to the Delta variant, neutralizing titers were more markedly decreased against BA.1 (344-fold) than BA.2 (9-fold)," explained Timothée Bruel, lead author of the study and a scientist in the Virus and Immunity Unit at the Institut Pasteur (a joint research unit with the CNRS) with regard to Evusheld®.

Four Omicron infections were also reported among the 29 patients treated with antibodies (including one severe case). "This shows that, in this case, treatment does not fully protect against infection or against severe forms," explained Thierry Prazuck, co-last author of the study and Head of the Infectious Diseases Department at Orléans Regional Hospital.

"To our knowledge, this is the first study to directly describe the seroneutralization of individuals treated with monoclonal antibodies against Delta, BA.1 and BA.2, and to link the results with infections. BA.1, and to a lesser extent BA.2, is less sensitive to Evusheld® and Ronapreve® than Delta. This suggests that these treatments are probably less clinically effective against Omicron infection than against Delta," commented Olivier Schwartz, last author of the study and Head of the Virus and Immunity Unit at the Institut Pasteur (a joint research unit with the CNRS).

Read more at Science Daily

Apr 10, 2022

Shedding new light on controlling material properties

Materials scientists may soon be able to control material properties with light.

A team of researchers at Kyoto University and Kurume Institute of Technology has discovered a scaling law that determines high-order harmonic generation in the layered perovskite material Ca2RuO4.

High-order harmonic generation is a nonlinear optical phenomenon where extreme ultraviolet photons are emitted by a material as a result of interactions with high intensity light.

"The phenomenon, which was first observed in atomic gas systems, has since paved the way to attosecond science," says study author Kento Uchida. "But it is slightly more unpredictable in some strongly correlated solids, like Ca2RuO4."

Due to the strong interaction between electrons in these solids, the characteristics of high-order harmonic generation can only be established by understanding how these electrons move in the presence of light.

To tackle this question, which had never been tested experimentally, the team set out to observe the relationship between temperature and photon emission in Ca2RuO4. They used a mid-infrared pulse to measure and map out high-order harmonic generation intensity at temperatures ranging from an extremely cold 50 Kelvin to a moderate 290 Kelvin.

At the low end, the team recorded high-order harmonic generation several hundred times more intense than at room temperature. Photon emissions continued to intensify with increasing gap energy -- the energy required for electrons to conduct electricity -- along with the drop in temperature.

The team found that such emissions occurred in the Mott-insulating phase of the material, where strong repulsion between electrons and a high gap energy turn the material from an electrical conductor into an insulator.

"We discovered that high-order harmonics in strongly correlated materials highly depend on the gap energy of the materials," explains Uchida.

This scaling law can direct theoretical studies towards more refined descriptions of non-equilibrium electron dynamics in strongly correlated materials: a central issue in condensed matter physics.

Read more at Science Daily

Are people more willing to empathize with animals or with other humans?

Stories about animals such as Harambe the gorilla and Cecil the lion often sweep the media as they pull at people's heartstrings. But are people more likely to feel empathy for animals than humans?

A new Penn State study led by Daryl Cameron, associate professor of psychology and senior research associate at the Rock Ethics Institute, found that the answer is complicated. The findings could have implications for, among other things, how messaging to the public about issues like new environmental policies is framed.

The researchers found that when people were asked to choose between empathizing with a human stranger or an animal -- in this study, a koala -- the participants were more likely to choose empathizing with a fellow human.

However, in a second pair of studies, the researchers had participants take part in two separate tasks: one in which they could choose whether or not they wanted to empathize with a person, and one in which they could choose whether or not they wanted to empathize with an animal. This time, people were more likely to choose empathy when faced with an animal than when faced with a person.

Cameron said the findings -- recently published in a special issue on empathy in the Journal of Social Psychology -- suggest that when people are deciding whether to engage in empathy, context matters.

"It's possible that if people are seeing humans and animals in competition, it might lead to them preferring to empathize with other humans," Cameron said. "But if you don't see that competition, and the situation is just deciding whether to empathize with an animal one day and a human the other, it seems that people don't want to engage in human empathy but they're a little bit more interested in animals."

According to the researchers, empathy is the process of thinking about another living thing's suffering and experiences as if they were one's own. For example, not just having compassion for someone who is sad after an argument with a friend, but actually imagining and sharing in what that person is feeling.

While there are plenty of examples of people feeling empathy and compassion for animals, Cameron said there is also a theory that it may be more difficult for people to feel true empathy for animals because their minds are different from those of humans.

In the first study, the researchers recruited 193 people to participate in an experiment in which they were asked to make a series of choices between empathizing with a human or an animal. If they chose a human, they were shown a photo of a college-aged adult and asked to mentally share their experience. If they chose an animal, they were shown a photo of a koala and asked to do the same. The experiment was based on a novel empathy selection task developed in Cameron's Empathy and Moral Psychology Lab.

Cameron said that when participants had to choose between empathizing with a person or an animal in the first study, it's possible the participants thought it might be easier to empathize with another human.

"Participants indicated that empathizing with animals felt more challenging, and that belief of empathy being more difficult drove them to choose animal empathy less," Cameron said. "It's possible that people felt empathizing with a mind that's unlike our own was more challenging than imagining the experience of another human."

In the second pair of studies, the researchers recruited an additional 192 and 197 participants, respectively, who completed a pair of choice tasks.

In the first task, the participants were given the choice between empathizing with a person or not engaging in empathy and simply describing the person. Then, in a separate task, the participants were given the same choice but with an animal.

"Once humans and animals were no longer in competition, the story changed," Cameron said. "When people had the chance to either empathize with or remain detached from a human stranger, people avoided empathy, which replicates the previous studies we've done. For animals, though, they didn't show that avoidance pattern. And actually, when we decoupled humans from animals, people actually were more likely to choose to empathize with an animal than a human."

While further studies will need to be done to see if these findings extend to other animals, Cameron said the results could have interesting implications. For example, if it's true that people empathize less with animals if animal interests are pitted against human interests, that could affect how people feel about environmental policies.

"If people perceive choices about empathy in a way that makes it seem like we need to choose between humans or animals with no compromise -- for example, choosing between using a parcel of land or conserving it for animals -- they may be more likely to side with humans," Cameron said. "But there may be ways in which those conversations could be tweaked to shape how people are thinking about managing their empathy."