Jul 27, 2024

Researchers explore the effects of stellar magnetism on potential habitability of exoplanets

Interest in Earth-like planets orbiting within the habitable zone of their host stars has surged, driven by the quest to discover life beyond our solar system. But the habitability of such planets, known as exoplanets, is influenced by more than just their distance from the star.

A new study by Rice University's David Alexander and Anthony Atkinson extends the definition of a habitable zone for planets to include their star's magnetic field. This factor, well studied in our solar system, can have significant implications for life on other planets, according to the research published in The Astrophysical Journal on July 9.

The presence and strength of a planet's magnetic field and its interaction with the host star's magnetic field are pivotal factors in a planet's ability to support life. An exoplanet needs a strong magnetic field to protect it from stellar activity, and it must orbit far enough from its star to avoid a direct and potentially catastrophic magnetic connection.

"The fascination with exoplanets stems from our desire to understand our own planet better," said Alexander, professor of physics and astronomy, director of the Rice Space Institute and member of the Texas Aerospace Research and Space Economy Consortium. "Questions about the Earth's formation and habitability are the key drivers behind our study of these distant worlds."

Magnetic interactions

Traditionally, scientists have focused on the "Goldilocks Zone," the area around a star where conditions are just right for liquid water to exist. By adding the star's magnetic field to the habitability criteria, Alexander's team offers a more nuanced understanding of where life might thrive in the universe.

The investigation focused on the magnetic interactions between planets and their host stars, a concept known as space weather. On Earth, space weather is driven by the sun and affects our planet's magnetic field and atmosphere. For the study, the researchers simplified the complex modeling usually required to understand these interactions.

The researchers characterized stellar activity using a measure of a star's activity known as the Rossby number (Ro): the ratio of the star's rotation period to its convective turnover time. This helped them estimate the star's Alfvén radius -- the distance at which the stellar wind effectively becomes decoupled from the star.

Planets within this radius would not be viable candidates for habitability because they would be magnetically connected back to the star, leading to rapid erosion of their atmosphere.

By applying this approach, the team examined 1,546 exoplanets to determine if their orbits lay inside or outside their star's Alfvén radius.
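
To make that screening concrete, the sketch below encodes the two quantities described above in Python: the Rossby number as the ratio of rotation period to convective turnover time, and a simple check of whether a planet's orbit lies beyond its star's Alfvén radius. It is a minimal illustration only; the example numbers and the mapping from Rossby number to Alfvén radius are placeholders, not the calibration used in the study.

```python
# Minimal sketch of the screening logic (illustrative values only).

def rossby_number(rotation_period_days: float, convective_turnover_days: float) -> float:
    """Rossby number Ro = stellar rotation period / convective turnover time."""
    return rotation_period_days / convective_turnover_days

def outside_alfven_radius(orbit_au: float, alfven_radius_au: float) -> bool:
    """A planet remains a habitability candidate only if it orbits beyond the Alfven radius,
    i.e. outside the region where it would stay magnetically connected to the star."""
    return orbit_au > alfven_radius_au

# Hypothetical example star and planet (numbers are assumptions, not from the paper):
ro = rossby_number(rotation_period_days=30.0, convective_turnover_days=70.0)
print(f"Rossby number: {ro:.2f}")

# Suppose an empirical scaling (not reproduced here) gives an Alfven radius of 0.05 AU:
print("Candidate?", outside_alfven_radius(orbit_au=0.09, alfven_radius_au=0.05))
```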

Life elsewhere in the galaxy

The study found that only two planets, K2-3 d and Kepler-186 f, of the 1,546 examined met all the conditions for potential habitability. These planets are Earth-sized, orbit at a distance conducive to the formation of liquid water, lie outside their star's Alfvén radius and have strong enough magnetic fields to protect them from stellar activity.

"While these conditions are necessary for a planet to host life, they do not guarantee it," said Atkinson, a graduate student of physics and astronomy and lead author of the study. "Our work highlights the importance of considering a wide range of factors when searching for habitable planets."

Read more at Science Daily

A recipe for zero-emissions fuel: Soda cans, seawater, and caffeine

A sustainable source for clean energy may lie in old soda cans and seawater.

MIT engineers have found that when the aluminum in soda cans is exposed in its pure form and mixed with seawater, the solution bubbles up and naturally produces hydrogen -- a gas that can be subsequently used to power an engine or fuel cell without generating carbon emissions. What's more, this simple reaction can be sped up by adding a common stimulant: caffeine.

In a study appearing today in the journal Cell Reports Physical Science, the researchers show they can produce hydrogen gas by dropping pretreated, pebble-sized aluminum pellets into a beaker of filtered seawater. The aluminum is pretreated with a rare-metal alloy that effectively scrubs aluminum into a pure form that can react with seawater to generate hydrogen. The salt ions in the seawater can in turn attract and recover the alloy, which can be reused to generate more hydrogen, in a sustainable cycle.

The team found that this reaction between aluminum and seawater successfully produces hydrogen gas, though slowly. On a lark, they tossed into the mix some coffee grounds and found, to their surprise, that the reaction picked up its pace.

In the end, the team discovered that a low concentration of imidazole -- an active ingredient in caffeine -- is enough to significantly speed up the reaction, producing the same amount of hydrogen in just five minutes, compared to two hours without the added stimulant.

The researchers are developing a small reactor that could run on a marine vessel or underwater vehicle. The vessel would hold a supply of aluminum pellets (recycled from old soda cans and other aluminum products), along with a small amount of gallium-indium and caffeine. These ingredients could be periodically funneled into the reactor, along with some of the surrounding seawater, to produce hydrogen on demand. The hydrogen could then fuel an onboard engine to drive a motor or generate electricity to power the ship.

"This is very interesting for maritime applications like boats or underwater vehicles because you wouldn't have to carry around seawater -- it's readily available," says study lead author Aly Kombargi, a PhD student in MIT's Department of Mechanical Engineering. "We also don't have to carry a tank of hydrogen. Instead, we would transport aluminum as the 'fuel,' and just add water to produce the hydrogen that we need."

The study's co-authors include Enoch Ellis, an undergraduate in chemical engineering; Peter Godart PhD '21, who has founded a company to recycle aluminum as a source of hydrogen fuel; and Douglas Hart, MIT professor of mechanical engineering.

Shields up

The MIT team, led by Hart, is developing efficient and sustainable methods to produce hydrogen gas, which is seen as a "green" energy source that could power engines and fuel cells without generating climate-warming emissions.

One drawback to fueling vehicles with hydrogen is that some designs would require the gas to be carried onboard like traditional gasoline in a tank -- a risky setup, given hydrogen's volatile potential. Hart and his team have instead looked for ways to power vehicles with hydrogen without having to constantly transport the gas itself.

They found a possible workaround in aluminum -- a naturally abundant and stable material that, when in contact with water, undergoes a straightforward chemical reaction that generates hydrogen and heat.

The reaction, however, comes with a sort of Catch-22: While aluminum can generate hydrogen when it mixes with water, it can only do so in a pure, exposed state. The instant aluminum meets with oxygen, such as in air, the surface immediately forms a thin, shield-like layer of oxide that prevents further reactions. This barrier is the reason hydrogen doesn't immediately bubble up when you drop a soda can in water.

In previous work, using fresh water, the team found they could pierce aluminum's shield and keep the reaction with water going by pretreating the aluminum with a small amount of rare metal alloy made from a specific concentration of gallium and indium. The alloy serves as an "activator," scrubbing away any oxide buildup and creating a pure aluminum surface that is free to react with water. When they ran the reaction in fresh, de-ionized water, they found that one pretreated pellet of aluminum produced 400 milliliters of hydrogen in just five minutes. They estimate that just 1 gram of pellets would generate 1.3 liters of hydrogen in the same amount of time.
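
Those figures line up with simple stoichiometry: aluminum and water react as 2Al + 6H2O -> 2Al(OH)3 + 3H2, giving 1.5 mol of hydrogen per mol of aluminum. The back-of-envelope check below uses only standard constants (no data from the study) and lands close to the quoted 1.3 liters per gram.

```python
# Back-of-envelope hydrogen yield from aluminum + water: 2Al + 6H2O -> 2Al(OH)3 + 3H2.
MOLAR_MASS_AL = 26.98     # g/mol
H2_PER_AL = 1.5           # mol H2 per mol Al from the balanced reaction
MOLAR_VOLUME_25C = 24.45  # L/mol for an ideal gas at ~25 degC and 1 atm

grams_al = 1.0
mol_h2 = grams_al / MOLAR_MASS_AL * H2_PER_AL
liters_h2 = mol_h2 * MOLAR_VOLUME_25C
print(f"{grams_al} g Al -> about {liters_h2:.2f} L H2")  # ~1.4 L, close to the ~1.3 L reported
```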

But to further scale up the system would require a significant supply of gallium indium, which is relatively expensive and rare.

"For this idea to be cost-effective and sustainable, we had to work on recovering this alloy postreaction," Kombargi says.

By the sea

In the team's new work, they found they could retrieve and reuse gallium indium using a solution of ions. The ions -- atoms or molecules with an electrical charge -- protect the metal alloy from reacting with water and help it to precipitate into a form that can be scooped out and reused.

"Lucky for us, seawater is an ionic solution that is very cheap and available," says Kombargi, who tested the idea with seawater from a nearby beach. "I literally went to Revere Beach with a friend and we grabbed our bottles and filled them, and then I just filtered out algae and sand, added aluminum to it, and it worked with the same consistent results."

He found that hydrogen indeed bubbled up when he added aluminum to a beaker of filtered seawater. And he was able to scoop out the gallium indium afterward. But the reaction happened much more slowly than it did in fresh water. It turns out that the ions in seawater act to shield gallium indium, such that it can coalesce and be recovered after the reaction. But the ions have a similar effect on aluminum, building up a barrier that slows its reaction with water.

As they looked for ways to speed up the reaction in seawater, the researchers tried out various unconventional ingredients.

"We were just playing around with things in the kitchen, and found that when we added coffee grounds into seawater and dropped aluminum pellets in, the reaction was quite fast compared to just seawater," Kombargi says.

To see what might explain the speedup, the team reached out to colleagues in MIT's chemistry department, who suggested they try imidazole -- an active ingredient in caffeine, which happens to have a molecular structure that can pierce through aluminum (allowing the material to continue reacting with water), while leaving gallium indium's ionic shield intact.

"That was our big win," Kombargi says. "We had everything we wanted: recovering the gallium indium, plus the fast and efficient reaction."

The researchers believe they have the essential ingredients to run a sustainable hydrogen reactor. They plan to test it first in marine and underwater vehicles. They've calculated that such a reactor, holding about 40 pounds of aluminum pellets, could power a small underwater glider for about 30 days by pumping in surrounding seawater and generating hydrogen to power a motor.
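
As a rough plausibility check on that sizing, the hydrogen yield above can be converted into an electrical energy budget. Every number below other than the aluminum mass is an assumption (heating value, fuel-cell efficiency, mission length), so this is an order-of-magnitude estimate rather than the team's actual design math; the result, a few tens of watts of continuous power over 30 days, gives a sense of scale only.

```python
# Order-of-magnitude energy budget for ~40 lb of aluminum pellets (assumed values throughout).
LB_TO_KG = 0.4536
MOLAR_MASS_AL = 26.98     # g/mol
H2_PER_AL = 1.5           # mol H2 per mol Al
MOLAR_MASS_H2 = 2.016     # g/mol
H2_LHV_MJ_PER_KG = 120.0  # lower heating value of hydrogen
FUEL_CELL_EFF = 0.5       # assumed electrical conversion efficiency

al_kg = 40 * LB_TO_KG
h2_kg = al_kg * 1000 / MOLAR_MASS_AL * H2_PER_AL * MOLAR_MASS_H2 / 1000
electric_mj = h2_kg * H2_LHV_MJ_PER_KG * FUEL_CELL_EFF
avg_watts = electric_mj * 1e6 / (30 * 24 * 3600)  # spread over a 30-day mission
print(f"~{h2_kg:.1f} kg H2, ~{electric_mj:.0f} MJ electric, ~{avg_watts:.0f} W average over 30 days")
```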

Read more at Science Daily

'Dancing molecules' heal cartilage damage

In November 2021, Northwestern University researchers introduced an injectable new therapy, which harnessed fast-moving "dancing molecules," to repair tissues and reverse paralysis after severe spinal cord injuries.

Now, the same research group has applied the therapeutic strategy to damaged human cartilage cells. In the new study, the treatment activated the gene expression necessary to regenerate cartilage within just four hours. And, after only three days, the human cells produced protein components needed for cartilage regeneration.

The researchers also found that, as the molecular motion increased, the treatment's effectiveness also increased. In other words, the molecules' "dancing" motions were crucial for triggering the cartilage growth process.

The study was published today (July 26) in the Journal of the American Chemical Society.

"When we first observed therapeutic effects of dancing molecules, we did not see any reason why it should only apply to the spinal cord," said Northwestern's Samuel I. Stupp, who led the study. "Now, we observe the effects in two cell types that are completely disconnected from one another -- cartilage cells in our joints and neurons in our brain and spinal cord. This makes me more confident that we might have discovered a universal phenomenon. It could apply to many other tissues."

An expert in regenerative nanomedicine, Stupp is Board of Trustees Professor of Materials Science and Engineering, Chemistry, Medicine and Biomedical Engineering at Northwestern, where he is founding director of the Simpson Querrey Institute for BioNanotechnology and its affiliated center, the Center for Regenerative Nanomedicine. Stupp has appointments in the McCormick School of Engineering, Weinberg College of Arts and Sciences and Feinberg School of Medicine. Shelby Yuan, a graduate student in the Stupp laboratory, was primary author of the study.

Big problem, few solutions

As of 2019, nearly 530 million people around the globe were living with osteoarthritis, according to the World Health Organization. A degenerative disease in which tissues in joints break down over time, osteoarthritis is a common health problem and leading cause of disability.

In patients with severe osteoarthritis, cartilage can wear so thin that joints essentially transform into bone on bone -- without a cushion between. Not only is this incredibly painful, but patients' joints can also no longer function properly. At that point, the only effective treatment is joint replacement surgery, which is expensive and invasive.

"Current treatments aim to slow disease progression or postpone inevitable joint replacement," Stupp said. "There are no regenerative options because humans do not have an inherent capacity to regenerate cartilage in adulthood."

What are 'dancing molecules'?

Stupp and his team posited that "dancing molecules" might encourage the stubborn tissue to regenerate. Previously invented in Stupp's laboratory, dancing molecules are assemblies that form synthetic nanofibers comprising tens to hundreds of thousands of molecules with potent signals for cells. By tuning their collective motions through their chemical structure, Stupp discovered the moving molecules could rapidly find and properly engage with cellular receptors, which also are in constant motion and extremely crowded on cell membranes.

Once inside the body, the nanofibers mimic the extracellular matrix of the surrounding tissue. By matching the matrix's structure, mimicking the motion of biological molecules and incorporating bioactive signals for the receptors, the synthetic materials are able to communicate with cells.

"Cellular receptors constantly move around," Stupp said. "By making our molecules move, 'dance' or even leap temporarily out of these structures, known as supramolecular polymers, they are able to connect more effectively with receptors."

Motion matters

In the new study, Stupp and his team looked to the receptors for a specific protein critical for cartilage formation and maintenance. To target this receptor, the team developed a new circular peptide that mimics the bioactive signal of the protein, which is called transforming growth factor beta-1 (TGFb-1).

Then, the researchers incorporated this peptide into two different molecules that interact to form supramolecular polymers in water, each with the same ability to mimic TGFb-1. The researchers designed one supramolecular polymer with a special structure that enabled its molecules to move more freely within the large assemblies. The other supramolecular polymer, however, restricted molecular movement.

"We wanted to modify the structure in order to compare two systems that differ in the extent of their motion," Stupp said. "The intensity of supramolecular motion in one is much greater than the motion in the other one."

Although both polymers mimicked the signal to activate the TGFb-1 receptor, the polymer with rapidly moving molecules was much more effective. In some ways, it was even more effective than the protein that activates the TGFb-1 receptor in nature.

"After three days, the human cells exposed to the long assemblies of more mobile molecules produced greater amounts of the protein components necessary for cartilage regeneration," Stupp said. "For the production of one of the components in cartilage matrix, known as collagen II, the dancing molecules containing the cyclic peptide that activates the TGF-beta1 receptor were even more effective than the natural protein that has this function in biological systems."

What's next?

Stupp's team is currently testing these systems in animal studies and adding additional signals to create highly bioactive therapies.

"With the success of the study in human cartilage cells, we predict that cartilage regeneration will be greatly enhanced when used in highly translational pre-clinical models," Stupp said. "It should develop into a novel bioactive material for regeneration of cartilage tissue in joints."

Stupp's lab is also testing the ability of dancing molecules to regenerate bone -- and already has promising early results, which likely will be published later this year. Simultaneously, he is testing the molecules in human organoids to accelerate the process of discovering and optimizing therapeutic materials.

Stupp's team also continues to build its case to the Food and Drug Administration, aiming to gain approval for clinical trials to test the therapy for spinal cord repair.

Read more at Science Daily

Jul 26, 2024

Chemical analyses find hidden elements from Renaissance astronomer Tycho Brahe's alchemy laboratory

In the Middle Ages, alchemists were notoriously secretive and didn't share their knowledge with others. The Danish astronomer Tycho Brahe was no exception. Consequently, we don't know precisely what he did in the alchemical laboratory located beneath his combined residence and observatory, Uraniborg, on the now Swedish island of Ven.

Only a few of his alchemical recipes have survived, and today, there are very few remnants of his laboratory. Uraniborg was demolished after his death in 1601, and the building materials were scattered for reuse.

However, during an excavation in 1988-1990, some pottery and glass shards were found in Uraniborg's old garden. These shards were believed to originate from the basement's alchemical laboratory. Five of these shards -- four glass and one ceramic -- have now undergone chemical analyses to determine which elements the original glass and ceramic containers came into contact with.

The chemical analyses were conducted by Professor Emeritus and expert in archaeometry, Kaare Lund Rasmussen from the Department of Physics, Chemistry, and Pharmacy, University of Southern Denmark. Senior researcher and museum curator Poul Grinder-Hansen from the National Museum of Denmark placed the analyses in their historical context.

Enriched levels of trace elements were found on four of the five shards, while one glass shard showed no specific enrichments. The study has been published in the journal Heritage Science.

"Most intriguing are the elements found in higher concentrations than expected -- indicating enrichment and providing insight into the substances used in Tycho Brahe's alchemical laboratory," said Kaare Lund Rasmussen.

The enriched elements are nickel, copper, zinc, tin, antimony, tungsten, gold, mercury, and lead, and they have been found on either the inside or outside of the shards.

Most of them are not surprising for an alchemist's laboratory. Gold and mercury were -- at least among the upper echelons of society -- commonly known and used against a wide range of diseases.

"But tungsten is very mysterious. Tungsten had not even been described at that time, so what should we infer from its presence on a shard from Tycho Brahe's alchemy workshop?," said Kaare Lund Rasmussen.

Tungsten was first described and produced in pure form more than 180 years later by the Swedish chemist Carl Wilhelm Scheele. Tungsten occurs naturally in certain minerals, and perhaps the element found its way to Tycho Brahe's laboratory through one of these minerals. In the laboratory, the mineral might have undergone some processing that separated the tungsten, without Tycho Brahe ever realizing it.

However, there is also another possibility that Professor Kaare Lund Rasmussen emphasizes has no evidence whatsoever -- but which could be plausible.

Already in the first half of the 1500s, the German mineralogist Georgius Agricola described something strange in tin ore from Saxony, which caused problems when he tried to smelt tin. Agricola called this strange substance in the tin ore "Wolfram" (German for Wolf's froth, later renamed to tungsten in English).

"Maybe Tycho Brahe had heard about this and thus knew of tungsten's existence. But this is not something we know or can say based on the analyses I have done. It is merely a possible theoretical explanation for why we find tungsten in the samples," said Kaare Lund Rasmussen.

Tycho Brahe belonged to the branch of alchemists who, inspired by the German physician Paracelsus, tried to develop medicine for various diseases of the time: plague, syphilis, leprosy, fever, stomach aches, etc. But he distanced himself from the branch that tried to create gold from less valuable minerals and metals.

In line with the other medical alchemists of the time, he kept his recipes close to his chest and shared them only with a few selected individuals, such as his patron, Emperor Rudolph II, who allegedly received Tycho Brahe's prescriptions for plague medicine.

We know that Tycho Brahe's plague medicine was complicated to produce. It contained theriac, which was one of the standard remedies for almost everything at the time and could have up to 60 ingredients, including snake flesh and opium. It also contained copper or iron vitriol (sulphates), various oils, and herbs.

After various filtrations and distillations, the first of Brahe's three recipes against plague was obtained. This could be made even more potent by adding tinctures of, for example, coral, sapphires, hyacinths, or potable gold.

"It may seem strange that Tycho Brahe was involved in both astronomy and alchemy, but when one understands his worldview, it makes sense. He believed that there were obvious connections between the heavenly bodies, earthly substances, and the body's organs. Thus, the Sun, gold, and the heart were connected, and the same applied to the Moon, silver, and the brain; Jupiter, tin, and the liver; Venus, copper, and the kidneys; Saturn, lead, and the spleen; Mars, iron, and the gallbladder; and Mercury, mercury, and the lungs. Minerals and gemstones could also be linked to this system, so emeralds, for example, belonged to Mercury," explained Poul Grinder-Hansen.

Read more at Science Daily

Great Salt Lake a significant source of greenhouse gas emissions

Newly announced research by the Royal Ontario Museum (ROM) examining greenhouse gas emissions from the drying lake bed of Great Salt Lake, Utah, calculates that 4.1 million tons of carbon dioxide and other greenhouse gases were released in 2020. This research suggests that drying lake beds are an overlooked but potentially significant source of greenhouse gases, which may further increase due to climate change. These results were announced in the paper, "A desiccating saline lake bed is a significant source of anthropogenic greenhouse gas emissions," published in the journal One Earth.

"Human-caused desiccation of Great Salt Lake is exposing huge areas of lake bed and releasing massive quantities of greenhouse gases into the atmosphere," said Soren Brothers, who led this research and is ROM's Allan and Helaine Shiff Curator of Climate Change. "The significance of lake desiccation as a driver of climate change needs to be addressed in greater detail and considered in climate change mitigation and watershed planning."

From year to year, Great Salt Lake's water level varies, largely depending on the volume of meltwater that flows into the lake from the surrounding mountains -- from record highs in the 1980s to a record low in 2022. However, it is human water consumption by agriculture, industry, and municipalities, drawing ever-increasing amounts of freshwater, that has depleted the lake over the years. Elsewhere around the world, these same competing uses for water are having a significant impact on lake levels. As iconic saline lakes such as the Aral Sea, Lake Urmia, the Caspian Sea, and Great Salt Lake dry up, they not only destroy critical habitat for biodiversity and create air quality conditions that deteriorate human health, but they also accelerate climate change as newly exposed sediments emit carbon dioxide and methane.

The research team measured carbon dioxide and methane emissions from the exposed sediments of Great Salt Lake, Utah, from April to November 2020, and compared them with aquatic emissions estimates to determine the anthropogenic greenhouse gas emissions associated with desiccation. Calculations based on this sampling indicate the lake bed emitted 4.1 million tons of greenhouse gases to the atmosphere, primarily (94%) as carbon dioxide, constituting an approximately 7% increase to Utah's human-caused greenhouse gas emissions.

Fieldwork was conducted while Soren Brothers was Assistant Professor of Limnology at Utah State University, and lead author, Melissa Cobo, was a master's student at USU. Co-author Tobias Goldhammer is a collaborating researcher at the Leibniz Institute for Freshwater Research (IGB Institute) in Berlin, Germany. Measurements of carbon dioxide and methane gases were made every two weeks from the dried-up lake bed using a portable greenhouse gas analyzer attached to a closed chamber. Seven sites at one location at the south end of the lake were visited repeatedly over the course of the year, and another three locations were sampled during an intensive three-day campaign to determine spatial variability across the lake, which at 1,700 square miles (4,400 square kilometres) is the largest saline lake in the western hemisphere.
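
Closed-chamber measurements of this kind are usually converted to a flux by fitting the rate at which the gas concentration rises inside the chamber and scaling by chamber geometry via the ideal gas law. The sketch below shows that standard conversion; the chamber dimensions, pressure, and fitted slope are invented, not values from the study.

```python
# Standard closed-chamber flux conversion (all numbers are illustrative assumptions).
R = 8.314          # J mol^-1 K^-1
pressure = 86_000  # Pa, assumed air pressure at the lake's elevation
temp_k = 298.15    # assumed chamber air temperature, K
volume = 0.015     # assumed chamber volume, m^3
area = 0.07        # assumed chamber footprint, m^2

slope_ppm_per_s = 0.25                       # invented fitted rate of CO2 increase in the chamber
slope_molfrac_per_s = slope_ppm_per_s * 1e-6

# moles of gas emitted per m^2 per second, reported in micromoles
flux_umol_m2_s = slope_molfrac_per_s * pressure * volume / (R * temp_k * area) * 1e6
print(f"CO2 flux: {flux_umol_m2_s:.2f} umol m^-2 s^-1")
```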

As methane is 28 times more powerful a greenhouse gas than carbon dioxide, the global warming impact of these emissions was calculated as "carbon dioxide equivalents" to account for the greater impact of methane. Ultimately, these data indicated that greenhouse gas emissions from the dried lake bed were strongly and positively related to warm temperatures, even at sites that have been exposed for over two decades. To determine whether the lake historically would have been a significant source of greenhouse gases, the team carried out measurements of near-shore greenhouse gas emissions from the lake, as well as analyzing water chemistry collected by the team and government data sets. Together, these analyses showed that the original lake was not likely a significant source of greenhouse gases to the atmosphere, making the dried-up lake bed a novel driver of atmospheric warming.
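
The carbon dioxide equivalent arithmetic itself is simple: each gas total is weighted by its global warming potential (28 for methane here) and summed. The sketch below uses hypothetical gas totals chosen only to be consistent with the headline figures above (about 4.1 million tons CO2e, roughly 94% of it from CO2 itself).

```python
# CO2-equivalent accounting (hypothetical totals, consistent with the reported headline numbers).
GWP_CH4 = 28  # global warming potential of methane used in the study

def co2_equivalent(co2_tons: float, ch4_tons: float) -> float:
    """Total emissions expressed as tons of CO2-equivalent."""
    return co2_tons + GWP_CH4 * ch4_tons

total = co2_equivalent(co2_tons=3_850_000, ch4_tons=8_800)
share_from_co2 = 3_850_000 / total
print(f"{total / 1e6:.1f} million tons CO2e, {share_from_co2:.0%} from CO2")
```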

Read more at Science Daily

Size doesn't matter for mammals with more complex brains

In many mammal species, the males can be bigger than the females (or vice versa), a trait called sexual size dimorphism (SSD). For example, male elephant seals are around three times bigger than females. In contrast, dolphins have no difference in sizes between the sexes. Humans are somewhere in between, with the average male being larger than the average female, but across the population there is an overlap.

To understand how this trait is associated with genome evolution, scientists from the Milner Centre for Evolution at the University of Bath in the UK looked at similarities between the genomes of 124 species of mammals.

They grouped the genes into families of similar functions and measured the size of these gene families.

They found that those species with a big difference in size between the sexes had bigger gene families linked to olfactory functions (sense of smell) and smaller gene families associated with brain development.

Therefore, this could also mean that those species with very little difference in sizes between males and females (termed monomorphic) had bigger gene families associated with brain development.

Publishing in Nature Communications, the authors suggest that in species with a large SSD, traits such as the sense of smell could be important for identifying mates and territories.

In contrast, mammals with a smaller SSD are potentially investing in their brain development and tend to have more complex social structures.

This means they compete for mates in ways other than simply relying on size to determine who to breed with.

Dr Benjamin Padilla-Morales, from the Milner Centre for Evolution at the University of Bath, led the research.

He said: "We were surprised to see such a strong statistical link between a large SSD and expanded gene families for olfactory function. Even more interestingly, the gene families under contraction were linked with brain development.

"This could mean that those species with a small SSD have bigger gene families associated with brain function and tend to show more complex behaviours such as biparental care and monogamous breeding systems.

"It shows that while size in some species is an important sexual selection pressure for evolution, for others it doesn't matter so much.

"It makes us ask the question how traits like SSD are shaping the evolution of our brains and genomes."

In future work, the researchers want to investigate how testes size impacts the evolution of mammals' genomes.

Read more at Science Daily

Scientists assess how large dinosaurs could really get

A new study published today in the scientific journal Ecology and Evolution looks at the maximum possible sizes of dinosaurs, using the carnivore Tyrannosaurus rex as an example. Using computer modelling, Dr. Jordan Mallon of the Canadian Museum of Nature and Dr. David Hone of Queen Mary University of London produced estimates that T. rex might have been 70% heavier than what the fossil evidence suggests.

The researchers assert that the huge sizes attained by many dinosaurs make them a source of endless fascination, raising the question as to how these animals evolved to be so big. There are perennial claims and counter-claims about which dinosaur species was the largest of its group or even the largest ever.

Most dinosaur species are known from only one or a handful of specimens, so it's extraordinarily unlikely that their size ranges will include the largest individuals that ever existed. The question remains: how big were the largest individuals, and are we likely to find them?

To address this question, Mallon and Hone used computer modelling to assess a population of T. rex. They factored in variables such as population size, growth rate, lifespan, the incompleteness of the fossil record, and more.

T. rex was chosen for the model because it is a familiar dinosaur for which many of these details are already well estimated. Body-size variance at adulthood, which is still poorly known in T. rex, was modelled with and without sex differences, and is based on examples of living alligators, chosen for their large size and close kinship with the dinosaurs.

The palaeontologists found that the largest known T. rex fossils probably fall in the 99th percentile, representing the top 1% of body size, but to find an animal in the 99.99th percentile (a one-in-ten-thousand individual), scientists would need to excavate fossils at the current rate for another 1,000 years.

The computer models suggest that the largest individual that could have existed (one in 2.5 billion animals) may have been 70% more massive than the current largest-known T. rex specimens (an estimated 15 tonnes vs 8.8 tonnes) and 25% longer (15 metres vs 12 metres).
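
The underlying statistical idea can be illustrated in a few lines: if adult body mass follows some distribution, the single largest individual among N animals that ever lived is expected near the 1 - 1/N quantile of that distribution, while the largest fossils we happen to have found sit at a much lower percentile. The lognormal parameters below are placeholders chosen for illustration, not the values used by Mallon and Hone.

```python
# Sketch of the "largest individual ever" logic (distribution parameters are assumptions).
from scipy import stats

N_INDIVIDUALS = 2_500_000_000  # order-of-magnitude number of adult T. rex that ever lived
median_mass_t = 8.0            # hypothetical median adult body mass, tonnes
sigma = 0.14                   # hypothetical lognormal shape parameter

dist = stats.lognorm(s=sigma, scale=median_mass_t)
largest_known = dist.ppf(0.99)                   # ~99th percentile, like the largest known fossils
largest_ever = dist.ppf(1 - 1 / N_INDIVIDUALS)   # expected scale of the single largest individual
print(f"99th percentile: {largest_known:.1f} t, largest ever: {largest_ever:.1f} t")
```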

The values are estimates based on the model, but patterns of discovery of giants of modern species tell us there must have been larger dinosaurs out there that have not yet been found. "Some isolated bones and pieces certainly hint at still larger individuals than for which we currently have skeletons," says Hone.

This study adds to the debates about the largest fossil animals. Many of the largest dinosaurs in various groups are known from a single good specimen, so it's impossible to know if that one animal was a big or small example of the species. An apparently large species might be based on a single giant individual, and a small species based on a particularly tiny individual -- neither of which reflect the average size of their respective species.

The chances that palaeontologists will find the largest ever individuals for a given species are incredibly small. So, despite the giant skeletons that can be seen in museums around the world, the very largest individuals of these species were likely even larger than those on display.

Read more at Science Daily

Jul 25, 2024

Astrophysicists uncover supermassive black hole/dark matter connection in solving the 'final parsec problem'

Researchers have found a link between some of the largest and smallest objects in the cosmos: supermassive black holes and dark matter particles.

Their new calculations reveal that pairs of supermassive black holes (SMBHs) can merge into a single larger black hole because of previously overlooked behaviour of dark matter particles, proposing a solution to the longstanding "final parsec problem" in astronomy.

The research is described in "Self-interacting dark matter solves the final parsec problem of supermassive black hole mergers," published this month in the journal Physical Review Letters.

In 2023, astrophysicists announced the detection of a "hum" of gravitational waves permeating the universe. They hypothesized that this background signal emanated from millions of merging pairs of SMBHs each billions of times more massive than our Sun.

However, theoretical simulations showed that as pairs of these mammoth celestial objects spiral closer together, their approach stalls when they are roughly a parsec apart -- a distance of about three light years -- thereby preventing a merger.

Not only did this "final parsec problem" conflict with the theory that merging SMBHs were the source of the gravitational wave background, it was also at odds with the theory that SMBHs grow from the merger of less massive black holes.

"We show that including the previously overlooked effect of dark matter can help supermassive black holes overcome this final parsec of separation and coalesce," says paper co-author Gonzalo Alonso-Álvarez, a postdoctoral fellow in the Department of Physics at the University of Toronto and the Department of Physics and Trottier Space Institute at McGill University. "Our calculations explain how that can occur, in contrast to what was previously thought."

The paper's co-authors include Professor James Cline from McGill University and the CERN Theoretical Physics Department in Switzerland and Caitlyn Dewar, a master of science student in physics at McGill.

SMBHs are thought to lie in the centres of most galaxies and when two galaxies collide, the SMBHs fall into orbit around each other. As they revolve around each other, the gravitational pull of nearby stars tugs at them and slows them down. As a result, the SMBHs spiral inward toward a merger.

Previous merger models showed that when the SMBHs approached to within roughly a parsec, they began to interact with the dark matter cloud, or halo, in which they are embedded. Those models indicated that the gravity of the spiraling SMBHs throws dark matter particles clear of the system; the resulting sparsity of dark matter means that energy is no longer drawn from the pair, so their mutual orbits stop shrinking.

While those models dismissed the impact of dark matter on the SMBHs' orbits, the new model from Alonso-Álvarez and his colleagues reveals that dark matter particles interact with each other in such a way that they are not dispersed. The density of the dark matter halo remains high enough that interactions between the particles and the SMBHs continue to degrade the SMBHs' orbits, clearing a path to a merger.

"The possibility that dark matter particles interact with each other is an assumption that we made, an extra ingredient that not all dark matter models contain," says Alonso-Álvarez. "Our argument is that only models with that ingredient can solve the final parsec problem."

The background hum generated by these colossal cosmic collisions is made up of gravitational waves of much longer wavelength than those first detected in 2015 by astrophysicists operating the Laser Interferometer Gravitational-Wave Observatory (LIGO). Those gravitational waves were generated by the merger of two black holes, both some 30 times the mass of the Sun.

The background hum has been detected in recent years by scientists operating the Pulsar Timing Array. The array reveals gravitational waves by measuring minute variations in signals from pulsars, rapidly rotating neutron stars that emit strong radio pulses.

"A prediction of our proposal is that the spectrum of gravitational waves observed by pulsar timing arrays should be softened at low frequencies," says Cline. "The current data already hint at this behavior, and new data may be able to confirm it in the next few years."

In addition to providing insight into SMBH mergers and the gravitational wave background signal, the new result also provides a window into the nature of dark matter.

"Our work is a new way to help us understand the particle nature of dark matter," says Alonso-Álvarez. "We found that the evolution of black hole orbits is very sensitive to the microphysics of dark matter and that means we can use observations of supermassive black hole mergers to better understand these particles."

For example, the researchers found that the interactions between dark matter particles they modeled also explain the shapes of galactic dark matter halos.

Read more at Science Daily

How well does tree planting work in climate change fight? It depends

Using trees as a cost-effective tool against climate change is more complicated than simply planting large numbers of them, an international collaboration that includes an Oregon State University scientist has shown.

Jacob Bukoski of the OSU College of Forestry and seven other researchers synthesized data from thousands of reforestation sites in 130 countries and found that roughly half the time it's better just to let nature take its course.

Findings of the study led by Conservation International were published today in Nature Climate Change.

"Trees can play a role in climate change mitigation, for multiple reasons," Bukoski said. "It's pretty easy to understand that forests pull carbon dioxide from the atmosphere and store it, and trees are something pretty much everyone can get behind -- we have seen multiple bipartisan acts for tree planting introduced in Congress. This study brings a nuanced perspective to the whole 'should we plant trees to solve climate change' debate."

Bukoski notes that expanding forests globally has been widely proposed as a key tactic against climate change since forests sequester atmospheric carbon dioxide in their biomass and soils. Harvested timber also stores carbon in the form of wood products.

There are two basic approaches to forest expansion, Bukoski said.

"Generally speaking, we can let forests regenerate on their own, which is slow but cheap, or take a more active approach and plant them, which speeds up growth but is more expensive," he said. "Our study compares these two approaches across reforestable landscapes in low- and middle-income countries, identifying where naturally regenerating or planting forests is likely to make more sense."

Using machine learning and regression models, the scientists found that natural regeneration would be most cost effective over a 30-year period for 46% of the areas studied, and planting would be most cost effective for 54%.

They also determined that using a combination of the two approaches across all areas would be 44% better than natural regeneration alone and 39% better than planting by itself.
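
The benefit of mixing comes from choosing, area by area, whichever approach delivers carbon more cheaply instead of applying one approach everywhere. The toy comparison below, with invented per-area costs, shows how a per-area minimum beats either single strategy; only the selection logic, not the numbers, mirrors the study.

```python
# Toy per-area choice between natural regeneration and planting (costs are invented).
areas = [
    # (name, USD per ton CO2 via natural regeneration, USD per ton CO2 via planting)
    ("area_1", 12.0, 18.0),
    ("area_2", 25.0, 14.0),
    ("area_3", 9.0, 11.0),
    ("area_4", 30.0, 16.0),
]

def mean_cost(choose):
    """Average cost per ton when 'choose' picks a method's cost for each area."""
    return sum(choose(nat, plant) for _, nat, plant in areas) / len(areas)

natural_only = mean_cost(lambda nat, plant: nat)
planting_only = mean_cost(lambda nat, plant: plant)
best_mix = mean_cost(min)  # take the cheaper option in each area

print(f"natural only: {natural_only:.2f}, planting only: {planting_only:.2f}, "
      f"best mix: {best_mix:.2f} USD per ton CO2")
```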

"If your objective is to sequester carbon as quickly and as cheaply as possible, the best option is a mix of both naturally regenerating forests and planting forests." Bukoski said.

The study suggests that natural regeneration is especially cost effective relative to plantation forestry in much of western Mexico, the Andean region, the Southern Cone of South America, West and Central Africa, India, Southern China, Malaysia and Indonesia.

Conversely, plantations are preferable to natural regeneration in much of the Caribbean, Central America, Brazil, northern China, mainland Southeast Asia, the Philippines and North, East and Southern Africa.

"Which method is more cost effective in a given location is a function of multiple factors, including opportunity cost, relative carbon accumulation and harvest rates, and relative implementation costs," Bukoski said.

Other scientists in the collaboration were Jonah Busch and Bronson Griscom of Conservation International, Susan Cook-Patton of The Nature Conservancy, David Kaczan of the World Bank, Yuanyuan Yi of Peking University, Jeff Vincent of Duke University and Matthew Potts of the University of California, Berkeley.

The authors stress that reforestation is a complement to, not a replacement for, reducing emissions from fossil fuels. Achieving the entire mitigation potential of reforestation over 30 years would amount to less than eight months of global greenhouse gas emissions, they note.

The authors add that carbon is just one consideration when growing trees. Biodiversity, demand for wood products, support of local livelihoods, and non-carbon biophysical effects must also be considered when deciding where and how to reforest landscapes.

Read more at Science Daily

How Saharan dust regulates hurricane rainfall

Giant plumes of Sahara Desert dust that gust across the Atlantic can suppress hurricane formation over the ocean and affect weather in North America.

But thick dust plumes can also lead to heavier rainfall -- and potentially more destruction -- from landfalling storms, according to a July 24 study in Science Advances. The research shows a previously unknown relationship between hurricane rainfall and Saharan dust plumes.

"Surprisingly, the leading factor controlling hurricane precipitation is not, as traditionally thought, sea surface temperature or humidity in the atmosphere. Instead, it's Sahara dust," said the corresponding author Yuan Wang, an assistant professor of Earth system science at the Stanford Doerr School of Sustainability.

Previous studies have found that Saharan dust transport may decline dramatically in the coming decades and hurricane rainfall will likely increase due to human-caused climate change.

However, uncertainty remains around the questions of how climate change will affect outflows of dust from the Sahara and how much more rainfall we should expect from future hurricanes. Additional questions surround the complex relationships among Saharan dust, ocean temperatures, and hurricane formation, intensity, and precipitation. Filling in the gaps will be critical to anticipating and mitigating the impacts of climate change.

"Hurricanes are among the most destructive weather phenomena on Earth," said Wang. Even relatively weak hurricanes can produce heavy rains and flooding hundreds of miles inland. "For conventional weather predictions, especially hurricane predictions, I don't think dust has received sufficient attention to this point."

Competing effects

Dust can have competing effects on tropical cyclones, which are classified as hurricanes in the North Atlantic, central North Pacific, and eastern North Pacific when maximum sustained wind speeds reach 74 miles per hour or higher.

"A dust particle can make ice clouds form more efficiently in the core of the hurricane, which can produce more precipitation," Wang explained, referring to this effect as microphysical enhancement. Dust can also block solar radiation and cool sea surface temperatures around a storm's core, which weakens the tropical cyclone.

Wang and colleagues set out to first develop a machine learning model capable of predicting hurricane rainfall, and then identify the underlying mathematical and physical relationships.

The researchers used 19 years of meteorological data and hourly satellite precipitation observations to predict rainfall from individual hurricanes.

The results show a key predictor of rainfall is dust optical depth, a measure of how much light filters through a dusty plume. They revealed a boomerang-shaped relationship in which rainfall increases with dust optical depths between 0.03 and 0.06, and sharply decreases thereafter. In other words, at high concentrations, dust shifts from boosting to suppressing rainfall.

"Normally, when dust loading is low, the microphysical enhancement effect is more pronounced. If dust loading is high, it can more efficiently shield [the ocean] surface from sunlight, and what we call the 'radiative suppression effect' will be dominant," Wang said.

Read more at Science Daily

Neuroscientists discover brain circuitry of placebo effect for pain relief

The placebo effect is very real. This we've known for decades, as seen in real-life observations and the best double-blinded randomized clinical trials researchers have devised for many diseases and conditions, especially pain. And yet, how and why the placebo effect occurs has remained a mystery. Now, neuroscientists have discovered a key piece of the placebo effect puzzle.

Publishing in Nature, researchers at the University of North Carolina School of Medicine -- with colleagues from Stanford, the Howard Hughes Medical Institute, and the Allen Institute for Brain Science -- discovered a pain control pathway that links the cingulate cortex in the front of the brain, through the pons region of the brainstem, to the cerebellum in the back of the brain.

The researchers, led by Greg Scherrer, PharmD, PhD, associate professor in the UNC Department of Cell Biology and Physiology, the UNC Neuroscience Center, and the UNC Department of Pharmacology, then showed that certain neurons and synapses along this pathway are highly activated when mice expect pain relief and experience pain relief, even when there is no medication involved.

"That neurons in our cerebral cortex communicate with the pons and cerebellum to adjust pain thresholds based on our expectations is both completely unexpected, given our previous understanding of the pain circuitry, and incredibly exciting," said Scherrer. "Our results do open the possibility of activating this pathway through other therapeutic means, such as drugs or neurostimulation methods to treat pain."

Scherrer and colleagues said the research provides a new framework for investigating the brain pathways underlying other mind-body interactions and placebo effects beyond the ones involved in pain.

The Placebo Paradox

It is the human experience, in the face of pain, to want to feel better. As a result -- and in conjunction with millennia of evolution -- our brains can search for ways to help us feel better, releasing chemicals that can be measured. Positive thinking and even prayer have been shown to benefit some patients. And the placebo effect -- feeling better even though there was no "real" treatment -- has been documented as a very real phenomenon for decades.

In clinical research, the placebo effect is often seen in what we call the "sham" treatment group. That is, individuals in this group receive a fake pill or intervention that is supposed to be inert; no one in the control group is supposed to see a benefit. Except that the brain is so powerful and individuals so desire to feel better that some experience a marked improvement in their symptoms. Some placebo effects are so strong that individuals are convinced they received a real treatment meant to help them.

In fact, it's thought that some individuals in the "actual" treatment group also derive benefit from the placebo effect. This is one of the reasons why clinical research of therapeutics is so difficult and demands as many volunteers as possible so scientists can parse the treatment benefit from the sham. One way to help scientists do this is to first understand what precisely is happening in the brain of someone experiencing the placebo effect.

Enter the Scherrer lab

The authors of the Nature paper knew that the scientific community's understanding of the biological underpinnings of pain relief through placebo analgesia -- when the positive expectation of pain relief is sufficient for patients to feel better -- came from human brain imaging studies, which showed activity in certain brain regions. Those imaging studies did not have enough precision to show what was actually happening in those brain regions. So Scherrer's team designed a set of meticulous, complementary, and time-consuming experiments to learn in more detail, with single nerve cell precision, what was happening in those regions.

First, the researchers created an assay that generates in mice the expectation of pain relief and then the very real placebo effect of pain relief. Then the researchers used a series of experimental methods to study the intricacies of the anterior cingulate cortex (ACC), which had been previously associated with the pain placebo effect. While mice were experiencing the effect, the scientists used genetic tagging of neurons in the ACC, imaging of calcium in neurons of freely behaving mice, single-cell RNA sequencing techniques, electrophysiological recordings, and optogenetics -- the use of light and fluorescent-tagged genes to manipulate cells.

These experiments helped them see and study the intricate neurobiology of the placebo effect down to the brain circuits, neurons, and synapses throughout the brain.

The scientists found that when mice expected pain relief, the rostral anterior cingulate cortex neurons projected their signals to the pontine nucleus, which had no previously established function in pain or pain relief. And they found that expectation of pain relief boosted signals along this pathway.

"There is an extraordinary abundance of opioid receptors here, supporting a role in pain modulation," Scherrer said. "When we inhibited activity in this pathway, we realized we were disrupting placebo analgesia and decreasing pain thresholds. And then, in the absence of placebo conditioning, when we activated this pathway, we caused pain relief.

Lastly, the scientists found that Purkinje cells -- a distinct class of large branch-like cells of the cerebellum -- showed activity patterns similar to those of the ACC neurons during pain relief expectation. Scherrer and first author Chong Chen, MD, PhD, a postdoctoral research associate in the Scherrer lab, said that this is cellular-level evidence for the cerebellum's role in cognitive pain modulation.

"We all know we need better ways to treat chronic pain, particularly treatments without harmful side effects and addictive properties," Scherrer said. "We think our findings open the door to targeting this novel neural pain pathway to treat people in a different but potentially more effective way."

Read more at Science Daily

Jul 23, 2024

Life signs could survive near surfaces of Enceladus and Europa

Europa, a moon of Jupiter, and Enceladus, a moon of Saturn, have evidence of oceans beneath their ice crusts. A NASA experiment suggests that if these oceans support life, signatures of that life in the form of organic molecules (e.g. amino acids, nucleic acids, etc.) could survive just under the surface ice despite the harsh radiation on these worlds. If robotic landers are sent to these moons to look for life signs, they would not have to dig very deep to find amino acids that have survived being altered or destroyed by radiation.

"Based on our experiments, the 'safe' sampling depth for amino acids on Europa is almost 8 inches (around 20 centimeters) at high latitudes of the trailing hemisphere (hemisphere opposite to the direction of Europa's motion around Jupiter) in the area where the surface hasn't been disturbed much by meteorite impacts," said Alexander Pavlov of NASA's Goddard Space Flight Center in Greenbelt, Maryland, lead author of a paper on the research published July 18 in Astrobiology. "Subsurface sampling is not required for the detection of amino acids on Enceladus -- these molecules will survive radiolysis (breakdown by radiation) at any location on the Enceladus surface less than a tenth of an inch (under a few millimeters) from the surface."

The frigid surfaces of these nearly airless moons are likely uninhabitable due to radiation from both high-speed particles trapped in their host planet's magnetic fields and powerful events in deep space, such as exploding stars. However, both have oceans under their icy surfaces that are heated by tides from the gravitational pull of the host planet and neighboring moons. These subsurface oceans could harbor life if they have other necessities, such as an energy supply as well as elements and compounds used in biological molecules.

The research team used amino acids in radiolysis experiments as possible representatives of biomolecules on icy moons. Amino acids can be created by life or by non-biological chemistry. However, finding certain kinds of amino acids on Europa or Enceladus would be a potential sign of life because they are used by terrestrial life as a component to build proteins. Proteins are essential to life as they are used to make enzymes which speed up or regulate chemical reactions and to make structures. Amino acids and other compounds from subsurface oceans could be brought to the surface by geyser activity or the slow churning motion of the ice crust.

To evaluate the survival of amino acids on these worlds, the team mixed samples of amino acids with ice chilled to about minus 321 Fahrenheit (-196 Celsius) in sealed, airless vials and bombarded them with gamma-rays, a type of high-energy light, at various doses. Since the oceans might host microscopic life, they also tested the survival of amino acids in dead bacteria in ice. Finally, they tested samples of amino acids in ice mixed with silicate dust to consider the potential mixing of material from meteorites or the interior with surface ice.

The experiments provided pivotal data to determine the rates at which amino acids break down, called radiolysis constants. With these, the team used the age of the ice surface and the radiation environment at Europa and Enceladus to calculate the drilling depth and locations where 10 percent of the amino acids would survive radiolytic destruction.
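
Conceptually, that calculation is first-order decay under an accumulated dose: the surviving fraction at depth d is exp(-k * D(d)), where k is the measured radiolysis constant and D(d) is the dose accumulated over the age of the surface, which falls off with depth. The sketch below solves for the 10 percent survival depth under an assumed exponential attenuation of dose rate with depth; the constant, dose rate, attenuation scale, and surface age are all placeholders, not the paper's values.

```python
# Schematic 10%-survival depth calculation (all numerical values are placeholders).
import math

k = 0.01                  # hypothetical radiolysis constant, per Mrad
surface_dose_rate = 10.0  # hypothetical surface dose rate, Mrad per million years
attenuation_cm = 25.0     # hypothetical e-folding depth of the dose rate, cm
age_myr = 50.0            # assumed age of the ice surface, million years

def surviving_fraction(depth_cm: float) -> float:
    dose = surface_dose_rate * math.exp(-depth_cm / attenuation_cm) * age_myr
    return math.exp(-k * dose)

# Scan downward until at least 10% of the amino acids would survive.
depth = 0.0
while surviving_fraction(depth) < 0.10 and depth < 500:
    depth += 0.1
print(f"10% of the amino acids survive below ~{depth:.1f} cm")
```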

Although experiments to test the survival of amino acids in ice have been done before, this is the first to use lower radiation doses that don't completely break apart the amino acids, since just altering or degrading them is enough to make it impossible to determine if they are potential signs of life. This is also the first experiment using Europa/Enceladus conditions to evaluate the survival of these compounds in microorganisms and the first to test the survival of amino acids mixed with dust.

The team found that amino acids degraded faster when mixed with dust but slower when coming from microorganisms.

"Slow rates of amino acid destruction in biological samples under Europa and Enceladus-like surface conditions bolster the case for future life-detection measurements by Europa and Enceladus lander missions," said Pavlov. "Our results indicate that the rates of potential organic biomolecules' degradation in silica-rich regions on both Europa and Enceladus are higher than in pure ice and, thus, possible future missions to Europa and Enceladus should be cautious in sampling silica-rich locations on both icy moons."

A potential explanation for why amino acids survived longer in bacteria involves the ways ionizing radiation changes molecules -- directly by breaking their chemical bonds or indirectly by creating reactive compounds nearby which then alter or break down the molecule of interest. It's possible that bacterial cellular material protected amino acids from the reactive compounds produced by the radiation.

Read more at Science Daily

Agriculture: Less productive yet more stable pastures

Climate change will have a considerable influence on the biodiversity and productivity of meadows and pastures. However, according to the results of the large-scale climate and land use experiment, GCEF, which has been conducted at the Helmholtz Centre for Environmental Research (UFZ) for 10 years, the extent of these changes depends on the land use. Grassland optimised for high yield responds much more sensitively to periods of drought than less intensively used meadows and pastures. According to an article recently published in Global Change Biology, this can certainly have economic consequences for the farmers affected.

Grassland is one of the most important and most widespread ecosystems on earth. Such open landscapes with grasses and herbs not only cover more than one quarter of the entire land surface but also store at least one third of the terrestrial carbon, are crucial for food production, and can be extremely species-rich in a relatively small area. But what is the future of these habitats? The study provides new insights into this question.

It has long been clear that two environmental changes are threatening the world's grasslands. Particularly in Europe, grasslands are now fertilised much more heavily, mowed more frequently, and grazed more intensively. In addition, farmers often sow only a handful of grass varieties that promise a particularly high yield. This intensification of land use is fundamentally changing the species composition and functionality of meadows and pastures. The same applies to climate change. For Germany, climate change will result in a shift in the seasonal distribution of precipitation as well as an increase in hydrological extremes (e.g. heavy rainfall and droughts), among other things. It is considered the second-largest threat to these ecosystems.

When both changes come together, they can reinforce each other. However, nobody yet knows exactly what will happen. Most experiments on this topic have so far focussed on either the climate or land use. "What makes our study unique is that we investigated the interaction of both factors," explains Dr Lotte Korell, biologist at the UFZ and first author of the publication.

This was made possible by the large-scale and long-term experiment of the UFZ in Bad Lauchstädt near Halle, the Global Change Experimental Facility (GCEF). It consists of 50 plots, each measuring 16 × 24 m; these are used with varying degrees of land use intensity. Temperatures and precipitation levels can also be manipulated with the help of mobile roof systems. For example, some plots receive 10% more precipitation in spring and autumn and 20% less in summer than the untreated control plots. This roughly corresponds to the conditions that climate models project for central Germany.

An eight-year data series from this experiment has now been compiled for the new study. The researchers analysed the biodiversity and productivity of the plants on the differently used plots between 2015 and 2022. "This period includes three of the driest years this region has experienced since records began," recalls Korell. These droughts apparently had a much stronger effect on the plants than the experimentally simulated climate change.

However, in both cases, the trend pointed in the same direction: species-rich grassland that is only rarely mown or sparsely grazed withstood the heat and drought much better than the intensively used high-performance meadows. "Among other factors, this is probably related to the diversity of species," says Korell. This varied greatly depending on the land use of the grasslands.

A diverse mixture of more than 50 native grasses and herbs grew on the less intensively used meadows and pastures of the GCEF. However, on the intensively used grassland, the UFZ team had sown only the five grass varieties recommended to farmers by the Saxony-Anhalt State Institute for Agriculture and Horticulture for drier sites at the start of the experiment. These included varieties of meadow grass (Dactylis glomerata) and perennial ryegrass (Lolium perenne).

Because such grasses are bred for maximum yield and were also heavily fertilised -- as is common in agricultural practice -- the intensive meadows were initially much more productive than the more diverse grasslands. However, they were able to make use of this advantage only in favourable climatic conditions and were not able to withstand the drought as well as the plants in the low-intensity meadows and pastures. In times of drought, the grasses in the intensively used meadows increasingly died back and were replaced by other species such as chickweed (Stellaria media), shepherd's purse (Capsella bursa-pastoris), dandelion (Taraxacum officinale), and small-flowered cranesbill (Geranium pusillum). "These are mostly short-lived species that survive as seeds," explains Dr Harald Auge, also a biologist at the UFZ and senior author of the study. When the more competitive plants succumb to drought, these species take the opportunity to invade their habitats: they either migrate from the low-intensity grassland or germinate from the seed stock in the soil.

This shift in species composition is not particularly welcomed by farmers, especially because most of the new arrivals have a lower fodder quality than the grasses originally sown. Groundsel (Senecio vulgaris), which was frequently represented among the immigrating species in the experiment, is in fact poisonous. All of this reduces the productivity of the land.

Farmers have long been aware of this kind of degradation of high-performance grassland by immigrating species. They therefore expect to have to plough up and reseed their land every few years. "However, climate change may accelerate this need and lead to additional costs," says Korell. Perhaps everything will go well for a few years and it will rain enough. However, it is also possible that several dry summers will follow one another. Climate change is making conditions even more unpredictable.

Read more at Science Daily

Brain size riddle solved as humans exceed evolution trend

The largest animals do not have proportionally bigger brains -- with humans bucking this trend -- a new study published in Nature Ecology and Evolution has revealed.

Researchers at the University of Reading and Durham University collected an enormous dataset of brain and body sizes from around 1,500 species to clarify centuries of controversy surrounding brain size evolution.

Bigger brains relative to body size are linked to intelligence, sociality, and behavioural complexity -- with humans having evolved exceptionally large brains.

The new research, published today (Monday, 8 July), reveals the largest animals do not have proportionally bigger brains, challenging long-held beliefs about brain evolution.

Professor Chris Venditti, lead author of the study from the University of Reading, said: "For more than a century, scientists have assumed that this relationship was linear -- meaning that brain size gets proportionally bigger, the larger an animal is. We now know this is not true. The relationship between brain and body size is a curve, essentially meaning very large animals have smaller brains than expected."
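To illustrate what a curved, rather than linear, brain-body relationship means in practice, the sketch below fits both a straight line and a quadratic to log-log data. The data are simulated purely for illustration and are not the Reading/Durham dataset; the coefficients are invented.

```python
import numpy as np

# Minimal illustration with simulated data: in log-log space, a straight line means
# brain mass scales as a fixed power of body mass, while a downward-curving quadratic
# means the very largest animals have smaller brains than the straight line predicts.
rng = np.random.default_rng(0)
log_body = rng.uniform(0, 8, 300)                          # log10 body mass (arbitrary units)
log_brain = -1.0 + 0.75 * log_body - 0.03 * log_body**2    # curved "true" allometry (invented)
log_brain += rng.normal(0, 0.2, log_body.size)             # scatter between species

linear = np.polyfit(log_body, log_brain, deg=1)
quadratic = np.polyfit(log_body, log_brain, deg=2)

# Residual sums of squares: the quadratic should fit the curved data noticeably better.
rss = lambda coeffs: np.sum((log_brain - np.polyval(coeffs, log_body)) ** 2)
print(f"RSS linear:    {rss(linear):.1f}")
print(f"RSS quadratic: {rss(quadratic):.1f}")
```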

Professor Rob Barton, co-author of the study from Durham University, said: "Our results help resolve the puzzling complexity in the brain-body mass relationship. Our model has a simplicity that means previously elaborate explanations are no longer necessary -- relative brain size can be studied using a single underlying model."

Beyond the ordinary


The research reveals a simple association between brain and body size across all mammals which allowed the researchers to identify the rule-breakers -- species which challenge the norm.

These outliers include our own species, Homo sapiens, which has evolved more than 20 times faster than all other mammal species, resulting in the massive brains that characterise humanity today.

But humans are not the only species to buck this trend.

All groups of mammals demonstrated rapid bursts of change -- both towards smaller and larger brain sizes.

For example, bats very rapidly reduced their brain size when they first arose, but then showed very slow rates of change in relative brain size, suggesting there may be evolutionary constraints related to the demands of flight.

There are three groups of animals that showed the most pronounced rapid change in brain size: primates, rodents, and carnivores.

In these three groups, there is a tendency for relative brain size to increase over time (the "Marsh-Lartet rule"). This is not a universal trend across all mammals, as was previously believed.

Read more at Science Daily

Chimpanzees gesture back and forth quickly like in human conversations

When people are having a conversation, they rapidly take turns speaking and sometimes even interrupt. Now, researchers who have collected the largest ever dataset of chimpanzee "conversations" have found that they communicate back and forth using gestures following the same rapid-fire pattern. The findings are reported on July 22 in the journal Current Biology.

"While human languages are incredibly diverse, a hallmark we all share is that our conversations are structured with fast-paced turns of just 200 milliseconds on average," said Catherine Hobaiter  at the University of St Andrews, UK. "But it was an open question whether this was uniquely human, or if other animals share this structure."

"We found that the timing of chimpanzee gesture and human conversational turn-taking is similar and very fast, which suggests that similar evolutionary mechanisms are driving these social, communicative interactions," says Gal Badihi, the study's first author.

The researchers knew that human conversations follow a similar pattern across people living in places and cultures all over the world. They wanted to know if the same communicative structure also exists in chimpanzees even though they communicate through gestures rather than through speech. To find out, they collected data on chimpanzee "conversations" across five wild communities in East Africa.

Altogether, they collected data on more than 8,500 gestures from 252 individuals. They measured the timing of turn-taking and conversational patterns. They found that 14% of communicative interactions included an exchange of gestures between two interacting individuals. Most exchanges involved just two parts, but some ran to as many as seven.

Overall, the data reveal a similar timing to human conversation, with short pauses between a gesture and a gestural response at about 120 milliseconds. Behavioral responses to gestures were slower. "The similarities to human conversations reinforce the description of these interactions as true gestural exchanges, in which the gestures produced in response are contingent on those in the previous turn," the researchers write.

"We did see a little variation among different chimp communities, which again matches what we see in people where there are slight cultural variations in conversation pace: some cultures have slower or faster talkers," Badihi says.

"Fascinatingly, they seem to share both our universal timing, and subtle cultural differences," says Hobaiter. "In humans, it is the Danish who are 'slower' responders, and in Eastern chimpanzees that's the Sonso community in Uganda."

This correspondence between human and chimpanzee face-to-face communication points to shared underlying rules in communication, the researchers say. They note that these structures could trace back to shared ancestral mechanisms. It's also possible that chimpanzees and humans arrived at similar strategies to enhance coordinated interactions and manage competition for communicative "space." The findings suggest that human communication may not be as unique as one might think.

"It shows that other social species don't need language to engage in close-range communicative exchanges with quick response time," Badihi says. "Human conversations may share similar evolutionary history or trajectories to the communication systems of other species suggesting that this type of communication is not unique to humans but more widespread in social animals."

In future studies, the researchers say they want to explore why chimpanzees have these conversations to begin with. They think chimpanzees often rely on gestures to ask something of one another.

Read more at Science Daily

Jul 22, 2024

New dawn for space storm alerts could help shield Earth's tech

Space storms could soon be forecasted with greater accuracy than ever before thanks to a big leap forward in our understanding of exactly when a violent solar eruption may hit Earth.

Scientists say it is now possible to predict the precise speed a coronal mass ejection (CME) is travelling at and when it will smash into our planet -- even before it has fully erupted from the Sun.

CMEs are bursts of gas and magnetic fields spewed into space from the solar atmosphere.

They can cause geomagnetic storms that have the potential to wreak havoc with technology both in Earth's orbit and on its surface, which is why experts across the globe are striving to improve space weather forecasts.

Advancements such as this one could make a huge difference in helping to protect infrastructure that is vital to our everyday lives, according to researchers at Aberystwyth University, who will present their findings today at the Royal Astronomical Society's National Astronomy Meeting in Hull.

They made their discovery after studying specific areas on the Sun called 'Active Regions', which have strong magnetic fields where CMEs are born. The researchers monitored how these areas changed in the periods before, during and after an eruption.

A vital aspect they examined was the "critical height" of the Active Regions: the height at which the magnetic field becomes unstable and can lead to a CME.

"By measuring how the strength of the magnetic field decreases with height, we can determine this critical height," said lead researcher Harshita Gandhi, a solar physicist at Aberystwyth University.

"This data can then be used along with a geometric model which is used to track the true speed of CMEs in three dimensions, rather than just two, which is essential for precise predictions."

She added: "Our findings reveal a strong relationship between the critical height at CME onset and the true CME speed.

"This insight allows us to predict the CME's speed and, consequently, its arrival time on Earth, even before the CME has fully erupted."

When these CMEs hit the Earth they can trigger a geomagnetic storm which is capable of producing stunning aurorae, often referred to in the northern hemisphere as the Northern Lights.

But the storms also have the potential to disrupt vital systems we rely on daily, including satellites, power grids, and communication networks, which is why scientists worldwide are working hard to improve our ability to better predict when CMEs will hit Earth.

This requires pinning down the CME's speed more accurately shortly after it erupts from the Sun, because better speed estimates translate directly into better predictions of when it will reach our planet and thus more useful advance warnings.

"Understanding and using the critical height in our forecasts improves our ability to warn about incoming CMEs, helping to protect the technology that our modern lives depend on," Gandhi said.

"Our research not only enhances our understanding of the Sun's explosive behaviour but also significantly improves our ability to forecast space weather events.

Read more at Science Daily

Chemists design novel method for generating sustainable fuel

Chemists have been working to synthesize high-value materials from waste molecules for years. Now, an international collaboration of scientists is exploring ways to use electricity to streamline the process.

In their study, recently published in Nature Catalysis, researchers demonstrated that carbon dioxide, a greenhouse gas, can be converted into a type of liquid fuel called methanol in a highly efficient manner.

The process involved spreading cobalt phthalocyanine (CoPc) molecules evenly over carbon nanotubes, graphene-like tubes with unique electrical properties. The surface was covered with an electrolyte solution; by running an electrical current through it, the CoPc molecules could take up electrons and use them to turn carbon dioxide into methanol.

Using a special method based on in-situ spectroscopy to visualize the chemical reaction, the researchers saw for the first time how carbon dioxide molecules are converted into either methanol or carbon monoxide, which is not the desired product. They found that which path the reaction takes is decided by the environment in which the carbon dioxide molecule reacts.

Tuning this environment by controlling how the CoPc catalyst was distributed on the carbon nanotube surface allowed carbon dioxide to be as much as eight times more likely to produce methanol, a discovery that could increase the efficiency of other catalytic processes and have a widespread impact on other fields, said Robert Baker, co-author of the study and a professor in chemistry and biochemistry at The Ohio State University.

"When you take carbon dioxide and convert it to another product, there are many different molecules you can make," he said. "Methanol is definitely one of the most desirable because it has such a high energy density and can be used directly as an alternative fuel."

While transforming waste molecules into useful products isn't a new phenomenon, until now, researchers have often been unable to watch how the reaction actually takes place, a crucial insight into being able to optimize and improve the process.

"We might empirically optimize how something works, but we don't really have an understanding of what makes it work, or what makes one catalyst work better than another catalyst," said Baker, who specializes in surface chemistry, the study of how chemical reactions change when they occur on the face of different objects. "These are very difficult things to answer."

But with the help of special techniques and computer modeling, the team has come significantly closer to grasping the complex process. In this study, researchers used a new type of vibrational spectroscopy, which allowed them to see how molecules behave on the surface, said Quansong Zhu, the lead author of the study and former Ohio State Presidential Scholar whose challenging measurements were vital to the discovery.

"We could tell by their vibrational signatures that it was the same molecule sitting in two different reaction environments," said Zhu. "We were able to correlate that one of those reaction environments was responsible for producing methanol, which is valuable liquid fuel."

According to the study, deeper analysis also found that these molecules were directly interacting with positively charged ions, called cations, which enhanced the process of methanol formation.

More research is needed to learn more about what else these cations enable, but such a finding is key to achieving a more efficient way to create methanol, said Baker.

"We're seeing systems that are very important and learning things about them that have been wondered about for a long time," said Baker. "Understanding the unique chemistry that happens at a molecular level is really important to enabling these applications."

Besides being a low-cost fuel for vehicles like planes, cars and shipping boats, methanol produced from renewable electricity could also be utilized for heating and power generation, and to advance future chemical discoveries.

"There's a lot of exciting things that can come next based on what we've learned here, and some of that we're already starting to do together," said Baker. "The work is ongoing."

Read more at Science Daily

Study shows promise for a universal influenza vaccine

New research led by Oregon Health & Science University reveals a promising approach to developing a universal influenza vaccine -- a so-called "one and done" vaccine that confers lifetime immunity against an evolving virus.

The study, published today in the journal Nature Communications, tested an OHSU-developed vaccine platform against the virus considered most likely to trigger the next pandemic.

Researchers reported the vaccine generated a robust immune response in nonhuman primates that were exposed to the avian H5N1 influenza virus. But the vaccine wasn't based on the contemporary H5N1 virus; instead, the primates were inoculated against the influenza virus of 1918 that killed millions of people worldwide.

"It's exciting because in most cases, this kind of basic science research advances the science very gradually; in 20 years, it might become something," said senior author Jonah Sacha, Ph.D., professor and chief of the Division of Pathobiology at OHSU's Oregon National Primate Research Center. "This could actually become a vaccine in five years or less."

Researchers reported that six of 11 nonhuman primates inoculated against the virus that circulated a century ago -- the 1918 flu -- survived exposure to one of the deadliest viruses in the world today, H5N1. In contrast, a control group of six unvaccinated primates exposed to the H5N1 virus succumbed to the disease.

Sacha said he believes the platform "absolutely" could be useful against other mutating viruses, including SARS-CoV-2.

"It's a very viable approach," he said. "For viruses of pandemic potential, it's critical to have something like this. We set out to test influenza, but we don't know what's going to come next."

A senior co-author from the University of Pittsburgh concurred.

"Should a deadly virus such as H5N1 infect a human and ignite a pandemic, we need to quickly validate and deploy a new vaccine," said co-corresponding author Douglas Reed, Ph.D., associate professor of immunology at the University of Pittsburgh Center for Vaccine Research.

Finding a stationary target

This approach harnesses a vaccine platform previously developed by scientists at OHSU to fight HIV and tuberculosis, and in fact is already being used in a clinical trial against HIV.

The method involves inserting small pieces of target pathogens into the common herpes virus cytomegalovirus, or CMV, which infects most people in their lifetimes and typically produces mild or no symptoms. The virus acts as a vector specifically designed to induce an immune response from the body's own T cells.

This approach differs from common vaccines -- including the existing flu vaccines -- which are designed to induce an antibody response that targets the most recent evolution of the virus, distinguished by the arrangement of proteins covering the exterior surface.

"The problem with influenza is that it's not just one virus," Sacha said. "Like the SARS-CoV-2 virus, it's always evolving the next variant and we're always left to chase where the virus was, not where it's going to be."

The spike proteins on the virus exterior surface evolve to elude antibodies. In the case of flu, vaccines are updated regularly using a best estimate of the next evolution of the virus. Sometimes it's accurate, sometimes less so.

In contrast, a specific type of T cell in the lungs, known as effector memory T cell, targets the internal structural proteins of the virus, rather than its continually mutating outer envelope. This internal structure doesn't change much over time -- presenting a stationary target for T cells to search out and destroy any cells infected by an old or newly evolved influenza virus.

Success with a century-old template

To test their T cell theory, researchers designed a CMV-based vaccine using the 1918 influenza virus as a template. Working within a highly secure biosafety level 3 laboratory at the University of Pittsburgh, they exposed the vaccinated nonhuman primates to small particle aerosols containing the avian H5N1 influenza virus -- an especially severe virus that is currently circulating among dairy cows in the United States.

Remarkably, six of the 11 vaccinated primates survived the exposure, despite the century-long period of virus evolution.

"It worked because the interior protein of the virus was so well preserved," Sacha said. "So much so, that even after almost 100 years of evolution, the virus can't change those critically important parts of itself."

The study raises the potential for developing a protective vaccine against H5N1 in people.

"Inhalation of aerosolized H5N1 influenza virus causes a cascade of events that can trigger respiratory failure," said co-senior author Simon Barratt-Boyes, Ph.D., professor of infectious diseases, microbiology and immunology at Pitt. "The immunity induced by the vaccine was sufficient to limit virus infection and lung damage, protecting the monkeys from this very serious infection."

By synthesizing more up-to-date virus templates, the new study suggests CMV vaccines may be able to generate an effective, long-lasting immune response against a wide suite of new variants.

"I think it means within five to 10 years, a one-and-done shot for influenza is realistic," Sacha said.

The same CMV platform developed by OHSU researchers has advanced to a clinical trial to protect against HIV, and a recent publication by those scientists suggests it may even be useful targeting specific cancer cells. The HIV clinical trial is being led by Vir Biotechnology, which licensed the vaccine platform from OHSU.

Sacha sees the development as the latest in the rapid advance of medical research to treat or prevent disease.

"It's a massive sea change within our lifetimes," Sacha said. "There is no question we are on the cusp of the next generation of how we address infectious disease."

Read more at Science Daily

New snake discovery rewrites history, points to North America's role in snake evolution

A new species of fossil snake unearthed in Wyoming is rewriting our understanding of snake evolution. The discovery, based on four remarkably well-preserved specimens found curled together in a burrow, reveals a new species named Hibernophis breithaupti. This snake lived in North America 34 million years ago and sheds light on the origin and diversification of boas and pythons.

Hibernophis breithaupti has unique anatomical features, in part because the specimens are articulated -- meaning they were found all in one piece with the bones still arranged in the proper order -- which is unusual for fossil snakes.

Researchers believe it may be an early member of Booidea, a group that includes modern boas and pythons.

Modern boas are widespread in the Americas, but their early evolution is not well understood. These new and very complete fossils add important new information, in particular on the evolution of small, burrowing boas known as rubber boas.

Traditionally, there has been much debate on the evolution of small burrowing boas.

Hibernophis breithaupti shows that northern and more central parts of North America might have been a key hub for their development.

The discovery of these snakes curled together also hints at the oldest potential evidence for a behavior familiar to us today -- hibernation in groups.

"Modern garter snakes are famous for gathering by the thousands to hibernate together in dens and burrows," says Michael Caldwell, a U of A paleontologist who co-led the research along with his former graduate student Jasmine Croghan, and collaborators from Australia and Brazil. "They do this to conserve heat through the effect created by the ball of hibernating animals. It's fascinating to see possible evidence of such social behavior or hibernation dating back 34 million years."

From Science Daily

Jul 21, 2024

Exoplanet-hunting telescope to begin search for another Earth in 2026

Europe's next big space mission -- a telescope that will hunt for Earth-like rocky planets outside of our solar system -- is on course to launch at the end of 2026.

PLATO, or PLAnetary Transits and Oscillations of stars, is being built to find nearby potentially habitable worlds around Sun-like stars that we can examine in detail.

The space telescope will blast into orbit on Europe's new rocket, Ariane-6, which made its maiden flight last week after being developed at a cost of €4 billion (£3.4 billion).

Dr David Brown, of the University of Warwick, is giving an update on the mission at the Royal Astronomical Society's National Astronomy Meeting at the University of Hull this week.

"PLATO's goal is to search for exoplanets around stars similar to the Sun and at orbital periods long enough for them to be in the habitable zone," he said.

"One of the main mission objectives is to find another Earth-Sun equivalent pair, but it is also designed to carefully and precisely characterise the exoplanets that it finds (i.e. work out their masses, radii, and bulk density)."

PLATO isn't just an exoplanet hunter, however. It is also a stellar science mission.

As well as searching for exoplanets it will study the stars using a range of techniques including asteroseismology (measuring the vibrations and oscillations of stars) to work out their masses, radii, and ages.

Unlike most space telescopes, PLATO has multiple cameras -- including a UK-named one called Arthur Eddington, after the famous astronomer and physicist who won the Royal Astronomical Society's Gold Medal in 1924.

It has 24 'Normal' cameras (N-CAMs) and 2 'Fast' cameras (F-CAMs). The N-CAMs are arranged into four groups of six cameras, with the cameras in each group pointing in the same direction but the groups slightly offset.

This gives PLATO a very large field of view, improved scientific performance, redundancy against failures, and a built-in way to identify 'false positive' signals that might mimic an exoplanet transit, Dr Brown explained.

"The planned observing strategy is to stare at two patches of sky, one in the North and one in the South, for two years each," he added.

"The Southern patch of sky has been chosen, while the Northern patch won't be confirmed for another few years."

Several of the spacecraft's components have finished their manufacturing programmes and are close to completing their calibration tests. This includes the UK-provided Front-End Electronics (FEE) for the N-CAMs.

Built by the Mullard Space Science Laboratory of University College London, these operate the cameras, digitise the images, and transfer them to the onboard data processing.

Ten of the final cameras have been built and tested and the first of these was mounted onto the optical bench -- the surface which keeps all cameras pointed in the right direction -- earlier this year.

Read more at Science Daily

New humidity-driven membrane to remove carbon dioxide from the air

A new ambient-energy-driven membrane that pumps carbon dioxide out of the air has been developed by Newcastle University researchers.

Direct air capture was identified as one of the 'Seven chemical separations to change the world'. Although carbon dioxide is the main contributor to climate change (we release ~40 billion tons into the atmosphere every year), separating it from air is very challenging because of its dilute concentration (~0.04%).
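A back-of-envelope calculation (my own arithmetic, not from the study) makes the dilution challenge concrete: at roughly 0.04% by volume, capturing a single tonne of carbon dioxide means processing on the order of a million cubic metres of air.

```python
# Back-of-envelope arithmetic (not from the paper): how much air must be processed
# to capture one tonne of CO2 when its concentration is only ~0.04% by volume?
co2_fraction = 0.0004        # ~400 ppm by volume
co2_density = 1.98           # kg of CO2 per cubic metre at roughly ambient conditions
target_mass = 1000.0         # kg of CO2, i.e. one tonne

co2_volume = target_mass / co2_density       # cubic metres of pure CO2 needed
air_volume = co2_volume / co2_fraction       # cubic metres of air containing that much CO2
print(f"Air to process for 1 t of CO2: ~{air_volume:,.0f} m^3")   # roughly 1.3 million m^3
```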

Prof Ian Metcalfe, Royal Academy of Engineering Chair in Emerging Technologies in the School of Engineering, Newcastle University, UK, and lead investigator states, "Dilute separation processes are the most challenging separations to perform for two key reasons. First, due to the low concentration, the kinetics (speed) of chemical reactions targeting the removal of the dilute component are very slow. Second, concentrating the dilute component requires a lot of energy."

These are the two challenges that the Newcastle researchers (with colleagues at the Victoria University of Wellington, New Zealand, Imperial College London, UK, Oxford University, UK, Strathclyde University, UK and UCL, UK) set out to address with their new membrane process. By using naturally occurring humidity differences as a driving force for pumping carbon dioxide out of air, the team overcame the energy challenge. The presence of water also accelerated the transport of carbon dioxide through the membrane, tackling the kinetic challenge.

The work is published in Nature Energy and Dr Greg A. Mutch, Royal Academy of Engineering Fellow in the School of Engineering, Newcastle University, UK explains, "Direct air capture will be a key component of the energy system of the future. It will be needed to capture the emissions from mobile, distributed sources of carbon dioxide that cannot easily be decarbonised in other ways."

"In our work, we demonstrate the first synthetic membrane capable of capturing carbon dioxide from air and increasing its concentration without a traditional energy input like heat or pressure. I think a helpful analogy might be a water wheel on a flour mill. Whereas a mill uses the downhill transport of water to drive milling, we use it to pump carbon dioxide out of the air."

Separation processes

Separation processes underpin most aspects of modern life. From the food we eat, to the medicines we take, and the fuels or batteries in our car, most products we use have been through several separation processes. Moreover, separation processes are important for minimising waste and the need for environmental remediation, such as direct air capture of carbon dioxide.

However, in a world moving towards a circular economy, separation processes will become even more critical. Here, direct air capture might be used to provide carbon dioxide as a feedstock for making many of the hydrocarbon products we use today, but in a carbon-neutral, or even carbon-negative, cycle.

Most importantly, alongside transitioning to renewable energy and traditional carbon capture from point sources like power plants, direct air capture is necessary for realising climate targets, such as the 1.5 °C goal set by the Paris Agreement.

The humidity-driven membrane


Dr Evangelos Papaioannou, Senior Lecturer in the School of Engineering, Newcastle University, UK explains, "In a departure from typical membrane operation, and as described in the research paper, the team tested a new carbon dioxide-permeable membrane with a variety of humidity differences applied across it. When the humidity was higher on the output side of the membrane, the membrane spontaneously pumped carbon dioxide into that output stream."

Using X-ray micro-computed tomography with collaborators at UCL and the University of Oxford, the team were able to precisely characterise the structure of the membrane. This enabled them to provide robust performance comparisons with other state-of-the-art membranes.

A key aspect of the work was modelling the processes occurring in the membrane at the molecular scale. Using density-functional-theory calculations with a collaborator affiliated to both Victoria University of Wellington and Imperial College London, the team identified 'carriers' within the membrane. The carrier uniquely transports both carbon dioxide and water but nothing else. Water is required to release carbon dioxide from the membrane, and carbon dioxide is required to release water. Because of this, the energy from a humidity difference can be used to drive carbon dioxide through the membrane from a low concentration to a higher concentration.
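As a rough, idealised comparison (not taken from the paper), one can weigh the minimum thermodynamic work needed to concentrate carbon dioxide from ~400 ppm against the free energy released as water vapour moves down a humidity gradient. The humidity values below are hypothetical, and ideal-gas behaviour is assumed throughout.

```python
import math

# Idealised thermodynamics sketch (my own estimate, not from the paper): compare the
# minimum work to concentrate CO2 from ~400 ppm to a pure stream with the free energy
# released when water vapour moves from a wetter to a drier side of a membrane.
R, T = 8.314, 298.0                          # J/(mol K), K

x_co2 = 400e-6                               # mole fraction of CO2 in air
w_min_co2 = R * T * math.log(1.0 / x_co2)    # J per mol CO2, ideal minimum separation work

rh_wet, rh_dry = 1.0, 0.5                    # hypothetical relative humidities on the two sides
g_water = R * T * math.log(rh_wet / rh_dry)  # J released per mol of water transferred

print(f"Minimum work to concentrate CO2: {w_min_co2 / 1000:.1f} kJ/mol")
print(f"Free energy per mole of water:   {g_water / 1000:.1f} kJ/mol")
print(f"Water needed per CO2 molecule (ideal): ~{w_min_co2 / g_water:.0f}")
```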

Read more at Science Daily

Good timing: Study unravels how our brains track time

Ever hear the old adage that time flies when you're having fun? A new study by a team of UNLV researchers suggests that there's a lot of truth to the trope.

Many people think of their brains as being intrinsically synced to the human-made clocks on their electronic devices, counting time in very specific, minute-by-minute increments. But the study, published this month in the latest issue of the peer-reviewed Cell Press journal Current Biology, showed that our brains don't work that way.

By analyzing changes in brain activity patterns, the research team found that we perceive the passage of time based on the number of experiences we have -- not some kind of internal clock. What's more, increasing speed or output during an activity appears to affect how our brains perceive time.

"We tell time in our own experience by things we do, things that happen to us," said James Hyman, a UNLV associate professor of psychology and the study's senior author. "When we're still and we're bored, time goes very slowly because we're not doing anything or nothing is happening. On the contrary, when a lot of events happen, each one of those activities is advancing our brains forward. And if this is how our brains objectively tell time, then the more that we do and the more that happens to us, the faster time goes."

Methodology and Findings

The findings are based on analysis of activity in the anterior cingulate cortex (ACC), a portion of the brain important for monitoring activity and tracking experiences. To do this, rodents were tasked with using their noses to respond to a prompt 200 times.

Scientists already knew that brain patterns are similar, but slightly different, each time you do a repetitive motion, so they set out to answer two questions: Is it possible to detect whether these slight differences in brain patterns correspond with doing the first versus the 200th motion in a series? And does the amount of time it takes to complete a series of motions affect brain wave activity?

By comparing pattern changes throughout the course of the task, researchers observed that there are indeed detectable changes in brain activity that occur as one moves from the beginning to middle to end of carrying out a task. And regardless of how slowly or quickly the animals moved, the brain patterns followed the same path. The patterns were consistent when researchers applied a machine learning-based mathematical model to predict the flow of brain activity, bolstering evidence that it's experiences -- not time, or a prescribed number of minutes, as you would measure it on a clock -- that produce changes in our neurons' activity patterns.
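The decoding idea can be sketched in a few lines: if population activity drifts systematically with the number of completed actions, a simple regression model should recover the trial number from the firing-rate pattern regardless of elapsed clock time. The snippet below uses simulated data and an off-the-shelf ridge regression purely for illustration; it is not the UNLV team's model.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

# Minimal sketch with simulated data (not the UNLV recordings): if ACC population
# activity tracks how far along a series of actions the animal is, a simple decoder
# should recover the trial number from the firing-rate pattern alone.
rng = np.random.default_rng(1)
n_trials, n_neurons = 200, 50
trial_index = np.arange(n_trials)

# Each simulated neuron's rate drifts slowly with trial number, plus noise.
drift = rng.normal(0, 0.05, n_neurons)
rates = trial_index[:, None] * drift[None, :] + rng.normal(0, 1.0, (n_trials, n_neurons))

decoded = cross_val_predict(Ridge(alpha=1.0), rates, trial_index, cv=5)
corr = np.corrcoef(decoded, trial_index)[0, 1]
print(f"Correlation between decoded and true trial number: {corr:.2f}")
```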

Hyman drove home the crux of the findings by sharing an anecdote of two factory workers tasked with making 100 widgets during their shift, with one worker completing the task in 30 minutes and the other in 90 minutes.

"The length of time it took to complete the task didn't impact the brain patterns. The brain is not a clock; it acts like a counter," Hyman explained. "Our brains register a vibe, a feeling about time. ...And what that means for our workers making widgets is that you can tell the difference between making widget No. 85 and widget No. 60, but not necessarily between No. 85 and No. 88."

But exactly "how" does the brain count? Researchers discovered that as the brain progresses through a task involving a series of motions, various small groups of firing cells begin to collaborate -- essentially passing off the task to a different group of neurons every few repetitions, similar to runners passing the baton in a relay race.

"So, the cells are working together and over time randomly align to get the job done: one cell will take a few tasks and then another takes a few tasks," Hyman said. "The cells are tracking motions and, thus, chunks of activities and time over the course of the task."

And the study's findings about our brains' perception of time apply to activity-based experiences other than physical motions, too.

"This is the part of the brain we use for tracking something like a conversation through dinner," Hyman said. "Think of the flow of conversation and you can recall things earlier and later in the dinner. But to pick apart one sentence from the next in your memory, it's impossible. But you know you talked about one topic at the start, another topic during dessert, and another at the end."

By observing the rodents who worked quickly, scientists also concluded that keeping up a good pace helps influence time perception: "The more we do, the faster time moves. They say that time flies when you're having fun. As opposed to having fun, maybe it should be 'time flies when you're doing a lot'."

Takeaways


While there's already a wealth of information on brain processes over very short time scales of less than a second, Hyman said that the UNLV study is groundbreaking in its examination of brain patterns and perception of time over a span of just a few minutes to hours -- "which is how we live much of our life: one hour at a time."

"This is among the first studies looking at behavioral time scales in this particular part of the brain called the ACC, which we know is so important for our behavior and our emotions," Hyman said.

The ACC is implicated in most psychiatric and neurodegenerative disorders, and is a concentration area for mood disorders, PTSD, addiction, and anxiety. ACC function is also central to various dementias including Alzheimer's disease, which is characterized by distortions in time. The ACC has long been linked to helping humans with sequencing events or tasks such as following recipes, and the research team speculates that their findings about time perception might fall within this realm.

While the findings are a breakthrough, more research is needed. Still, Hyman said, the preliminary findings offer some potentially helpful insights about time perception and its likely connection to memory processes in everyday life. For example, the researchers speculate that it could help with navigating things like school assignments or even breakups.

"If we want to remember something, we may want to slow down by studying in short bouts and take time before engaging in the next activity. Give yourself quiet times to not move," Hyman said. "Conversely, if you want to move on from something quickly, get involved in an activity right away."

Hyman said there's also a huge relationship between the ACC, emotion, and cognition. Thinking of the brain as a physical entity that one can take ownership over might help us control our subjective experiences.

Read more at Science Daily