Mar 26, 2022

Scientists solve solar secret

The further we move away from a heat source, the cooler the air gets. Bizarrely, the same can't be said for the Sun, but University of Otago scientists may have just explained a key part of why.

Study lead Dr Jonathan Squire, of the Department of Physics, says the surface of the Sun starts at about 6,000 degrees C, but over a short distance of only a few hundred kilometers, it suddenly heats up to more than a million degrees, becoming its atmosphere, or corona.

"This is so hot that the gas escapes the Sun's gravity as 'solar wind', and flies into space, smashing into Earth and other planets.

"We know from measurements and theory that the sudden temperature jump is related to magnetic fields which thread out of the Sun's surface. But, exactly how these work to heat the gas is not well understood -- this is known as the Coronal Heating Problem.

"Astrophysicists have several different ideas about how the magnetic-field energy could be converted into heat to explain the heating, but most have difficulty explaining some aspect of observations," he says.

Dr Squire and co-author Dr Romain Meyrand have been working with scientists at Princeton University and the University of Oxford and found two previous theories can be merged into one to solve a key piece of the 'problem'. The group's findings have just been published in Nature Astronomy.

The popular theories are based on heating caused by turbulence, and heating caused by a type of magnetic wave called ion cyclotron waves.

"Both, however, have some problem -- turbulence struggles to explain why Hydrogen, Helium and Oxygen in the gas become as hot as they do, while electrons remain surprisingly cold; while the magnetic waves theory could explain this feature, there doesn't seem to be enough of the waves coming off the Sun's surface to heat up the gas," Dr Meyrand says.

The group used six-dimensional supercomputer simulations of the coronal gas to show how these two theories are actually part of the same process, linked together by a bizarre effect called the 'helicity barrier'.

This intriguing occurrence was discovered in an earlier Otago study, led by Dr Meyrand.

"If we imagine plasma heating as occurring a bit like water flowing down a hill, with electrons heated right at the bottom, then the helicity barrier acts like a dam, stopping the flow and diverting its energy into ion cyclotron waves. In this way, the helicity barrier links the two theories and resolves each of their individual problems," he explains.

For this latest study, the group stirred the magnetic field lines in simulations and found the turbulence created the waves, which then caused the heating.

"As this happens, the structures and eddies that form end up looking extremely similar to cutting-edge measurements from NASA's Parker Solar Probe spacecraft, which has recently become the first human-made object to actually fly into the corona.

"This gives us confidence that we are accurately capturing key physics in the corona, which -- coupled with the theoretical findings about the heating mechanisms -- is a promising path to understanding the coronal heating problem," Dr Meyrand says.

Understanding more about the Sun's atmosphere and the subsequent solar wind is important because of the profound impacts they have on Earth, Dr Squire explains.

The effects that result from the solar wind's interaction with the Earth's magnetic field are called 'space weather', which causes everything from auroras to satellite-destroying radiation and geomagnetic currents that damage the power grid.

"All of this is sourced, fundamentally, by the corona and its heating by magnetic fields, so as well as being interesting for our general understanding of the solar system, the solar-corona's dynamics can have profound impacts on Earth.

Read more at Science Daily

New study reveals why HIV remains in human tissue even after antiretroviral therapy

Thanks to antiretroviral therapy, HIV infection is no longer the life sentence it once was. But despite the effectiveness of drugs to manage and treat the virus, it can never be fully eliminated from the human body, lingering in some cells deep in different human tissues where it goes unnoticed by the immune system.

Now, new research by University of Alberta immunologist Shokrollah Elahi reveals a possible answer to the mystery of why infected people can't get rid of HIV altogether.

Elahi and his team found that in HIV patients, killer T cells -- a type of white blood cell responsible for identifying and destroying cells infected with viruses -- have very little to none of a protein called CD73.

Because CD73 is responsible for migration and cell movement into the tissue, the lack of the protein compromises the ability of killer T cells to find and eliminate HIV-infected cells, explained Elahi.

"This mechanism explains one potential reason for why HIV stays in human tissues forever," he said, adding that the research also shows the complexity of HIV infection.

"This provides us the opportunity to come up with potential new treatments that would help killer T cells migrate better to gain access to the infected cells in different tissues."

After identifying the role of CD73 -- a three-year project -- Elahi turned his focus to understanding potential causes for the drastic reduction. He found it is partly due to the chronic inflammation that is common among people living with HIV.

"Following extensive studies, we discovered that chronic inflammation results in increased levels of a type of RNA found in cells and in blood, called microRNAs," he explained. "These are very small types of RNA that can bind to messenger RNAs to block them from making CD73 protein. We found this was causing the CD73 gene to be suppressed."

The team's discovery also helps explain why people with HIV have a lower risk of developing multiple sclerosis, Elahi noted.

"Our findings suggest that reduced or eliminated CD73 can be beneficial in HIV-infected individuals to protect them against MS. Therefore, targeting CD73 could be a novel potential therapeutic marker for MS patients."

Read more at Science Daily

Mar 25, 2022

Scientists develop the largest, most detailed model of the early universe to date

It all started around 13.8 billion years ago with a big, cosmological "bang" that brought the universe suddenly and spectacularly into existence. Shortly after, the infant universe cooled dramatically and went completely dark.

Then, within a couple hundred million years after the Big Bang, the universe woke up, as gravity gathered matter into the first stars and galaxies. Light from these first stars turned the surrounding gas into a hot, ionized plasma -- a crucial transformation known as cosmic reionization that propelled the universe into the complex structure that we see today.

Now, scientists can get a detailed view of how the universe may have unfolded during this pivotal period with a new simulation, known as Thesan, developed by scientists at MIT, Harvard University, and the Max Planck Institute for Astrophysics.

Named after the Etruscan goddess of the dawn, Thesan is designed to simulate the "cosmic dawn," and specifically cosmic reionization, a period which has been challenging to reconstruct, as it involves immensely complicated, chaotic interactions, including those between gravity, gas, and radiation.

The Thesan simulation resolves these interactions with the highest detail and over the largest volume of any previous simulation. It does so by combining a realistic model of galaxy formation with a new algorithm that tracks how light interacts with gas, along with a model for cosmic dust.

With Thesan, the researchers can simulate a cubic volume of the universe spanning 300 million light-years. They run the simulation forward in time to track the first appearance and evolution of hundreds of thousands of galaxies within this space, beginning around 400,000 years after the Big Bang and running through the first billion years.

So far, the simulations align with what few observations astronomers have of the early universe. As more observations are made of this period, for instance with the newly launched James Webb Space Telescope, Thesan may help to place such observations in cosmic context.

For now, the simulations are starting to shed light on certain processes, such as how far light can travel in the early universe, and which galaxies were responsible for reionization.

"Thesan acts as a bridge to the early universe," says Aaron Smith, a NASA Einstein Fellow in MIT's Kavli Institute for Astrophysics and Space Research. "It is intended to serve as an ideal simulation counterpart for upcoming observational facilities, which are poised to fundamentally alter our understanding of the cosmos."

Smith and Mark Vogelsberger, associate professor of physics at MIT, together with Rahul Kannan of the Harvard-Smithsonian Center for Astrophysics and Enrico Garaldi at Max Planck, have introduced the Thesan simulation through three papers, the third published today in the Monthly Notices of the Royal Astronomical Society.

Follow the light

In the earliest stages of cosmic reionization, the universe was a dark and homogeneous space. For physicists, the cosmic evolution during these early "dark ages" is relatively simple to calculate.

"In principle you could work this out with pen and paper," Smith says. "But at some point gravity starts to pull and collapse matter together, at first slowly, but then so quickly that calculations become too complicated, and we have to do a full simulation."

To fully simulate cosmic reionization, the team sought to include as many major ingredients of the early universe as possible. They started off with a successful model of galaxy formation that their groups previously developed, called Illustris-TNG, which has been shown to accurately simulate the properties and populations of evolving galaxies. They then developed a new code to incorporate how the light from galaxies and stars interacts with and reionizes the surrounding gas -- an extremely complex process that other simulations have not been able to accurately reproduce at large scale.

"Thesan follows how the light from these first galaxies interacts with the gas over the first billion years and transforms the universe from neutral to ionized," Kannan says. "This way, we automatically follow the reionization process as it unfolds."

Finally, the team included a preliminary model of cosmic dust -- another feature that is unique to such simulations of the early universe. This early model aims to describe how tiny grains of material influence the formation of galaxies in the early, sparse universe.

Cosmic bridge

With the simulation's ingredients in place, the team set its initial conditions for around 400,000 years after the Big Bang, based on precision measurements of relic light from the Big Bang. They then evolved these conditions forward in time to simulate a patch of the universe, using the SuperMUC-NG machine -- one of the largest supercomputers in the world -- which simultaneously harnessed 60,000 computing cores to carry out Thesan's calculations over an equivalent of 30 million CPU hours (an effort that would have taken 3,500 years to run on a single desktop).
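Those compute figures are internally consistent, as a quick back-of-the-envelope check shows (a sketch in Python; only the core count and CPU-hour total come from the article, the rest is simple arithmetic):

```python
# Sanity check of the Thesan compute figures quoted above.
cpu_hours = 30e6           # ~30 million CPU hours reported for the run
cores = 60_000             # cores harnessed simultaneously on SuperMUC-NG
hours_per_year = 24 * 365  # wall-clock hours in a year

wall_clock_days = cpu_hours / cores / 24
single_core_years = cpu_hours / hours_per_year

print(f"~{wall_clock_days:.0f} days of wall-clock time on 60,000 cores")
print(f"~{single_core_years:,.0f} years on a single core")
# -> roughly 21 days on the supercomputer, and ~3,400 years on one core,
#    consistent with the article's "3,500 years on a single desktop".
```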

The simulations have produced the most detailed view of cosmic reionization, across the largest volume of space, of any existing simulation. Some simulations model large distances at relatively low resolution, while other, more detailed simulations do not span large volumes.

"We are bridging these two approaches: We have both large volume and high resolution," Vogelsberger emphasizes.

Early analyses of the simulations suggest that towards the end of cosmic reionization, the distance light was able to travel increased more dramatically than scientists had previously assumed.

"Thesan found that light doesn't travel large distances early in the universe," Kannan says. "In fact, this distance is very small, and only becomes large at the very end of reionization, increasing by a factor of 10 over just a few hundred million years."

The researchers also see hints of the type of galaxies responsible for driving reionization. A galaxy's mass appears to influence reionization, though the team says more observations, taken by James Webb and other observatories, will help to pin down these predominant galaxies.

Read more at Science Daily

Rapid glacial advance reconstructed during the time of Norse occupation in Greenland

The Greenland Ice Sheet is the second largest ice body in the world, and it has the potential to contribute significantly to global sea-level rise in a warming global climate. Understanding the long-term record of the Greenland Ice Sheet, including both records of glacial advance and retreat, is critical in validating approaches that model future ice-sheet scenarios. However, this reconstruction can be extremely challenging. A new study published Thursday in the journal Geology reconstructed the advance of one of the largest tidewater glaciers in Greenland to provide a better understanding of long-term glacial dynamics.

"In the news, we're very used to hearing about glacial retreat, and that's because in a warming climate scenario -- which is what we're in at the moment -- we generally document ice masses retreating. However, we also want to understand how glaciers react if there is a climate cooling and subsequent advance. To do this, we need to reconstruct glacier geometry from the past," said Danni Pearce, co-lead author of the study.

An interdisciplinary team of researchers studied the advance of Kangiata Nunaata Sermia (KNS) -- the largest tidewater glacier in southwest Greenland -- during a period of cooling when the Norse had settlements in Greenland. Differing from glaciers that are strictly on land, tidewater glaciers extend and flow all the way to the ocean or a sea, where they can then calve and break up into icebergs.

Reconstructing the advance of glaciers can be exceptionally difficult, because the glacier typically destroys or reworks everything in its path as it advances forward. The research team undertook multiple field seasons in Greenland, traveling on foot to remote sites -- many of which hadn't been visited since the 1930s -- to try and uncover the record of KNS advance.

"When we went out into the field, we had absolutely no idea whether the evidence would be there or not, so I was incredibly nervous. Though we did a huge amount of planning beforehand, until you go out into the field you don't know what you're going to find," said James Lea, the other co-lead author of the study.

By traveling on foot, the research team was able to more closely examine and explore sites that otherwise may have been missed if traveling by helicopter. The team's planning paid off, and the sedimentary sequences they studied and sampled held the clues they were looking for to date and track the advance of the glacier.

The research team found that during the twelfth and thirteenth centuries CE, KNS advanced at least 15 km, at a rate of ~115 m/yr. This rate of advance is comparable to modern rates of glacial retreat observed over the past ~200 years, indicating that when the climate cools, glaciers can advance as fast as they are currently retreating. The glacier reached its maximum extent by 1761 CE during the Little Ice Age, culminating in a total advance of ~20 km. Since then, KNS has retreated ~23 km to its present position.
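Those two numbers imply an advance lasting on the order of 130 years, which fits the two-century window. A minimal arithmetic sketch (only the 15 km and ~115 m/yr figures come from the study):

```python
# Duration implied by a >=15 km advance at ~115 m/yr.
advance_m = 15_000     # minimum advance reported (15 km)
rate_m_per_yr = 115    # reconstructed advance rate (~115 m/yr)

print(f"~{advance_m / rate_m_per_yr:.0f} years")
# -> ~130 years, consistent with an advance spanning the twelfth and
#    thirteenth centuries CE.
```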

The period when the glacier was advancing coincided with when the Norse were present in Greenland. Prior to its maximum extent during the Little Ice Age, the researchers found that KNS advanced to a location within only 5 km of a Norse farmstead.

"Even though KNS was rapidly coming down the fjord, it did not seem to affect the Norse, which we found really unusual," said Pearce. "So the team started to think about the surrounding environment and the amount of iceberg production in the fjord during that time. At the moment, the fjord is completely filled with icebergs, making boat access challenging, and we know from historical record that it has been like this for the last 200 years while the glacier has been retreating. However, for KNS to advance at 115 m/yr, it needed to hang onto its ice and could not have been producing a lot of icebergs. So we actually think that the fjord would have looked very different with few icebergs, which allowed the Norse far more easy access to this site for farming, hunting, and fishing."

In the 1930s, archaeologists who visited the site hypothesized that conditions in the fjord must have been different from the present day in order for the Norse to have occupied the site, and this current research study provides data to support these long-held ideas.

"So we have this counterintuitive notion that climate cooling and glacier advance might have actually helped the Norse in this specific circumstance and allowed them to navigate more of the fjord more easily," said Lea.

The Norse left Greenland during the fifteenth century CE, and these results are consistent with the idea that a cooling climate was likely not the cause of their exodus; rather, a combination of economic factors likely led the Norse to abandon Greenland.

The results from this reconstruction of rapid glacial advance are also consistent with how ice-sheet models behave, which builds confidence in the projections those models produce. Accurate models and projections are crucial for understanding and preparing for future scenarios of continued retreat of the Greenland Ice Sheet and the associated sea-level rise.

"Melt from Greenland not only impacts sea-level change but also the ecology around the ice sheets, fisheries, the biological productivity of the oceans -- how much algae is growing. And also because the types of glaciers we're looking at produce icebergs these can cause hazards to shipping and trade, especially if the Northwest Passage opens up as it is expected to," said James Lea.

Read more at Science Daily

Quantum physics sets a speed limit to electronics

Semiconductor electronics is getting faster and faster -- but at some point, physics no longer permits any increase. The speed can definitely not be increased beyond one petahertz (one million gigahertz), even if the material is excited in an optimal way with laser pulses.

How fast can electronics be? When computer chips work with ever shorter signals and time intervals, at some point they come up against physical limits. The quantum-mechanical processes that enable the generation of electric current in a semiconductor material take a certain amount of time. This puts a limit to the speed of signal generation and signal transmission.

TU Wien (Vienna), TU Graz and the Max Planck Institute of Quantum Optics in Garching have now been able to explore these limits: The speed can definitely not be increased beyond one petahertz (one million gigahertz), even if the material is excited in an optimal way with laser pulses. This result has now been published in the scientific journal Nature Communications.

Fields and currents

Electric current and light (i.e. electromagnetic fields) are always interlinked. This is also the case in microelectronics: In microchips, electricity is controlled with the help of electromagnetic fields. For example, an electric field can be applied to a transistor, and depending on whether the field is switched on or off, the transistor either allows electrical current to flow or blocks it. In this way, an electromagnetic field is converted into an electrical signal.

In order to test the limits of this conversion of electromagnetic fields to current, laser pulses -- the fastest, most precise electromagnetic fields available -- are used, rather than transistors.

"Materials are studied that initially do not conduct electricity at all," explains Prof. Joachim Burgdörfer from the Institute for Theoretical Physics at TU Wien. "These are hit by an ultra-short laser pulse with a wavelength in the extreme UV range. This laser pulse shifts the electrons into a higher energy level, so that they can suddenly move freely. That way, the laser pulse turns the material into an electrical conductor for a short period of time." As soon as there are freely moving charge carriers in the material, they can be moved in a certain direction by a second, slightly longer laser pulse. This creates an electric current that can then be detected with electrodes on both sides of the material.

These processes happen extremely fast, on a time scale of atto- or femtoseconds. "For a long time, such processes were considered instantaneous," says Prof. Christoph Lemell (TU Wien). "Today, however, we have the necessary technology to study the time evolution of these ultrafast processes in detail." The crucial question is: How fast does the material react to the laser? How long does the signal generation take, and how long does one have to wait until the material can be exposed to the next signal? The experiments were carried out in Garching and Graz; the theoretical work and complex computer simulations were done at TU Wien.

Time or energy -- but not both

The experiment leads to a classic uncertainty dilemma, as often occurs in quantum physics: in order to increase the speed, extremely short UV laser pulses are needed, so that free charge carriers are created very quickly. However, using extremely short pulses implies that the amount of energy transferred to the electrons is not precisely defined. The electrons can absorb very different energies. "We can tell exactly at which point in time the free charge carriers are created, but not in which energy state they are," says Christoph Lemell. "Solids have different energy bands, and with short laser pulses many of them are inevitably populated by free charge carriers at the same time."
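The trade-off Lemell describes is the energy-time uncertainty relation: the shorter the pulse, the broader the spread of electron energies. A rough illustration (the pulse durations below are chosen for scale only and are not taken from the experiment):

```python
# Energy-time uncertainty: shorter pulses mean broader energy spreads.
# Illustrative only -- pulse durations are chosen for scale, not taken
# from the experiment described above.
HBAR_EV_S = 6.582e-16  # reduced Planck constant in eV*s

for pulse_s in (1e-15, 1e-16, 1e-17):   # 1 fs, 100 as, 10 as
    delta_e_ev = HBAR_EV_S / pulse_s    # Delta E ~ hbar / Delta t
    print(f"{pulse_s * 1e18:5.0f} as pulse -> energy spread ~{delta_e_ev:6.2f} eV")
# A ~1 fs pulse (the period of a one-petahertz signal) already smears the
# electron energy by ~0.7 eV -- comparable to semiconductor band gaps, so
# many energy bands are populated at once.
```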

Depending on how much energy they carry, the electrons react quite differently to the electric field. If their exact energy is unknown, it is no longer possible to control them precisely, and the current signal that is produced is distorted -- especially at high laser intensities.

Read more at Science Daily

Cases of cognitive decline in older people more than double in ten years

The researchers set out to see if there had been an increase in the numbers of older people who were reporting their first concerns about memory loss or cognitive decline to their doctor and what their chances of developing dementia were after consultation.

The study, published today in Clinical Epidemiology, looked at data from more than 1.3 million adults aged between 65 and 99 years old, taken between 2009 and the end of 2018. The researchers identified 55,941 adults who had spoken to their GP about memory concerns and 14,869 people who had a record of cognitive decline.

For every 1,000 people that were observed for one year in 2009, there was one new case of cognitive decline being recorded. By 2018, for every 1,000 people that were observed for one year, there were three new cases of cognitive decline being recorded.
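Those figures are incidence rates per 1,000 person-years of observation. A minimal sketch of the calculation (the case counts and denominators below are hypothetical; the summary does not give the study's actual denominators):

```python
# Incidence rate per 1,000 person-years (hypothetical counts for illustration).
def incidence_per_1000_py(new_cases: int, person_years: float) -> float:
    return 1000 * new_cases / person_years

# e.g. 50 first records of cognitive decline over 50,000 person-years:
print(incidence_per_1000_py(50, 50_000))   # 1.0 -- the 2009 rate
print(incidence_per_1000_py(150, 50_000))  # 3.0 -- the 2018 rate
```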

Lead author and PhD candidate Brendan Hallam (UCL Epidemiology & Health Care) said: "This is an important study which sheds new light on how prevalent memory concerns and cognitive decline are among the older generation in the UK and how likely these symptoms might progress to a dementia diagnosis.

"The study showed that while memory concern rates had remained stable, incidence of cognitive decline, a step beyond memory concern, had more than doubled between 2009 and 2018.

"There has been a drive in the past decade to encourage people to seek help earlier from their doctors if they are worried about their memory and we found that among those over 80, women and people living in more deprived areas were more likely to have a record of memory concern or cognitive decline, and their symptoms were more likely to progress to dementia diagnosis."

The study also showed that within three years of the date when a doctor first recorded a memory concern, 46% of people went on to develop dementia. For people with cognitive decline, the figure was 52%.

Co-author, Professor Kate Walters (UCL Epidemiology & Health Care) explained: "People who have been noted in their health records as having concerns about their memory are at just under 50% chance of developing dementia within the next three years."

Brendan Hallam added: "Memory concerns and cognitive decline are not only hallmark symptoms of dementia, but they also predict a high risk of developing dementia. It is important for GPs to identify people with memory concerns as soon as possible to deliver recommendations to improve memory and allow timely diagnosis of dementia."

The authors note one limitation of the present study is potential variation in how GPs record memory concerns and cognitive decline. They also say more research is needed to better understand the discrepancy between rates of memory symptoms and cognitive decline in the general population and those recorded in primary care.

Read more at Science Daily

Mar 24, 2022

On Jupiter's moon Europa, 'chaos terrains' could be shuttling oxygen to ocean

Salt water within the icy shell of Jupiter's moon Europa could be transporting oxygen into an ice-covered ocean of liquid water where it could potentially help sustain alien life, according to a team of researchers led by The University of Texas at Austin.

This theory has been proposed by others, but the researchers put it to the test by building the world's first physics-based computer simulation of the process, with oxygen hitching a ride on salt water under the moon's "chaos terrains," landscapes made up of cracks, ridges and ice blocks that cover a quarter of the icy world.

The results show that not only is the transport possible, but that the amount of oxygen brought into Europa's ocean could be on a par with the quantity of oxygen in Earth's oceans today.

"Our research puts this process into the realm of the possible," said lead researcher Marc Hesse, a professor at the UT Jackson School of Geosciences Department of Geological Sciences. "It provides a solution to what is considered one of the outstanding problems of the habitability of the Europa subsurface ocean."

The study was recently published in the journal Geophysical Research Letters.

Europa is a top spot to look for alien life because scientists have detected signs of oxygen and water, along with chemicals that could serve as nutrients. However, the moon's ice shell -- which is estimated to be about 15 miles thick -- serves as a barrier between water and oxygen, which is generated by sunlight and charged particles from Jupiter striking the icy surface.

If life as we know it exists in the ocean, there needs to be a way for oxygen to get to it. According to Hesse, the most plausible scenario based on the available evidence is for the oxygen to be carried by salt water, or brine.

Scientists think that chaos terrains form above regions where Europa's ice shell partially melts to form brine, which can mix with oxygen from the surface. The computer model created by the researchers showed what happens to the brine after the formation of the chaos terrain.

The model showed the brine draining in a distinct manner, taking the form of a "porosity wave" that causes pores in the ice to momentarily widen -- allowing the brine to pass through before sealing back up. Hesse compares the process to the classic cartoon gag of a bulge of water making its way down a garden hose.

This mode of transport appears to be an effective way to bring oxygen through the ice, with 86% of the oxygen taken up at the surface riding the wave all the way to the ocean. But the available data allows for a wide range of oxygen levels delivered to Europa's ocean over its history -- with estimates ranging by a factor of 10,000.

According to co-author Steven Vance, a research scientist at NASA's Jet Propulsion Laboratory (JPL) and the supervisor of its Planetary Interiors and Geophysics Group, the highest estimate would make the oxygen levels in Europa's ocean similar to those in Earth's oceans -- which raises hope about the potential for that oxygen to support life in the hidden sea.

"It's enticing to think of some kind of aerobic organisms living just under the ice," he said.

Vance said that NASA's upcoming 2024 Europa Clipper mission may help improve estimates for oxygen and other ingredients for life on the icy moon.

Kevin Hand, a scientist focused on Europa research at NASA JPL who was not part of the study, said that the study presents a compelling explanation for oxygen transport on Europa.

"We know that Europa has useful compounds like oxygen on its surface, but do those make it down into the ocean below, where life can use them?" he said. "In the work by Hesse and his collaborators, the answer seems to be yes."

Read more at Science Daily

Older wildfire smoke plumes can affect climate

Aerosols carried in wildfire smoke plumes that are hundreds of hours old can still affect climate, according to a study out of the University of California, Davis.

The research, published in the journal Environmental Science and Technology, suggests that wildfire emissions even 10 days old can affect the properties of aerosols -- suspended liquid or particles that are key to cloud formation.

Research in aerosols and particulate matter pollution related to wildfires has most often focused on the early hours of smoke plumes, not several days later after smoke has traveled to other areas.

Enhancing modeling

This research helps fill in a knowledge gap and can inform future predictions about the climate and atmospheric effects of wildfire over the lifetime of aerosols, particularly in rural or pristine areas with relatively clean air, said Qi Zhang, an environmental toxicology professor and lead author of the study.

"These parameters are really useful for atmospheric and chemical models," she said. "It's a really important component to solving the effects on climate. To capture those characteristics is super critical."

Zhang, Ph.D. student Ryan Farley and others spent time in 2019 at the Mount Bachelor Observatory atop a volcanic mountain in Oregon. That year was relatively calm in terms of wildfire, but smoke plumes and aerosols were still observed. Some were at least 10 days old and came from as close as Northern California and as far as Siberia, Russia.

The properties and chemical composition of aerosols can do a number of things: scatter or absorb solar radiation affecting temperature, seed clouds to produce rain or snow, or change the reflectivity of clouds -- all of which affect climate.

Aerosol properties change with age

Scientists found that particulate matter concentrations were low, but oxidized organic aerosols from burning biomass -- such as trees, grasses and shrubs -- were detected throughout the samples.

The aerosols, which have a life cycle of about two weeks, were larger in aged samples compared to those found shortly after a fire starts.

"The properties of the smoke determine the effects on the climate," Zhang said. "The really aged aerosols can behave very differently than the fresh ones. You want to capture these aerosols over the lifetime to properly account for the effects."

Aerosols in the background

Older aerosols produced by wildfires can be present but not obvious and still affect climate.

"It's not something you just notice but it's in the background," she said.

Knowing that information becomes ever more important as "biomass burning has become more and more frequent," Zhang said.

Read more at Science Daily

Early evolution of sea urchins

New insight on the origins and early evolution of echinoids, a group that includes the sea urchins, the sand dollars, and their relatives, has been published today in the journal eLife.

The study suggests that modern echinoids emerged approximately 300 million years ago, survived the Permo-Triassic mass extinction event -- the most severe biodiversity crisis in Earth's history -- and rapidly diversified in its aftermath. These findings help address a gap in knowledge caused by the relative lack of fossil evidence for this early diversification.

There are more than 1,000 living species of echinoids, including sea urchins, heart urchins, sand dollars and sea biscuits, which live across ocean environments ranging from shallow waters to abyssal depths. Throughout history, the hard spine-covered skeletons of these creatures have left an impressive number of fossils. However, despite this remarkable fossil record, their emergence is documented by few fossil specimens with unclear affinities to living groups, making their early history uncertain.

"There are still debates among scientists about when the ancestors of echinoids emerged and what role the mass extinction event that occurred between the Permian and Triassic periods may have played in their evolution," says first author Nicolás Mongiardino Koch, who completed the work while he was at Yale University, New Haven, Connecticut, US, and is now a postdoctoral fellow at Scripps Institution of Oceanography at UC San Diego, US.

"We set out to help resolve these debates by combining genomic and paleontological data to disentangle their evolutionary relationships. The extraordinary fossil record of echinoids and the ease with which these fossils can be incorporated in phylogenetic analyses make them an ideal system to explore their early evolution using this approach."

Mongiardino Koch and the team built upon available molecular resources with 18 novel genomic datasets, creating the largest existing molecular matrix for echinoids. Using this dataset, they were able to reconstruct the phylogenetic relationships and divergence times of the major lineages of living echinoids and place their diversification within broader evolutionary history. They did so by applying a 'molecular clock' technique to their dataset, whereby the rate at which mutations accumulated in the echinoid genomes is translated into geological time with the use of fossil evidence, allowing researchers to determine when different lineages first diversified.
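In its simplest, strict-clock form, the idea reduces to dividing genetic distance by substitution rate, with fossils anchoring the rate to geological time. A deliberately simplified sketch (toy numbers; the study itself used fossil-calibrated relaxed-clock models, not this naive formula):

```python
# Toy strict molecular clock: divergence time from pairwise genetic distance.
# The study used fossil-calibrated relaxed-clock models; this naive version
# only illustrates the underlying logic.
def divergence_time_myr(distance: float, rate_per_lineage_myr: float) -> float:
    # Substitutions accumulate along BOTH lineages since the split, hence 2x.
    return distance / (2 * rate_per_lineage_myr)

# Hypothetical inputs: 0.6 substitutions/site separating two lineages,
# at 0.001 substitutions/site/Myr per lineage:
print(divergence_time_myr(0.6, 0.001))  # 300.0 Myr -- about when the study
                                        # places the origin of modern echinoids
```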

Their analyses suggest that the ancestors of modern echinoids likely emerged during the Early Permian, and rapidly diversified during the Triassic period in the aftermath of a mass extinction event, even though this evolutionary radiation does not seem to have been captured by the fossil record.

Additionally, the results suggest that sand dollars and sea biscuits likely emerged much earlier than thought, during the Cretaceous period about 40 to 50 million years before the first documented fossils of these creatures. The authors say this result is remarkable, as the tough skeleton of the sand dollars, their buried lifestyles, and their extremely distinct morphologies imply that their fossil record should faithfully reflect their true evolutionary history.

The team also developed a multivariate statistical approach called a 'chronospace' to help them visualise and assess the robustness of their evolutionary timeline to different choices in their analyses. They found that different implementations of the molecular clock model had the strongest impact on divergence times, while other decisions showed minimal effects.

Read more at Science Daily

Do octopuses, squid and crabs have emotions?

Octopuses can solve complex puzzles and show a preference for different individuals, but whether they, and other animals and invertebrates, have emotions is being hotly debated and could shake up humans' moral decision-making, says a York University expert in animal minds.

Most countries don't recognize invertebrates, such as octopuses, crabs, lobsters and crayfish, as sentient beings that can feel pain, but the United Kingdom is considering amendments to its animal welfare legislation that would recognize this.

"A London School of Economics (LSE) report commissioned by the U.K. government found there is strong enough evidence to conclude that decapod crustaceans and cephalopod molluscs are sentient," says York University Professor and philosopher Kristin Andrews, the York Research Chair in Animal Minds, who is working with the LSE team.

Andrews co-wrote an article published today in the journal Science, "The question of animal emotions," with Professor Frans de Waal, director of the Living Links Center at Emory University, which discusses the ethical and policy issues around animals being considered sentient.

Andrews points out it has long been thought in Western culture that other animals don't feel pain or have emotions. "It's been a real struggle even to get fish and mammals recognized under welfare law as sentient. So, it's pretty cutting-edge what seems to be happening in the U.K. with invertebrates."

Pre-verbal human babies were considered not to feel pain up until at least the 1980s. It is still thought by many that animals, including invertebrates, don't feel pain and only have unconscious reactions to negative stimuli. However, research on mammals, fish, octopuses, and to a lesser extent crabs, has shown they avoid pain and dangerous locations, and there are signs of empathy in some animals, such as cows -- they become distressed when they see their calf is in pain.

Recognizing the sentience of invertebrates opens a moral and ethical dilemma. Humans can say what they feel, but animals don't have the same tools for describing their emotions. "However, the research so far strongly suggests their existence," says Andrews, who is working on a research project called Animals and Moral Practice.

"When we're going about our normal lives, we try not to do harm to other beings. So, it's really about retraining the way we see the world. How exactly to treat other animals remains an open research question," says Andrews. "We don't have sufficient science right now to know exactly what the proper treatment of certain species should be. To determine that, we need greater co-operation between scientists and ethicists."

There may be a point when humans can no longer assume that crayfish, shrimp, and other invertebrates don't feel pain and other emotions.

Read more at Science Daily

Good news for coffee lovers: Daily coffee may benefit the heart

Drinking coffee -- particularly two to three cups a day -- is not only associated with a lower risk of heart disease and dangerous heart rhythms but also with living longer, according to studies being presented at the American College of Cardiology's 71st Annual Scientific Session. These trends held true for both people with and without cardiovascular disease. Researchers said the analyses -- the largest to look at coffee's potential role in heart disease and death -- provide reassurance that coffee isn't tied to new or worsening heart disease and may actually be heart protective.

"Because coffee can quicken heart rate, some people worry that drinking it could trigger or worsen certain heart issues. This is where general medical advice to stop drinking coffee may come from. But our data suggest that daily coffee intake shouldn't be discouraged, but rather included as a part of a healthy diet for people with and without heart disease," said Peter M. Kistler, MD, professor and head of arrhythmia research at the Alfred Hospital and Baker Heart Institute in Melbourne, Australia, and the study's senior author. "We found coffee drinking had either a neutral effect -- meaning that it did no harm -- or was associated with benefits to heart health."

Kistler and his team used data from the UK BioBank, a large-scale prospective database with health information from over half a million people who were followed for at least 10 years. Researchers looked at varying levels of coffee consumption, ranging from less than one cup to more than five cups a day, and the relationship with heart rhythm problems (arrhythmias); cardiovascular disease, including coronary artery disease, heart failure and stroke; and total and heart-related deaths among people both with and without cardiovascular disease. Participants were grouped by how much coffee they reported drinking each day: 0, <1, 1, 2-3, 4-5, >5 cups/day. Coffee drinking was assessed from questionnaires completed upon entry into the registry. Overall, the researchers either found no effect or, in many cases, significant reductions in cardiovascular risk after controlling for exercise, alcohol, smoking, diabetes and high blood pressure, factors that could also play a role in heart health and longevity.
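As a rough illustration of how such an analysis is typically set up -- this is not the authors' code, and the file name, column names, and use of the lifelines library are assumptions for the sketch:

```python
# Hypothetical sketch of a coffee-intake survival analysis on cohort data,
# using pandas and the lifelines library. File and column names are invented.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("biobank_subset.csv")  # hypothetical extract

# Group self-reported intake into the article's categories.
bins = [-0.1, 0, 0.99, 1, 3, 5, float("inf")]
labels = ["0", "<1", "1", "2-3", "4-5", ">5"]
df["coffee_group"] = pd.cut(df["cups_per_day"], bins=bins, labels=labels)

# One-hot encode the coffee groups, keeping the adjustment covariates
# named in the article (exercise, alcohol, smoking, diabetes, hypertension).
model_df = pd.get_dummies(
    df[["followup_years", "died", "coffee_group",
        "exercise", "alcohol", "smoking", "diabetes", "hypertension"]],
    columns=["coffee_group"], drop_first=True)

# Cox proportional-hazards model: hazard ratios below 1 for a coffee group
# would indicate lower risk relative to the reference category (0 cups/day).
cph = CoxPHFitter()
cph.fit(model_df, duration_col="followup_years", event_col="died")
cph.print_summary()
```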

For the first study, researchers examined data from 382,535 individuals without known heart disease to see whether coffee drinking played a role in the development of heart disease or stroke during the 10 years of follow up. Participants' average age was 57 years and half were women. In general, having two to three cups of coffee a day was associated with the greatest benefit, translating to a 10%-15% lower risk of developing coronary heart disease, heart failure, a heart rhythm problem, or dying for any reason. The risk of stroke or heart-related death was lowest among people who drank one cup of coffee a day. Researchers did observe a U-shaped relationship with coffee intake and new heart rhythm problems. The maximum benefit was seen among people drinking two to three cups of coffee a day with less benefit seen among those drinking more or less.

The second study included 34,279 individuals who had some form of cardiovascular disease at baseline. Coffee intake at two to three cups a day was associated with lower odds of dying compared with having no coffee. Importantly, consuming any amount of coffee was not associated with a higher risk of heart rhythm problems, including atrial fibrillation (AFib) or atrial flutter, which Kistler said is often what clinicians are concerned about. Of the 24,111 people included in the analysis who had an arrhythmia at baseline, drinking coffee was associated with a lower risk of death. For example, people with AFib who drank one cup of coffee a day were nearly 20% less likely to die than non-coffee drinkers.

"Clinicians generally have some apprehension about people with known cardiovascular disease or arrhythmias continuing to drink coffee, so they often err on the side of caution and advise them to stop drinking it altogether due to fears that it may trigger dangerous heart rhythms," Kistler said. "But our study shows that regular coffee intake is safe and could be part of a healthy diet for people with heart disease."

Although two to three cups of coffee a day seemed to be the most favorable overall, Kistler said that people shouldn't increase their coffee intake, particularly if it makes them feel anxious or uncomfortable.

"There is a whole range of mechanisms through which coffee may reduce mortality and have these favorable effects on cardiovascular disease," he said. "Coffee drinkers should feel reassured that they can continue to enjoy coffee even if they have heart disease. Coffee is the most common cognitive enhancer -- it wakes you up, makes you mentally sharper and it's a very important component of many people's daily lives."

So how might coffee beans benefit the heart? People often equate coffee with caffeine, but coffee beans actually have over 100 biologically active compounds. These substances can help reduce oxidative stress and inflammation, improve insulin sensitivity, boost metabolism, inhibit the gut's absorption of fat and block receptors known to be involved with abnormal heart rhythms, Kistler said.

In a third study, researchers looked at whether there were any differences in the relationship between coffee and cardiovascular disease depending on whether someone drank instant or ground coffee or caffeinated or decaf. They found, once again, two to three cups a day to be associated with the lowest risk of arrhythmias, blockages in the heart's arteries, stroke or heart failure regardless of whether they had ground or instant coffee. Lower rates of death were seen across all coffee types. Decaf coffee did not have favorable effects against incident arrhythmia but did reduce cardiovascular disease, with the exception of heart failure. Kistler said the findings suggest caffeinated coffee is preferable across the board, and there are no cardiovascular benefits to choosing decaf over caffeinated coffees.

There are several important limitations to these studies. Researchers were unable to control for dietary factors that may play a role in cardiovascular disease, nor were they able to adjust for any creamers, milk or sugar consumed. Participants were predominantly white, so additional studies are needed to determine whether these findings extend to other populations. Finally, coffee intake was based on self-report via a questionnaire fielded at study entry. This should be considered when interpreting the study findings, though Kistler noted that research suggests people's dietary habits don't change much in adulthood or over time. Kistler said the results should be validated in randomized trials.

Read more at Science Daily

Mar 23, 2022

Researchers map the movement of white dwarfs of the Milky Way

White dwarfs were once normal stars similar to the Sun but then collapsed after exhausting all their fuel. These stellar remnants have historically been difficult to study. However, a recent study from Lund University in Sweden reveals new information about the movement patterns of these puzzling stars.

White dwarfs have a radius of about 1 percent of the Sun's. They have about the same mass, which means they have an astonishing density of about 1 tonne per cubic centimeter. After billions of years, white dwarfs will cool down to a point where they stop emitting visible light, and turn into so-called black dwarfs.
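That density follows directly from the quoted figures: a solar mass packed into roughly 1 percent of the Sun's radius. A quick check (solar mass and radius are standard constants):

```python
# Check the quoted white-dwarf density: one solar mass inside ~1% of the
# Sun's radius.
import math

M_SUN_KG = 1.989e30   # solar mass
R_SUN_M = 6.957e8     # solar radius

r = 0.01 * R_SUN_M                    # white-dwarf radius (~1% of the Sun's)
volume_m3 = (4 / 3) * math.pi * r**3
density_kg_m3 = M_SUN_KG / volume_m3  # 1 tonne/cm^3 = 1e9 kg/m^3

print(f"~{density_kg_m3 / 1e9:.1f} tonnes per cubic centimetre")
# -> ~1.4 tonnes/cm^3, the same order as the "about 1 tonne" quoted above.
```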

The first white dwarf ever discovered lies in the triple star system 40 Eridani, 16.2 light-years from Earth, where the bright primary 40 Eridani A is accompanied by a binary pair consisting of the white dwarf 40 Eridani B and the red dwarf 40 Eridani C. Ever since the system was catalogued in 1783, astronomers have tried to learn more about white dwarfs in order to gain a deeper understanding of the evolutionary history of our home galaxy.

In a study published in Monthly Notices of the Royal Astronomical Society, a research team can present new findings about how the collapsed stars move.

"Thanks to observations from the Gaia space telescope, we have for the first time managed to reveal the three-dimensional velocity distribution for the largest catalogue of white dwarfs to date. This gives us a detailedpicture of their velocity structurewith unparalleled detail," says Daniel Mikkola, doctoral student in astronomy at Lund University.

Thanks to Gaia, researchers have measured positions and velocities for about 1.5 billion stars. But only recently have they been able to completely focus on the white dwarfs in the Solar neighbourhood.

"We have managed to map the white dwarfs' velocities and movement patterns. Gaia revealed that there are two parallel sequences of white dwarfs when looking at their temperature and brightness. If we study these separately, we can see that they move in different ways, probably as a consequence of them having different masses and lifetimes," says Daniel Mikkola.

The results can be used to develop new simulations and models to continue mapping the history and development of the Milky Way. Through an increased knowledge of the white dwarfs, the researchers hope to resolve a number of open questions surrounding the birth of the Milky Way.

Read more at Science Daily

Humans have given wild animals their diseases nearly 100 times, researchers find

An international research team led by scientists at Georgetown University has found that humans might give viruses back to animals more often than previously understood.

In a study published March 22 in Ecology Letters, the authors describe nearly one hundred different cases where diseases have undergone "spillback" from humans back into wild animals, much like how SARS-CoV-2 has been able to spread in mink farms, zoo lions and tigers, and wild white-tailed deer.

"There has understandably been an enormous amount of interest in human-to-wild animal pathogen transmission in light of the pandemic," says Gregory Albery, Ph.D., a postdoctoral fellow in the Department of Biology at Georgetown University and the study's senior author. "To help guide conversations and policy surrounding spillback of our pathogens in the future, we went digging through the literature to see how the process has manifested in the past."

In their new study, Albery and colleagues found that almost half of the incidents identified occurred in captive settings like zoos, where veterinarians keep a close eye on animals' health and are more likely to notice when a virus makes the jump. Additionally, more than half of cases they found were human-to-primate transmission, an unsurprising result both because pathogens find it easier to jump between closely-related hosts, and because wild populations of endangered great apes are so carefully monitored.

"This supports the idea that we're more likely to detect pathogens in the places we spend a lot of time and effort looking, with a disproportionate number of studies focusing on charismatic animals at zoos or in close proximity to humans" says Anna Fagre, DVM, Ph.D., MPH, a virologist and wildlife veterinarian at Colorado State University who was lead author on the study, and has also published research on the risks of SARS-CoV-2 spillback using laboratory experiments with the North American deer mouse (Peromyscus maniculatus). "It brings into question which cross-species transmission events we may be missing, and what this might mean not only for public health, but for the health and conservation of the species being infected."

Disease spillback has recently attracted substantial attention due to the spread of SARS-CoV-2, the virus that causes COVID-19, in wild white-tailed deer in the United States and Canada. Some data suggest that deer have given the virus back to humans in at least one case, and many scientists have expressed broader concerns that new animal reservoirs might give the virus extra chances to evolve new variants.

In their new study, Albery and colleagues find a sliver of good news: scientists can use artificial intelligence to anticipate which species might be at risk of contracting the virus. When the researchers compared species that have been infected with SARS-CoV-2 to predictions made by other researchers earlier in the pandemic, they found that scientists were able to guess correctly more often than not.

"It's quite satisfying to see that sequencing animal genomes and understanding their immune systems has paid off," says Colin Carlson, Ph.D., an assistant research professor in the Center for Global Health Science and Security at Georgetown University Medical Center and an author on the study. "The pandemic gave scientists a chance to test out some predictive tools, and it turns out we're more prepared than we thought."

The new study is part of a National Science Foundation-funded project called the Viral Emergence Research Initiative, or Verena. The Verena team uses data science and machine learning to study "the science of the host-virus network" -- a new field that aims to predict which viruses can infect humans, which animals host them and where, when and why they might emerge. Those insights could be critical if scientists want to understand how and why humans share their diseases with animals.

Spillover may be predictable, the authors conclude, but the biggest problem is how little we know about wildlife disease. "We're watching SARS-CoV-2 more closely than any other virus on earth, so when spillback happens, we can catch it. It's still much harder to credibly assess risk in other cases where we're not able to operate with as much information," says Carlson. As a result, it's hard to measure how severe a risk spillback poses for human health or wildlife conservation, particularly for pathogens other than SARS-CoV-2.

"Long-term monitoring helps us establish baselines for wildlife health and disease prevalence, laying important groundwork for future studies," says Fagre. "If we're watching closely, we can spot these cross-species transmission events much faster, and act accordingly."

Read more at Science Daily

Rewriting the history books: Why the Vikings left Greenland

One of the great mysteries of late medieval history is why the Norse, who had established successful settlements in southern Greenland in 985, abandoned them in the early 15th century. The consensus view has long been that colder temperatures, associated with the Little Ice Age, helped make the colonies unsustainable. However, new research, led by the University of Massachusetts Amherst and published recently in Science Advances, upends that old theory. It wasn't dropping temperatures that helped drive the Norse from Greenland, but drought.

When the Norse settled in Greenland on what they called the Eastern Settlement in 985, they thrived by clearing the land of shrubs and planting grass as pasture for their livestock. The population of the Eastern Settlement peaked at around 2,000 inhabitants, but collapsed fairly quickly about 400 years later. For decades, anthropologists, historians and scientists have thought the Eastern Settlement's demise was due to the onset of the Little Ice Age, a period of exceptionally cold weather, particularly in the North Atlantic, that made agricultural life in Greenland untenable.

However, as Raymond Bradley, University Distinguished Professor of geosciences at UMass Amherst and one of the paper's co-authors, points out, "before this study, there was no data from the actual site of the Viking settlements. And that's a problem." Instead, the ice core data that previous studies had used to reconstruct historical temperatures in Greenland was taken from a location that was over 1,000 kilometers to the north and over 2,000 meters higher in elevation. "We wanted to study how climate had varied close to the Norse farms themselves," says Bradley. And when they did, the results were surprising.

Bradley and his colleagues traveled to a lake called Lake 578, which is adjacent to a former Norse farm and close to one of the largest groups of farms in the Eastern Settlement. There, they spent three years gathering sediment samples from the lake, which represented a continuous record for the past 2,000 years. "Nobody has actually studied this location before," says Boyang Zhao, the study's lead author who conducted this research for his Ph.D. in geosciences at UMass Amherst and is currently a postdoctoral research associate at Brown University.

They then analyzed that 2,000-year sample for two different markers: the first, a lipid known as brGDGT, can be used to reconstruct temperature. "If you have a complete enough record, you can directly link the changing structures of the lipids to changing temperature," says Isla Castañeda, professor of geosciences at UMass Amherst and one of the paper's co-authors. A second marker, derived from the waxy coating on plant leaves, can be used to determine the rates at which the grasses and other livestock-sustaining plants lost water due to evaporation. It is therefore an indicator of how dry conditions were.
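Schematically, proxies like these work through empirical calibrations: a structural index measured on the lipids maps onto temperature through a fitted linear relation. A sketch under invented assumptions -- the index values and coefficients below are placeholders, not the study's calibration:

```python
# Schematic lipid paleothermometer: a structural index of the brGDGT lipids
# is converted to temperature via an empirical linear calibration.
# The coefficients and index values are HYPOTHETICAL placeholders.
def brgdgt_temperature_c(index: float, slope: float = 30.0,
                         intercept: float = -9.0) -> float:
    return slope * index + intercept

# Down-core samples (depth stands in for age):
for depth_cm, index in [(10, 0.42), (50, 0.40), (100, 0.41)]:
    print(f"{depth_cm:4d} cm: ~{brgdgt_temperature_c(index):.1f} deg C")
# Near-constant reconstructed temperatures down-core would match the study's
# finding that temperature barely changed during the Norse settlement.
```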

"What we discovered," says Zhao, "is that, while the temperature barely changed over the course of the Norse settlement of southern Greenland, it became steadily drier over time."

Norse farmers had to overwinter their livestock on stored fodder, and even in a good year the animals were often so weak that they had to be carried to the fields once the snow finally melted in the spring. Under conditions like that, the consequences of drought would have been severe. An extended drought, on top of other economic and social pressures, may have tipped the balance just enough to make the Eastern Settlement unsustainable.

Read more at Science Daily

Origins of diabetes may be different in men and women

Over the past four decades, global cases of Type 2 diabetes mellitus have skyrocketed. According to the World Health Organization, the number of people estimated to have the disease jumped from 108 million in 1980 to 422 million in 2014, with the fastest growth observed in low- and middle-income countries.

Although the disease is common, there is still much research left to be done to fully understand it. For instance, while diabetes is linked to obesity, researchers still do not know the exact reasons why obesity causes diabetes.

In a new paper published in the journal Obesity Reviews, Concordia researchers Kerri Delaney and Sylvia Santosa look at how fat tissue from different parts of the body may lead to diabetes onset in men and women. They reviewed almost 200 scientific papers looking for a deeper understanding of how fat operates at the surface and tissue level, and the mechanisms by which that tissue contributes to diabetes onset.

"There are many different theories about how diabetes develops, and the one that we explore posits that different regions of fat tissue contributes to disease risk differently," says Kerri Delaney, a PhD candidate at Concordia's PERFORM Centre and the paper's lead author. "So the big question is, how do the different depots uniquely contribute to its development, and is this contribution different in men and women?"

From surface to cell level

Men and women store fat in different places. Diabetes, like many other diseases, is closely associated with abdominal fat. Women tend to store that fat just under the skin. This is known as subcutaneous fat. In men, abdominal fat is stored around the organs. This is visceral fat.

Fat tissue appears to exhibit different features in men and women: it grows differently, is distributed differently and interacts with the inflammatory and immune systems differently. For example, in men fat tissue expands because the fat cells grow in size; in women, fat cells multiply and increase in number. This changes after menopause, when levels of the protective hormone estrogen decline, and may explain why men are more susceptible to diabetes earlier in life than women.

Working from the hypothesis that diabetes risk is driven by expansions of visceral fat in men and of subcutaneous fat in women, the researchers then looked through the papers to see what was happening in the cell-level microenvironments.

Though more research is needed, the researchers observed overall differences at the immune cell, hormone, and cell signalling levels in men and women that seem to support different origins of diabetes between the sexes.

Delaney and Santosa hope that by identifying how diabetes risks are different in men and women, clinical approaches to treatment of the disease can be better defined between the sexes.

Read more at Science Daily

Mar 22, 2022

Could the asteroid Ryugu be a remnant of an extinct comet? Scientists now answer

Asteroids hold many clues about the formation and evolution of planets and their satellites. Understanding their history can, therefore, reveal much about our solar system. While observations made from a distance using electromagnetic waves and telescopes are useful, analyzing samples retrieved from asteroids can yield much more detail about their characteristics and how they may have formed. An endeavor in this direction was the Hayabusa mission, which, in 2010, returned to Earth after 7 years with samples from the asteroid Itokawa.

The successor to this mission, called Hayabusa2, was completed near the end of 2020, bringing back material from the asteroid 162173 "Ryugu," along with a collection of images and data gathered remotely from close proximity. While the material samples are still being analyzed, the information obtained remotely has revealed three important features about Ryugu. First, Ryugu is a rubble-pile asteroid, composed of small pieces of rock and solid material clumped together by gravity rather than a single, monolithic boulder. Second, Ryugu is shaped like a spinning top, likely the result of deformation induced by rapid rotation. Third, Ryugu has a remarkably high organic matter content.

Of these, the third feature raises a question regarding the origin of this asteroid. The current scientific consensus is that Ryugu originated from the debris left by the collision of two larger asteroids. However, this cannot be true if the asteroid is high in organic content (which will be confirmed once the analyses of the returned samples are complete). What, then, could be the true origin of Ryugu?

In a recent effort to answer this question, a research team led by Associate Professor Hitoshi Miura of Nagoya City University, Japan, proposed an alternative explanation backed up by a relatively simple physical model. As explained in their paper published in The Astrophysical Journal Letters, the researchers suggest that Ryugu, as well as similar rubble-pile asteroids, could, in fact, be remnants of extinct comets. This study was carried out in collaboration with Professor Eizo Nakamura and Associate Professor Tak Kunihiro from Okayama University, Japan.

Comets are small bodies that form in the outer, colder regions of the solar system. They are mainly composed of water ice, with some rocky components (debris) mixed in. If a comet enters the inner solar system -- the region inside the asteroid belt, before Jupiter -- heat from solar radiation causes the ice to sublimate and escape, leaving behind rocky debris that compacts under gravity and forms a rubble-pile asteroid.

This process fits all the observed features of Ryugu, as Dr. Miura explains, "Ice sublimation causes the nucleus of the comet to lose mass and shrink, which increases its speed of rotation. As a result of this spin-up, the cometary nucleus may acquire the rotational speed required for the formation of a spinning-top shape. Additionally, the icy components of comets are thought to contain organic matter generated in the interstellar medium. These organic materials would be deposited on the rocky debris left behind as the ice sublimates."
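
The spin-up Dr. Miura describes is essentially the figure-skater effect. As a rough sketch (not the paper's model), treat the nucleus as a uniform sphere and assume the escaping gas carries away comparatively little angular momentum; then

$$ L = I\omega = \tfrac{2}{5}MR^{2}\omega \approx \text{const} \quad\Longrightarrow\quad \omega \propto \frac{1}{MR^{2}}, $$

so a nucleus that, say, lost half its mass while shrinking to 80% of its original radius would spin roughly three times faster ($1/(0.5 \times 0.8^{2}) \approx 3.1$). The team's actual calculation, described next, simulated the sublimation process numerically.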

To test their hypothesis, the research team conducted numerical simulations using a simple physical model to calculate the time it would take for the ice to sublimate and the resulting increase in the asteroid's rotational speed. Their analysis suggests that Ryugu likely spent a few tens of thousands of years as an active comet before moving into the inner asteroid belt, where the high temperatures vaporized its ice and turned it into a rubble-pile asteroid.

Read more at Science Daily

Hawaiian-Emperor undersea mystery revealed with supercomputers

The Hawaiian-Emperor seamount chain spans almost four thousand miles from the Hawaiian Islands to the Detroit Seamount in the north Pacific, an L-shaped chain that runs west and then abruptly north. The 60-degree bend in the line of mostly undersea mountains and volcanic islands has puzzled scientists since it was first identified in the 1940s from the data of numerous echo-sounding ships.

A team of scientists has now used supercomputers allocated by the Extreme Science and Engineering Discovery Environment (XSEDE) to model and reconstruct the dynamics of Pacific tectonic plate motion that might explain the mysterious bend in the mountain chain.

Major Findings

"We've shown with computer models for the first time how the Pacific plate can abruptly change direction from the north to the west," said Michael Gurnis, professor of Geophysics at the California Institute of Technology.

"It's been a holy grail to figure out why this change happened," he said. Gurnis co-authored the study on the origins of the seamount chain that was published in Nature Geoscience in January 2022.

Besides Gurnis, the team consisted of geoscientists Jiashun Hu, a postdoctoral scholar at Caltech, and Dietmar Müller of the University of Sydney in Australia, and computational scientists Johann Rudi of Argonne National Laboratory and Georg Stadler of New York University.

Plate Motion Clues

Plate motion holds the key: the seamount chain is a record of how the Pacific plate has moved. Gigantic tectonic plates in Earth's crust move slowly over the hot, weak rock of the mantle.

The Pacific Plate is one of the largest. It spans about 40 million square miles, outlined by the mountains and volcanoes of the 'Ring of Fire', which are created where plates sink back into the mantle.

But the volcanoes of Hawaii and the Hawaiian-Emperor seamount chain weren't caused by this process. Instead, scientists theorize that plumes of Earth's hottest rock rise from near its core, travel upward through the mantle, and generate a volcanic hotspot. In this theory, the seamount chain was created by the plate moving over the hot plume, something like a trail of burn marks on a sheet of paper moved over a candle.

About 80 million years ago, the Pacific plate traveled mostly north for about 30 million years, as evidenced by the line of Emperor seamounts. But about 50 million years ago, something odd happened. The Pacific plate apparently changed direction, and the mantle plume also shifted.

"Maybe there's an underlying physical reason why they would happen simultaneously," Gurnis said.

Prior Gordon Bell Prize

He pointed to previous work on the dynamics of mantle convection using techniques such as adaptive mesh refinement -- computational work that scales well to large numbers of CPUs, ran on TACC's Stampede1 system, and earned the team spearheaded by Johann Rudi the Gordon Bell Prize in 2015.

"Moreover, earlier work with Mu?ller, Gurnis and others showed how the physics of plumes could work inside the mantle such that the you could have a plume which rapidly migrated to the south and then stopped at 50 million years ago," Gurnis said.

"These two studies are complementary because going into the present study, we actually had a model which could explain the motion of the plume to the south and then stop abruptly, but we didn't have a model that could explain how the plate could change its direction," he added.

The team's computations of the physics of tectonic plates had to account for the faults at plate boundaries while still allowing the plates to move.

Computational Challenges

The challenge of computing both of those pieces of physics simultaneously meant that the team needed computational methods that could handle vast changes in mechanical properties from one plate to another, as well as across the faults between them.

Yet the traditional drivers of plate motion failed to add up to enough force in the models to pull the Pacific Plate to the west and explain the bend.

"We discovered that there was another idea that had existed in the literature, but it wasn't getting much attention," Gurnis said.

New Factor

The new factor accounted for in the study was a subduction zone in the Russian Far East -- the Kronotsky arc -- which terminated about 50 million years ago. The team built new plate tectonic reconstructions that included these subduction zones.

When they put the zones in the models, they discovered that they could make the Pacific plate move to the north. And when that subduction terminated, the Pacific plate started to move to the west, slowly building up other subduction zones that over time provided more force to pull the plate westward.

"It's a new hypothesis that's much firmer in terms of the physics which it's based upon," Gurnis concluded. "It will allow other scientists to see if it will hold up to further scrutiny and if there are other ideas that can be tested on its assumptions."

Computational Resources

For the study, Gurnis was awarded access to the Stampede2 supercomputer at TACC through XSEDE funded by the National Science Foundation (NSF). He was also awarded access to the NSF-funded Frontera system also at TACC, the most powerful supercomputer in academia and the first phase of the NSF "Towards a Leadership Class Computing Facility" program.

"Both XSEDE and Frontera are absolutely vital for our research," Gurnis said.

"This capability computing is essential," he added. "We're spinning up projects with this collaboration that will be substantially larger than this, that are going to require something even beyond Frontera to compute."

This basic research aims to investigate mysteries about the dynamics of the past and present Earth.

"When you deal with some of the most fundamental processes in the earth, it's important to correctly figure out how they work," Gurnis said.

New Directions

He also highlighted the interplay between domain science and the applied work with computational scientists.

"The algorithms we've developed for adaptive mesh refinement can be applied to many pure and applied problems," Gurnis added. "That was a huge breakthrough."

Read more at Science Daily

Agricultural expansion a major cause of doubling of annual tropical carbon loss over past two decades

Using multiple high-resolution satellite datasets, researchers from the Department of Civil Engineering at the University of Hong Kong (HKU) and Southern University of Science and Technology (SUSTech) found that tropical carbon loss has doubled over the past two decades due to excessive forest removal in the tropics.

The tropics are an important ecosystem as they store massive amounts of carbon in their woody vegetation and soil -- but they have suffered from extensive forest clearance since 2001. The researchers analysed the gross forest carbon loss associated with forest removal in the tropics (between 23.5°N and 23.5°S, but excluding northern Australia) during the 21st century. They revealed a two-fold increase in gross tropical forest carbon loss worldwide, from 0.97 gigatons of carbon per year in 2001-2005 to 1.99 gigatons of carbon per year in 2015-2019, driven by rapid forest loss.
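
The "two-fold increase" follows directly from the two five-year averages, and implies a steep underlying growth rate:

$$ \frac{1.99\ \mathrm{GtC\,yr^{-1}}}{0.97\ \mathrm{GtC\,yr^{-1}}} \approx 2.05, \qquad 2.05^{1/14} \approx 1.053, $$

that is, roughly a 5% compounded increase per year, taking the midpoints of the two windows (2003 and 2017) as endpoints.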

The study has been published in the academic journal Nature Sustainability in an article entitled "Doubling of annual forest carbon loss over the tropics during the early twenty-first century."

Given the key role of the tropics in the carbon cycle, the study carries serious implications. "The findings are critical because they suggest that existing strategies to reduce forest loss are questionable; this failure underscores the importance of monitoring deforestation trends following one of the new pledges -- to halt and reverse deforestation -- made at the UN climate summit, the twenty-sixth Conference of the Parties (COP26), in Glasgow in November 2021," said Professor Ji CHEN from HKU's Department of Civil Engineering.

Tropical forests are the largest terrestrial component of the global carbon cycle, storing about 250 gigatons of biomass carbon in their woody vegetation and absorbing about 70 gigatons of atmospheric carbon per year through photosynthesis. The rapid and steady loss of forests could be devastating because it leads to the loss of carbon stored in biomass and soil. Deforestation also obstructs carbon sequestration, the process of capturing and retaining carbon dioxide.

"The doubling and acceleration in the loss of forest carbon, including biomass and soil organic carbon, is primarily driven by agricultural expansion which differs from current estimates of land-use change emissions in the assessments of the global carbon budget that shows a flat or decreasing trend. In addition to carbon, conversion of forests to agricultural lands also induces other environmental consequences, like biodiversity extinction and land degradation," said Yu FENG, a PhD candidate of the HKU and SUSTech joint programme.

Most of the tropical forest carbon loss (82%) was triggered by agricultural expansion -- for example, shifting cultivation, particularly in Africa.

"While some agricultural lands may reappear as forested due to abandonment or policies, we still observed about 70% of former forest lands converted to agriculture in 2001-2019 remained so in 2020, confirming a dominant role of agriculture in long-term pan-tropical carbon reductions on formerly forested landscapes," said research team member Dr Zhenzhong Zeng, Associate Professor at SUSTech.

Read more at Science Daily

Blowing bubbles in dough to bake perfect yeast-free pizza

In typical breads, yeast produces bubbles via a biochemical process, causing dough to rise and develop into light, airy, and tasty treats. Without that yeast, it is difficult to make morsels with the same characteristic taste and texture. The perfect yeast-free pizza thus presents an important challenge for bakers and yeast-intolerant crust enthusiasts across the globe.

In Physics of Fluids, published by AIP Publishing, researchers from the University of Naples Federico II report a method to leaven pizza dough without yeast.

The team, which included its very own professional pizza-maker/graduate student, prepared the dough by mixing water, flour, and salt and placing it in a hot autoclave, an industrial device designed to raise temperature and pressure.

From there, the process is like the one used to carbonate soda: gas is dissolved into the dough at high pressure, and bubbles form in the dough as the pressure is released during baking. Compared with many laboratory experiments, the pressures involved were mild -- they can be achieved by a typical at-home coffee maker.
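
The article gives no figures, but the soda analogy can be made concrete with a back-of-the-envelope estimate. The Python sketch below is illustrative only -- the charging pressure, the choice of CO2 as the gas, and treating the dough's water as the solvent are assumptions, not values from the paper. It combines Henry's law for the dissolved gas with the ideal gas law for the volume that gas occupies after release:

# Back-of-the-envelope sketch of pressure-driven leavening, by analogy
# with soda carbonation. Every number here is an illustrative
# assumption; the paper's actual gas, pressures and dough model may differ.

R = 8.314        # gas constant, J/(mol*K)
T = 298.0        # temperature, K (room temperature, assumed)
ATM = 101325.0   # 1 atm in Pa

K_H = 0.034      # Henry's constant for CO2 in water at 25 C, mol/(L*atm)
p_charge = 6.0   # assumed autoclave charging pressure, atm
p_ambient = 1.0  # ambient pressure after release, atm

# Henry's law: dissolved gas concentration is proportional to pressure.
# Moles of CO2 per litre of dough water that come out of solution when
# the pressure drops from p_charge to p_ambient:
dn = K_H * (p_charge - p_ambient)        # mol per litre of water

# Ideal gas law: volume that gas occupies at ambient pressure.
v_gas = dn * R * T / (p_ambient * ATM)   # m^3 per litre of water

print(f"Gas released: {dn:.2f} mol per litre of dough water")
print(f"Volume at ambient pressure: {v_gas * 1e3:.1f} L per litre of water")

On these assumed numbers, each litre of dough water releases roughly four litres of gas at ambient pressure, several times the dough's own volume.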

However, the scientists-turned-bakers had to be cautious with the pressure release. Compared to soda, pizza dough does not respond as nicely to an abrupt change in pressure.

"The key to the process is to design the pressure release rate not to stress the dough, which likes to expand gently," said author Ernesto Di Maio.

The authors evaluated their dough with rheology, which measures the flow and deformation of a material. Fine-tuning the pressure release through rheological analysis made it possible to gently inflate bubbles to the desired extent.

"We mainly studied how dough behaves with and without yeast. How the softness changes with leavening, and how the dough responds to a temperature program during baking," said author Rossana Pasquino. "This was fundamental to designing the pressure protocol for the dough without yeast."

After many unofficial taste tests, the researchers are purchasing a larger, food-grade autoclave that will make full-sized pizzas in future experiments. They hope to see their idea used in pizza shops.

"We had a lot of fun applying things we know well to delicious polymers, instead of our typical and sometimes boring smelly plastics," said Pasquino. "The idea of approaching food samples with the same technologies and knowledge used for thermoplastic polymers was surprisingly successful!"

As a person with a yeast allergy, Di Maio is also excited about applications for other leavened products like bread, cakes, and snacks.

Read more at Science Daily

Shining a light on protein aggregation in Parkinson's disease

A novel system to control protein aggregation in a model of Parkinson's disease may answer longstanding questions about how the disease begins and spreads, according to a new study published March 9 in the open-access journal PLOS Biology by Abid Oueslati of Laval University, Quebec, Canada, and colleagues. Initial results suggest that aggregation of the protein alpha-synuclein plays a critical role in disrupting neuronal homeostasis and triggering neurodegeneration.

Parkinson's disease is a neurodegenerative disorder, marked clinically by tremor, stiffness, and slowed movements, as well as a host of nonmotor symptoms. Within affected neurons, molecules of a protein called alpha-synuclein clump together, forming characteristic aggregates called Lewy bodies. But it has been hard to establish whether alpha-synuclein aggregation contributes to disease development or progression, when it may act in the toxic disease cascade, or whether the aggregates are instead innocent bystanders to some other malevolent process, or even protective. These questions have been difficult to answer, in part because aggregation in cellular and animal models has not been controllable in either time or space.

To address that problem, the authors turned to optobiology, a technique in which a protein of interest is fused to another protein that changes its conformation in response to light, allowing the behavior of the target protein to be manipulated selectively and reversibly. Here, the authors fused alpha-synuclein to a protein known as cryptochrome protein 2, from a mustard plant. They found that when light of the correct wavelength fell on the mustard protein, its conformational change triggered aggregation of its alpha-synuclein partner.

The aggregates that formed were reminiscent of Lewy bodies in multiple important ways, including that they included several other key proteins besides alpha-synuclein found in Lewy bodies in people with Parkinson's disease, and that the alpha-synuclein in the aggregates adopted the characteristic beta-sheet conformation seen in many diseases of misfolded proteins. The aggregates induced dislocation of multiple cellular organelles, as Lewy bodies have been recently reported to do as well. They also induced misfolding in alpha-synuclein molecules not attached to the cryptochrome protein, mimicking the prion-like spread of aggregation seen with alpha-synuclein in the diseased brain and animal models.

Finally, the authors delivered the genes for the alpha-synuclein-cryptochrome fusion protein to mice, directly into the substantia nigra, the structure in the brain that is most prominently affected by Parkinson's disease, and surgically placed an optic fiber to deliver light to the targeted cells. Light treatment led to formation of alpha-synuclein aggregates, neurodegeneration, disruption of calcium activity in downstream neuronal targets, and Parkinson-like motor deficits.

"Our results demonstrate the potential of this optobiological system to reliably and controllably induce formation of Lewy body-like aggregations in model systems, in order to better understand the dynamics and timing of Lewy body formation and spread, and their contribution to the pathogenesis of Parkinson's disease," Oueslati said.

Read more at Science Daily

Mar 21, 2022

Soil erosion and wildfire another nail in coffin for Triassic era

Curtin research has revealed that soil erosion and wildfires contributed to a mass extinction event 201 million years ago that ended the Triassic era and paved the way for the rise of dinosaurs in the Jurassic period.

Lead author Curtin PhD graduate Dr Calum Peter Fox, from the WA-Organic and Isotope Geochemistry Centre (WA-OIGC) in Curtin's School of Earth and Planetary Sciences, said the research identified the other factors that contributed to a combination of stresses that killed off Triassic life and allowed the ecological expansion of dinosaurs.

"This new study adds soil erosion and wildfire activity to the list of factors that drove this mass extinction to end the Triassic era, building on our previous research that found a rise in levels of acid and hydrogen sulfide in the ocean caused by rapid increases in carbon dioxide due to a surge in volcanic activity," Dr Fox said.

"Similar to modern large-scale fire events that are driven by climate change, periods of wildfire activity have significant impacts for land-dwelling fauna and flora and drive environmental and ecosystem stress that can lead to mass extinctions."

Dr Fox said the team investigated fire events 201 million years ago during the end-Triassic mass extinction event, which featured increases in carbon dioxide similar to those seen today as a result of greenhouse gas emissions.

"By studying polycyclic aromatic hydrocarbons, which can be formed during the incomplete combustion of organic matter, we found that soil erosion was a more prominent terrestrial ecological stress than intensive wildfire activity during the end-Triassic mass extinction event in the Bristol Channel of the south-west United Kingdom.

"This tells us land and marine ecosystem and environmental stresses occurred at the same time and were likely exacerbated by soil erosion, with fire activity likely to be more localised in other areas rather than widespread across Europe."

Co-author John Curtin Distinguished Professor Kliti Grice, also from WA-OIGC in Curtin's School of Earth and Planetary Sciences, said modern-day soil erosion was a major cause of land degradation, as it removes fertile soil and promotes the deoxygenation of water columns, much as during the mass extinction events of the past.

"These processes certainly have implications in the modern day due to the introduction of pollutants and pesticides," Professor Grice said. "Observing that soil erosion had major impacts in our history and in comparing and contrasting a global record of the past, we can anticipate the scale and duration of currently-occurring and future soil erosion events."

Read more at Science Daily

The secret to staying young: New research highlights power of lifelong exercise to keep muscles healthy

Lifelong physical activity could protect against age-related loss of muscle mass and function, according to research published in The Journal of Physiology. Individuals aged 68 and above who were physically active throughout their life have healthier ageing muscle that has superior function and is more resistant to fatigue compared to inactive individuals, both young and old.

This is the first study to investigate muscle, stem cell and nerve activity together in humans. The researchers, from the University of Copenhagen, Denmark, found that elderly individuals who kept physically active throughout their adult life, whether through resistance exercise, ball games, racket sports, swimming, cycling, running and/or rowing, had a greater number of muscle stem cells, otherwise known as satellite cells, in their muscle. These cells are important for muscle regeneration and long-term growth, and protect against nerve decay.

Forty-six male participants took part in the study. They were divided into three groups: young sedentary (15), elderly lifelong exercisers (16) and elderly sedentary (15). To evaluate muscle function, participants performed heavy resistance exercise, sitting in a mechanical chair and performing a knee-extension movement while the force produced was measured. Blood samples were taken, and muscle biopsies from both legs were analysed. The researchers found the elderly lifelong exercisers outperformed both the elderly and the young sedentary adults.

Lead author, Casper Soendenbroe, University of Copenhagen, Denmark said:

"This is the first study in humans to find that lifelong exercise at a recreational level could delay some detrimental effects of ageing. Using muscle tissue biopsies, we've found positive effects of exercise on the general ageing population. This has been missing from the literature as previous studies have mostly focused on master athletes, which is a minority group. Our study is more representative of the general population aged 60 and above, as the average person is more likely to take part in a mixture of activities at a moderate level. That's why we wanted to explore the relation between satellite cell content and muscle health in recreationally active individuals. We can now use this as a biomarker to further investigate the link between exercise, ageing and muscle health."

"The single most important message from this study, is that even a little exercise seems to go a long way, when it comes to protecting against the age-related decline in muscle function. This is an encouraging finding which can hopefully spur more people to engage in an activity that they enjoy. We still have much to learn about the mechanisms and interactions between nerves and muscles and how these change as we age. Our research takes us one step closer."

Read more at Science Daily

Taste, temperature and pain sensations are neurologically linked

If you have eaten a chili pepper, you have likely felt how your body reacts to the spicy hot sensation. New research published by biologists at the University of Oklahoma shows that the brain categorizes taste, temperature and pain-related sensations in a common region of the brain. The researchers suggest the brain also groups these sensations together as either pleasant or aversive, potentially offering new insights into how scientists might better understand the body's response to and treatment of pain.

"The spicy hot sensation you get from a chili pepper is actually a pain sensation…this follows activation of pain-related fibers that innervate the tongue and are heat sensitive," said Christian H. Lemon, Ph.D., an associate professor in the Department of Biology in the Dodge Family College of Arts and Sciences at OU. "What happens is a chemical in chili peppers, called capsaicin, causes activation of pain fibers and 'tricks' the neurons to react like there is a heat stimulus in your mouth, so you'll notice when you eat spicy foods, your body will react to try to remove the heat - your blood vessels can dilate and you can start to sweat because your body 'thinks' it's overheating."

Lemon, who is also a member of the OU Institute for Biomedical Engineering, Science and Technology, and researchers in his lab, Jinrong Li, Ph.D., and Md Sams Sazzad Ali, Ph.D., published an article in The Journal of Neuroscience that examines how taste, temperature and pain-related sensations interact in the brain. Their article was also selected for the journal's Featured Research section.

"Neural messages associated with pain are partly carried by neural circuits involved with sensing temperature," Lemon said. "This would explain, for example, why when you touch a hot stove, it's a burning pain. There are intimate ties between temperature and pain, and there are intimate ties between temperature and taste…just about everything we eat is either warmed or cooled, and that's known to have a fairly robust effect on the way we perceive certain tastes."

The research team wanted to better understand how temperature and pain intersect with taste neurologically. Building on their previous research showing that temperature and taste signals come together in a particular section of the midbrain, Lemon's group used anesthetized mouse models to artificially stimulate temperature and pain-related fibers, combined with a physiological method for monitoring activity in the brain, to determine the connection between these senses.

"It's been known that temperature and taste can activate some of the same cells in the brain, but this was rarely systematically studied," he said. "We wanted to know if the temperature responses that we were seeing in this part of the brain were actually attributable to activation of thermal and pain-related fibers that innervate the head, face and mouth. To do this we used a modern genetic technology where we could insert a protein into these 'temperature/pain' cells that allowed us to control these cells with blue light -- we could turn the cells on with a light, like a light switch."

"What we found is that these neurons that scientists have studied for a long time as taste neurons actually respond to artificial stimulation of these temperature/pain cells," he added. "This is significant because most scientists that have looked at taste, they're usually only studying neural circuits from the perspective of taste. Pain scientists are usually only looking at pain-related responses, but they actually come together in this part of the midbrain, and not only do they come together, they do so in a very systematic way where preferred tastes and preferred temperatures are separated from adverse taste and temperatures in terms of the way that the responses are happening in this part of the brain."

The researchers categorize preferred or pleasurable tastes as something sweet, like sugar, whereas adverse tastes are bitter -- which can signify that something may be toxic or harmful. Similarly, people, and mice, have preferred temperatures, like a comfortably warmed or cooled environment as compared to an extreme cold or extreme heat stimulus.

Through this artificial stimulation of temperature/pain cells and the corresponding taste neurons, they discovered the brain segregated preferable tastes and temperatures from adverse tastes and temperatures. This finding offers new insights into how these senses interact, which could have implications for how scientists understand the brain's responses to stimuli that cause pain.

Read more at Science Daily

Scientists determine structure of a DNA damage 'first responder'

DNA is often likened to a blueprint. The particular sequence of As, Cs, Gs, and Ts in DNA provides information for building an organism.

What's not captured by this analogy is the fact that our DNA requires constant upkeep to maintain its integrity. Were it not for dedicated DNA repair machinery that routinely fixes mistakes, the information within DNA would be rapidly degraded.

This repair happens at cell cycle checkpoints that are activated in response to DNA damage. Like a quality assurance agent on an assembly line, proteins that participate in the DNA damage checkpoint assess the cell's DNA for mistakes and, if necessary, pause cell division and make repairs. When this checkpoint breaks down -- which can happen as a result of genetic mutations -- DNA damage builds up, and the result is often cancer.

Though scientists have learned much about DNA damage and repair over the past 50 years, important questions remain outstanding. One particularly bedeviling puzzle is how a repair protein called the 9-1-1 clamp -- a DNA damage "first responder" -- attaches itself to the site of a broken DNA strand to activate the DNA damage checkpoint.

"We know that this attachment is a pivotal step necessary for initiating an effective repair program," says Dirk Remus, a molecular biologist at the Sloan Kettering Institute (SKI) who studies the fundamentals of DNA replication and repair. "But the mechanisms involved are completely obscure."

Now, thanks to a collaboration between Dr. Remus' lab and that of SKI structural biologist Richard Hite, a clear picture of how the 9-1-1 clamp is recruited to sites of DNA damage has emerged. The results, which challenge conventional wisdom in the field, were published March 21, 2022, in the journal Nature Structural and Molecular Biology.

Complementary Expertise Yields Surprising Results

The startling discoveries grew out of a collaboration between two labs with complementary expertise. Dr. Remus' lab uses biochemical methods to study the process of DNA replication and repair. A primary goal of his research over the past several years has been to reconstitute the entire DNA replication-and-repair process in a test tube, apart from a surrounding cell.

As a result of this effort, his lab has purified several components of the repair machinery, including 9-1-1 proteins and proteins that facilitate the binding of 9-1-1 to DNA.

Dr. Remus realized that if these complexes could be viewed at atomic resolution, they would provide a set of freeze-frame images of the individual steps in the repair process. That's when he turned to Dr. Hite's lab for help.

"I said, 'We have this complex; can you help us determine its molecular structure to figure out how it works?' And that's what he did."

Dr. Hite is a structural biologist with expertise in using a technique called cryo-electron microscopy (cryo-EM), which enables the study of proteins and protein assemblies by visualizing their fine-grain movements at resolutions that can reveal the positions of individual amino acids within the proteins. Much like the gears and levers of a machine, it's these movements of amino acids that allow proteins to serve as the workhorses of the cell, including those that repair DNA.

"When Dirk came to us, we realized that many of the tools that our lab has developed over the past few years were perfectly suited to answering this question," Dr. Hite says. "Using cryo-EM, we're able to not only determine one structure but an ensemble of structures. By putting these structures together in a logical pattern, based on the new data and previous biochemical data, we can come up with a proposal for how this clamp works."

They did, and the results were surprising.

"The model we developed had interesting features that contradicted what had been previously thought to be the way these types of clamps are being loaded onto DNA," Dr. Hite says.

"When Rich first produced the structure, I thought he got it wrong because it was against all the expectations," Dr. Remus adds. "Now, in hindsight, it all makes perfect sense."

A New Model for Opening and Closing a DNA Clamp Around DNA

The 9-1-1 clamp is shaped like a ring. To carry out its function, it needs to surround the broken DNA at the junction where a double-stranded stretch of DNA abuts a single-stranded one. Consequently, the ring structure of the 9-1-1 clamp must open to allow the single-stranded DNA to swing into the center of the clamp, then reclose around it. This does not occur spontaneously but is facilitated by another protein complex, called the clamp loader complex.

"It had been thought from all studies prior to this that clamps would open in the manner of lock washer, where basically the two open ends of the clamp would rotate out of plane to create a narrow gap," Dr. Remus says. "But what Rich observed is that the 9-1-1 clamp opens much more widely than anticipated, and it opens completely in plane -- there's no twisting like in the lock-washer scenario."

The scientists point out that the lock-washer model has been around for two decades and has been the guiding paradigm in the field for how a clamp gets loaded around DNA. But in this case, it's wrong.

Another surprise was that the 9-1-1 clamp loader complex was observed to bind DNA in the opposite orientation from other clamp loader complexes that act on undamaged DNA during normal DNA replication. This observation explained how 9-1-1 is specifically recruited to sites of DNA damage.

From Basic to Translational Research

Aside from providing a satisfying answer to a fundamental biological puzzle, Dr. Remus thinks the research may eventually lead to better cancer drugs.

Many existing chemotherapy drugs work by interfering with DNA replication of cancer cells and generating the type of DNA damage that is normally fixed by repair processes elicited by the 9-1-1 clamp. Because cancer cells already have a reduced ability to repair DNA damage, the addition of DNA-damaging chemotherapy drugs can overwhelm the cells' ability to fix their DNA, and so they die. (This is how drugs called PARP inhibitors work, for example.)

With this new knowledge about how 9-1-1 interacts with other repair proteins and with DNA, scientists could potentially design drugs that interfere specifically with this step of the repair process, making chemotherapy drugs even more effective.

"One of the great things about working here at SKI is that a basic scientist's research can be the starting point for translational studies that ultimately lead to better treatments," Dr. Hite says.

Read more at Science Daily

Mar 20, 2022

Comet 67P's abundant oxygen more of an illusion, new study suggests

When the European Space Agency's Rosetta spacecraft discovered abundant molecular oxygen bursting from comet 67P/Churyumov-Gerasimenko (67P) in 2015, it puzzled scientists. They had never seen a comet emit oxygen, let alone in such abundance. Most puzzling of all were the deeper implications: researchers now had to account for so much oxygen, which meant reconsidering everything they thought they already knew about the chemistry of the early solar system and how it formed.

A new analysis, however, led by planetary scientist Adrienn Luspay-Kuti at the Johns Hopkins Applied Physics Laboratory (APL) in Laurel, Maryland, shows Rosetta's discovery may not be as strange as scientists first imagined. Instead, it suggests the comet has two internal reservoirs that make it seem like there's more oxygen than is actually there.

"It's kind of an illusion," Luspay-Kuti said. "In reality, the comet doesn't have this high oxygen abundance, at least not as far as its formation goes, but it has accumulated oxygen that gets trapped in the upper layers of the comet, which then gets released all at once."

While common on Earth, molecular oxygen (two oxygen atoms doubly linked to each other) is markedly uncommon throughout the universe. It quickly binds to other atoms and molecules, especially the universally abundant atoms hydrogen and carbon, so oxygen appears only in small amounts in just a few molecular clouds. That fact led many researchers to conclude any oxygen in the protosolar nebula that formed our solar system likely had been similarly scooped up.

When Rosetta found oxygen pouring out of comet 67P, however, everything turned on its head. Nobody had seen oxygen in a comet before, and as the fourth most abundant molecule in the comet's bright coma (after water, carbon dioxide and carbon monoxide), it needed some explanation. The oxygen seemed to come off the comet with water, causing many researchers to suspect the oxygen was either primordial -- meaning it got tied up with water at the birth of the solar system and amassed in the comet when it later formed -- or formed from water after the comet had formed.

But Luspay-Kuti and her team were skeptical. As the comet's dumbbell-shaped nucleus gradually rotates, each "bell" (or hemisphere) faces the Sun at various points, meaning the comet has seasons -- so the oxygen-water connection might not be present all the time. On short time frames, volatiles could potentially turn on and off as they thaw and refreeze with the seasons.

Now You See It, Now You Don't

Taking advantage of these seasons, the team examined the molecular data over short and long time periods, just before the comet's southern hemisphere entered summer and again just as its summer ended. As reported in their study, published March 10 in Nature Astronomy, the team found that when the southern hemisphere turned away and was sufficiently far from the Sun, the link between oxygen and water disappeared. The amount of water coming off the comet dropped precipitously, and the oxygen instead seemed strongly linked to carbon dioxide and carbon monoxide, which the comet was still emitting.

"There's no way that should be possible under the previous explanations suggested," Luspay-Kuti said. "If oxygen were primordial and tied to water in its formation, there shouldn't be any time that oxygen strongly correlates with carbon monoxide and carbon dioxide but not water."

The team instead proposed the comet's oxygen doesn't come from water but from two reservoirs: one made of oxygen, carbon monoxide and carbon dioxide deep inside the comet's rocky nucleus, and a shallower pocket closer to the surface where oxygen chemically combines with water ice molecules.

The idea goes like this: A deep reservoir of oxygen, carbon monoxide and carbon dioxide ice constantly emits gas because all three vaporize at very low temperatures. As oxygen travels from the comet's interior toward the surface, however, some of it chemically inserts into water ice (a major constituent of the comet's nucleus) to form a second, shallower oxygen reservoir. But water ice vaporizes at a much higher temperature than oxygen, so until the Sun sufficiently heats the surface and vaporizes the water ice, the oxygen is stuck.
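
The trap works because of the ordering of sublimation temperatures. Very roughly, for ices under comet-like vacuum conditions (approximate values from the cometary literature, not from this study):

$$ T_{\mathrm{sub}}(\mathrm{CO}) \approx 25\,\mathrm{K} \;<\; T_{\mathrm{sub}}(\mathrm{O_2}) \approx 30\,\mathrm{K} \;<\; T_{\mathrm{sub}}(\mathrm{CO_2}) \approx 80\,\mathrm{K} \;\ll\; T_{\mathrm{sub}}(\mathrm{H_2O}) \approx 150\,\mathrm{K}, $$

so oxygen freed at depth stays mobile at temperatures where water ice remains solid, and any oxygen bound into that ice is stuck until the surface warms past water's sublimation point.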

The consequence is that oxygen can accumulate in this shallow reservoir for long periods until the comet surface is finally warmed enough for water ice to vaporize, releasing a plume far richer in oxygen than was actually present in the comet.

"Put another way, the oxygen abundances measured in the comet's coma aren't necessarily reflecting its abundances in the comet's nucleus," Luspay-Kuti explained.

The comet would consequently also vacillate with the seasons between oxygen strongly associating with water (when the Sun heats the surface) and strongly associating with carbon dioxide and carbon monoxide (when that surface faces away and the comet is sufficiently far from the Sun) -- exactly what Rosetta observed.

"This isn't just one explanation: It's the explanation because there is no other possibility," said Olivier Mousis, a planetary scientist from France's Aix-Marseille Université and a study co-author. "If oxygen were just coming from the surface, you wouldn't see these trends observed by Rosetta."

The major implication, he said, is that it means comet 67P's oxygen is, in fact, oxygen that accreted at the beginning of the solar system. It's just that it's only a fraction of what people had thought.

Luspay-Kuti said she wants to probe the topic more deeply by examining the comet's minor molecular species, such as methane and ethane, and their correlation with molecular oxygen and other major species. She suspects this will help researchers get a better idea of the type of ice that the oxygen was incorporated into.

Read more at Science Daily