Jun 30, 2022

Asteroids: Researchers simulate defense of Earth

NASA's Double Asteroid Redirection Test (DART) mission is the world's first full-scale planetary defense test against potential asteroid impacts on Earth. Researchers at the University of Bern and the National Centre of Competence in Research (NCCR) PlanetS now show that instead of leaving behind a relatively small crater, the impact of the DART spacecraft on its target could leave the asteroid nearly unrecognizable.

66 million years ago, a giant asteroid impact on the Earth likely caused the extinction of the dinosaurs. Currently no known asteroid poses an immediate threat. But if one day a large asteroid were to be discovered on a collision course with Earth, it might have to be deflected from its trajectory to prevent catastrophic consequences.

Last November, the DART space probe of the US space agency NASA was launched as a first full-scale experiment of such a manoeuvre: Its mission is to collide with an asteroid and to deflect it from its orbit, in order to provide valuable information for the development of such a planetary defense system.

In a new study published in The Planetary Science Journal, researchers at the University of Bern and the National Centre of Competence in Research (NCCR) PlanetS have simulated this impact with a new method. Their results indicate that the impact may deform its target far more severely than previously thought.

Rubble instead of solid rock

"Contrary to what one might imagine when picturing an asteroid, direct evidence from space missions like the Japanese space agency's (JAXA) Hayabusa2 probe demonstrates that asteroids can have a very loose internal structure -- similar to a pile of rubble -- that is held together by gravitational interactions and small cohesive forces," says study lead author Sabina Raducan from the Institute of Physics and the National Centre of Competence in Research PlanetS at the University of Bern.

Yet previous simulations of the DART mission impact mostly assumed a much more solid interior for its asteroid target Dimorphos. "This could drastically change the outcome of the collision between DART and Dimorphos, which is scheduled to take place this coming September," Raducan points out. Instead of leaving a relatively small crater on the 160-meter-wide asteroid, DART's impact at a speed of around 24,000 km/h could completely deform Dimorphos. The asteroid could also be deflected much more strongly, and larger amounts of material could be ejected from the impact, than previous estimates predicted.

A prize-winning new approach


"One of the reasons that this scenario of a loose internal structure has so far not been thoroughly studied is that the necessary methods were not available," Raducan says.

"Such impact conditions cannot be recreated in laboratory experiments and the relatively long and complex process of crater formation following such an impact -- a matter of hours in the case of DART -- made it impossible to realistically simulate these impact processes up to now," according to the researcher.

"With our novel modelling approach, which takes into account the propagation of the shock waves, the compaction and the subsequent flow of material, we were for the first time able to model the entire cratering process resulting from impacts on small asteroids like Dimorphos," Raducan reports. For this achievement, she received awards from ESA and from the mayor of Nice at a workshop on the DART follow-up mission HERA.

Read more at Science Daily

Hidden in caves: Mineral overgrowths reveal 'unprecedented' sea level rise

The early 1900s were an exciting time across the world, with rapid advances in the steel, electric and automobile industries. The industrial changes also mark an inflection point in our climate. According to an international team of researchers led by the University of South Florida (USF), the sea level has risen 18 centimeters since the start of the 20th century.

The study, featured on the cover of the July 1 issue of Science Advances, works to identify preindustrial sea levels and examines the impact of modern greenhouse warming on sea-level rise.

The team, which includes USF graduate students, traveled to Mallorca, Spain -- home to more than 1,000 cave systems, some of which have deposits that formed millions of years ago. For this study, they focused on analyzing deposits from 4,000 years ago to present day.

The team found evidence of a previously unknown 20 centimeter sea-level rise that occurred nearly 3,200 years ago, when ice caps melted naturally over the course of 400 years at a rate of 0.5 millimeters per year. Otherwise, despite major climatic events like the Medieval Warm Period and the Little Ice Age, the sea level remained exceptionally stable until 1900.
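
As a quick consistency check on those figures (a back-of-the-envelope sketch, not part of the study's methodology), the reported rate and duration do reproduce the roughly 20-centimeter rise:

```python
# Natural melt episode reported in the study: 0.5 mm/yr sustained for 400 years.
rate_mm_per_year = 0.5
duration_years = 400

# Accumulated rise, converted from millimeters to centimeters.
total_rise_cm = rate_mm_per_year * duration_years / 10
print(total_rise_cm)  # 20.0 -- matches the ~20 cm rise described above
```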

"The results reported in our study are alarming," said lead author Bogdan P. Onac, geology professor at USF. "The sea-level rise since the 1900s is unprecedented when compared to the natural change in ice volumes over the last 4,000 years. This implies that if global temperatures continue to rise, sea levels could eventually reach higher levels than scientists previously estimated."

To create the timeline, the team gathered 13 samples from eight caves along the coastline of the Mediterranean Sea. The deposits are rare -- forming only near the coastline in cave passages that were repeatedly flooded by sea water, making them accurate markers of sea-level changes over time. Each deposit holds valuable insight into both the past and future, helping researchers determine how quickly the sea level will rise in the coming decades and centuries.

The samples were taken to the University of New Mexico and the University of Bern in Switzerland, where special instruments were used to determine their age by the uranium-series method. Over time, uranium decays into other elements such as thorium and lead, allowing researchers to create a timeline of the sea level documented in each deposit.
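
The dating principle can be sketched in a few lines. This is only a schematic of first-order radioactive decay; actual uranium-series dating models the ingrowth of thorium-230 from uranium-234, and the isotope fraction used below is an invented illustration, not a value from the study:

```python
import math

def age_from_fraction(remaining_fraction, half_life_years):
    """Age implied by the fraction of a parent isotope still present,
    assuming simple first-order decay: N(t) = N0 * exp(-lambda * t)."""
    decay_constant = math.log(2) / half_life_years
    return -math.log(remaining_fraction) / decay_constant

# With thorium-230's ~75,584-year half-life, a sample retaining ~96.4% of
# the parent isotope comes out at roughly 4,000 years -- the age of the
# oldest deposits analysed in the study.
print(round(age_from_fraction(0.964, 75_584)))
```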

Sophisticated modeling software at Harvard University helped generate predictions using various ice models and Earth parameters to reconstruct an accurate history of the sea level. These predictions are essential because they allow researchers to estimate past global mean sea level, which is key in addressing future sea-level rise.

"If humans continue to be the main driver and the temperature increases 1.5 degrees in the near future, there will be irreversible damage," Onac said. "There will be no turning back from that point on."

Based on ice mass loss from the Antarctic and Greenland, the average sea-level rise since 2008 is 1.43 millimeters per year.

Permanent flooding from the rising sea level won't happen overnight, but Onac says it will be seen more and more during storm surges and hurricanes. With nearly 40 percent of the world's population living within 62 miles of a coast, the rising sea level could be catastrophic with substantial societal and economic impacts.

"Even if we stop right now, sea level will continue to rise for at least a couple of decades, if not centuries, simply because the system is warmed up."

In June, Onac received a new research grant from the National Science Foundation to continue his research to predict future sea-level rise due to global warming. The grant will allow Onac to expand the research further into history by 130,000 years and create a better understanding of sea level globally. Starting in September, Onac and his team will begin analyzing cave deposits from around the globe, including Italy, Greece, Mexico and Cuba.

Read more at Science Daily

How pandas survive solely on bamboo: Evolutionary history

When is a thumb not a thumb? When it's an elongated wrist bone of the giant panda used to grasp bamboo. Through its long evolutionary history, the panda's hand has never developed a truly opposable thumb and instead evolved a thumb-like digit from a wrist bone, the radial sesamoid. This unique adaptation helps these animals subsist entirely on bamboo despite being bears (members of the order Carnivora, or meat-eaters). In a new paper published in Scientific Reports, the Natural History Museum of Los Angeles County's Curator of Vertebrate Paleontology Xiaoming Wang and colleagues report on the discovery of the earliest bamboo-eating ancestral panda to have this "thumb." Surprisingly, it's longer than its modern descendants'.

While the celebrated false thumb in living giant pandas (Ailuropoda melanoleuca) has been known for more than 100 years, how this wrist bone evolved was not understood due to a near-total absence of fossil records. Uncovered at the Shuitangba site in the city of Zhaotong, Yunnan Province, in south China, and dating back 6-7 million years, a fossil false thumb from an ancestral giant panda, Ailurarctos, gives scientists a first look at the early use of this extra (sixth) digit -- and the earliest evidence of a bamboo diet in ancestral pandas -- helping us better understand the evolution of this unique structure.

"Deep in the bamboo forest, giant pandas traded an omnivorous diet of meat and berries for quietly consuming bamboo, a plant plentiful in the subtropical forest but of low nutrient value," says NHM Vertebrate Paleontology Curator Dr. Xiaoming Wang. "Tightly holding bamboo stems in order to crush them into bite sizes is perhaps the most crucial adaptation to consuming a prodigious quantity of bamboo."

How to Walk and Chew Bamboo at the Same Time

This discovery could also help solve an enduring panda mystery: why are their false thumbs so seemingly underdeveloped? As an ancestor of modern pandas, Ailurarctos might be expected to have even less well-developed false "thumbs," but the fossil Wang and his colleagues discovered revealed a longer false thumb with a straighter end than its modern descendants' shorter, hooked digit. So why did pandas' false thumbs stop growing to achieve a longer digit?

"The panda's false thumb must walk and 'chew'," says Wang. "Such a dual function serves as the limit on how big this 'thumb' can become."

Wang and his colleagues think that modern pandas' shorter false thumbs are an evolutionary compromise between the need to manipulate bamboo and the need to walk. The hooked tip of a modern panda's false thumb lets it manipulate bamboo while still carrying the bear's impressive weight to the next bamboo meal. After all, the "thumb" is doing double duty as the radial sesamoid -- a bone in the animal's wrist.

"Five to six million years should be enough time for the panda to develop longer false thumbs, but it seems that the evolutionary pressure of needing to travel and bear its weight kept the 'thumb' short -- strong enough to be useful without being big enough to get in the way," says Denise Su, associate professor at the School of Human Evolution and Social Change and research scientist at the Institute of Human Origins at Arizona State University, and co-leader of the project that recovered the panda specimens.

"Evolving from a carnivorous ancestor and becoming a pure bamboo-feeder, pandas must overcome many obstacles," Wang says. "An opposable 'thumb' from a wrist bone may be the most amazing development against these hurdles."

Read more at Science Daily

Study finds women have more brain changes after menopause

Women who have gone through menopause may have more of a brain biomarker called white matter hyperintensities than premenopausal women or men of the same age, according to a new study published in the June 29, 2022, online issue of Neurology®, the medical journal of the American Academy of Neurology.

White matter hyperintensities are tiny lesions visible on brain scans that become more common with age or with uncontrolled high blood pressure. These brain biomarkers have been linked in some studies to an increased risk of stroke, Alzheimer's disease and cognitive decline.

"White matter hyperintensities increase as the brain ages, and while having them does not mean that a person will develop dementia or have a stroke, larger amounts may increase a person's risk," said study author Monique M. B. Breteler, MD, PhD, of the German Center for Neurodegenerative Diseases (DZNE), in Bonn, Germany, and a member of the American Academy of Neurology. "Our study examined what role menopause may have on amounts of these brain biomarkers. Our results imply that white matter hyperintensities evolve differently for men and women, where menopause or factors that determine when menopause starts, such as variations in the aging process, are defining factors."

The study involved 3,410 people with an average age of 54. Of those, 58% were women, and of the women, 59% were postmenopausal. Also, 35% of all participants had high blood pressure and of those, half had uncontrolled high blood pressure.

All participants had MRI brain scans. Researchers looked at the scans and calculated the amount of white matter hyperintensities for each participant. Average total volume for these brain biomarkers was 0.5 milliliters (ml). Average total brain volume was 1,180 ml for men and 1,053 ml for women. Average total white matter volume, the area of the brain where white matter hyperintensities can be found, was 490 ml for men and 430 ml for women.

After adjusting for age and vascular risk factors such as high blood pressure and diabetes, researchers found that postmenopausal women had more of these brain biomarkers when compared to men of similar age. In people 45 and older, postmenopausal women had an average total white matter hyperintensities volume of 0.94 ml compared to 0.72 ml for men. Researchers also found that the increase in brain biomarkers accelerated with age and at a faster rate in women than in men.

Premenopausal women and men of a similar age did not have a difference in the average amount of white matter hyperintensities.

Researchers also found that postmenopausal women had more white matter hyperintensities than premenopausal women of similar age. In a group of participants ages 45 to 59, postmenopausal women had an average total volume of white matter hyperintensities of 0.51 ml compared to 0.33 ml for premenopausal women.

Among women using hormone therapy, there was no difference in white matter hyperintensities between postmenopausal and premenopausal women. Breteler said this finding suggests that hormone therapy after menopause may not have a protective effect on the brain.

Unrelated to menopausal status, women with uncontrolled high blood pressure had higher amounts of this brain biomarker compared to men.

"It has been known that high blood pressure, which affects the small blood vessels in the brain, can lead to an increase in white matter hyperintensities," said Breteler. "The results of our study not only show more research is needed to investigate how menopause may be related to the vascular health of the brain. They also demonstrate the necessity to account for different health trajectories for men and women, and menopausal status. Our research underscores the importance of sex-specific medicine and more attentive therapy for older women, especially those with vascular risk factors."

Read more at Science Daily

Jun 29, 2022

Falling stardust, wobbly jets explain blinking gamma ray bursts

A Northwestern University-led team of astrophysicists has developed the first-ever full 3D simulation of the entire evolution of a jet formed by a collapsing star, or a "collapsar."

Because these jets generate gamma ray bursts (GRBs) -- the most energetic and luminous events in the universe since the Big Bang -- the simulations have shed light on these peculiar, intense bursts of light. Their new findings include an explanation for the longstanding question of why GRBs are mysteriously punctuated by quiet moments -- blinking between powerful emissions and an eerily quiet stillness. The new simulation also shows that GRBs are even rarer than previously thought.

The new study will be published on June 29 in Astrophysical Journal Letters. It marks the first full 3D simulation of the entire evolution of a jet -- from its birth near the black hole to its emission after escaping from the collapsing star. The new model also is the highest-ever resolution simulation of a large-scale jet.

"These jets are the most powerful events in the universe," said Northwestern's Ore Gottlieb, who led the study. "Previous studies have tried to understand how they work, but those studies were limited by computational power and had to include many assumptions. We were able to model the entire evolution of the jet from the very beginning -- from its birth by a black hole -- without assuming anything about the jet's structure. We followed the jet from the black hole all the way to the emission site and found processes that have been overlooked in previous studies."

Gottlieb is a Rothschild Fellow in Northwestern's Center for Interdisciplinary Exploration and Research in Astrophysics (CIERA). He coauthored the paper with CIERA member Sasha Tchekhovskoy, an assistant professor of physics and astronomy at Northwestern's Weinberg College of Arts and Sciences.

Weird wobbling


The most luminous phenomena in the universe, GRBs emerge when the core of a massive star collapses under its own gravity to form a black hole. As gas falls into the rotating black hole, it energizes -- launching a jet into the collapsing star. The jet punches through the star until finally escaping from it, accelerating to speeds close to the speed of light. After breaking free from the star, the jet generates a bright GRB.

"The jet generates a GRB when it reaches about 30 times the size of the star -- or a million times the size of the black hole," Gottlieb said. "In other words, if the black hole is the size of a beach ball, the jet needs to expand over the entire size of France before it can produce a GRB."
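
Gottlieb's analogy checks out on a rough, order-of-magnitude basis (the beach-ball diameter and the extent of France below are assumed values for illustration, not figures from the paper):

```python
# Scale a beach ball up by the factor quoted above (~1e6, the jet's size
# relative to the black hole when the GRB is produced).
beach_ball_m = 0.5            # assumed beach-ball diameter in meters
scale_factor = 1_000_000      # jet reaches ~1e6 times the black hole's size
jet_extent_km = beach_ball_m * scale_factor / 1000
print(jet_extent_km)  # 500.0 -- hundreds of km, comparable to France's extent
```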

Due to the enormity of this scale, previous simulations have been unable to model the full evolution of the jet's birth and subsequent journey. Relying on simplifying assumptions, all previous studies concluded that the jet propagates along one axis and never deviates from it.

But Gottlieb's simulation showed something very different. As the star collapses into a black hole, material from that star falls onto the disk of magnetized gas that swirls around the black hole. The falling material causes the disk to tilt, which, in turn, tilts the jet. As the jet struggles to realign with its original trajectory, it wobbles inside the collapsar.

This wobbling provides a new explanation for why GRBs blink. During the quiet moments, the jet doesn't stop -- its emission beams away from Earth, so telescopes simply cannot observe it.

"Emission from GRBs is always irregular," Gottlieb said. "We see spikes in emission and then a quiescent time that lasts for a few seconds or more. The entire duration of a GRB is about one minute, so these quiescent times are a non-negligible fraction of the total duration. Previous models were not able to explain where these quiescent times were coming from. This wobbling naturally gives an explanation to that phenomenon. We observe the jet when it's pointing at us. But when the jet wobbles to point away from us, we cannot see its emission. This is part of Einstein's theory of relativity."

Rare becomes rarer

These wobbly jets also provide new insights into the rate and nature of GRBs. Although previous studies estimated that about 1% of collapsars produce GRBs, Gottlieb believes that GRBs are actually much rarer.

If the jet were constrained to moving along one axis, then it would only cover a thin slice of the sky -- limiting the likelihood of observing it. But the wobbly nature of the jet means that astrophysicists can observe GRBs at different orientations, increasing the likelihood of spotting them. According to Gottlieb's calculations, GRBs are 10 times more observable than previously thought, which means that astrophysicists are missing 10 times fewer GRBs than previously thought.

"The idea is that we observe GRBs on the sky in a certain rate, and we want to learn about the true rate of GRBs in the universe," Gottlieb explained. "The observed and true rates are different because we can only see the GRBs that are pointing at us. That means we need to assume something about the angle that these jets cover on the sky, in order to infer the true rate of GRBs. That is, what fraction of GRBs we are missing. Wobbling increases the number of detectable GRBs, so the correction from the observed to true rate is smaller. If we miss fewer GRBs, then there are fewer GRBs overall in the sky."

If this is true, Gottlieb posits, then most of the jets either fail to be launched at all or never succeed in escaping from the collapsar to produce a GRB. Instead, they remain buried inside.
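
The rate correction Gottlieb describes can be sketched as a simple beaming calculation (the sky-coverage fractions below are illustrative assumptions, not values from the paper):

```python
def true_rate(observed_rate, sky_fraction):
    """True event rate implied by an observed rate, given the fraction of
    the sky the jets cover (we only see the GRBs pointing at us)."""
    return observed_rate / sky_fraction

observed = 1.0                      # observed GRB rate, arbitrary units
rigid = true_rate(observed, 0.01)   # rigid jet covering 1% of the sky
wobbly = true_rate(observed, 0.10)  # wobbling widens coverage ~10x
print(rigid, wobbly)  # 100.0 10.0 -- wobbling implies 10x fewer GRBs overall
```

A wider beam means a smaller correction from observed to true rate, which is exactly why wobbling implies GRBs are rarer overall.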

Mixed energy


The new simulations also revealed that some of the magnetic energy in the jets converts to thermal energy. This suggests that the jet has a hybrid composition of magnetic and thermal energies, which together produce the GRB. In a major step forward in understanding the mechanisms that power GRBs, this is the first time researchers have inferred the jet composition of GRBs at the time of emission.

"Studying jets enables us to 'see' what happens deep inside the star as it collapses," Gottlieb said. "Otherwise, it's difficult to learn what happens in a collapsed star because light cannot escape from the stellar interior. But we can learn from the jet emission -- the history of the jet and the information that it carries from the systems that launch them."

Read more at Science Daily

Limiting global warming to 1.5°C would reduce risks to humans by up to 85%

New research led by the University of East Anglia (UEA) quantifies the benefits of limiting global warming to 1.5°C and identifies the hotspot regions for climate change risk in the future.

The study calculates reductions in human exposure to a series of risks -- water scarcity and heat stress, vector-borne diseases, coastal and river flooding -- that would result from limiting global warming to 1.5°C rather than 2°C or 3.66°C. Effects on agricultural yields and the economy are also included.

Researchers from the UK, including scientists from UEA and the University of Bristol, and from PBL Netherlands Environmental Assessment Agency, find that the risks are reduced by 10-44% globally if warming is reduced to 1.5°C rather than 2°C.

Currently, insufficient climate policy has been implemented globally to limit warming to 2°C, so the team also made a comparison with risks that would occur with higher levels of global warming.

Risks will be greater if global warming is greater. The risks at 3.66°C warming are reduced by 26-74% if instead warming is kept to only 2°C. They are reduced even further, by 32-85%, if warming can be limited to just 1.5°C. The ranges are wide because the percentage depends on which of the indicators, for example human exposure to drought or flooding, are being considered.

The findings, published today in the journal Climatic Change, suggest that in percentage terms, the avoided risk is highest for river flooding, drought, and heat stress, but in absolute terms the risk reduction is greatest for drought.

The authors also identify West Africa, India and North America as regions where the risks caused by climate change are projected to increase the most with 1.5°C or 2°C of average global warming by 2100.

The study follows the Intergovernmental Panel on Climate Change (IPCC) Sixth Assessment Report, which finds that global net zero CO2 emissions must be reached in the early 2050s to limit warming to 1.5°C with no or limited overshoot, and around the early 2070s to limit warming to 2°C.

Lead author Prof Rachel Warren, of the Tyndall Centre for Climate Change Research at UEA, said: "Our findings are important because the Paris Agreement target is to limit global warming to 'well below' 2°C and to 'pursue efforts' to limit it to 1.5°C. This means that decision makers need to understand the benefits of aiming for the lower figure.

"In addition, at COP26 last year, the commitments made by countries in terms of greenhouse gas emission reductions are not sufficient to achieve the Paris goals. At present, current policies would result in average warming of 2.7°C, while the Nationally Determined Contributions for 2030 would limit warming to 2.1°C.

"While there are a number of planned additional actions to reduce emissions further, potentially limiting warming to 1.8°C in the most optimistic case, these still need to be delivered and further additional action is needed to limit warming to 1.5°C."

For this study the researchers ran sophisticated computer simulations of climate change risk, using a common set of climate change scenarios in which global temperatures rise by 2°C and separately by 1.5°C and 3.66°C. They then compared the results.

The findings include:
 

  • Overall, global population exposure to malaria and dengue fever is 10% lower if warming is constrained to 1.5°C rather than 2°C.
  • Population exposure to water scarcity is most evident in western India and the northern region of West Africa.
  • A continuous increase in global drought risk with global warming is estimated, with hundreds of millions of people additionally affected by drought at each successively higher warming level.
  • By 2100 if we do not adapt, global warming of 1.5°C would place an additional 41-88 million people a year at risk from coastal flooding globally (associated with 0.24-0.56 m of sea-level rise), whereas an additional 45-95 million people a year would be at risk under global warming of 2°C (corresponding to 0.27-0.64 m of sea-level rise) in 2100.
  • Global economic impacts of climate change are 20% lower when warming is limited to 1.5°C rather than 2°C. The net value of damages is correspondingly reduced from 61 trillion US dollars to 39 trillion US dollars.


The study used 21 alternative climate models to simulate the regional patterns of climate change corresponding to 2°C warming and 1.5°C warming respectively. Previous research has used simpler models, a more limited range of climate models, or has covered different risk indicators.

Read more at Science Daily

Ice Age wolf DNA reveals dogs trace ancestry to two separate wolf populations

An international group of geneticists and archaeologists, led by the Francis Crick Institute, have found that the ancestry of dogs can be traced to at least two populations of ancient wolves. The work moves us a step closer to uncovering the mystery of where dogs underwent domestication, one of the biggest unanswered questions about human prehistory.

Dogs are known to have originated from the gray wolf, with this domestication occurring during the Ice Age, at least 15,000 years ago. But where this happened, and if it occurred in one single location or in multiple places, is still unknown.

Previous studies using the archaeological record and comparing the DNA of dogs and modern wolves have not found the answer.

In their study, published in Nature today (29 June), the researchers turned to ancient wolf genomes to further understanding of where the first dogs evolved from wolves. They analysed 72 ancient wolf genomes, spanning the last 100,000 years, from Europe, Siberia and North America.

The remains came from previously excavated ancient wolves, with archaeologists from 38 institutions in 16 different countries contributing to the study. The remains included a full, perfectly preserved head from a Siberian wolf that lived 32,000 years ago. Nine different ancient DNA labs then collaborated on generating DNA sequence data from the wolves.

By analysing the genomes, the researchers found that both early and modern dogs are more genetically similar to ancient wolves in Asia than those in Europe, suggesting a domestication somewhere in the east.

However, they also found evidence that two separate populations of wolves contributed DNA to dogs. Early dogs from north-eastern Europe, Siberia and the Americas appear to have a single, shared origin from the eastern source. But early dogs from the Middle East, Africa and southern Europe appear to have some ancestry from another source related to wolves in the Middle East, in addition to the eastern source.

One possible explanation for this dual ancestry is that wolves underwent domestication more than once, with the different populations then mixing together. Another possibility is that domestication happened only once, and that the dual ancestry is due to these early dogs then mixing with wild wolves. It is not currently possible to determine which of these two scenarios occurred.

Anders Bergström, co-first author and post-doctoral researcher in the Ancient Genomics lab at the Crick, says: "Through this project we have greatly increased the number of sequenced ancient wolf genomes, allowing us to create a detailed picture of wolf ancestry over time, including around the time of dog origins."

"By trying to place the dog piece into this picture, we found that dogs derive ancestry from at least two separate wolf populations -- an eastern source that contributed to all dogs and a separate, more westerly source that contributed to some dogs."

The team are continuing the hunt for a close ancient wolf ancestor of dogs, which could reveal more precisely where domestication most likely took place. They are now focusing on genomes from other locations not included in this study, including more southerly regions.

As the 72 ancient wolf genomes spanned around 30,000 generations, it was possible to look back and build a timeline of how wolf DNA has changed, tracing natural selection in action.

For example, they observed that over a period of around 10,000 years, one gene variant went from being very rare to being present in every wolf, and is still present in all wolves and dogs today. The variant affects a gene, IFT88, which is involved in the development of bones in the skull and jaw. It is possible that the spread of this variant could have been driven by a change in the types of prey available during the Ice Age, giving an advantage to wolves with a certain head shape, but the gene could also have other unknown functions in wolves.
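
A minimal sketch of how a beneficial variant can go from rare to universal on that kind of timescale (the selection coefficient and generation time below are assumptions for illustration, not parameters from the study):

```python
def generations_to_near_fixation(p0, s, threshold=0.99):
    """Deterministic haploid selection: each generation the variant's
    odds p/(1-p) are multiplied by (1 + s)."""
    p, generations = p0, 0
    while p < threshold:
        p = p * (1 + s) / (1 + p * s)  # standard selection update
        generations += 1
    return generations

# A variant starting at 0.1% frequency with a 0.5% advantage per
# generation nears fixation in ~2,300 generations -- comfortably within
# the ~3,000 wolf generations that fit into 10,000 years at ~3 years each.
print(generations_to_near_fixation(0.001, 0.005))
```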

Pontus Skoglund, senior author and group leader of the Ancient Genomics lab at the Crick, says: "This is the first time scientists have directly tracked natural selection in a large animal over a time-scale of 100,000 years, seeing evolution play out in real time rather than trying to reconstruct it from DNA today."

"We found several cases where mutations spread to the whole wolf species, which was possible because the species was highly connected over large distances. This connectivity is perhaps a reason why wolves managed to survive the Ice Age while many other large carnivores vanished."

Read more at Science Daily

Is there a right-handed version of our left-handed universe?

To solve a long-standing puzzle about how long a neutron can "live" outside an atomic nucleus, physicists entertained a wild but testable theory positing the existence of a right-handed version of our left-handed universe. They designed a mind-bending experiment at the Department of Energy's Oak Ridge National Laboratory to try to detect a particle that has been theorized but never spotted. If found, the theorized "mirror neutron" -- a dark-matter twin to the neutron -- could explain a discrepancy between answers from two types of neutron lifetime experiments and provide the first observation of dark matter.

"Dark matter remains one of the most important and puzzling questions in science -- clear evidence we don't understand all matter in nature," said ORNL's Leah Broussard, who led the study published in Physical Review Letters.

Neutrons and protons make up an atom's nucleus. However, they also can exist outside nuclei. Last year, using the Los Alamos Neutron Science Center, co-author Frank Gonzalez, now at ORNL, led the most precise measurement ever of how long free neutrons live before they decay, or turn into protons, electrons and anti-neutrinos. The answer -- 877.8 seconds, give or take 0.3 seconds, or a little under 15 minutes -- hinted at a crack in the Standard Model of particle physics. That model describes the behavior of subatomic particles, such as the three quarks that make up a neutron. The flipping of quarks initiates neutron decay into protons.
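
The measured value is a mean (1/e) lifetime, not a half-life: after 877.8 seconds, roughly 37% of free neutrons remain. A small sketch of the implied survival curve (illustrative only, not part of the measurement itself):

```python
import math

TAU = 877.8  # mean free-neutron lifetime in seconds, from the LANSCE result

def surviving_fraction(t_seconds, tau=TAU):
    """Fraction of free neutrons not yet decayed after t seconds,
    assuming exponential decay: exp(-t / tau)."""
    return math.exp(-t_seconds / tau)

print(round(surviving_fraction(TAU), 3))   # 0.368 -- 1/e after one lifetime
print(round(surviving_fraction(60.0), 3))  # after one minute of free flight
```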

"The neutron lifetime is an important parameter in the Standard Model because it is used as an input for calculating the quark mixing matrix, which describes quark decay rates," said Gonzalez, who calculated probabilities of neutrons oscillating for the ORNL study. "If the quarks don't mix as we expect them to, that hints at new physics beyond the Standard Model."

To measure the lifetime of a free neutron, scientists take two approaches that should arrive at the same answer. One traps neutrons in a magnetic bottle and counts their disappearance. The other counts protons appearing in a beam as neutrons decay. It turns out neutrons appear to live nine seconds longer in a beam than in a bottle.
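As a sketch of why a nine-second disagreement matters, free-neutron decay is exponential, so the two measured lifetimes predict slightly different surviving fractions. This is a minimal illustration using the numbers quoted above, not part of the experiments themselves:

```python
import math

# Free-neutron survival follows exponential decay: N(t) = N0 * exp(-t / tau).
tau_bottle = 877.8      # seconds ("bottle" experiments count surviving neutrons)
tau_beam = 877.8 + 9.0  # seconds ("beam" experiments count decay protons)

def surviving_fraction(t, tau):
    """Fraction of free neutrons that have not yet decayed after t seconds."""
    return math.exp(-t / tau)

# After one bottle lifetime (a little under 15 minutes), the two lifetimes
# predict survival fractions differing by roughly 0.4 percentage points.
t = 877.8
f_bottle = surviving_fraction(t, tau_bottle)  # exactly 1/e by construction
f_beam = surviving_fraction(t, tau_beam)
print(f"bottle: {f_bottle:.4f}, beam: {f_beam:.4f}, diff: {f_beam - f_bottle:.4f}")
```

A small difference per lifetime, but precision experiments resolve it easily, which is why the discrepancy is taken seriously.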

Over the years, perplexed physicists have considered many reasons for the discrepancy. One theory is that the neutron transforms from one state to another and back again. "Oscillation is a quantum mechanical phenomenon," Broussard said. "If a neutron can exist as either a regular or a mirror neutron, then you can get this sort of oscillation, a rocking back and forth between the two states, as long as that transition isn't forbidden."
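The rocking back and forth Broussard describes follows the textbook two-state oscillation formula. The sketch below uses purely illustrative parameters, not the experiment's values:

```python
import math

def oscillation_probability(t, mixing_angle, delta_E, hbar=6.582e-16):
    """Textbook two-state oscillation: probability that a particle prepared in
    one state (e.g. a regular neutron) is found in the other (e.g. a mirror
    neutron) after time t. delta_E is the energy splitting between the two
    states in eV, t is in seconds, hbar is in eV*s."""
    return (math.sin(2 * mixing_angle) ** 2) * (math.sin(delta_E * t / (2 * hbar)) ** 2)

# Illustrative numbers only: a tiny mixing angle keeps the transition
# probability small, and the probability rocks back and forth in time.
for t in (1e-9, 2e-9, 3e-9):
    print(t, oscillation_probability(t, mixing_angle=0.01, delta_E=1e-6))
```

The amplitude of the oscillation is capped by the mixing angle, which is why a "forbidden" or vanishing mixing would suppress the effect entirely.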

The ORNL-led team performed the first search for neutrons oscillating into dark-matter mirror neutrons using a novel disappearance and regeneration technique. The neutrons were made at the Spallation Neutron Source, a DOE Office of Science user facility. A beam of neutrons was guided to SNS's magnetism reflectometer. Michael Fitzsimmons, a physicist with a joint appointment at ORNL and the University of Tennessee, Knoxville, used the instrument to apply a strong magnetic field to enhance oscillations between neutron states. Then the beam impinged on a "wall" made of boron carbide, which is a strong neutron absorber.

If the neutron does in fact oscillate between regular and mirror states, then a neutron that hits the wall in its regular state will interact with atomic nuclei and be absorbed. In its theorized mirror state, however, it is dark matter and will not interact.

So only mirror neutrons would make it through the wall to the other side. It would be as if the neutrons had gone through a "portal" to some dark sector -- a figurative concept used in the physics community. Press reports on past related work had fun taking liberties with the concept, comparing the theorized mirror universe Broussard's team is exploring to the "Upside Down" alternate reality in the TV series "Stranger Things." The team's experiments, however, were not exploring a literal portal to a parallel universe.

"The dynamics are the same on the other side of the wall, where we try to induce what are presumably mirror neutrons -- the dark-matter twin state -- to turn back into regular neutrons," said co-author Yuri Kamyshkov, a UT physicist who with colleagues has long pursued the ideas of neutron oscillations and mirror neutrons. "If we see any regenerated neutrons, that could be a signal that we've seen something really exotic. The discovery of the particle nature of dark matter would have tremendous implications."

Matthew Frost of ORNL, who received his doctorate from UT working with Kamyshkov, performed the experiment with Broussard and assisted with data extraction, reduction and analysis. Frost and Broussard performed preliminary tests with help from Lisa DeBeer-Schmitt, a neutron scattering scientist at ORNL.

Lawrence Heilbronn, a nuclear engineer at UT, characterized backgrounds, while Erik Iverson, a physicist at ORNL, characterized neutron signals. Through the DOE Office of Science Scientific Undergraduate Laboratory Internships Program, Michael Kline of The Ohio State University figured out how to calculate oscillations using graphics processing units -- accelerators of specific types of calculations in application codes -- and performed independent analyses of neutron beam intensity and statistics. Taylor Dennis of East Tennessee State University helped set up the experiment and analyzed background data, becoming a finalist in a competition for this work. UT graduate students Josh Barrow, James Ternullo and Shaun Vavra, with undergraduates Adam Johnston, Peter Lewiz and Christopher Matteson, contributed at various stages of experiment preparation and analysis. University of Chicago graduate student Louis Varriano, a former UT Torchbearer, helped with conceptual quantum-mechanical calculations of mirror-neutron regeneration.

The conclusion: No evidence of neutron regeneration was seen. "One hundred percent of the neutrons stopped; zero percent passed through the wall," Broussard said. Regardless, the result is still important to the advancement of knowledge in this field.

With one particular mirror-matter theory debunked, the scientists turn to others to try to solve the neutron lifetime puzzle. "We're going to keep looking for the reason for the discrepancy," Broussard said. She and colleagues will use the High Flux Isotope Reactor, a DOE Office of Science user facility at ORNL, for that. Ongoing upgrades at HFIR will make more sensitive searches possible because the reactor will produce a much higher flux of neutrons, and the shielded detector at its small-angle neutron scattering diffractometer has a lower background.

Because the rigorous experiment did not find evidence of mirror neutrons, the physicists were able to rule out a far-fetched theory. And that takes them closer to solving the puzzle.

Read more at Science Daily

Jun 28, 2022

Ancient microbes may help us find extraterrestrial life forms

Using light-capturing proteins in living microbes, scientists have reconstructed what life was like for some of Earth's earliest organisms. These efforts could help us recognize signs of life on other planets, whose atmospheres may more closely resemble our pre-oxygen planet.

The earliest living things, including bacteria and single-celled organisms called archaea, inhabited a primarily oceanic planet without an ozone layer to protect them from the sun's radiation. These microbes evolved rhodopsins -- proteins with the ability to turn sunlight into energy, using them to power cellular processes.

"On early Earth, energy may have been very scarce. Bacteria and archaea figured out how to use the plentiful energy from the sun without the complex biomolecules required for photosynthesis," said UC Riverside astrobiologist Edward Schwieterman, who is co-author of a study describing the research.

Rhodopsins are related to the visual pigments in the rods and cones of human eyes that enable us to distinguish between light and dark and see colors. They are also widely distributed among modern organisms and environments like saltern ponds, which present a rainbow of vibrant colors.

Using machine learning, the research team analyzed rhodopsin protein sequences from all over the world and tracked how they evolved over time. Then, they created a type of family tree that allowed them to reconstruct rhodopsins from 2.5 to 4 billion years ago, and the conditions that they likely faced.

Their findings are detailed in a paper published in the journal Molecular Biology and Evolution.

"Life as we know it is as much an expression of the conditions on our planet as it is of life itself. We resurrected ancient DNA sequences of one molecule, and it allowed us to link to the biology and environment of the past," said University of Wisconsin-Madison astrobiologist and study lead Betul Kacar.

"It's like taking the DNA of many grandchildren to reproduce the DNA of their grandparents. Only, it's not grandparents, but tiny things that lived billions of years ago, all over the world," Schwieterman said.

Modern rhodopsins absorb blue, green, yellow and orange light, and can appear pink, purple or red by virtue of the light they do not absorb, or of complementary pigments. However, according to the team's reconstructions, ancient rhodopsins were tuned to absorb mainly blue and green light.

Since ancient Earth did not yet have the benefit of an ozone layer, the research team theorizes that billions-of-years-old microbes lived many meters down in the water column to shield themselves from intense UVB radiation at the surface.

Blue and green light best penetrates water, so it is likely that the earliest rhodopsins primarily absorbed these colors. "This could be the best combination of being shielded and still being able to absorb light for energy," Schwieterman said.
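The shielding argument can be illustrated with the Beer-Lambert law. The attenuation coefficients below are illustrative round numbers for clear water, chosen only to show the trend, not measurements from the study:

```python
import math

# Beer-Lambert attenuation: intensity falls as I(z) = I0 * exp(-k * z), where
# k is the wavelength-dependent attenuation coefficient of water (per meter).
# Illustrative values: blue and green are absorbed far more weakly than red.
attenuation = {
    "blue (~450 nm)": 0.02,
    "green (~550 nm)": 0.07,
    "red (~650 nm)": 0.35,
}

def depth_for_fraction(k, fraction=0.01):
    """Depth (meters) at which intensity drops to the given fraction of surface light."""
    return -math.log(fraction) / k

for color, k in attenuation.items():
    print(f"{color}: 1% of surface light remains at ~{depth_for_fraction(k):.0f} m")
```

With these numbers, blue and green light reach tens to hundreds of meters while red fades within a few meters, consistent with microbes many meters down relying on blue-green-absorbing rhodopsins.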

After the Great Oxidation Event, more than 2 billion years ago, oxygen levels in Earth's atmosphere began to rise. With additional oxygen and ozone in the atmosphere, rhodopsins evolved to absorb additional colors of light.

Rhodopsins today are able to absorb colors of light that chlorophyll pigments in plants cannot. Though they represent completely unrelated and independent light capture mechanisms, they absorb complementary areas of the spectrum.

"This suggests co-evolution, in that one group of organisms is exploiting light not absorbed by the other," Schwieterman said. "This could have been because rhodopsins developed first and screened out the green light, so chlorophylls later developed to absorb the rest. Or it could have happened the other way around."

Moving forward, the team is hoping to resurrect model rhodopsins in a laboratory using synthetic biology techniques.

"We engineer the ancient DNA inside modern genomes and reprogram the bugs to behave how we believe they did millions of years ago. Rhodopsin is a great candidate for laboratory time-travel studies," Kacar said.

Ultimately, the team is pleased about the possibilities for research opened up by techniques they used for this study. Since other signs of life from the deep geologic past need to be physically preserved and only some molecules are amenable to long-term preservation, there are many aspects of life's history that have not been accessible to researchers until now.

"Our study demonstrates for the first time that the behavioral histories of enzymes are amenable to evolutionary reconstruction in ways that conventional molecular biosignatures are not," Kacar said.

Read more at Science Daily

People less outraged by gender discrimination caused by algorithms

People are less morally outraged when gender discrimination occurs because of an algorithm rather than direct human involvement, according to research published by the American Psychological Association.

In the study, researchers coined the phrase "algorithmic outrage deficit" to describe their findings from eight experiments conducted with a total of more than 3,900 participants from the United States, Canada and Norway.

When presented with various scenarios about gender discrimination in hiring decisions caused by algorithms and humans, participants were less morally outraged about those caused by algorithms. Participants also believed companies were less legally liable for discrimination when it was due to an algorithm.

"It's concerning that companies could use algorithms to shield themselves from blame and public scrutiny over discriminatory practices," said lead researcher Yochanan Bigman, PhD, a post-doctoral research fellow at Yale University and incoming assistant professor at Hebrew University. The findings could have broader implications and affect efforts to combat discrimination, Bigman said. The research was published online in the Journal of Experimental Psychology: General.

"People see humans who discriminate as motivated by prejudice, such as racism or sexism, but they see algorithms that discriminate as motivated by data, so they are less morally outraged," Bigman said. "Moral outrage is an important societal mechanism to motivate people to address injustices. If people are less morally outraged about discrimination, then they might be less motivated to do something about it."

Some of the experiments used a scenario based on a real-life example of alleged algorithm-based gender discrimination by Amazon that penalized female job applicants. While the research focused on gender discrimination, one of the eight experiments was replicated to examine racial and age discrimination and had similar findings.

Knowledge about artificial intelligence didn't appear to make a difference. In one experiment with more than 150 tech workers in Norway, participants who reported greater knowledge about artificial intelligence were still less outraged by gender discrimination caused by algorithms.

When people learn more about a specific algorithm it may affect their outlook, the researchers found. In another study, participants were more outraged when a hiring algorithm that caused gender discrimination was created by male programmers at a company known for sexist practices.

Read more at Science Daily

Increasing heat waves affect up to half a billion people

Climate change is a reality, and extremely high temperatures were reported in India and Pakistan this spring. In a new scientific journal article, researchers from the University of Gothenburg, amongst others, paint a gloomy picture for the rest of the century. Heat waves are expected to increase, affecting up to half a billion people every year. In turn, they can lead to food shortages, deaths and refugee flows when the heat reaches levels that exceed what humans can tolerate. But this does not have to happen if measures are put in place to reach the Paris Agreement targets, the researchers say.

In India and Pakistan, heat waves with temperatures above 40 degrees Celsius in the shade are a directly life-threatening form of extreme weather. In a new article in the journal Earth's Future, researchers have outlined different scenarios for the consequences of heat waves in South Asia up to the year 2100.

"We established a link between extreme heat and population. In the best-case scenario, in which we succeed in meeting the targets of the Paris Agreement, there will still be roughly two more heat waves per year, exposing about 200 million people to them. But if countries continue to contribute to the greenhouse effect as they are still doing now, clearing and building on land that is actually helping to lower global temperatures, we believe that there could be as many as five more heat waves per year, with more than half a billion people being exposed to them, by the end of the century," says Deliang Chen, Professor of Physical Meteorology at the University of Gothenburg and one of the authors of the article.

Population growth drives emissions

The study identifies the Indo-Gangetic Plain beside the Indus and Ganges rivers as particularly vulnerable. This is a region of high temperatures, and it is densely populated. Deliang Chen points out that the link between heat waves and population works in both directions. The size of the population affects the number of future heat waves. A larger population drives emissions up as consumption and transport increase. Urban planning is also important. If new towns and villages are built in places that are less subject to heat waves, the number of people affected can be reduced.

"We hope that the leaders in the region such as India and Pakistan read our report and think about it. In our calculation model, the range for the number of people who will be exposed to heat waves is large. The actual numbers will depend on the path that these countries choose to take in their urban planning. It is future greenhouse gas and particulate emissions that will determine how many people are actually exposed. We can more than halve the population exposed to intense heat waves if we reduce emissions so that we reach the targets in the Paris Agreement. Both mitigation and adaptation measures can make a huge difference," says Deliang Chen.

Risk of a refugee wave

Heat waves are already causing major problems in India and Pakistan. Farmers have been hit hard as drought and heat have caused their wheat crops to fail, and cultivation has moved to higher altitudes to escape the extreme heat. But this move has resulted in large acreages of trees being cleared; trees that had contributed to lowering temperatures.

"With a larger population, land use increases, which in itself can drive up temperatures further. Each heat wave will result in increased mortality and decreased productivity, since few people can work in 45-degree heat. I fear that if nothing is done, it can ultimately lead to a huge wave of migrations."

Read more at Science Daily

Fossils in the 'Cradle of Humankind' may be more than a million years older than previously thought

The earth doesn't give up its secrets easily -- not even in the "Cradle of Humankind" in South Africa, where a wealth of fossils relating to human evolution have been found.

For decades, scientists have studied these fossils of early human ancestors and their long-lost relatives. Now, a dating method developed by a Purdue University geologist just pushed the age of some of these fossils found at the site of Sterkfontein Caves back more than a million years. This would make them older than Dinkinesh, also called Lucy, the world's most famous Australopithecus fossil.

The "Cradle of Humankind" is a UNESCO World Heritage Site in South Africa that comprises a variety of fossil-bearing cave deposits, including at Sterkfontein Caves. Sterkfontein was made famous by the discovery of the first adult Australopithecus, an ancient hominin, in 1936. Hominins include humans and our ancestral relatives, but not the other great apes. Since then, hundreds of Australopithecus fossils have been found there, including the well-known Mrs. Ples, and the nearly complete skeleton known as Little Foot. Paleoanthropologists and other scientists have studied Sterkfontein and other cave sites in the Cradle of Humankind for decades to shed light on human and environmental evolution over the past 4 million years.

Darryl Granger, a professor of earth, atmospheric, and planetary sciences in Purdue University's College of Science, is one of those scientists, working as part of an international team. Granger specializes in dating geologic deposits, including those in caves. As a doctoral student, he devised a method for dating buried cave sediments that is now used by researchers all over the world. His previous work at Sterkfontein dated the Little Foot skeleton to about 3.7 million years old, but scientists are still debating the age of other fossils at the site.

In a study published in the Proceedings of the National Academy of Sciences, Granger and a team of scientists including researchers from the University of the Witwatersrand in Johannesburg, South Africa and the University Toulouse Jean Jaurès in France, have discovered that not only Little Foot, but all of the Australopithecus-bearing cave sediments date from about 3.4 to 3.7 million years old, rather than 2-2.5 million years old as scientists previously theorized. That age places these fossils toward the beginning of the Australopithecus era, rather than near the end. Dinkinesh, who hails from Ethiopia, is 3.2 million years old, and her species, Australopithecus afarensis, dates back about 3.9 million years.

Sterkfontein is a deep and complex cave system that preserves a long history of hominin occupation of the area. Understanding the dates of the fossils here can be tricky, as rocks and bones tumbled to the bottom of a deep hole in the ground, and there are few ways to date cave sediments.

In East Africa, where many hominin fossils have been found, the Great Rift Valley volcanoes lay down layers of ash that can be dated. Researchers use those layers to estimate how old a fossil is. In South Africa -- especially in a cave -- the scientists don't have that luxury. They typically estimate a fossil's age from other animal fossils found around the bones, or from calcite flowstone deposited in the cave. But bones can shift in the cave, and young flowstone can be deposited in old sediment, making those methods potentially incorrect. A more accurate method is to date the actual rocks in which the fossils were found. The concrete-like matrix that embeds the fossil, called breccia, is the material Granger and his team analyze.

"Sterkfontein has more Australopithecus fossils than anywhere else in the world," Granger said. "But it's hard to get a good date on them. People have looked at the animal fossils found near them and compared the ages of cave features like flowstones and gotten a range of different dates. What our data does is resolve these controversies. It shows that these fossils are old -- much older than we originally thought."

Granger and the team used accelerator mass spectrometry to measure radioactive nuclides in the rocks, as well as geologic mapping and an intimate understanding of how cave sediments accumulate, to determine the age of the Australopithecus-bearing sediments at Sterkfontein.

Granger and the research group at the Purdue Rare Isotope Measurement Laboratory (PRIME Lab) study so-called cosmogenic nuclides and what they can reveal about the history of fossils, geological features and rock. Cosmogenic nuclides are extremely rare isotopes produced by cosmic rays -- high-energy particles that constantly bombard the earth. These incoming cosmic rays have enough energy to cause nuclear reactions inside rocks at the ground surface, creating new, radioactive isotopes within the mineral crystals. An example is aluminum-26: aluminum that is missing a neutron and slowly decays to turn into magnesium over a period of millions of years. Since aluminum-26 is formed when a rock is exposed at the surface, but not after it has been deeply buried in a cave, PRIME lab researchers can date cave sediments (and the fossils within them) by measuring levels of aluminum-26 in tandem with another cosmogenic nuclide, beryllium-10.
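The differential decay of the two nuclides can be turned into a burial clock with one line of algebra. The half-lives and surface production ratio below are approximate literature values, and the single-burial model is a simplification of the team's actual analysis:

```python
import math

# At the surface, cosmic rays produce 26Al and 10Be at a roughly constant
# ratio R0. Once sediment is buried deep in a cave, production stops and each
# nuclide decays at its own rate, so the measured ratio R shrinks with time t:
#     R(t) = R0 * exp(-(lam26 - lam10) * t)
#     =>  t = ln(R0 / R) / (lam26 - lam10)
HALF_LIFE_AL26 = 0.717e6  # years (approximate)
HALF_LIFE_BE10 = 1.387e6  # years (approximate)
R0 = 6.8                  # approximate 26Al/10Be surface production ratio

lam26 = math.log(2) / HALF_LIFE_AL26
lam10 = math.log(2) / HALF_LIFE_BE10

def burial_age(measured_ratio, initial_ratio=R0):
    """Years of burial implied by a measured 26Al/10Be ratio (simple model:
    one surface exposure episode, then deep burial with no further production)."""
    return math.log(initial_ratio / measured_ratio) / (lam26 - lam10)

# With these inputs, a hypothetical measured ratio of ~1.5 implies a bit over
# 3 million years of burial.
print(f"{burial_age(1.5) / 1e6:.2f} Myr")
```

Because 26Al decays almost twice as fast as 10Be, the ratio falls steadily with time, which is what makes it usable as a clock on million-year scales.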

In addition to the new dates at Sterkfontein based on cosmogenic nuclides, the research team made careful maps of the cave deposits and showed how animal fossils of different ages would have been mixed together during excavations in the 1930s and 1940s, leading to decades of confusion with the previous ages. "What I hope is that this convinces people that this dating method gives reliable results," Granger said. "Using this method, we can more accurately place ancient humans and their relatives in the correct time periods, in Africa, and elsewhere across the world."

Read more at Science Daily

Jun 27, 2022

Long-term liquid water also on non-Earth-like planets?

Liquid water is an important prerequisite for life to develop on a planet. As researchers from the University of Bern, the University of Zurich and the National Centre of Competence in Research (NCCR) PlanetS report in a new study, liquid water could also exist for billions of years on planets that are very different from Earth. This calls our currently Earth-centred idea of potentially habitable planets into question.

Life on Earth began in the oceans. In the search for life on other planets, the potential for liquid water is therefore a key ingredient. To find it, scientists have traditionally looked for planets similar to our own. Yet, long-term liquid water does not necessarily have to occur under circumstances similar to those on Earth. Researchers of the University of Bern and the University of Zurich, who are members of the National Centre of Competence in Research (NCCR) PlanetS, report in a study published in the journal Nature Astronomy that favourable conditions might even occur for billions of years on planets that barely resemble our home planet at all.

Primordial greenhouses

"One of the reasons that water can be liquid on Earth is its atmosphere," study co-author Ravit Helled, Professor of Theoretical Astrophysics at the University of Zurich and a member of the NCCR PlanetS explains. "With its natural greenhouse effect, it traps just the right amount of heat to create the right conditions for oceans, rivers and rain," says the researcher.

Earth's atmosphere was very different in its ancient history, however. "When the planet first formed out of cosmic gas and dust, it collected an atmosphere consisting mostly of hydrogen and helium -- a so-called primordial atmosphere," Helled points out. Over the course of its development, Earth lost this primordial atmosphere.

Other, more massive planets can collect much larger primordial atmospheres, which they can keep indefinitely in some cases. "Such massive primordial atmospheres can also induce a greenhouse effect -- much like Earth's atmosphere today. We therefore wanted to find out if these atmospheres can help to create the necessary conditions for liquid water," Helled says.

Liquid water for billions of years

To do so, the team thoroughly modelled countless planets and simulated their development over billions of years. They accounted not only for properties of the planets' atmospheres but also the intensity of the radiation of their respective stars as well as the planets' internal heat radiating outwards. While on Earth, this geothermal heat plays only a minor role for the conditions on the surface, it can contribute more significantly on planets with massive primordial atmospheres.

"What we found is that in many cases, primordial atmospheres were lost due to intense radiation from stars, especially on planets that are close to their star. But in the cases where the atmospheres remain, the right conditions for liquid water can occur," reports Marit Mol Lous, PhD student and lead-author of the study. According to the researcher at the University of Bern and the University of Zurich, "in cases where sufficient geothermal heat reaches the surface, radiation from a star like the Sun is not even necessary for conditions that allow liquid water to exist at the surface."

"Perhaps most importantly, our results show that these conditions can persist for very long periods of time -- up to tens of billions of years," points out the researcher, who is also a member of the NCCR PlanetS.

Broadening the horizon for the search for extraterrestrial life

"To many, this may come as a surprise. Astronomers typically expect liquid water to occur in regions around stars that receive just the right amount of radiation: not too much, so that the water does not evaporate, and not too little, so that it does not all freeze," study co-author Christoph Mordasini, Professor of Theoretical Astrophysics at the University of Bern and member of the NCCR PlanetS explains.

"Since the availability of liquid water is a likely prerequisite for life, and life probably took many millions of years to emerge on Earth, this could greatly expand the horizon for the search for alien lifeforms. Based on our results, it could even emerge on so-called free-floating planets, that do not orbit around a star," Mordasini says.

Yet the researcher remains cautious: "While our results are exciting, they should be considered with a grain of salt. For such planets to have liquid water for a long time, they have to have the right amount of atmosphere. We do not know how common that is."

Read more at Science Daily

Reaction insights help make sustainable liquid fuels

Methanol, produced from carbon dioxide in the air, can be used to make carbon neutral fuels. But to do this, the mechanism by which methanol is turned into liquid hydrocarbons must be better understood so that the catalytic process can be optimised. Now, using sophisticated analytical techniques, researchers from ETH Zürich and Paul Scherrer Institute have gained unprecedented insight into this complex mechanism.

As we struggle to balance the impact of emissions with our desire to maintain our energy-hungry lifestyle, using carbon dioxide from the atmosphere to create new fuels is an exciting, carbon-neutral alternative. One way to do this is to create methanol from carbon dioxide in the air, using a process called hydrogenation. This methanol can then be converted into hydrocarbons. Although these are then burnt, releasing carbon dioxide, this is balanced by the carbon dioxide captured to make the fuel.
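The carbon balance described above can be made explicit with two standard, well-known net reactions (the overall stoichiometry, not the catalytic details of the processes in the study):

```latex
\begin{align*}
\mathrm{CO_2} + 3\,\mathrm{H_2} &\longrightarrow \mathrm{CH_3OH} + \mathrm{H_2O} && \text{(hydrogenation)}\\
2\,\mathrm{CH_3OH} + 3\,\mathrm{O_2} &\longrightarrow 2\,\mathrm{CO_2} + 4\,\mathrm{H_2O} && \text{(combustion)}
\end{align*}
```

Each molecule of methanol made consumes one molecule of carbon dioxide and each one burned releases one, so the captured and emitted carbon cancel.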

To fully develop this sustainable fuel, a deeper understanding of the mechanism by which methanol -- in a reaction catalysed by zeolites, solid materials with unique porous architectures -- is turned into long chain hydrocarbons, is necessary. With this in mind, in the frame of NCCR Catalysis, a Swiss National Center of Competence in Research, researchers from ETH Zürich joined forces with researchers from the Paul Scherrer Institut PSI to reveal the details of this reaction mechanism, the findings of which are published in the journal Nature Catalysis.

"Information is key to developing more selective and stable catalysts," explains Javier Pérez-Ramírez, Professor of Catalysis Engineering at ETH Zürich and director of NCCR Catalysis, who co-led the study. "Prior to our study, despite many efforts, key mechanistic aspects of the complex transformation of methanol into hydrocarbons were not well understood."

The researchers were interested in comparing the methanol to hydrocarbon process with another process: that of turning methyl chloride into hydrocarbons. Oil refineries frequently burn large quantities of unwanted methane-rich natural gas. This polluting and wasteful activity results in the typical flares associated with oil refineries. "Turning methyl chloride into hydrocarbons is a kind of bridge technology," explains Pérez-Ramírez. "Of course, we would like to move away from fossil fuels but in the meantime this would be a way to avoid wasting the vast reserves of valuable methane."

Fleeting gas phase molecules tell the story

Key to understanding complex reaction mechanisms such as these is to detect the different species involved, including the intermediate products. Traditional techniques look directly at the surface of the catalyst to understand the reaction, but an important part of the story is told by gas phase molecules, which come off the catalyst.

"These molecules are often highly reactive and very short lived, decomposing within a few milliseconds. This makes identifying them a real challenge, as traditional gas phase analytical methods are simply too slow," explains Patrick Hemberger, scientist at the vacuum ultraviolet (VUV) beamline of the Swiss Light Source SLS, whose sophisticated analytical techniques would enable the researchers to study the reaction as it happened.

At the VUV beamline, Photoion Photoelectron Coincidence (PEPICO) spectroscopy has recently been established as a powerful analytical tool in catalytic reactions. It combines two different analytical techniques, photoelectron spectroscopy and mass spectrometry, to give detailed information on the gas phase reaction intermediates, even enabling differentiation between isomers.

"Because we simultaneously gather two different types of information, we can rapidly identify these fleeting species even in a mixture containing up to one hundred reaction intermediates and products. This gives us an unprecedented insight that simply isn't possible with conventional methods," Hemberger says.
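Schematically, coincidence detection amounts to pairing each photoelectron with the photoion produced in the same ionization event. The toy sketch below (not the actual SLS analysis software) shows the idea of time-window matching; all names and numbers are hypothetical:

```python
# Each photoionization event ejects an electron (detected almost immediately)
# and an ion (detected after a mass-dependent flight time). Pairing the two
# within a time window ties an electron energy to an ion mass, which is what
# lets PEPICO tell apart isomers that share the same mass.
def match_coincidences(electrons, ions, max_delay):
    """electrons: list of (time, energy); ions: list of (time, mass), both
    sorted by time. Returns (energy, mass) pairs for ions arriving within
    max_delay of an electron."""
    pairs = []
    i = 0
    for e_time, e_energy in electrons:
        while i < len(ions) and ions[i][0] < e_time:
            i += 1  # skip ions that arrived before this electron
        if i < len(ions) and ions[i][0] - e_time <= max_delay:
            pairs.append((e_energy, ions[i][1]))
            i += 1
    return pairs

# Toy data (times in microseconds): two ions of the same mass 42 paired with
# electrons of different energies -- the signature of two distinct isomers.
electrons = [(0.0, 9.6), (50.0, 8.9)]
ions = [(12.1, 42), (63.9, 42)]
print(match_coincidences(electrons, ions, max_delay=20.0))
```

The point of the pairing is that neither detector alone could separate the isomers: mass spectrometry sees the same mass, and the electron spectrum alone lacks the mass assignment.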

Reaction pathways revealed

The spectroscopy enabled the researchers to reveal how the carbon-carbon bonds form and the hydrocarbon chain grows by detecting numerous intermediate products. For the two processes -- methanol to hydrocarbon and methyl chloride to hydrocarbon -- the researchers observed different reaction intermediates. From this, they could identify two distinct reaction pathways: one driven by methyl radicals, present in both reactions, and another driven by oxygenated species, so-called ketenes, which occurred only in the methanol to hydrocarbon reaction.

The researchers were also able to understand an interesting feature of the reactions: after several days, the catalyst was deactivated and the reaction stopped. This was because of the build-up of an unwanted by-product -- coke, which is made from large aromatic hydrocarbons deposited during the reaction.

With the help of another spectroscopic technique, electron paramagnetic resonance spectroscopy, the researchers saw that the methyl chloride to hydrocarbon production was much more prone to coke formation than production from methanol. Armed with knowledge of the reaction pathways, the reason for this difference was clear: "The methanol to hydrocarbon route proceeds along two reaction pathways, whilst the methyl chloride to hydrocarbon route can only take the more reactive methyl radical route, which is more prone to forming coke," explains Gunnar Jeschke, whose team at ETH Zürich performed the electron paramagnetic resonance spectroscopy studies.

Read more at Science Daily

COVID-19 Omicron variant leads to less severe disease in mice, study finds

Georgia State University researchers have found that the Alpha, Beta and Delta variants of SARS-CoV-2 were substantially more lethal in mouse models than the original strain of the virus that causes COVID-19. However, they also found that the Omicron variant, despite having more mutations, led to less severe disease, with half as many deaths and longer survival times.

The findings, published in a study in the journal Viruses, offer new information on how the COVID-19 virus changes over time, and how those changes lead to different kinds of infections.

"Given the speed that variants have emerged over the course of the pandemic, and may continue to emerge, we always want to predict how these variants will behave," said Mukesh Kumar, assistant professor of biology and the paper's lead researcher. "We want to know if the next variants will become more lethal than the one before, or weaker, or if it's a random process."

In the study, researchers looked at how variants acted in mouse models, examining the effects of the original strain compared to Alpha, Beta, Delta and Omicron variants.

They found that infection with the Alpha, Beta and Delta strains led to higher virus levels in lungs and brains, significant loss of body weight, more inflammation and a 100 percent mortality rate among study subjects.

The Omicron variant was much milder with lower virus levels, less lung inflammation, lower rates of weight loss and a 50 percent mortality rate. This was despite the Omicron variant having more mutations that allow it to bind with angiotensin-converting enzyme 2, also known as the ACE2 "receptor," which is a protein the virus binds with so it can infect cells.

"In mice, Omicron was significantly less efficient than the other variants we tested, despite carrying the highest number of mutations," said Kumar. "That tracks along with epidemiological data that suggests that the Omicron virus causes less severe pathology in humans than previous ancestral strains."

Most studies done in mouse models have used the original virus strain first identified in Wuhan, China, which has been helpful but is less informative now that variants dominate, Kumar said. These insights into the pathogenesis of the earlier and currently circulating variants help in understanding the pathogenesis of emerging COVID-19 variants.

Read more at Science Daily

The heat is on: Traces of fire uncovered dating back at least 800,000 years

They say that where there's smoke, there's fire, and Weizmann Institute of Science researchers are working hard to investigate that claim, or at least elucidate what constitutes "smoke." In an article published today in PNAS, the scientists reveal an advanced, innovative method that they have developed and used to detect nonvisual traces of fire dating back at least 800,000 years -- one of the earliest known pieces of evidence for the use of fire. The newly developed technique may provide a push toward a more scientific, data-driven type of archaeology, but -- perhaps more importantly -- it could help us better understand the origins of the human story, our most basic traditions and our experimental and innovative nature.

The controlled use of fire by ancient hominins -- a group that includes humans and some of our extinct family members -- is hypothesized to date back at least a million years, to around the time that archaeologists believe Homo habilis began its transition to Homo erectus. That is no coincidence, as the working theory, called the "cooking hypothesis," is that the use of fire was instrumental in our evolution, not only for allowing hominins to stay warm, craft advanced tools and ward off predators but also for acquiring the ability to cook. Cooking meat not only eliminates pathogens but also improves protein digestion and nutritional value, paving the way for the growth of the brain. The only problem with this hypothesis is a lack of data: since finding archaeological evidence of pyrotechnology primarily relies on visual identification of modifications resulting from the combustion of objects (mainly, a color change), traditional methods have managed to find widespread evidence of fire use no older than 200,000 years. While there is some evidence of fire dating back to 500,000 years ago, it remains sparse, with only five archaeological sites around the world providing reliable evidence of ancient fire.

"We may have just found the sixth site," says Dr. Filipe Natalio of Weizmann's Plant and Environmental Sciences Department, whose previous collaboration with Dr. Ido Azuri, of Weizmann's Life Core Facilities Department, and colleagues provided the basis for this project. Together they pioneered the application of AI and spectroscopy in archaeology to find indications of controlled burning of stone tools dating back to between 200,000 and 420,000 years ago in Israel. Now they're back, joined by PhD student Zane Stepka, Dr. Liora Kolska Horwitz from the Hebrew University of Jerusalem and Prof. Michael Chazan from the University of Toronto, Canada. The team upped the ante by taking a "fishing expedition" -- casting far out into the water and seeing what they could reel back in. "When we started this project," says Natalio, "the archaeologists who've been analyzing the findings from Evron Quarry told us we wouldn't find anything. We should have made a bet."

Evron Quarry, located in the Western Galilee, is an open-air archaeological site that was first discovered in the mid-1970s. During a series of excavations that took place at that time and were led by Prof. Avraham Ronen, archaeologists dug down 14 meters and uncovered a large array of animal fossils and Paleolithic tools dating back to between 800,000 and 1 million years ago, making it one of the oldest sites in Israel. None of the finds from the site or the soil in which they were found had any visual evidence of heat: ash and charcoal degrade over time, eliminating the chances of finding visual evidence of burning. Thus, if the Weizmann scientists wanted to find evidence of fire, they had to search farther afield.

The "fishing" expedition began with the development of a more advanced AI model than they had previously used. "We tested a variety of methods, among them traditional data analysis methods, machine learning modeling and more advanced deep learning models," says Azuri, who headed the development of the models. "The deep learning models that prevailed had a specific architecture that outperformed the others and successfully gave us the confidence we needed to further use this tool in an archaeological context having no visual signs of fire use." The advantage of AI is that it can find hidden patterns across a multitude of scales. By pinpointing the chemical composition of materials down to the molecular level, the output of the model can estimate the temperature to which the stone tools were heated, ultimately providing information about past human behaviors.
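The team's actual deep learning models and spectral data are not reproduced here, but the general idea -- learning a heating temperature from subtle spectral features -- can be sketched in miniature. In this illustrative stand-in, synthetic "spectra" whose absorption band drifts with firing temperature (a made-up dependence) are matched by a simple nearest-neighbour estimator rather than a deep network:

```python
# Illustrative sketch only: synthetic spectra and a nearest-neighbour
# estimator stand in for the study's real data and deep learning models.
import numpy as np

rng = np.random.default_rng(0)
wavenumbers = np.linspace(400, 1800, 200)

def fake_spectrum(temp_c, noise=0.02):
    """Gaussian band whose centre drifts with firing temperature (invented)."""
    centre = 1000 + 0.3 * temp_c
    spec = np.exp(-((wavenumbers - centre) / 60.0) ** 2)
    return spec + noise * rng.standard_normal(wavenumbers.size)

# "Training set": reference samples heated to known temperatures
train_temps = np.arange(100, 701, 50)
train_specs = np.stack([fake_spectrum(t) for t in train_temps])

def estimate_temperature(spectrum):
    """Return the reference temperature whose spectrum is closest."""
    dists = np.linalg.norm(train_specs - spectrum, axis=1)
    return int(train_temps[np.argmin(dists)])

unknown = fake_spectrum(600)
print(estimate_temperature(unknown))  # should land near 600 degrees C
```

The real models work on far richer, noisier signals with no single obvious feature, which is why deep learning was needed, but the workflow is the same: calibrate on material heated to known temperatures, then infer the temperature history of an unknown artifact.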

With an accurate AI method in hand, the team could start fishing for molecular signals from the stone tools used by the inhabitants of the Evron Quarry almost a million years ago. To this end, the team assessed the heat exposure of 26 flint tools found at the site almost half a century ago. The results revealed that the tools had been heated to a wide range of temperatures -- some exceeding 600°C. In addition, using a different spectroscopic technique, they analyzed 87 faunal remains and discovered that the tusk of an extinct elephant also exhibited structural changes resulting from heating. While the researchers are cautious in their claim, the presence of this hidden heat suggests that our ancient ancestors, not unlike the scientists themselves, were experimentalists.

According to the research team, by looking at the archaeology from a different perspective, using new tools, we may find much more than we initially thought. The methods they've developed could be applied, for example, at other Lower Paleolithic sites to identify nonvisual evidence of fire use. Furthermore, this method could perhaps offer a renewed spatiotemporal perspective on the origins and controlled use of fire, helping us to better understand how hominins' pyrotechnology-related behaviors evolved and drove other behaviors. "Especially in the case of early fire," says Stepka, "if we use this method at archaeological sites that are one or two million years old, we might learn something new."

Read more at Science Daily