Oct 7, 2022

Scientists identify potential source of 'shock-darkened' meteorites, with implications for hazardous asteroid deflection

When the Chelyabinsk fireball exploded across Russian skies in 2013, it littered Earth with a relatively uncommon type of meteorite. What makes the Chelyabinsk meteorites and others like them special is their dark veins, created by a process called shock darkening. Yet, planetary scientists have been unable to pinpoint a nearby asteroid source of these kinds of meteorites -- until now.

In a new paper published in the Planetary Science Journal, University of Arizona scientists identified an asteroid named 1998 OR2 as one potential source of shock-darkened meteorites. The near-Earth asteroid is about 1.5 miles wide and made a close approach to Earth in April 2020. Pieces of asteroids that break off in space and later land on Earth are called meteorites.

"Shock darkening is an alteration process caused when something impacts a planetary body hard enough that the temperatures partially or fully melt those rocks and alter their appearance both to the human eye and in our data," said lead study author Adam Battle, a UArizona graduate student studying planetary science. "This process has been seen in meteorites many times but has only been seen on asteroids in one or two cases way out in the main asteroid belt, which is found between Mars and Jupiter."

Battle's adviser and study co-author Vishnu Reddy, a planetary sciences professor, discovered shock darkening on main belt asteroids in 2013 and 2014. Reddy co-leads the Space Domain Awareness lab at the Lunar and Planetary Laboratory with engineering professor Roberto Furfaro. Battle has worked in the lab since 2019.

"Impacts are very common in asteroids and any solid body in the solar system because we see impact craters on these objects from spacecraft images. But impact melt and shock-darkening effects on meteorites derived from these bodies are rare. Finding a near-Earth asteroid dominated by this process has implications for impact hazard assessment," Reddy said. "Adam's work has shown that ordinary chondrite asteroids can appear as carbonaceous in our classification tools if they are affected by shock darkening. These two materials have different physical strengths, which is important when trying to deflect a hazardous asteroid."

For this study, Battle, Reddy and their team used the RAPTORS system, a telescope atop the Kuiper Space Sciences building on campus, to collect data on 1998 OR2's surface composition and determined that it looked like an ordinary chondrite asteroid. Ordinary chondrite asteroids contain the minerals olivine and pyroxene and are relatively light in appearance.

But when the team ran the data through a classification tool, it suggested the asteroid was instead a carbonaceous asteroid, a type of asteroid that is characteristically dark and relatively featureless.

"The mismatch was one of the early things that got the project going to investigate potential causes for the discrepancy," Battle said. "The asteroid is not a mixture of ordinary chondrite and carbonaceous asteroids, but rather it is definitely an ordinary chondrite, based on its minerology, which has been altered -- likely through the shock darkening process -- to look like a carbonaceous asteroid to the classification tool."

Shock darkening was hypothesized in the late 1980s but didn't gain traction and went unstudied until 2013 when the fireball over Russia produced meteorites with shock-darkened characteristics.

Scientists, including Reddy, started getting more interested in shock darkening, and Reddy soon discovered shock-darkened asteroids in the main asteroid belt. On Earth, about 2% of ordinary chondrite meteorites (roughly 1,400 of some 60,000) have undergone some degree of shock or impact processing, Battle said.

Read more at Science Daily

New field of research: Crystal traces in fossil leaves

Fossil specimen Ro-59.9 is littered with microscopic cavities. Some of them look as if tiny raspberries had once slumbered inside them, each of them just two hundredths of a millimeter in size. The fossilized leaf comes from the Rott fossil site near Bonn and is more than 20 million years old. At the moment, it is not possible to say to which plant species it belongs.

Perhaps that will change soon. Because the position and shape of the cavities are like a kind of fingerprint: they can be used to identify fossil plant remains. "Until now, it was not known how these cavities were formed," explains Mahdieh Malekhosseini from the Institute of Geosciences at the University of Bonn. "For example, it was believed that they came from algae or pollen from other plants that somehow got onto the leaf during fossilization. But after analyzing hundreds of these structures, we can rule that out. Instead, we were able to show that calcium oxalate crystals are responsible for the depressions."

Microlenses for better photosynthesis?

Calcium oxalate is formed by a great many living plants; it is considered one of the most common biominerals. What functions it fulfills has not yet been conclusively clarified. However, it is suspected that the crystals serve as calcium stores. In addition, because they are formed in the leaf but often penetrate the leaf surface as they grow, they probably repel pests. "Many insects have an aversion to calcium oxalate -- they don't like to walk on it," explains Prof. Dr. Jes Rust, who supervised the study. "Some plants also seem to use the crystals as microlenses to use sunlight more efficiently for photosynthesis."

The crystals are very sensitive to acid. They therefore dissolve during fossilization and can no longer be detected in finds that are millions of years old. Often, however, imprints remain in the places where the crystal clusters (known in botany as "druses") once sat. Sometimes organic material or other minerals also accumulate in these depressions, which then sit like tiny beads in the fossil leaf.

"We studied the microstructure of the pits and their distribution on fossil leaves whose species affiliation we knew," Malekhosseini explains. "In addition, we looked at calcium oxalate crystals in the leaves of present-day plants. We found clear parallels in closely related species. For example, the crystal imprints in a fossil ginkgo leaf strongly resemble the calcium oxalate deposits of a present-day ginkgo in distribution and structure."

Important insights into evolution

It was already known that fossils of gymnosperms, such as firs and pines, sometimes show imprints of calcium oxalate crystals. However, this was not known of angiosperms -- the flowering plants, which include most deciduous trees. "This is a completely new field of research," explains Jes Rust. "Among other things, we now want to investigate how the ability to form calcium oxalate crystals has developed over the course of evolution." In doing so, the researchers want to focus on periods when environmental conditions changed rapidly -- such as temperature or the intensity of UV radiation. "If the distribution of the druses also changes after such shifts, then we can draw conclusions about the biological function of the crystals," says Rust.

Read more at Science Daily

What drives ecosystems to instability?

Trying to decipher all of the factors that influence the behavior of complex ecological communities can be a daunting task. However, MIT researchers have now shown that the behavior of these ecosystems can be predicted based on just two pieces of information: the number of species in the community and how strongly they interact with each other.

In studies of bacteria grown in the lab, the researchers were able to define three states of ecological communities, and calculated the conditions necessary for them to move from one state to another. These findings allowed the researchers to create a "phase diagram" for ecosystems, similar to the diagrams physicists use to describe the conditions that control the transition of water from solid to liquid to gas.

"What's amazing and wonderful about a phase diagram is that it summarizes a great deal of information in a very simple form," says Jeff Gore, a professor of physics at MIT. "We can trace out a boundary that predicts the loss of stability and the onset of fluctuations of a population."

Gore is the senior author of the study, which appears today in Science. Jiliang Hu, an MIT graduate student, is the lead author of the paper. Other authors include Daniel Amor, a former MIT postdoc; Matthieu Barbier, a researcher at the Plant Health Institute at the University of Montpellier, France; and Guy Bunin, a professor of physics at the Israel Institute of Technology.

Population dynamics

The dynamics of natural ecosystems are difficult to study because while scientists can make observations about how species interact with each other, they usually can't do controlled experiments in the wild. Gore's lab specializes in using microbes such as bacteria and yeast to analyze interspecies interactions in a controlled way, in hopes of learning more about how natural ecosystems behave.

In recent years, his lab has demonstrated how competitive and cooperative behavior affect populations, and has identified early warning signs of population collapse. During that time, his lab has gradually built up from studying one or two species at a time to larger scale ecosystems.

As they worked up to studying larger communities, Gore became interested in trying to test some of the predictions that theoretical physicists have made regarding the dynamics of large, complex ecosystems. One of those predictions was that ecosystems move through phases of varying stability based on the number of species in the community and the degree of interaction between species. Under this framework, the type of interaction -- predatory, competitive, or cooperative -- doesn't matter. Only the strength of the interaction matters.

To test that prediction, the researchers created communities ranging from two to 48 species of bacteria. For each community, the researchers controlled the number of species by forming different synthetic communities with different sets of species. They were also able to strengthen the interactions between species by increasing the amount of food available, which causes populations to grow larger and can also lead to environmental changes such as increased acidification.

"In order to see phase transitions in the lab, it really is necessary to have experimental communities where you can turn the knobs yourself and make quantitative measurements of what's happening," Gore says.

The results of these experimental manipulations confirmed that the theories had correctly predicted what would happen. Initially, each community existed in a phase called "stable full existence," in which all species coexist without interfering with each other.

As either the number of species or the strength of interactions between them increased, the communities entered a second phase, known as "stable partial coexistence." In this phase, populations remain stable, but some species go extinct. The overall community remains in a stable state, meaning that the surviving populations return to equilibrium after some species are lost.

Finally, as the number of species or strength of interactions increased even further, the communities entered a third phase, which featured more dramatic fluctuations in population. The ecosystems became unstable, meaning that the populations persistently fluctuate over time. While some extinctions occurred, these ecosystems tended to have a larger overall fraction of surviving species.

Predicting behavior

Using this data, the researchers were able to draw a phase diagram that describes how ecosystems change based on just two factors: number of species and strength of interactions between them. This is analogous to how physicists are able to describe changes in the behavior of water based on only two conditions: temperature and pressure. Detailed knowledge of the exact speed and position of each molecule of water is not needed.

"While we cannot access all biological mechanisms and parameters in a complex ecosystem, we demonstrate that its diversity and dynamics may be emergent phenomena that can be predicted from just a few aggregate properties of the ecological community: species pool size and statistics of interspecies interactions," Hu says.

The creation of this kind of phase diagram could help ecologists make predictions about what might be happening in natural ecosystems such as forests, even with very little information, because all they need to know is the number of species and how much they interact.

"We can make predictions or statements about what the community is going to do, even in the absence of detailed knowledge of what's going on," Gore says. "We don't even know which species are helping or hurting which other species. These predictions are based purely on the statistical distribution of the interactions within this complex community."

The researchers are now studying how the flow of new species between otherwise isolated populations (similar to island ecosystems) affects the dynamics of those populations. This could help to shed light on how islands are able to maintain species diversity even when extinctions occur.

Read more at Science Daily

World's first stem cell treatment for spina bifida delivered during fetal surgery

Three babies have been born after receiving the world's first spina bifida treatment combining surgery with stem cells. This was made possible by a landmark clinical trial at UC Davis Health.

The one-of-a-kind treatment, delivered while a fetus is still developing in the mother's womb, could improve outcomes for children with this birth defect.

Launched in the spring of 2021, the clinical trial is known formally as the "CuRe Trial: Cellular Therapy for In Utero Repair of Myelomeningocele." Thirty-five patients will be treated in total.

The three babies from the trial that have been born so far will be monitored by the research team until 30 months of age to fully assess the procedure's safety and effectiveness.

The first phase of the trial is funded by a $9 million grant from the state's stem cell agency, the California Institute for Regenerative Medicine (CIRM).

"This clinical trial could enhance the quality of life for so many patients to come," said Emily, the first clinical trial participant who traveled from Austin, Tex. to participate. Her daughter Robbie was born last October. "We didn't know about spina bifida until the diagnosis. We are so thankful that we got to be a part of this. We are giving our daughter the very best chance at a bright future."

Spina bifida, also known as myelomeningocele, occurs when spinal tissue fails to fuse properly during the early stages of pregnancy. The birth defect can lead to a range of lifelong cognitive, mobility, urinary and bowel disabilities. It affects 1,500 to 2,000 children in the U.S. every year. It is often diagnosed through ultrasound.

While surgery performed after birth can help reduce some of the effects, surgery before birth can prevent or lessen the severity of the fetus's spinal damage, which worsens over the course of pregnancy.

"I've been working toward this day for almost 25 years now," said Diana Farmer, the world's first woman fetal surgeon, professor and chair of surgery at UC Davis Health and principal investigator on the study.

The path to a future cure

As a leader of the Management of Myelomeningocele Study (MOMS) clinical trial in the early 2000s, Farmer had previously helped to prove that fetal surgery reduced neurological deficits from spina bifida. Many children in that study showed improvement but still required wheelchairs or leg braces.

Farmer recruited bioengineer Aijun Wang specifically to help take that work to the next level. Together, they launched the UC Davis Health Surgical Bioengineering Laboratory to find ways to use stem cells and bioengineering to advance surgical effectiveness and improve outcomes. Farmer also launched the UC Davis Fetal Care and Treatment Center with fetal surgeon Shinjiro Hirose and the UC Davis Children's Surgery Center several years ago.

Farmer, Wang and their research team have been working on their novel approach using stem cells in fetal surgery for more than 10 years. Over that time, animal modeling has shown it is capable of preventing the paralysis associated with spina bifida.

It's believed that the stem cells work to repair and restore damaged spinal tissue, beyond what surgery can accomplish alone.

Preliminary work by Farmer and Wang proved that prenatal surgery combined with human placenta-derived mesenchymal stromal cells, held in place with a biomaterial scaffold to form a "patch," helped lambs with spina bifida walk without noticeable disability.

"When the baby sheep who received stem cells were born, they were able to stand at birth and they were able to run around almost normally. It was amazing," Wang said.

When the team refined their surgery and stem cell technique for canines, the treatment also improved the mobility of dogs with naturally occurring spina bifida.

A pair of English bulldogs named Darla and Spanky were the world's first dogs to be successfully treated with surgery and stem cells. Spina bifida, a common birth defect in this breed, frequently leaves them with little function in their hindquarters.

By their post-surgery re-check at 4 months old, Darla and Spanky were able to walk, run and play.

The world's first human trial

When Emily and her husband Harry learned that they would be first-time parents, they never expected any pregnancy complications. But the day that Emily learned that her developing child had spina bifida was also the day she first heard about the CuRe trial.

For Emily, it was a lifeline that they couldn't refuse.

Participating in the trial would mean that she would need to temporarily move to Sacramento for the fetal surgery and then for weekly follow-up visits during her pregnancy.

After screenings, MRI scans and interviews, Emily received the life-changing news that she was accepted into the trial. Her fetal surgery was scheduled for July 12, 2021, at 25 weeks and five days gestation.

Farmer and Wang's team manufactures clinical grade stem cells -- mesenchymal stem cells -- from placental tissue in UC Davis Health's CIRM-funded Institute for Regenerative Cures. These cells are considered among the most promising cell types in regenerative medicine.

The lab operates as a Good Manufacturing Practice (GMP) laboratory, meaning its products are made to standards safe for use in humans. It is here that the team made the stem cell patch for Emily's fetal surgery.

"It's a four-day process to make the stem cell patch," said Priya Kumar, the scientist at the Center for Surgical Bioengineering in the Department of Surgery, who leads the team that creates the stem cell patches and delivers them to the operating room. "The time we pull out the cells, the time we seed on the scaffold, and the time we deliver, is all critical."

A first in medical history

During Emily's historic procedure, a 40-person operating and cell preparation team did the careful dance that they had been long preparing for.

After Emily was placed under general anesthesia, a small opening was made in her uterus and the surgical team floated the fetus up to the incision point to expose its spine and the spina bifida defect. The surgeons used a microscope to carefully begin the repair.

Then the moment of truth: The stem cell patch was placed directly over the exposed spinal cord of the fetus. The fetal surgeons then closed the incision to allow the tissue to regenerate.

"The placement of the stem cell patch went off without a hitch. Mother and fetus did great!" Farmer said.

The team declared the first-of-its-kind surgery a success.

Delivery day

On Sept. 20, 2021, at 35 weeks and five days gestation, Robbie was born at 5 pounds, 10 ounces, 19 inches long via C-section.

"One of my first fears was that I wouldn't be able to see her, but they brought her over to me. I got to see her toes wiggle for the first time. It was so reassuring and a little bit out of this world," Emily said.

For Farmer, this day is what she had long hoped for, and it came with surprises. If Robbie had remained untreated, she was expected to be born with leg paralysis.

"It was very clear the minute she was born that she was kicking her legs and I remember very clearly saying, 'Oh my God, I think she's wiggling her toes!'" said Farmer, who noted that the observation was not an official confirmation, but it was promising. "It was amazing. We kept saying, 'Am I seeing that? Is that real?'"

Both mom and baby are at home and in good health. Robbie just celebrated her first birthday.

The CuRe team is cautious about drawing conclusions and says a lot is still to be learned during this safety phase of the trial. The team will continue to monitor Robbie and the other babies in the trial until they are 6 years old, with a key checkup happening at 30 months to see if they are walking and potty training.

Read more at Science Daily

Oct 6, 2022

Laughing gas in space could mean life

Scientists at UC Riverside are suggesting something is missing from the typical roster of chemicals that astrobiologists use to search for life on planets around other stars -- laughing gas.

Chemical compounds in a planet's atmosphere that could indicate life, called biosignatures, typically include gases found in abundance in Earth's atmosphere today.

"There's been a lot of thought put into oxygen and methane as biosignatures. Fewer researchers have seriously considered nitrous oxide, but we think that may be a mistake," said Eddie Schwieterman, an astrobiologist in UCR's Department of Earth and Planetary Sciences.

This conclusion, and the modeling work that led to it, are detailed in an article published today in the Astrophysical Journal.

To reach it, Schwieterman led a team of researchers that determined how much nitrous oxide living things on a planet similar to Earth could possibly produce. They then made models simulating that planet around different kinds of stars and determined amounts of N2O that could be detected by an observatory like the James Webb Space Telescope.

"In a star system like TRAPPIST-1, the nearest and best system to observe the atmospheres of rocky planets, you could potentially detect nitrous oxide at levels comparable to CO2 or methane," Schwieterman said.

There are multiple ways that living things can create nitrous oxide, or N2O. Microorganisms are constantly transforming other nitrogen compounds into N2O, a metabolic process that can yield useful cellular energy.

"Life generates nitrogen waste products that are converted by some microorganisms into nitrates. In a fish tank, these nitrates build up, which is why you have to change the water," Schwieterman said

"However, under the right conditions in the ocean, certain bacteria can convert those nitrates into N2O," Schwieterman said. "The gas then leaks into the atmosphere."

Under certain circumstances, N2O could be detected in an atmosphere and still not indicate life. Schwieterman's team accounted for this in their modeling. A small amount of nitrous oxide is created by lightning, for example. But alongside N2O, lightning also creates nitrogen dioxide, which would offer astrobiologists a clue that non-living weather or geological processes created the gas.

Others who have considered N2O as a biosignature gas often conclude it would be difficult to detect from so far away. Schwieterman explained that this conclusion is based on N2O concentrations in Earth's atmosphere today. Because there isn't a lot of it on this planet, which is teeming with life, some believe it would also be hard to detect elsewhere.

"This conclusion doesn't account for periods in Earth's history where ocean conditions would have allowed for much greater biological release of N2O. Conditions in those periods might mirror where an exoplanet is today," Schwieterman said.

Schwieterman added that common stars like K and M dwarfs produce a light spectrum that is less effective at breaking up the N2O molecule than our sun is. These two effects combined could greatly increase the predicted amount of this biosignature gas on an inhabited world.

The research team included UCR astrobiologists Daria Pidhorodetska, Andy Ridgwell, and Timothy Lyons, as well as scientists from Purdue University, the Georgia Institute of Technology, American University, and the NASA Goddard Space Flight Center.

Read more at Science Daily

Geneticists discover new wild goat subspecies via ancient DNA

Geneticists from Trinity College Dublin, together with a team of international collaborators, have discovered a previously unknown lineage of wild goats over ten millennia old. The research was subject to open peer review and recommendation at PCI Genomics and has just been published in the journal eLife.

The new goat type, discovered from genetic screening of bone remains and referred to as "the Taurasian tur," likely survived the Last Glacial Maximum (the ice age), which stranded their ancestors in the high peaks of the Taurus Mountains in Turkey where their remains were found.

A chance discovery at Direkli Cave

Over 12,000 years ago, hunter-gatherers in the Taurus Mountains of southern Turkey relied heavily on local game for food and subsistence. Located near the present-day village of Döngel and at an elevation of ~1,100 m above sea level, Direkli Cave was used for roughly three millennia (~14,000-11,000 years ago) as a seasonal camp for these hunters and may have been inhabited year-round.

"Among the artefacts found at Direkli Cave were large amounts of bone remains with distinct processing marks, indicating that wild goats were butchered there for consumption," says Dr Kevin Daly, from Trinity's School of Genetics and Microbiology, who is first author of the research article.

"With the cave surrounded by high peaks, reaching ~2,200 m, the wild goat or bezoar ibex (Capra aegagrus) that inhabit the region today were likely the target of these Late Pleistocene hunters."

During genetic screening of goat bone remains from Direkli, the geneticists noticed something unusual: many of the goats carried mitochondrial genomes similar to a different species of wild goat.

Whereas the domestic goat is derived from the bezoar ibex, other species of wild goat are still alive today and are found in relatively restricted regions. These include the East and West Caucasus tur, two sister species (or subspecies) of wild goat now found only in the Caucasus Mountains in Georgia. Many of the Direkli Cave samples carried mitochondria related to these Caucasus tur, despite Direkli Cave being around 800 km from their current habitat.

Dr Daly added: "An even greater surprise came when we examined the Direkli Cave goats' nuclear genomes: while most looked like the bezoar ibex, as expected, one sample appeared different from the rest. This sample, Direkli4, showed more ancestral genetic variants than other Direkli goats, indicating it might have been a different species than the others."

To better understand this, the Trinity team collaborated with researchers from Muséum national d'Histoire naturelle of Paris to generate genetic data from other species in the Capra group.

A new lineage of tur

The team was surprised to see that the Direkli4 sample in fact grouped with the Caucasian tur -- appearing to be a sister group to both East and West types. Intrigued, the team screened more material from Direkli Cave and found an additional two samples with a "tur-like" genome, suggesting that a population of these tur relatives lived in the Taurus Mountains close to local bezoar ibex, with both hunted by humans in pre-historic times.

The team suggest a name for the newly discovered Taurasian tur: Capra taurensis or Capra caucasica taurensis, depending on whether living tur are classified as two subspecies or two distinct species.

As tur are larger and heavier than other wild goats, with a distinctive horn shape, it should be possible to identify a group of tur relatives in animal remains. Horn remains are absent at Direkli Cave, despite the large numbers of remains -- possibly pointing to these being a valuable prize among hunters. But archaeozoologists in the team showed there were a lot of large-bodied goats at Direkli Cave -- and possibly at other mountainous locations in southwest Asia.

"We hope that this will encourage re-evaluation and analysis of faunal remains in the region as there could be some exciting discoveries still to be found," added Dr Daly.

A victim of climatic change and human activity?

The team suggest that the ancestors of tur lived across a broader geographical area over the past 100,000 years, from the Caucasus Mountains to the Taurus Mountains by the Mediterranean -- and that climate change may have caused habitat fragmentation.

Dr Daly said: "The Last Glacial Maximum, or ice age, may have made many areas inhospitable, forcing these goats to compete with other species. The Taurasian tur may have been a leftover group, restricted to the peaks in the Taurus Mountains. Increasing human activity would have placed additional pressure on the Taurasian tur, with hunting evidenced at Direkli Cave.

Read more at Science Daily

Ancient ice age valleys offer clues to future ice sheet change

Deep valleys buried under the seafloor of the North Sea record how the ancient ice sheets that used to cover the UK and Europe expelled water to stop themselves from collapsing.

A new study published this week has surprised the research team, who discovered that the valleys took just hundreds of years to form as they transported vast amounts of meltwater away from under the ice and out into the sea.

This new understanding of when the vast ice sheets melted 20,000 years ago has implications for how glaciers may respond to climate warming today. The study is published in the journal Quaternary Science Reviews.

Tunnel valleys are enormous channels, sometimes up to 150km long, 6km wide and 500m deep (each several times larger than Loch Ness), that drain water from beneath melting ice sheets. There are thousands buried beneath the seafloor of the North Sea that record the melting of ice sheets that have covered the UK and Western Europe over the last two million years.

Lead author James Kirkham, from British Antarctic Survey (BAS) and the University of Cambridge, says:

"This is an exciting discovery. We know that these spectacular valleys are carved out during the death throes of ice sheets. By using a combination of state-of-the-art subsurface imaging techniques and a computer model, we have learnt that tunnel valleys can be eroded rapidly beneath ice sheets experiencing extreme warmth."

The team analysed 'jaw-droppingly detailed' seismic images that provide a 3D scan of the Earth's buried layers. Informed by delicate clues discovered within the valleys, the authors performed a series of computer modelling experiments to simulate valley development, and test how quickly they formed as the last ice sheet to cover the UK melted away at the end of the most recent ice age about 20,000 years ago.

The research suggests that this process is quick by geological timescales, with the melting ice forming giant tunnel valleys within hundreds of years, expelling water that could otherwise accelerate rates of ice loss.

Traditionally, the drainage of water from beneath ice sheets is thought to stabilise ice flow, a process that could potentially buffer modern ice sheets from collapse in a warming climate. But while inspecting the detailed seismic scans, the authors began to find tell-tale signatures of both stagnant and rapid ice movement within the valleys, complicating the picture of how these rapidly forming channels might affect future ice sheet behaviour.

What is certain is that the surprisingly fast rate at which these tunnels form means that scientists need to start considering their effects in models of how today's ice sheets will evolve in the coming decades to centuries.

There are no modern analogues for this rapid process, but these ancient valleys, now buried hundreds of metres beneath the muds of the North Sea seafloor, record a mechanism for how ice sheets respond to extreme warmth that is missing from present-day ice sheet models. Such models do not currently resolve fine-scale water drainage processes, despite them appearing to be an important control on future ice loss rates and ultimately sea level rise.

James Kirkham continues: "The pace at which these giant channels can form means that they are an important, yet currently ignored, mechanism that may potentially help to stabilise ice sheets in a warming world. As climate change continues to drive the retreat of the modern-day Greenland and Antarctic ice sheets at ever increasing rates, our results call for renewed investigation of how tunnel valleys may help to stabilise contemporary ice losses, and therefore sea level rise, if they switch on beneath the Earth's ice sheets in the future."

Dr Kelly Hogan, co-author and a geophysicist at BAS, says:

"We have been observing these huge meltwater channels from areas covered by ice sheets in the past for more than a century but we did not really understand how they formed. Our results show, for the first time, that the most important mechanism is probably summer melting at the ice surface that makes its way to the bed through cracks or chimneys-like conduits and then flows under the pressure of the ice sheet to cut the channels. Surface melting is already hugely important for the Greenland Ice Sheet today, and this process of water transport through the system will only increase as our climate warms. The crucial question now is will this "extra" meltwater flow in channels cause our ice sheets to flow more quickly, or more slowly, into the sea."

Read more at Science Daily

On-site reactors could affordably turn CO2 into valuable chemicals

New technology developed at the University of Waterloo could make a significant difference in the fight against climate change by affordably converting harmful carbon dioxide (CO2) into fuels and other valuable chemicals on an industrial scale.

Outlined in a study published today in the journal Nature Energy, the system yields 10 times more carbon monoxide (CO) -- which can be used to make ethanol, methane and other desirable substances -- than existing, small-scale technologies now limited to testing in laboratories.

Its individual cells can also be stacked to form reactors of any size, making the technology a customizable, economically viable solution that could be installed right on site, for example, at factories with CO2 emissions.

"This is a critical bridge to connect CO2 lab technology to industrial applications," said Dr. Zhongwei Chen, a chemical engineering professor at Waterloo. "Without it, it is very difficult for materials-based technologies to be used commercially because they are just too expensive."

The system features devices known as electrolyzers that convert CO2, a major greenhouse gas produced by burning fossil fuels, into CO using water and electricity.

Electrolyzers developed by the researchers have new electrodes and a new kind of liquid-based electrolyte, which is saturated with CO2 and flowed through the devices for conversion into CO via an electrochemical reaction.
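
For context, the chemistry of a CO2-to-CO electrolyzer is commonly summarized by the following half-reactions (a general textbook description for alkaline or neutral media, not device-specific details from the Waterloo study):

```latex
\begin{aligned}
\text{cathode (CO}_2\text{ reduction):} \quad & \mathrm{CO_2 + H_2O + 2e^- \longrightarrow CO + 2\,OH^-} \\
\text{anode (oxygen evolution):} \quad & \mathrm{2\,OH^- \longrightarrow \tfrac{1}{2}\,O_2 + H_2O + 2e^-} \\
\text{net:} \quad & \mathrm{CO_2 \longrightarrow CO + \tfrac{1}{2}\,O_2}
\end{aligned}
```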

Their electrolyzers are essentially 10-centimetre by 10-centimetre cells, many times larger than existing devices, that can be stacked and configured in reactors of any size.

"This is a completely new model for a CO2 reactor," said Chen, the Canada Research Chair in Advanced Materials for Clean Energy. "It makes the whole process economically viable for industrialization and can be customized to meet specific requirements."

The researchers envision on-site reactors at coal-fired power plants and factories, perhaps the size of a house or more, that would be directly fed CO2 emissions, further reducing costs by eliminating the need to capture and collect CO2 first.

They are also developing plans to power the reactors with on-site renewable energy sources such as solar panels, contributing to the environmental benefits.

"I'm excited by the potential of this technology," Chen said. "If we really want to make a difference by reducing emissions, we have to concentrate on reducing costs to make it affordable."

Read more at Science Daily

Triassic specimen found to be early relative of pterosaurs a century after its discovery

A new study of a tiny Triassic fossil reptile first discovered over 100 years ago in the north east of Scotland has revealed it to be a close relative of the species that would become pterosaurs -- iconic flying reptiles of the age of the dinosaurs.

The research, published in Nature, was carried out by a team of scientists led by Dr Davide Foffa, Research Associate at National Museums Scotland, and now a Research Fellow at the University of Birmingham. Working together with colleagues at Virginia Tech, the team used Computed Tomography (CT) to provide the first accurate whole skeleton reconstruction of Scleromochlus taylori.

The results reveal new anatomical details that conclusively identify it as a close pterosaur relative. It falls within a group known as Pterosauromorpha, comprising an extinct group of reptiles called lagerpetids together with pterosaurs.

Living approximately 240-210 million years ago, lagerpetids were a group of relatively small (cat- or small dog-sized), active reptiles. Scleromochlus was smaller still, at under 20 centimetres in length. The results support the hypothesis that the first flying reptiles evolved from small, likely bipedal ancestors.

The finding settles a century-long debate. There had previously been disagreement as to whether the reptile, Scleromochlus, represented an evolutionary step in the direction of pterosaurs, dinosaurs or else some other reptilian offshoot.

The fossil of Scleromochlus is poorly preserved in a block of sandstone, which has made it difficult to study in sufficient detail to properly identify its anatomical features. The fossil is one of a group known as the Elgin Reptiles, comprising Triassic and Permian specimens found in the sandstone of the Morayshire region of north east Scotland around the town of Elgin.

The specimens are held mostly in the collections of National Museums Scotland, Elgin Museum and the Natural History Museum. The latter holds Scleromochlus, which was originally found at Lossiemouth.

Dr Foffa said: "It's exciting to be able to resolve a debate that's been going on for over a century, but it is far more amazing to be able to see and understand an animal which lived 230 million years ago and its relationship with the first animals ever to have flown. This is another discovery which highlights Scotland's important place in the global fossil record, and also the importance of museum collections that preserve such specimens, allowing us to use new techniques and technologies to continue to learn from them long after their discovery."

Professor Paul Barrett at the Natural History Museum said: "The Elgin reptiles aren't preserved as the pristine, complete skeletons that we often see in museum displays. They're mainly represented by natural moulds of their bone in sandstone and -- until fairly recently -- the only way to study them was to use wax or latex to fill these moulds and make casts of the bones that once occupied them. However, the use of CT scanning has revolutionized the study of these difficult specimens and has enabled us to produce far more detailed, accurate and useful reconstructions of these animals from our deep past."

Professor Sterling Nesbitt at Virginia Tech said: "Pterosaurs were the first vertebrates to evolve powered flight and for nearly two centuries, we did not know their closest relatives. Now we can start filling in their evolutionary history with the discovery of tiny close relatives that enhance our knowledge about how they lived and where they came from."

Read more at Science Daily

Oct 5, 2022

Astronomers find a 'cataclysmic' pair of stars with the shortest orbit yet

Nearly half the stars in our galaxy are solitary like the sun. The other half comprises stars that circle other stars, in pairs and multiples, with orbits so tight that some stellar systems could fit between Earth and the moon.

Astronomers at MIT and elsewhere have discovered a stellar binary, or pair of stars, with an extremely short orbit, appearing to circle each other every 51 minutes. The system seems to be one of a rare class of binaries known as a "cataclysmic variable," in which a star similar to our sun orbits tightly around a white dwarf -- a hot, dense core of a burned-out star.

A cataclysmic variable occurs when the two stars draw close, over billions of years, causing the white dwarf to start accreting, or eating material away from its partner star. This process can give off enormous, variable flashes of light that, centuries ago, astronomers assumed to be a result of some unknown cataclysm.

The newly discovered system, which the team has tagged ZTF J1813+4251, is a cataclysmic variable with the shortest orbit detected to date. Unlike other such systems observed in the past, the astronomers caught this cataclysmic variable as the stars eclipsed each other multiple times, allowing the team to precisely measure properties of each star.

With these measurements, the researchers ran simulations of what the system is likely doing today and how it should evolve over the next hundreds of millions of years. They conclude that the stars are currently in transition, and that the sun-like star has been circling and "donating" much of its hydrogen atmosphere to the voracious white dwarf. The sun-like star will eventually be stripped down to a mostly dense, helium-rich core. In another 70 million years, the stars will migrate even closer together, with an ultrashort orbit reaching just 18 minutes, before they begin to expand and drift apart.

Decades ago, researchers at MIT and elsewhere predicted that such cataclysmic variables should transition to ultrashort orbits. This is the first time such a transitioning system has been observed directly.

"This is a rare case where we caught one of these systems in the act of switching from hydrogen to helium accretion," says Kevin Burdge, a Pappalardo Fellow in MIT's Department of Physics. "People predicted these objects should transition to ultrashort orbits, and it was debated for a long time whether they could get short enough to emit detectable gravitational waves. This discovery puts that to rest."

Burdge and colleagues report their discovery in Nature. The study's co-authors include collaborators from multiple institutions, including the Harvard and Smithsonian Center for Astrophysics.

Sky search


The astronomers discovered the new system within a vast catalog of stars, observed by the Zwicky Transient Facility (ZTF), a survey that uses a camera attached to a telescope at the Palomar Observatory in California to take high-resolution pictures of wide swaths of the sky.

The survey has taken more than 1,000 images of each of the more than 1 billion stars in the sky, recording each star's changing brightness over days, months, and years.

Burdge combed through the catalog, looking for signals of systems with ultrashort orbits, the dynamics of which can be so extreme that they should give off dramatic bursts of light and emit gravitational waves.

"Gravitational waves are allowing us to study the universe in a totally new way," says Burdge, who is searching the sky for new gravitational-wave sources.

For this new study, Burdge looked through the ZTF data for stars that appeared to flash repeatedly, with a period of less than an hour -- a frequency that typically signals a system of at least two closely orbiting objects, with one crossing the other and briefly blocking its light.

He used an algorithm to weed through over 1 billion stars, each of which was recorded in more than 1,000 images. The algorithm sifted out about 1 million stars that appeared to flash every hour or so. Among these, Burdge then looked by eye for signals of particular interest. His search zeroed in on ZTF J1813+4251 -- a system that resides about 3,000 light years from Earth, in the Hercules constellation.
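
The idea behind such a search can be illustrated with a brute-force phase-folding sketch. The code below is not the ZTF pipeline; the cadence, noise level, eclipse shape, and search grid are all invented for the example. It folds a synthetic light curve at many trial periods and keeps the period that minimizes the scatter within phase bins, which for an eclipsing system like ZTF J1813+4251 recovers the roughly 51-minute orbit.

```python
# Illustrative phase-folding period search on a synthetic eclipsing light curve.
# Not the actual ZTF search pipeline.
import numpy as np

rng = np.random.default_rng(1)
true_period = 51.0 / (24 * 60)                       # 51 minutes, expressed in days
t = np.sort(rng.uniform(0, 5, 600))                  # toy cadence: 600 epochs over 5 days
phase = (t / true_period) % 1.0
flux = 1.0 - 0.4 * (np.abs(phase - 0.5) < 0.05)      # brief, deep eclipse once per orbit
flux = flux + rng.normal(0, 0.02, t.size)            # photometric noise

def folded_dispersion(period, n_bins=40):
    """Mean within-bin variance of the light curve folded at a trial period."""
    folded = (t / period) % 1.0
    bins = np.floor(folded * n_bins).astype(int)
    return np.mean([flux[bins == b].var() for b in range(n_bins) if (bins == b).any()])

trial_periods = np.linspace(45, 60, 8000) / (24 * 60)        # search 45-60 minutes
dispersions = np.array([folded_dispersion(p) for p in trial_periods])
best = trial_periods[np.argmin(dispersions)]
print(f"best trial period: {best * 24 * 60:.2f} minutes")    # recovers ~51 minutes
```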

"This thing popped up, where I saw an eclipse happening every 51 minutes, and I said, ok, this is definitely a binary," Burdge recalls.

A dense core

He and his colleagues further focused on the system using the W.M. Keck Observatory in Hawaii and the Gran Telescopio Canarias in Spain. They found that the system was exceptionally "clean," meaning they could clearly see its light change with each eclipse. With such clarity, they were able to precisely measure each object's mass and radius, as well as their orbital period.

They found that the first object was likely a white dwarf, at 1/100th the size of the sun and about half its mass. The second object was a sun-like star near the end of its life, at a tenth the size and mass of the sun (about the size of Jupiter). The stars also appeared to orbit each other every 51 minutes.

Yet, something didn't quite add up.

"This one star looked like the sun, but the sun can't fit into an orbit shorter than eight hours -- what's up here?" Burdge says.

He soon hit upon an explanation: Nearly 30 years ago, researchers including MIT emeritus professor Saul Rappaport had predicted that ultrashort-orbit systems should exist as cataclysmic variables. As the white dwarf orbits the sun-like star and eats away its light hydrogen, the sun-like star should burn out, leaving a core of helium -- an element that is denser than hydrogen and heavy enough to keep the dead star in a tight, ultrashort orbit.
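
A back-of-the-envelope calculation makes the "eight hours" figure concrete. It uses the standard period-mean density relation for a star that just fills its Roche lobe (a textbook approximation, not the authors' modeling), and the helium-core density below is an illustrative round number.

```python
# Period-density relation for a Roche-lobe-filling donor, using Paczynski's
# approximation R_L ~ 0.462 a (M_donor / M_total)^(1/3) and Kepler's third law:
# mean density ~ 3*pi / (0.462^3 * G * P^2), i.e. P scales as 1/sqrt(density).
import math

G = 6.674e-8  # gravitational constant, cgs units

def roche_filling_period_hours(mean_density_g_cm3):
    """Orbital period at which a donor of the given mean density fills its Roche lobe."""
    period_s = math.sqrt(3 * math.pi / (0.462**3 * G * mean_density_g_cm3))
    return period_s / 3600.0

print(f"sun-like donor (~1.4 g/cm^3):         {roche_filling_period_hours(1.4):.1f} hours")
print(f"helium-rich core (~1000 g/cm^3, toy): {roche_filling_period_hours(1000.0) * 60:.0f} minutes")
```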

Burdge realized that ZTF J1813+4251 was likely a cataclysmic variable, in the act of transitioning from a hydrogen- to helium-rich body. The discovery both confirms the predictions made by Rappaport and others, and also stands as the shortest orbit cataclysmic variable detected to date.

Read more at Science Daily

Multiple health benefits of B-type procyanidin-rich foods like chocolate and apples consumed in the right amounts

B-type procyanidins, made of catechin oligomers, are a class of polyphenols found abundantly in foods like cocoa, apples, grape seeds, and red wine. Several studies have established the benefits of these micronutrients in reducing the risk of cardiovascular diseases and strokes. B-type procyanidins are also successful in controlling hypertension, dyslipidemia, and glucose intolerance. Studies attest to the physiological benefits of their intake on the central nervous system (CNS), namely an improvement in cognitive functions. These physiological changes follow a pattern of hormesis -- a phenomenon in which peak benefits of a substance are achieved at mid-range doses, becoming progressively lesser at lower and higher doses.

The dose-response relationship of most bioactive compounds follows a monotonic pattern, in which a higher dose shows a greater response. However, in some exceptional cases, a U-shaped dose-response curve is seen. This U-shaped curve signifies hormesis -- an adaptive response, in which a low dose of usually a harmful compound induces resistance in the body to its higher doses. This means that exposure to low levels of a harmful trigger can induce the activation of stress-resistant pathways, leading to greater repair and regeneration capabilities. In case of B-type procyanidins, several in vitro studies support their hormetic effects, but these results have not been demonstrated in vivo.
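
The contrast between the two dose-response shapes can be written down directly. The snippet below is a generic illustration with arbitrary units and parameters, not the study's model: a monotonic Hill-type response keeps rising toward a plateau as the dose increases, while a hormetic response peaks at a mid-range dose and falls away on both sides.

```python
# Generic illustration (arbitrary units; not the study's model): monotonic
# Hill-type dose-response versus a hormetic, inverted-U response in log-dose.
import numpy as np

doses = np.logspace(-2, 2, 9)            # doses spanning four orders of magnitude

def monotonic(dose, ec50=1.0, n=1.0):
    """Classical Hill curve: response rises with dose toward a plateau."""
    return dose**n / (ec50**n + dose**n)

def hormetic(dose, optimum=1.0, width=0.7):
    """Inverted U in log-dose: maximal response near the optimal dose."""
    return np.exp(-((np.log10(dose) - np.log10(optimum)) ** 2) / (2 * width**2))

for d in doses:
    print(f"dose {d:7.2f}: monotonic {monotonic(d):.2f}  hormetic {hormetic(d):.2f}")
```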

To address this knowledge gap, researchers from Shibaura Institute of Technology (SIT), Japan, led by Professor Naomi Osakabe from the Department of Bioscience and Engineering, reviewed the data from intervention trials supporting hormetic responses of B-type procyanidin ingestion. The team, comprising Taiki Fushimi and Yasuyuki Fujii from the Graduate School of Engineering and Science (SIT), also conducted in vivo experiments to understand possible connections between B-type procyanidin hormetic responses and CNS neurotransmitter receptor activation. Their article was made available online on June 15, 2022, and was published in volume 9 of Frontiers in Nutrition on September 7, 2022.

The researchers noted that a single oral administration of an optimal dose of cocoa flavanol temporarily increased the blood pressure and heart rate in rats. But the hemodynamics did not change when the dose was increased or decreased. Administration of B-type procyanidin monomer and various oligomers produced similar results. According to Professor Osakabe, "These results are consistent with those of intervention studies following a single intake of food rich in B-type procyanidin, and support the U-shaped dose-response theory, or hormesis, of polyphenols."

To observe whether the sympathetic nervous system (SNS) is involved in the hemodynamic changes induced by B-type procyanidins, the team administered adrenaline blockers to test rats. This successfully suppressed the temporary increase in heart rate induced by the optimal dose of cocoa flavanol. A different kind of blocker, an α1 blocker, inhibited the transient rise in blood pressure. This suggested that the SNS, whose signaling these blockers interrupt, is responsible for the hemodynamic and metabolic changes induced by a single oral dose of B-type procyanidin.

The researchers next ascertained why optimal doses, and not high doses, are responsible for the thermogenic and metabolic responses. They co-administered a high dose of cocoa flavanol and yohimbine (an α2 blocker) and noted a temporary but distinct increase in blood pressure in test animals. Similar observations were made with the use of B-type procyanidin oligomer and yohimbine. Professor Osakabe surmises, "Since α2 blockers are associated with the down-regulation of the SNS, the reduced metabolic and thermogenic outputs at a high dose of B-type procyanidins seen in our study may have induced α2 auto-receptor activation. Thus, SNS deactivation may be induced by a high dose of B-type procyanidins."

Previous studies have proven the role of the gut-brain axis in controlling hormetic stress-related responses. The activation of the hypothalamus-pituitary-adrenal (HPA) axis by optimal stress has a strong influence on memory, cognition, and stress tolerance. This article highlights how HPA activation occurs after a single dose of B-type procyanidin, suggesting that stimulation with an oral dose of B-type procyanidin might be a stressor for mammals and cause SNS activation.

Read more at Science Daily

Petting dogs engages the social brain, according to neuroimaging

Researchers led by Rahel Marti at the University of Basel in Switzerland report that viewing, feeling, and touching real dogs leads to increasingly higher levels of activity in the prefrontal cortex of the brain. Published in PLOS ONE on October 5, the study shows that this effect persists after the dogs are no longer present, but is reduced when real dogs are replaced with stuffed animals. The findings have implications for animal-assisted clinical therapy.

Because interacting with animals, particularly dogs, is known to help people cope with stress and depression, researchers think that a better understanding of the associated brain activity could help clinicians design improved systems for animal-assisted therapy. The prefrontal cortex might be particularly relevant because it helps regulate and process social and emotional interactions.

In the study, activity in the prefrontal cortex of the brain was non-invasively measured with infrared neuroimaging technology as 19 men and women each viewed a dog, reclined with the same dog against their legs, or petted the dog. Each of these conditions was also repeated with Leo, a furry stuffed lion filled with a water bottle to match the temperature and weight of the dogs.

Results showed that prefrontal brain activity was greater when participants interacted with the real dogs, and that this difference was largest for petting, which was the most interactive condition. Another key difference was that prefrontal brain activity increased each time people interacted with the real dog. This was not observed with successive interactions with the stuffed lion, indicating that the response might be related to familiarity or social bonding.

Future studies will be needed to examine the issue of familiarity in detail and whether petting animals can trigger a similar boost of prefrontal brain activity in patients with socioemotional deficits.

The authors add: "The present study demonstrates that prefrontal brain activity in healthy subjects increased with a rise in interactional closeness with a dog or a plush animal, but especially in contact with the dog the activation is stronger. This indicates that interactions with a dog might activate more attentional processes and elicit stronger emotional arousal than comparable nonliving stimuli."

Read more at Science Daily

Sound reveals giant blue whales dance with the wind to find food

A study by MBARI researchers and their collaborators published today in Ecology Letters sheds new light on the movements of mysterious, endangered blue whales. The research team used a directional hydrophone on MBARI's underwater observatory, integrated with other advanced technologies, to listen for the booming vocalizations of blue whales. They used these sounds to track the movements of blue whales and learned that these ocean giants respond to changes in the wind.

Along California's Central Coast, spring and summer bring coastal upwelling. From March through July, seasonal winds push the top layer of water out to sea, allowing the cold water below to rise to the surface. The cooler, nutrient-rich water fuels blooms of tiny phytoplankton, jumpstarting the food web in Monterey Bay, from small shrimp-like krill all the way to giant whales. When the winds create an upwelling event, blue whales seek out the plumes of cooler water, where krill are most abundant. When upwelling stops, the whales move offshore into habitat that is transected by shipping lanes.

"This research and its underlying technologies are opening new windows into the complex, and beautiful, ecology of these endangered whales," said John Ryan, a biological oceanographer at MBARI and lead author of this study. "These findings demonstrate a new resource for managers seeking ways to better protect blue whales and other species."

The directional hydrophone is a specialized underwater microphone that records sounds and identifies the direction from which they originate. To use this technology to study blue whale movements, researchers needed to confirm that the hydrophone reliably tracked whales. This meant matching the acoustic bearings to a calling whale that was being tracked by GPS. With confidence in the acoustic methods established, the research team examined two years of acoustic tracking of the regional blue whale population.
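
The validation step can be pictured with a simple geometric check: compute the great-circle bearing from the hydrophone to a GPS fix on a tagged whale and compare it with the bearing the directional hydrophone reports for the same call. The sketch below uses the standard initial-bearing formula; the coordinates and the acoustic bearing are hypothetical, and this is not MBARI's actual processing chain.

```python
# Compare an acoustic bearing against the great-circle bearing to a GPS fix.
# Hypothetical numbers; illustrative sketch only.
import math

def initial_bearing(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees from north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360.0

hydrophone = (36.71, -122.19)     # approximate MARS observatory site in Monterey Bay
whale_gps = (36.60, -122.40)      # hypothetical GPS fix from a tagged whale
acoustic_bearing = 240.0          # hypothetical bearing reported by the hydrophone, degrees

expected = initial_bearing(*hydrophone, *whale_gps)
mismatch = (acoustic_bearing - expected + 180.0) % 360.0 - 180.0   # wrap to [-180, 180)
print(f"GPS-derived bearing {expected:.1f} deg, acoustic {acoustic_bearing:.1f} deg, "
      f"mismatch {mismatch:.1f} deg")
```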

This study built upon previous research led by MBARI Senior Scientist Kelly Benoit-Bird, which revealed that swarms of forage species -- anchovies and krill -- reacted to coastal upwelling. This time, researchers combined satellite and mooring data of upwelling conditions and echosounder data on krill aggregations with the acoustic tracks of foraging blue whales logged by the directional hydrophone.

"Previous work by the MBARI team found that when coastal upwelling was strongest, anchovies and krill formed dense swarms within upwelling plumes. Now, we've learned that blue whales track these dynamic plumes, where abundant food resources are available," explained Ryan.

Blue whales recognize when the wind is changing their habitat and identify places where upwelling aggregates their essential food -- krill. For a massive animal weighing up to 150 tonnes (165 tons), finding these dense aggregations is a matter of survival.

While scientists have long recognized that blue whales seasonally occupy Monterey Bay during the upwelling season, this research has revealed that the whales closely track the upwelling process on a very fine scale of both space (kilometers) and time (days to weeks).

"Tracking many individual wild animals simultaneously is challenging in any ecosystem. This is especially difficult in the open ocean, which is often opaque to us as human observers," said William Oestreich, previously a graduate student at Stanford University's Hopkins Marine Station and now a postdoctoral fellow at MBARI. "Integration of technologies to measure these whales' sounds enabled this important discovery about how groups of predators find food in a dynamic ocean. We're excited about the future discoveries we can make by eavesdropping on blue whales and other noisy ocean animals."

Background

Blue whales (Balaenoptera musculus) are the largest animals on Earth, but despite their large size, scientists still have many unanswered questions about their biology and ecology. These gentle giants seasonally gather in the Monterey Bay region to feed on small shrimp-like crustaceans called krill.

Blue whales are elusive animals. They can travel large distances underwater very quickly, making them challenging to track. MBARI researchers and collaborators employed a novel technique for tracking blue whales -- sound.

MBARI's MARS (Monterey Accelerated Research System) observatory offers a platform for studying the ocean in new ways. Funded by the National Science Foundation, the cabled observatory provides continuous power and data connectivity to support a variety of instruments for scientific experiments.

In 2015, MBARI researchers installed a hydrophone, or underwater microphone, on the observatory. The trove of acoustic data from the hydrophone has provided important insights into the ocean soundscape, from the migratory and feeding behaviors of blue whales to the impact of noise from human activities.

In 2019, MBARI and the Naval Postgraduate School installed a second hydrophone on the observatory. This directional hydrophone identifies the direction from which a sound originates, revealing spatial patterns in the underwater soundscape. By tracking the blue whales' B call -- the most powerful and prevalent vocalization among the regional blue whale population -- researchers could follow the movements of individual whales as they foraged within the region.

Researchers compared the directional hydrophone's recordings to data logged by tags that scientists from Stanford University had previously deployed on blue whales. Validating this new acoustic tracking method opens new opportunities for simultaneously logging the movements of multiple whales. It may also enable animal-borne tag research by helping researchers find whales to tag. "The integrated suite of technologies demonstrated in this paper represents a transformative tool kit for interdisciplinary research and mesoscale ecosystem monitoring that can be deployed at scale throughout protected marine habitats. This is a game changer and brings both cetacean biology and biological oceanography to the next level," said Jeremy Goldbogen, an associate professor at Stanford University's Hopkins Marine Station and a coauthor of the study.

This new methodology has implications not only for understanding how whales interact with their environment and one another but also for advancing management and conservation.

Despite protections, blue whales remain endangered, primarily from the risk of collisions with ships. This study showed that blue whales in Monterey Bay National Marine Sanctuary regularly occupy habitat transected by shipping lanes. Acoustic tracking of whales may provide real-time information for resource managers to mitigate risk, for example, through vessel speed reduction or rerouting during critical periods. "These kinds of integrated tools could allow us to spatially and temporally monitor, and eventually even predict, ephemeral biological hotspots. This promises to be a watershed advancement in the adaptive management of risks for protected and endangered species," said Brandon Southall, president and senior scientist for Southall Environmental Associates Inc. and a coauthor of the research study.

Read more at Science Daily

Oct 4, 2022

Collision may have formed the Moon in mere hours, simulations reveal

Billions of years ago, a version of our Earth that looked very different from the one we live on today was hit by an object about the size of Mars, called Theia -- and out of that collision the Moon was formed. How exactly that formation occurred is a scientific puzzle researchers have studied for decades, without a conclusive answer.

Most theories claim the Moon formed out of the debris of this collision, coalescing in orbit over months or years. A new simulation puts forth a different theory -- the Moon may have formed immediately, in a matter of hours, when material from the Earth and Theia was launched directly into orbit after the impact.

"This opens up a whole new range of possible starting places for the Moon's evolution," said Jacob Kegerreis, a postdoctoral researcher at NASA's Ames Research Center in California's Silicon Valley, and lead author of the paper on these results published in The Astrophysical Journal Letters. "We went into this project not knowing exactly what the outcomes of these high-resolution simulations would be. So, on top of the big eye-opener that standard resolutions can give you misleading answers, it was extra exciting that the new results could include a tantalisingly Moon-like satellite in orbit."

The simulations used in this research are some of the most detailed of their kind, operating at the highest resolution of any simulation run to study the Moon's origins or other giant impacts. This extra computational power showed that lower-resolution simulations can miss important aspects of these kinds of collisions, revealing behaviors that previous studies simply could not capture.

A Puzzle of Planetary History

Understanding the Moon's origins requires using what we know about the Moon -- our knowledge of its mass, orbit, and the precise analysis of lunar rock samples -- and coming up with scenarios that could lead to what we see today.

Previously prevailing theories could explain some aspects of the Moon's properties quite well, such as its mass and orbit, but with some major caveats. One outstanding mystery has been why the composition of the Moon is so similar to Earth's. Scientists can study the composition of a material based on its isotopic signature, a chemical clue to how and where an object was created. The lunar samples scientists have been able to study in labs show very similar isotopic signatures to rocks from Earth, unlike rocks from Mars or elsewhere in the solar system. This makes it likely that much of the material that makes up the Moon originally came from Earth.

In previous scenarios where material from Theia sprayed out into orbit and mixed with only a little material from Earth, it's less likely we'd see such strong similarities -- unless Theia was also isotopically similar to Earth, an unlikely coincidence. In the new theory, more Earth material is used to create the Moon, particularly its outer layers, which could help to explain this similarity in composition.

There have been other theories proposed to explain these similarities in composition, such as the synestia model -- where the Moon is formed inside a swirl of vaporized rock from the collision -- but these arguably struggle to explain the Moon's current orbit.

This faster, single-stage formation theory offers a cleaner and more elegant explanation for both these outstanding issues. It could also give new ways to find answers for other unsolved mysteries. This scenario can put the Moon into a wide orbit with an interior that isn't fully molten, potentially explaining properties like the Moon's tilted orbit and thin crust -- making it one of the most enticing explanations for the Moon's origins yet.

Getting closer to confirming which of these theories is correct will require the analysis of lunar samples brought back to Earth by NASA's upcoming Artemis missions. As scientists gain access to samples from other parts of the Moon and from deeper beneath its surface, they will be able to compare how real-world data matches up with these simulated scenarios, and what that indicates about how the Moon has evolved over its billions of years of history.

A Shared Origin

Beyond simply learning more about the Moon, these studies can bring us closer to understanding how our own Earth became the life-harboring world it is today.

"The more we learn about how the Moon came to be, the more we discover about the evolution of our own Earth," said Vincent Eke, a researcher at Durham University and a co-author on the paper. "Their histories are intertwined -- and could be echoed in the stories of other planets changed by similar or very different collisions."

The cosmos is filled with collisions -- impacts are an essential part of how planetary bodies form and evolve. On Earth, we know that the impact with Theia, along with other events throughout its history, helped the planet gather the materials necessary for life. The better scientists can simulate and analyze what's at play in these collisions, the better prepared we are to understand how a planet could evolve to be habitable like our own Earth.

Read more at Science Daily

The last 12,000 years show a more complex climate history than previously thought

We rely on climate models to predict the future, but models cannot be fully tested because climate observations rarely extend back more than 150 years. Understanding Earth's past climate across a longer period gives us an invaluable opportunity to test climate models on longer timescales and reduce uncertainties in climate predictions. In this context, changes in the average surface temperature of Earth during the current interglacial epoch, the Holocene (approximately the past 12,000 years), have been thoroughly debated over the past decades. Reconstructions of past temperature seem to indicate that global mean temperature peaked around 6,000 years ago and then cooled until the onset of the current climate crisis during the industrial revolution.

Climate model simulations, on the other hand, suggest continuous warming since the start of the Holocene. In 2014, researchers named this major mismatch between models and past climate observations the "Holocene Temperature Conundrum."

In this new study, scientists used the largest available database of past temperature reconstructions extending back 12,000 years to carefully investigate the geographic pattern of temperature change during the Holocene. Olivier Cartapanis and colleagues find that, contrary to what was previously thought, there is no globally synchronous warm period during the Holocene. Instead, the warmest temperatures occur at different times not only in different regions but also in the ocean and on land. This calls into question how meaningful comparisons of global mean temperature between reconstructions and models actually are.

According to lead author Olivier Cartapanis, "the results challenge the paradigm of a Holocene Thermal Maximum occurring at the same time worldwide." While the warmest temperatures were reached between 4,000 and 8,000 years ago in western Europe and North America, surface ocean temperature has cooled since about 10,000 years ago at mid to high latitudes and has remained stable in the tropics. The regional variability in the timing of maximum temperature suggests that high-latitude insolation and ice extent played major roles in driving climate change throughout the Holocene.

Read more at Science Daily

Dinosaur-killing asteroid triggered global tsunami that scoured seafloor thousands of miles from impact site

The miles-wide asteroid that struck Earth 66 million years ago wiped out nearly all the dinosaurs and roughly three-quarters of the planet's plant and animal species.

It also triggered a monstrous tsunami with mile-high waves that scoured the ocean floor thousands of miles from the impact site on Mexico's Yucatan Peninsula, according to a new University of Michigan-led study.

The study, scheduled for online publication Oct. 4 in the journal AGU Advances, presents the first global simulation of the Chicxulub impact tsunami to be published in a peer-reviewed scientific journal. In addition, U-M researchers reviewed the geological record at more than 100 sites worldwide and found evidence that supports their models' predictions about the tsunami's path and power.

"This tsunami was strong enough to disturb and erode sediments in ocean basins halfway around the globe, leaving either a gap in the sedimentary records or a jumble of older sediments," said lead author Molly Range, who conducted the modeling study for a master's thesis under U-M physical oceanographer and study co-author Brian Arbic and U-M paleoceanographer and study co-author Ted Moore.

The review of the geological record focused on "boundary sections," marine sediments deposited just before or just after the asteroid impact and the subsequent K-Pg mass extinction, which closed the Cretaceous Period.

"The distribution of the erosion and hiatuses that we observed in the uppermost Cretaceous marine sediments are consistent with our model results, which gives us more confidence in the model predictions," said Range, who started the project as an undergraduate in Arbic's lab in the Department of Earth and Environmental Sciences.

The study authors calculated that the initial energy in the impact tsunami was up to 30,000 times larger than the energy in the December 2004 Indian Ocean earthquake tsunami, which killed more than 230,000 people and is one of the largest tsunamis in the modern record.

The team's simulations show that the impact tsunami radiated mainly to the east and northeast into the North Atlantic Ocean, and to the southwest through the Central American Seaway (which used to separate North America and South America) into the South Pacific Ocean.

In those basins and in some adjacent areas, underwater current speeds likely exceeded 20 centimeters per second (0.4 mph), a velocity that is strong enough to erode fine-grained sediments on the seafloor.

In contrast, the South Atlantic, the North Pacific, the Indian Ocean and the region that is today the Mediterranean were largely shielded from the strongest effects of the tsunami, according to the team's simulation. In those places, the modeled current speeds were likely less than the 20 cm/sec threshold.

For the review of the geological record, U-M's Moore analyzed published records of 165 marine boundary sections and was able to obtain usable information from 120 of them. Most of the sediments came from cores collected during scientific ocean-drilling projects.

The North Atlantic and South Pacific had the fewest sites with complete, uninterrupted K-Pg boundary sediments. In contrast, the largest number of complete K-Pg boundary sections were found in the South Atlantic, the North Pacific, the Indian Ocean and the Mediterranean.

"We found corroboration in the geological record for the predicted areas of maximal impact in the open ocean," said Arbic, professor of earth and environmental sciences who oversaw the project. "The geological evidence definitely strengthens the paper."

Of special significance, according to the authors, are outcrops of the K-Pg boundary on the eastern shores of New Zealand's North and South islands, which are more than 12,000 kilometers (7,500 miles) from the Yucatan impact site.

The heavily disturbed and incomplete New Zealand sediments, called olistostromal deposits, were originally thought to be the result of local tectonic activity. But given the age of the deposits and their location directly in the modeled pathway of the Chicxulub impact tsunami, the U-M-led research team suspects a different origin.

"We feel these deposits are recording the effects of the impact tsunami, and this is perhaps the most telling confirmation of the global significance of this event," Range said.

The modeling portion of the study used a two-stage strategy. First, a large computer program called a hydrocode simulated the chaotic first 10 minutes of the event, which included the impact, crater formation and initiation of the tsunami. That work was conducted by co-author Brandon Johnson of Purdue University.

Based on the findings of previous studies, the researchers modeled an asteroid that was 14 kilometers (8.7 miles) in diameter, moving at 12 kilometers per second (27,000 mph). It struck granitic crust overlain by thick sediments and shallow ocean waters, blasting a roughly 100-kilometer-wide (62-mile-wide) crater and ejecting dense clouds of soot and dust into the atmosphere.
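
For a rough sense of scale, the impactor's kinetic energy can be estimated directly from these modeled parameters. The back-of-envelope sketch below is an illustration, not a figure from the study; the rock-like density is an assumption the article does not state:

    import math

    # Back-of-envelope impact energy for the modeled asteroid (illustrative assumptions).
    diameter_m = 14_000        # 14 km, from the modeling setup
    speed_m_s = 12_000         # 12 km/s, from the modeling setup
    density_kg_m3 = 2_700      # assumed rock-like density; not stated in the article

    radius_m = diameter_m / 2
    mass_kg = density_kg_m3 * (4 / 3) * math.pi * radius_m**3    # roughly 3.9e15 kg
    energy_j = 0.5 * mass_kg * speed_m_s**2                      # roughly 2.8e23 J

    # For scale, one megaton of TNT is about 4.184e15 joules.
    print(f"mass ~ {mass_kg:.2e} kg, kinetic energy ~ {energy_j:.2e} J "
          f"(~{energy_j / 4.184e15:.1e} megatons of TNT)")
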

Two and a half minutes after the asteroid struck, a curtain of ejected material pushed a wall of water outward from the impact site, briefly forming a 4.5-kilometer-high (2.8-mile-high) wave that subsided as the ejecta fell back to Earth.

Ten minutes after the projectile hit the Yucatan, and 220 kilometers (137 miles) from the point of impact, a 1.5-kilometer-high (0.93-mile-high) tsunami wave -- ring-shaped and outward-propagating -- began sweeping across the ocean in all directions, according to the U-M simulation.

At the 10-minute mark, the results of Johnson's iSALE hydrocode simulations were entered into two tsunami-propagation models, MOM6 and MOST, to track the giant waves across the ocean. MOM6 has been used to model tsunamis in the deep ocean, and NOAA uses the MOST model operationally for tsunami forecasts at its Tsunami Warning Centers.

"The big result here is that two global models with differing formulations gave almost identical results, and the geologic data on complete and incomplete sections are consistent with those results," said Moore, professor emeritus of earth and environmental sciences. "The models and the verification data match nicely."

According to the team's simulation:

  • One hour after impact, the tsunami had spread outside the Gulf of Mexico and into the North Atlantic.
  • Four hours after impact, the waves had passed through the Central American Seaway and into the Pacific.
  • Twenty-four hours after impact, the waves had crossed most of the Pacific from the east and most of the Atlantic from the west and entered the Indian Ocean from both sides.
  • By 48 hours after impact, significant tsunami waves had reached most of the world's coastlines.

For the current study, the researchers did not attempt to estimate the extent of coastal flooding caused by the tsunami.

However, their models indicate that open-ocean wave heights in the Gulf of Mexico would have exceeded 100 meters (328 feet), with wave heights of more than 10 meters (32.8 feet) as the tsunami approached North Atlantic coastal regions and parts of South America's Pacific coast.

As the tsunami neared those shorelines and encountered shallow bottom waters, wave heights would have increased dramatically through a process called shoaling. Current speeds would have exceeded the 20 centimeters per second threshold for most coastal areas worldwide.
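
A standard first-order way to approximate shoaling is Green's law, in which wave height grows as water depth drops, scaling with depth to the negative one-quarter power. The sketch below is purely illustrative; the offshore wave height and depths are assumptions, not values from the study:

    def greens_law_height(offshore_height_m, offshore_depth_m, coastal_depth_m):
        """Green's law: wave height scales with water depth to the -1/4 power as a
        tsunami moves into shallower water (energy-flux conservation, no breaking)."""
        return offshore_height_m * (offshore_depth_m / coastal_depth_m) ** 0.25

    # Illustrative numbers: a 10 m open-ocean wave in 4,000 m of water reaching 10 m depth.
    print(round(greens_law_height(10.0, 4000.0, 10.0), 1), "m near the coast")
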

"Depending on the geometries of the coast and the advancing waves, most coastal regions would be inundated and eroded to some extent," according to the study authors. "Any historically documented tsunamis pale in comparison with such global impact."

Read more at Science Daily

Eating late increases hunger, decreases calories burned, and changes fat tissue

Obesity afflicts approximately 42 percent of the U.S. adult population and contributes to the onset of chronic diseases, including diabetes, cancer, and other conditions. While popular healthy diet mantras advise against midnight snacking, few studies have comprehensively investigated the simultaneous effects of late eating on the three main players in body weight regulation and thus obesity risk: regulation of calorie intake, the number of calories you burn, and molecular changes in fat tissue. A new study by investigators from Brigham and Women's Hospital, a founding member of the Mass General Brigham healthcare system, found that when we eat significantly impacts our energy expenditure, appetite, and molecular pathways in adipose tissue. Their results are published in Cell Metabolism.

"We wanted to test the mechanisms that may explain why late eating increases obesity risk," explained senior author Frank A. J. L. Scheer, PhD, Director of the Medical Chronobiology Program in the Brigham's Division of Sleep and Circadian Disorders. "Previous research by us and others had shown that late eating is associated with increased obesity risk, increased body fat, and impaired weight loss success. We wanted to understand why."

"In this study, we asked, 'Does the time that we eat matter when everything else is kept consistent?'" said first author Nina Vujovic, PhD, a researcher in the Medical Chronobiology Program in the Brigham's Division of Sleep and Circadian Disorders. "And we found that eating four hours later makes a significant difference for our hunger levels, the way we burn calories after we eat, and the way we store fat."

Vujovic, Scheer and their team studied 16 patients with a body mass index (BMI) in the overweight or obese range. Each participant completed two laboratory protocols: one with a strictly scheduled early meal schedule, and the other with exactly the same meals, each served about four hours later in the day. In the last two to three weeks before starting each in-laboratory protocol, participants maintained fixed sleep and wake schedules, and in the final three days before entering the laboratory, they strictly followed identical diets and meal schedules at home. In the lab, participants regularly documented their hunger and appetite, provided frequent small blood samples throughout the day, and had their body temperature and energy expenditure measured. To measure how eating time affected molecular pathways involved in adipogenesis, or how the body stores fat, investigators collected adipose tissue biopsies from a subset of participants during laboratory testing in both the early and late eating protocols, enabling comparison of gene expression patterns between the two eating conditions.

Results revealed that eating later had profound effects on hunger and on the appetite-regulating hormones leptin and ghrelin, which influence our drive to eat. Specifically, levels of the hormone leptin, which signals satiety, were decreased across the 24 hours in the late eating condition compared with the early eating condition. When participants ate later, they also burned calories at a slower rate and showed adipose tissue gene expression shifted towards increased adipogenesis and decreased lipolysis, processes that promote fat growth. Notably, these findings reveal converging physiological and molecular mechanisms underlying the correlation between late eating and increased obesity risk.

Vujovic explains that these findings are not only consistent with a large body of research suggesting that eating later may increase one's likelihood of developing obesity, but they also shed new light on how this might occur. By using a randomized crossover study, and tightly controlling for behavioral and environmental factors such as physical activity, posture, sleep, and light exposure, the investigators were able to detect changes in the different control systems involved in energy balance, a marker of how our bodies use the food we consume.

In future studies, Scheer's team aims to recruit more women to increase the generalizability of their findings to a broader population. While this study cohort included only five female participants, the study was set up to control for menstrual phase, reducing confounding but making recruiting women more difficult. Going forward, Scheer and Vujovic are also interested in better understanding the effects of the relationship between meal time and bedtime on energy balance.

"This study shows the impact of late versus early eating. Here, we isolated these effects by controlling for confounding variables like caloric intake, physical activity, sleep, and light exposure, but in real life, many of these factors may themselves be influenced by meal timing," said Scheer. "In larger scale studies, where tight control of all these factors is not feasible, we must at least consider how other behavioral and environmental variables alter these biological pathways underlying obesity risk. "

Read more at Science Daily

Oct 3, 2022

Cosmic ray protons reveal new spectral structures at high energies

Discovered in 1912, cosmic rays have been studied extensively, and our current understanding of them is compiled into what is called the Standard Model. Recently, this understanding has been challenged by the detection of unexpected spectral structures in the cosmic ray proton energy spectrum. Now, scientists have taken this further with a high-statistics, low-uncertainty measurement of these protons over a broader energy range using the CALorimetric Electron Telescope, confirming the presence of such structures.

Cosmic rays consist of high-energy protons and atomic nuclei that originate from stars (both within our galaxy and in other galaxies) and are accelerated by supernovae and other high-energy astrophysical objects. Our current understanding of the Galactic cosmic ray energy spectrum suggests that it follows a power law, with the proton flux falling off steadily as energy increases at a rate set by the spectral index. But recent observations made using magnetic spectrometers at low energies and calorimeters at high energies have hinted at a deviation from this single power law, with the spectrum becoming harder above a few hundred GeV and up to about 10 TeV. Following this "spectral hardening," characterized by a smaller absolute value of the spectral index, a "spectral softening" has been detected above 10 TeV using the CALorimetric Electron Telescope (CALET), a space telescope installed aboard the International Space Station. However, measurements with high statistics and low uncertainty need to be performed over a broad energy range to confirm these spectral structures.

This is exactly what a team of international researchers led by Associate Professor Kazuyoshi Kobayashi from Waseda University in Japan set out to do. "With the data collected by CALET over roughly 6.2 years, we have put forth a detailed spectral structure of the cosmic ray protons. The novelty of our data lies in the high-statistics measurement over a broader energy range of 50 GeV to 60 TeV," elaborates Kobayashi. The findings of their study, which included contributions from Professor Emeritus Shoji Torii of Waseda University (Principal Investigator of the CALET project) and Professor Pier Simone Marrocchesi of the University of Siena in Italy, were published in the journal Physical Review Letters on 1 September 2022.

The new observations confirmed the presence of spectral hardening and softening below and above 10 TeV, respectively, suggesting that the proton energy spectrum is not consistent with a single power law across the entire range. Moreover, the spectral softening starting at around 10 TeV is consistent with a previous measurement reported by the Dark Matter Particle Explorer (DAMPE) space telescope. Interestingly, the transition at the spectral softening was found to be sharper than that at the spectral hardening.
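
The hardening and softening described here can be made concrete with a smoothly broken power-law flux model of the kind routinely fit to such spectra. The sketch below is illustrative only; the normalization, spectral indices, and break energy are placeholders rather than CALET's fitted values:

    import numpy as np

    def broken_power_law(energy_gev, norm, gamma_low, gamma_high, e_break_gev, smoothness=5.0):
        """Smoothly broken power law for the flux dN/dE: spectral index gamma_low below
        the break energy and gamma_high above it. A smaller index above the break means
        the spectrum 'hardens'; a larger index means it 'softens'."""
        return (norm * energy_gev**(-gamma_low)
                * (1.0 + (energy_gev / e_break_gev)**smoothness)**((gamma_low - gamma_high) / smoothness))

    # Placeholder parameters (not CALET's fitted values): a hardening near 500 GeV.
    energies = np.logspace(np.log10(50.0), np.log10(60_000.0), 6)   # 50 GeV to 60 TeV
    print(broken_power_law(energies, norm=1.0, gamma_low=2.8, gamma_high=2.6, e_break_gev=500.0))
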

The variations and the uncertainty in the new CALET data were controlled using Monte Carlo simulations. The statistics were improved by a factor of around 2.2, and the spectral hardening feature was confirmed with a significance of more than 20 sigma.

Talking about the significance of this research, Kobayashi remarks, "This result will significantly contribute to our understanding of cosmic ray acceleration by supernovae and the propagation mechanism of cosmic rays. The next step would be to extend our measurement of the proton spectra to even higher energies with reduced systematic uncertainties. This should be accompanied by a shift in the theoretical understanding to accommodate the new observations."

Read more at Science Daily

Upcycling in the past: Viking beadmakers' secrets revealed

Ribe was an important trading town in the Viking Age. At the beginning of the 8th century, a trading place was established on the north side of the river Ribe, to which traders and craftsmen flocked from far and wide to manufacture and sell goods such as brooches, garment buckles, combs and coloured glass beads.

When glass became a scarce commodity in the Early Medieval period, coloured glass cubes -- so-called tesserae -- were torn from mosaics in abandoned Roman and Byzantine temples, palaces and baths, transported north and traded at emporia towns such as Ribe, where the beadmakers melted them down in large vessels and shaped them into beads.

Until now, archaeologists have assumed that the beadmakers used opaque white tesserae as the raw material for producing white, opaque beads.

Smart and sustainable production

And it is here that a geochemist and an archaeologist from Aarhus University together with a museum curator from Ribe have made a surprising discovery, which they have just published in the scientific journal Archaeological and Anthropological Sciences:

The chemical composition of white Viking beads from one of the earliest workshops showed that the glassmakers had found a more sustainable way to save time and wood for their furnaces: crush gold-gilded, transparent glass cubes, remelt them at low temperature, stir to trap air in the form of bubbles, and finally wrap the glass around an iron mandrel to form beads and voila! -- opaque white beads created in a short time using a minimum of resources.

The valuable ultra-thin sheets of gold stuck to the surface of the gold mosaic stones were of course salvaged by the glassmaker before the glass was remelted, but the new findings show that some gold inevitably ended up in the melting pot. From the tiny drops of gold in the white beads, the many air bubbles (which are what make the beads opaque), and the absence of any chemical colour tracers, the researchers show that the gold mosaic stones were in fact the raw material for the beads.

Such traces of gold were found not only in the white but also in the blue beads from the same workshop. Here the chemistry shows that the glassmaker's recipe consisted of a mixture of the blue and golden mosaic stones. Mixing them was necessary because the Roman blue mosaic stones contained high concentrations of chemical substances which made them opaque -- and therefore ideal for mosaics, but not for blue beads. By thus diluting the chemical substances, the result was the deep blue, transparent glass that we know from Viking Age beads.

Connoisseur craftsmanship

The beadmaker in Ribe could instead have chosen to dilute the glass mixture with old shards from funnel beakers, which were also found in the workshop. But these turned out to be old, contaminated Roman glass that had been remelted over and over again.

"And the glassmakers in Ribe were clearly connoisseurs who preferred the clearest glass they could get their hands on," says Gry Hoffmann Barfod from the Department of Geoscience at Aarhus University. She adds:

"For a geochemist, it has been a privilege to work with the fantastic material, and to discover how relevant the knowledge stored here is for our society today."

Interdisciplinary research

The interdisciplinary study was a collaboration between Gry Barfod, Søren Sindbæk, professor of archeology at the Danish National Research Foundation's Center for Urban Network Development (UrbNet) at Aarhus University, and museum curator Claus Feveile at the Museum of Southwest Jutland specializing in the Viking Age and Ribe's earliest history.

"The most outstanding achievements at the Ribe trading site were not just the products, but also the circular economy and their awareness to preserve limited resources" states professor Søren Sindbæk.

And museum curator Claus Feveile comments:

"These exciting results clearly show the potential of elucidating new facts about the vikings. By combining our high-resolution excavations with such chemical analyses I predict many more revelations in the near future."

Read more at Science Daily

Scientists crack upcycling plastics to reduce greenhouse gas emissions

Scientists from the University of Illinois Urbana-Champaign, University of California, Santa Barbara, and Dow have developed a breakthrough process to transform the most widely produced plastic -- polyethylene (PE) -- into the second-most widely produced plastic, polypropylene (PP), which could reduce greenhouse gas emissions (GHG).

"The world needs more and better options for extracting the energy and molecular value from its waste plastics," said co-lead author Susannah Scott, Distinguished Professor and Mellichamp Chair of Sustainable Catalytic Processing at UC Santa Barbara. Conventional plastic recycling methods result in low-value plastic molecules and, thus, offer little incentive to recycle the mountains of plastic waste that have accumulated over the past several decades. But, Scott added, "turning polyethylene into propylene, which can then be used to make a new polymer, is how we start to build a circular economy for plastics."

"We started by conceptualizing this approach and demonstrated its promise first through theoretical modeling -- now we have proved that it can be done experimentally in a way that is scalable and potentially applicable to current industry demands," said co-lead author Damien Guironnet, a professor of chemical and biomolecular engineering at Illinois, who published the first study outlining the necessary catalytic reactions in 2020.

The new study, published in the Journal of the American Chemical Society, announces a series of coupled catalytic reactions that transform PE -- plastics #2 and #4, which together make up 29% of the world's plastic consumption -- into propylene, the key building block for producing PP, also known as plastic #5, which accounts for close to 25% of the world's plastic consumption.

This study establishes a proof-of-concept for upcycling PE plastic with more than 95% selectivity into propylene. The researchers have built a reactor that creates a continuous flow of propylene that can be converted into PP easily using current technology -- making this discovery scalable and rapidly implementable.

"Our preliminary analysis suggests that if just 20% of the world's PE could be recovered and converted via this route, it could represent a potential savings of GHG emissions comparable to taking 3 million cars off the road," said Garrett Strong, a graduate student associated with the project.

The goal is to cut each very long PE molecule many times to obtain many small pieces -- the propylene molecules. First, a catalyst removes hydrogen from the PE, creating a reactive location on the chain. Next, the chain is split in two at this location by a second catalyst, which caps the ends using ethylene. Finally, a third catalyst moves the reactive site along the PE chain so the process can be repeated. Eventually, all that is left is a large number of propylene molecules.

"Think of cutting a baguette in half, and then cutting precisely-sized pieces off the end of each half -- where the speed at which you cut controls the size of each slice," Guironnet said.

"Now that we have established the proof of concept, we can start to improve the efficiency of the process by designing catalysts that are faster and more productive, making it possible to scale up," Scott said. "Since our end-product is already compatible with current industry separation processes, better catalysts will make it possible to implement this breakthrough rapidly."

The work presented in this publication is highly complementary to a paper published in Science last week. Both groups used virgin plastics and similar chemistries. However, the Science team used a different process in an enclosed batch reactor, which requires much higher pressure -- making it energy intensive -- and more ethylene to be recycled.

"If we are to upcycle a significant fraction of the over 100 million tons of plastic waste we generate each year, we need solutions that are highly scalable," Guironnet said. "Our team demonstrated the chemistry in a flow reactor we developed to produce propylene highly selectively and continuously. This is a key advance to address the immense volume of the problem that we are facing."

Read more at Science Daily

Solar harvesting system has potential to generate solar power 24/7

The great inventor Thomas Edison once said, "So long as the sun shines, man will be able to develop power in abundance." His wasn't the first great mind to marvel at the notion of harnessing the power of the sun; for centuries inventors have been pondering and perfecting the way to harvest solar energy.

They've done an amazing job with photovoltaic cells, which convert sunlight directly into energy. And still, with all the research, history and science behind it, there are limits to how much solar power can be harvested and used, because generation is restricted to the daytime.

A University of Houston professor is continuing the historic quest, reporting on a new type of solar energy harvesting system that breaks the efficiency record of all existing technologies. And no less important, it clears the way to use solar power 24/7.

"With our architecture, the solar energy harvesting efficiency can be improved to the thermodynamic limit," reports Bo Zhao, Kalsi Assistant Professor of mechanical engineering and his doctoral student Sina Jafari Ghalekohneh in the journal Physical Review Applied. The thermodynamic limit is the absolute maximum theoretically possible conversion efficiency of sunlight into electricity.

Finding more efficient ways to harness solar energy is critical to transitioning to a carbon-free electric grid. According to a recent study by the U.S. Department of Energy Solar Energy Technologies Office and the National Renewable Energy Laboratory, solar could account for as much as 40% of the nation's electricity supply by 2035 and 45% by 2050, pending aggressive cost reductions, supportive policies and large-scale electrification.

How Does it Work?

Traditional solar thermophotovoltaics (STPV) rely on an intermediate layer to tailor sunlight for better efficiency. The front side of this intermediate layer (the side facing the sun) is designed to absorb all photons coming from the sun, converting the solar energy to heat and raising the layer's temperature.

But the thermodynamic efficiency limit of STPVs, which has long been understood to be the blackbody limit (85.4%), is still far lower than the Landsberg limit (93.3%), the ultimate efficiency limit for solar energy harvesting.
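
Both quoted limits follow from simple blackbody thermodynamics and can be reproduced under the usual textbook assumptions of a 6,000 K sun and a 300 K environment. The sketch below is an independent illustration, not a calculation from the paper:

    import numpy as np

    T_SUN, T_AMBIENT = 6000.0, 300.0   # usual textbook temperatures, in kelvin

    # Landsberg limit: the ultimate thermodynamic efficiency for solar energy harvesting.
    x = T_AMBIENT / T_SUN
    landsberg = 1 - (4 / 3) * x + (1 / 3) * x**4

    # Blackbody limit: a reciprocal absorber/emitter at temperature T absorbs sunlight,
    # re-emits some of it back, and drives a Carnot engine; maximize over T.
    T = np.linspace(T_AMBIENT + 1.0, T_SUN - 1.0, 100_000)
    blackbody = np.max((1 - (T / T_SUN)**4) * (1 - T_AMBIENT / T))

    print(f"Landsberg limit ~ {landsberg:.1%}, blackbody limit ~ {blackbody:.1%}")
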

"In this work, we show that the efficiency deficit is caused by the inevitable back emission of the intermediate layer towards the sun resulting from the reciprocity of the system. We propose nonreciprocal STPV systems that utilize an intermediate layer with nonreciprocal radiative properties," said Zhao. "Such a nonreciprocal intermediate layer can substantially suppress its back emission to the sun and funnel more photon flux towards the cell.

"We show that, with such improvement, the nonreciprocal STPV system can reach the Landsberg limit, and practical STPV systems with single-junction photovoltaic cells can also experience a significant efficiency boost."

Besides improved efficiency, STPVs promise compactness and dispatchability (electricity that can be programmed on demand based on market needs).

In one important application scenario, STPVs can be coupled with an economical thermal energy storage unit to generate electricity 24/7.

Read more at Science Daily