Feb 5, 2022

Puffy planets lose atmospheres, become super-Earths

Astronomers have identified two different cases of "mini-Neptune" planets that are losing their puffy atmospheres and likely transforming into super-Earths. Radiation from the planets' stars is stripping away their atmospheres, driving hot gas to escape like steam from a pot of boiling water.

"Most astronomers suspected that young, small mini-Neptunes must have evaporating atmospheres," says Michael Zhang, lead author of both studies and a graduate student at Caltech. "But nobody had ever caught one in the process of doing so until now."

The findings are published in two separate papers in The Astronomical Journal: one is based on data from the W. M. Keck Observatory on Maunakea, Hawai'i, and the other on observations from NASA's Hubble Space Telescope. Together, the studies help paint a picture of how exotic worlds like these form and evolve.

Mini-Neptunes are a class of exoplanets, which are planets that orbit stars outside our solar system. These worlds, which are smaller, denser versions of the planet Neptune, consist of large rocky cores surrounded by thick blankets of gas.

In the new studies, a team of astronomers led by Caltech used Keck Observatory's Near-Infrared Spectrograph (NIRSPEC) to study one of two mini-Neptune planets in the star system called TOI 560, located 103 light-years away; and they used Hubble to look at two mini-Neptunes orbiting HD 63433, located 73 light-years away.

Their results show that atmospheric gas is escaping from the innermost mini-Neptune in TOI 560, called TOI 560.01, and from the outermost mini-Neptune in HD 63433, called HD 63433 c.

Furthermore, Keck Observatory data surprisingly showed the gas around TOI 560.01 was escaping predominantly toward the star.

"This was unexpected, as most models predict that the gas should flow away from the star," says Professor of Planetary Science Heather Knutson, Zhang's advisor and a co-author of the study. "We still have a lot to learn about how these outflows work in practice."

Planetary Gap Explained?


Since the first exoplanets orbiting Sun-like stars were discovered in the mid-1990s, thousands of others have been found. Many of these orbit close to their stars, and the smaller, rocky ones generally fall into two groups: mini-Neptunes and super-Earths. The super-Earths are as large as 1.6 times the size of Earth (and occasionally as large as 1.75 times the size of Earth), while the mini-Neptunes are between two and four times the size of Earth. Few planets with sizes between these two planet types have been detected.

One possible explanation for this gap is that mini-Neptunes are transforming into super-Earths. The mini-Neptunes are theorized to be cocooned by primordial atmospheres made of hydrogen and helium. The hydrogen and helium are left over from the formation of the central star, which is born out of clouds of gas. If a mini-Neptune is small enough and close enough to its star, stellar X-rays and ultraviolet radiation can strip away its primordial atmosphere over a period of hundreds of millions of years, scientists theorize. This would then leave behind a rocky super-Earth with a substantially smaller radius, which could, in theory, still retain a relatively thin atmosphere similar to that surrounding our own planet.

"A planet in the gap would have enough atmosphere to puff up its radius, making it intercept more stellar radiation and thereby enabling fast mass loss," says Zhang. "But the atmosphere is thin enough that it gets lost quickly. This is why a planet wouldn't stay in the gap for long."

Other scenarios could explain the gap, according to the astronomers. For instance, the smaller rocky planets might have never gathered gas envelopes in the first place, and mini-Neptunes could be water worlds and not enveloped in hydrogen gas. This latest discovery of two mini-Neptunes with escaping atmospheres represents the first direct evidence to support the theory that mini-Neptunes are indeed turning into super-Earths.

Signatures in the Sunlight

The astronomers were able to detect the escaping atmospheres by watching the mini-Neptunes cross in front of, or transit, their host stars. The planets cannot be seen directly, but when they pass in front of their stars as seen from our point of view on Earth, telescopes can look for absorption of starlight by atoms in the planets' atmospheres. In the case of the mini-Neptune TOI 560.01, the researchers found signatures of helium. For the star system HD 63433, the team found signatures of hydrogen in the outermost planet they studied, called HD 63433 c, but not in the inner planet, HD 63433 b.

"The inner planet may have already lost its atmosphere," Zhang explains.

The speed of the gases provides the evidence that the atmospheres are escaping. The observed helium around TOI 560.01 is moving as fast as 20 kilometers per second, while the hydrogen around HD 63433 c is moving as fast as 50 kilometers per second. The gravity of these mini-Neptunes is not strong enough to hold on to such fast-moving gas. The extent of the outflows around the planets also indicates escaping atmospheres: the cocoon of gas around TOI 560.01 is at least 3.5 times as large as the radius of the planet, and the cocoon around HD 63433 c is at least 12 times the radius of the planet.
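
To see why such speeds imply escape, one can compare them with a rough escape velocity for a mini-Neptune. The sketch below is a minimal back-of-the-envelope check in Python; the planet mass and radius are illustrative assumptions, not values from the papers.

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24     # Earth mass, kg
R_EARTH = 6.371e6      # Earth radius, m

def escape_velocity(mass_kg, radius_m):
    """Escape velocity v_esc = sqrt(2 G M / R), returned in km/s."""
    return math.sqrt(2 * G * mass_kg / radius_m) / 1e3

# Illustrative mini-Neptune: ~8 Earth masses, ~2.5 Earth radii (assumed values)
v_esc = escape_velocity(8 * M_EARTH, 2.5 * R_EARTH)
print(f"escape velocity ~ {v_esc:.1f} km/s")   # roughly 20 km/s

# Helium around TOI 560.01 is seen moving at up to ~20 km/s and hydrogen around
# HD 63433 c at up to ~50 km/s, comparable to or above this threshold, so the
# gas is not gravitationally bound to the planet.
```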

As for the strange discovery that the gas lost from TOI 560.01 was flowing toward -- instead of away from -- its host star, future observations of other mini-Neptunes should reveal if TOI 560.01 is an anomaly or whether an inward-moving atmospheric outflow is more common.

Read more at Science Daily

CRISPR-Cas9 can generate unexpected, heritable mutations

CRISPR-Cas9, the "genetic scissors," creates new potential for curing diseases; but treatments must be reliable. In a new study, researchers have discovered that the method can give rise to unforeseen changes in DNA that can be inherited by the next generation. These scientists therefore urge caution and meticulous validation before using CRISPR-Cas9 for medical purposes.

CRISPR-Cas9 is an effective tool for genome modification in microorganisms, as well as animals and plants. In health care, the method creates scope for curing numerous genetic diseases, provided the DNA is modified correctly and undergoes no unexpected changes. To date, such unwanted mutations have been studied in cells, but knowledge of the consequences in living organisms remains limited.

"In this project, we studied the effects of CRISPR-Cas9 in zebrafish, a small aquarium fish. Since DNA molecules and their mechanisms are similar in all animals, we think the results should be similar in humans, for example," says Adam Ameur, associate professor at Uppsala University and the Science for Life Laboratory (SciLifeLab).

When they studied the genomes of more than 1,000 zebrafish from two generations, the researchers found unexpected mutations of various types. In some cases, larger stretches of DNA than anticipated were altered, while in other cases mutations occurred at the wrong location in the genome. Unforeseen mutations were found in first-generation zebrafish, but also in their offspring.

"Knowing these unexpected mutations are heritable is important, since they can have long-term consequences for future generations. But that can happen only if you change the genome of embryos or germ cells," says Ida Höijer, PhD of Uppsala University and SciLifeLab.

In healthcare, methods tailored to correct genes in a particular tissue or cell type are now being developed. Although such treatments pose no risk to future generations, caution is advisable.

"CRISPR-Cas9 can be an amazingly valuable tool in health care. But we need to minimise the risk of unwanted effects, and we can do this by carefully validating the modified cells with the latest DNA sequencing technologies," Ameur says.

Read more at Science Daily

Feb 4, 2022

More disk galaxies than theory allows

The Standard Model of Cosmology describes how the universe came into being according to the view of most physicists. Researchers at the University of Bonn have now studied the evolution of galaxies within this model, finding considerable discrepancies with actual observations. The University of St. Andrews in Scotland and Charles University in the Czech Republic were also involved in the study. The results have now been published in the Astrophysical Journal.

Most galaxies visible from Earth resemble a flat disk with a thickened center. They are therefore similar to the sports equipment of a discus thrower. According to the Standard Model of Cosmology, however, such disks should form rather rarely. This is because in the model, every galaxy is surrounded by a halo of dark matter. This halo is invisible, but exerts a strong gravitational pull on nearby galaxies due to its mass. "That's why we keep seeing galaxies merging with each other in the model universe," explains Prof. Dr. Pavel Kroupa of the Helmholtz Institute for Radiation and Nuclear Physics at the University of Bonn.

This crash has two effects, the physicist explains: "First, the galaxies interpenetrate in the process, destroying the disk shape. Second, it reduces the angular momentum of the new galaxy created by the merger." Put simply, this greatly decreases its rotational speed. The rotating motion normally ensures that the centrifugal forces acting during this process cause a new disk to form. However, if the angular momentum is too small, a new disk will not form at all.

Large discrepancy between prediction and reality

In the current study, Kroupa's doctoral student, Moritz Haslbauer, led an international research group to investigate the evolution of the universe using the latest supercomputer simulations. The calculations are based on the Standard Model of Cosmology; they show which galaxies should have formed by today if this theory were correct. The researchers then compared their results with what is currently probably the most accurate observational data of the real Universe visible from Earth.

"Here we encountered a significant discrepancy between prediction and reality," Haslbauer says: "There are apparently significantly more flat disk galaxies than can be explained by theory." However, the resolution of the simulations is limited even on today's supercomputers. It may therefore be that the number of disk galaxies that would form in the Standard Model of Cosmology has been underestimated. "However, even if we take this effect into account, there remains a serious difference between theory and observation that cannot be remedied," Haslbauer points out.

The situation is different for an alternative to the Standard Model, which dispenses with dark matter. According to the so-called MOND theory (the acronym stands for "MilgrOmiaN Dynamics"), galaxies do not grow by merging with each other. Instead, they are formed from rotating gas clouds that become more and more condensed. In a MOND universe, galaxies also grow by absorbing gas from their surroundings. However, mergers of full-grown galaxies are rare in MOND. "Our research group in Bonn and Prague has uniquely developed the methods to do calculations in this alternative theory," says Kroupa, who is also a member of the Transdisciplinary Research Units "Modelling" and "Matter" at the University of Bonn. "MOND's predictions are consistent with what we actually see."

Read more at Science Daily

Origin of supermassive black hole flares identified: Largest-ever simulations suggest flickering powered by magnetic 'reconnection'

Black holes aren't always in the dark. Astronomers have spotted intense light shows shining from just outside the event horizon of supermassive black holes, including the one at our galaxy's core. However, scientists couldn't identify the cause of these flares beyond the suspected involvement of magnetic fields.

By employing computer simulations of unparalleled power and resolution, physicists say they've solved the mystery: Energy released near a black hole's event horizon during the reconnection of magnetic field lines powers the flares, the researchers report January 14 in The Astrophysical Journal Letters.

The new simulations show that interactions between the magnetic field and material falling into the black hole's maw cause the field to compress, flatten, break and reconnect. That process ultimately uses magnetic energy to slingshot hot plasma particles at near light speed into the black hole or out into space. Those particles can then directly radiate away some of their kinetic energy as photons and give nearby photons an energy boost. Those energetic photons make up the mysterious black hole flares.

In this model, the disk of previously infalling material is ejected during flares, clearing the area around the event horizon. This tidying up could provide astronomers an unhindered view of the usually obscured processes happening just outside the event horizon.

"The fundamental process of reconnecting magnetic field lines near the event horizon can tap the magnetic energy of the black hole's magnetosphere to power rapid and bright flares," says study co-lead author Bart Ripperda, a joint postdoctoral fellow at the Flatiron Institute's Center for Computational Astrophysics (CCA) in New York City and Princeton University. "This is really where we're connecting plasma physics with astrophysics."

Ripperda co-authored the new study with CCA associate research scientist Alexander Philippov, Harvard University scientists Matthew Liska and Koushik Chatterjee, University of Amsterdam scientists Gibwa Musoke and Sera Markoff, Northwestern University scientist Alexander Tchekhovskoy and University College London scientist Ziri Younsi.

A black hole, true to its name, emits no light. So flares must originate from outside the black hole's event horizon -- the boundary where the black hole's gravitational pull becomes so strong that not even light can escape. Orbiting and infalling material surrounds black holes in the form of an accretion disk, like the one around the behemoth black hole found in the M87 galaxy. This material cascades toward the event horizon near the black hole's equator. At the north and south poles of some of these black holes, jets of particles shoot out into space at nearly the speed of light.

Identifying where the flares form in a black hole's anatomy is incredibly difficult because of the physics involved. Black holes bend time and space and are surrounded by powerful magnetic fields, radiation fields and turbulent plasma -- matter so hot that electrons detach from their atoms. Even with the help of powerful computers, previous efforts could only simulate black hole systems at resolutions too low to see the mechanism that powers the flares.

Ripperda and his colleagues went all in on boosting the level of detail in their simulations. They used computing time on three supercomputers -- the Summit supercomputer at Oak Ridge National Laboratory in Tennessee, the Longhorn supercomputer at the University of Texas at Austin, and the Flatiron Institute's Popeye supercomputer located at the University of California, San Diego. In total, the project took millions of computing hours. The result of all this computational muscle was by far the highest-resolution simulation of a black hole's surroundings ever made, with over 1,000 times the resolution of previous efforts.

The increased resolution gave the researchers an unprecedented picture of the mechanisms leading to a black hole flare. The process centers on the black hole's magnetic field, which has magnetic field lines that spring out from the black hole's event horizon, forming the jet and connecting to the accretion disk. Previous simulations revealed that material flowing into the black hole's equator drags magnetic field lines toward the event horizon. The dragged field lines begin stacking up near the event horizon, eventually pushing back and blocking the material flowing in.

With its exceptional resolution, the new simulation for the first time captured how the magnetic field at the border between the flowing material and the black hole's jets intensifies, squeezing and flattening the equatorial field lines. Those field lines are now in alternating lanes pointing toward the black hole or away from it. When two lines pointing in opposite directions meet, they can break, reconnect and tangle. In between connection points, a pocket forms in the magnetic field. Those pockets are filled with hot plasma that either falls into the black hole or is accelerated out into space at tremendous speeds, thanks to energy taken from the magnetic field in the jets.

"Without the high resolution of our simulations, you couldn't capture the subdynamics and the substructures," Ripperda says. "In the low-resolution models, reconnection doesn't occur, so there's no mechanism that could accelerate particles."

Plasma particles in the catapulted material immediately radiate some energy away as photons. The accelerated plasma particles can also transfer part of their remaining energy to nearby photons, boosting those photons to higher energies. Those photons, either passersby or the photons initially created by the launched plasma, make up the most energetic flares. The material itself ends up in a hot blob orbiting in the vicinity of the black hole. Such a blob has been spotted near the Milky Way's supermassive black hole. "Magnetic reconnection powering such a hot spot is a smoking gun for explaining that observation," Ripperda says.

The researchers also observed that after the black hole flares for a while, the magnetic field energy wanes, and the system resets. Then, over time, the process begins anew. This cyclical mechanism explains why black holes emit flares on set schedules ranging from every day (for our Milky Way's supermassive black hole) to every few years (for M87 and other black holes).

Ripperda thinks that observations from the recently launched James Webb Space Telescope combined with those from the Event Horizon Telescope could confirm whether the process seen in the new simulations is happening and if it changes images of a black hole's shadow. "We'll have to see," Ripperda says. For now, he and his colleagues are working to improve their simulations with even more detail.

Read more at Science Daily

Tweaked genes borrowed from bacteria excite heart cells in live mice

Biomedical engineers at Duke University have demonstrated a gene therapy that helps heart muscle cells electrically activate in live mice. The first demonstration of its kind, the approach features engineered bacterial genes that code for sodium ion channels and could lead to therapies to treat a wide variety of electrical heart diseases and disorders.

The results appeared online February 2 in the journal Nature Communications.

"We were able to improve how well heart muscle cells can initiate and spread electrical activity, which is hard to accomplish with drugs or other tools," said Nenad Bursac, professor of biomedical engineering at Duke. "The method we used to deliver genes in heart muscle cells of mice has been previously shown to persist for a long time, which means it could effectively help hearts that struggle to beat as regularly as they should."

Sodium-ion channels are proteins in the outer membranes of electrically excitable cells, such as heart or brain cells, that transmit electrical charges into the cell. In the heart, these channels tell muscle cells when to contract and pass the instruction along so that the organ pumps blood as a cohesive unit. Damaged heart cells, however, whether from disease or trauma, often lose all or part of their ability to transmit these signals and join the effort.

One approach researchers can take to restoring this functionality is gene therapy. By delivering the genes responsible for creating sodium channel proteins, the technique can produce more ion channels in the diseased cells to help boost their activity.

In mammals, sodium channel genes are unfortunately too large to fit within the viruses currently used in modern gene therapies in humans. To skirt this issue, Bursac and his laboratory instead turned to smaller genes that code for similar sodium ion channels in bacteria. While these bacterial genes are different from their human counterparts, evolution has conserved many similarities in the channel design since multi-cellular organisms diverged from bacteria hundreds of millions of years ago.
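
The packaging constraint is essentially a size-budget calculation. The sketch below illustrates the arithmetic with approximate, assumed numbers (a typical AAV packaging capacity and ballpark coding-sequence lengths), not figures taken from the study.

```python
# Approximate, illustrative sizes in kilobases (kb); actual values vary by
# construct and are NOT taken from the study.
AAV_PACKAGING_LIMIT_KB = 4.7      # typical adeno-associated virus capacity
PROMOTER_AND_EXTRAS_KB = 1.0      # promoter, polyA signal, regulatory elements

human_nav_coding_kb = 6.0         # human cardiac sodium channel coding sequence (~2,000 aa)
bacterial_nav_coding_kb = 0.8     # bacterial sodium channel coding sequence (~270 aa)

for name, gene_kb in [("human Nav", human_nav_coding_kb),
                      ("bacterial Nav", bacterial_nav_coding_kb)]:
    cassette_kb = gene_kb + PROMOTER_AND_EXTRAS_KB
    fits = cassette_kb <= AAV_PACKAGING_LIMIT_KB
    print(f"{name}: cassette ~{cassette_kb:.1f} kb -> fits in AAV: {fits}")
```

Under these rough numbers, only the bacterial channel cassette fits within the viral payload, which is the motivation the article describes.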

Several years ago, Hung Nguyen, a former doctoral student in Bursac's laboratory who now works for Fujifilm Diosynth Biotechnologies, mutated these bacterial genes so that the channels they encode could become active in human cells. In the new work, current doctoral student Tianyu Wu further optimized the content of the genes and combined them with a "promoter" that exclusively restricts channel production to heart muscle cells. The researchers then tested their approach by delivering a virus loaded with the bacterial gene into veins of a mouse to spread throughout the body.

"We worked to find where the sodium ion channels were actually formed, and, as we hoped, we found that they only went into the working muscle cells of the heart within the atria and ventricles," Wu said. "We also found that they did not end up in the heart cells that originate the heartbeat, which we also wanted to avoid."

This gene therapy approach only delivers extra genes within a cell; it does not attempt to cut out, replace or rewrite the existing DNA in any way. Scientists believe these types of delivered genes make proteins while floating freely within the cell, making use of the existing biochemical machinery. Previous research with this viral gene delivery approach suggests the transplanted genes should remain active for many years.

As a proof of concept, tests on cells in a laboratory setting suggest that the treatment improves electrical excitability enough to prevent human abnormalities like arrhythmias. Within live mice, the results demonstrate that the sodium ion channels are active in the hearts, showing trends toward improved excitability. However, further tests are needed to measure how much of an improvement is made on the whole-heart level, and whether it is enough to rescue electrical function in damaged or diseased heart tissue to be used as a viable treatment.

Moving forward, the researchers have already identified different bacterial sodium channel genes that work better in preliminary benchtop studies. The team is also working with the laboratories of Craig Henriquez, professor of biomedical engineering at Duke, and Andrew Landstrom, director of the Duke Pediatric Research Scholars Program, to test the ability of these genes to restore heart functionality in mouse models that mimic human heart diseases.

Read more at Science Daily

Early humans placed the hearth at the optimal location in their cave -- for maximum benefit and minimum smoke exposure

A groundbreaking study in prehistoric archaeology at Tel Aviv University provides evidence for high cognitive abilities in early humans who lived 170,000 years ago. In a first-of-its-kind study, the researchers developed a software-based smoke dispersal simulation model and applied it to a known prehistoric site. They discovered that the early humans who occupied the cave had placed their hearth at the optimal location -- enabling maximum utilization of the fire for their activities and needs while exposing them to a minimal amount of smoke.

The study was led by PhD student Yafit Kedar, and Prof. Ran Barkai from the Jacob M. Alkow Department of Archaeology and Ancient Near Eastern Cultures at TAU, together with Dr. Gil Kedar. The paper was published in Scientific Reports.

Yafit Kedar explains that the use of fire by early humans has been widely debated by researchers for many years, regarding questions such as: At what point in their evolution did humans learn how to control fire and ignite it at will? When did they begin to use it on a daily basis? Did they use the inner space of the cave efficiently in relation to the fire? While all researchers agree that modern humans were capable of all these things, the dispute continues about the skills and abilities of earlier types of humans.

Yafit Kedar: "One focal issue in the debate is the location of hearths in caves occupied by early humans for long periods of time. Multilayered hearths have been found in many caves, indicating that fires had been lit at the same spot over many years. In previous studies, using a software-based model of air circulation in caves, along with a simulator of smoke dispersal in a closed space, we found that the optimal location for minimal smoke exposure in the winter was at the back of the cave. The least favorable location was the cave's entrance."

In the current study the researchers applied their smoke dispersal model to an extensively studied prehistoric site -- the Lazaret Cave in southeastern France, inhabited by early humans around 170,000 to 150,000 years ago. Yafit Kedar: "According to our model, based on previous studies, placing the hearth at the back of the cave would have reduced smoke density to a minimum, allowing the smoke to circulate out of the cave right next to the ceiling. But in the archaeological layers we examined, the hearth was located at the center of the cave. We tried to understand why the occupants had chosen this spot, and whether smoke dispersal had been a significant consideration in the cave's spatial division into activity areas."

To answer these questions, the researchers performed a range of smoke dispersal simulations for 16 hypothetical hearth locations inside the 290-square-meter cave. For each hypothetical hearth, they analyzed smoke density throughout the cave using thousands of simulated sensors placed 50 cm apart, from the floor up to a height of 1.5 m.

To understand the health implications of smoke exposure, measurements were compared with the average smoke exposure recommendations of the World Health Organization. In this way four activity zones were mapped in the cave for each hearth: a red zone which is essentially out of bounds due to high smoke density; a yellow area suitable for short-term occupation of several minutes; a green area suitable for long-term occupation of several hours or days; and a blue area which is essentially smoke-free.
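
A minimal sketch of how such a zoning rule could be applied to simulated sensor readings is shown below; the density thresholds and sensor values are placeholders, since the WHO-derived cutoffs used in the study are not quoted in this summary.

```python
def classify_zone(smoke_density, low=0.1, medium=1.0, high=10.0):
    """Map a simulated smoke density (arbitrary units) to an activity zone.

    Thresholds are placeholders; the study derived its cutoffs from WHO
    smoke-exposure recommendations, which are not given in the article.
    """
    if smoke_density >= high:
        return "red"     # out of bounds
    if smoke_density >= medium:
        return "yellow"  # short-term occupation, minutes only
    if smoke_density >= low:
        return "green"   # long-term occupation, hours or days
    return "blue"        # essentially smoke-free

# Hypothetical sensor grid: smoke density per (x, y, z) sensor position
sensor_readings = {(1.0, 2.0, 0.5): 12.3, (4.0, 6.0, 1.0): 0.4, (8.0, 9.0, 1.5): 0.02}
zones = {pos: classify_zone(density) for pos, density in sensor_readings.items()}
print(zones)
```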

Yafit and Gil Kedar: "We found that the average smoke density, based on measuring the number of particles per spatial unit, is in fact minimal when the hearth is located at the back of the cave -- just as our model had predicted. But we also discovered that in this situation, the area with low smoke density, most suitable for prolonged activity, is relatively distant from the hearth itself.

"Early humans needed a balance -- a hearth close to which they could work, cook, eat, sleep, get together, warm themselves, etc., while being exposed to a minimum amount of smoke. Ultimately, when all needs are taken into consideration -- daily activities vs. the damages of smoke exposure -- the occupants placed their hearth at the optimal spot in the cave."

The study identified a 25-square-meter area of the cave that would be optimal for locating the hearth in order to enjoy its benefits while avoiding too much exposure to smoke. Astonishingly, in the several archaeological layers examined in this study, the early humans actually did place their hearth within this area.

Prof. Barkai concludes: "Our study shows that early humans were able, with no sensors or simulators, to choose the perfect location for their hearth and manage the cave's space as early as 170,000 years ago -- long before the advent of modern humans in Europe. This ability reflects ingenuity, experience, and planned action, as well as awareness of the health damage caused by smoke exposure. In addition, the simulation model we developed can assist archaeologists excavating new sites, enabling them to look for hearths and activity areas at their optimal locations."

Read more at Science Daily

Feb 3, 2022

New Earth Trojan asteroid

An international team of astronomers led by researcher Toni Santana-Ros, from the University of Alicante and the Institute of Cosmos Sciences of the University of Barcelona (ICCUB), has confirmed the existence of 2020 XL5, only the second Earth Trojan asteroid known to date, after a decade of searching. The results of the study have been published in the journal Nature Communications.

All celestial objects that roam around our solar system feel the gravitational influence of all the other massive bodies that build it, including the Sun and the planets. If we consider only the Earth-Sun system, Newton's laws of gravity state that there are five points where all the forces that act upon an object located at that point cancel each other out. These regions are called Lagrangian points, and they are areas of great stability. Earth Trojan asteroids are small bodies that orbit around the L4 or L5 Lagrangian points of the Sun-Earth system.
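
In this restricted three-body picture, L4 and L5 sit on Earth's own orbit, each forming an equilateral triangle with the Sun and Earth: L4 leads Earth by 60 degrees and L5 trails it by 60 degrees. A minimal sketch of that geometry, assuming a circular orbit of one astronomical unit:

```python
import math

AU_KM = 1.496e8  # mean Sun-Earth distance in km

def lagrange_l4_l5(earth_angle_deg):
    """Return approximate (x, y) positions of L4 and L5 in km, with the Sun at
    the origin, assuming a circular Earth orbit of radius 1 au.
    L4 leads Earth by 60 degrees along the orbit; L5 trails by 60 degrees."""
    positions = {}
    for name, offset_deg in (("L4", +60.0), ("L5", -60.0)):
        theta = math.radians(earth_angle_deg + offset_deg)
        positions[name] = (AU_KM * math.cos(theta), AU_KM * math.sin(theta))
    return positions

# Example: Earth at angle 0 along its orbit
print(lagrange_l4_l5(earth_angle_deg=0.0))
```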

These results confirm that 2020 XL5 is the second transient Earth Trojan asteroid known to date, and everything indicates it will remain a Trojan -- that is, it will stay near the Lagrangian point -- for about four thousand years, which is why it is classified as transient. The researchers have also provided an estimate of the object's bulk size (around one kilometer in diameter, larger than the only other Earth Trojan asteroid known to date, 2010 TK7, which is about 0.3 kilometers in diameter) and have studied the impulse a rocket would need in order to reach the asteroid from Earth.

Although Trojan asteroids have been known for decades around other planets such as Venus, Mars, Jupiter, Uranus and Neptune, it was not until 2011 that the first Earth Trojan asteroid was found. Astronomers have since described many observational strategies for detecting new Earth Trojans. "There have been many previous attempts to find Earth Trojans, including in situ surveys such as the search within the L4 region, carried out by the NASA OSIRIS-REx spacecraft, or the search within the L5 region, conducted by the JAXA Hayabusa2 mission," notes Toni Santana-Ros, author of the publication. He adds that "all the dedicated efforts had so far failed to discover any new member of this population."

The low success rate of these searches can be explained by the geometry of an object orbiting the Earth-Sun L4 or L5 point as seen from our planet: such objects are only observable close to the Sun. The observation window between the asteroid rising above the horizon and sunrise is therefore very short, so astronomers must point their telescopes very low in the sky, where viewing conditions are at their worst, with the added handicap of the imminent sunrise saturating the background of the images just a few minutes into the observation.

To solve this problem, the team searched for 4-meter-class telescopes able to observe under such conditions, and finally obtained data from the 4.3-meter Lowell Discovery Telescope (Arizona, United States) and the 4.1-meter SOAR telescope, operated by the National Science Foundation's NOIRLab (Cerro Pachón, Chile).

The discovery of Earth Trojan asteroids is very significant because they can hold a pristine record of the early conditions in the formation of the Solar System -- primitive Trojans may have been co-orbiting the planets as they formed -- and they place constraints on the dynamical evolution of the Solar System. In addition, Earth Trojans are ideal candidates for potential space missions in the future.

Since the L4 Lagrangian point shares Earth's orbit, only a small change in velocity is needed to reach it. This means a spacecraft would need a low energy budget to remain in this shared orbit, keeping a fixed distance to Earth. "Earth Trojans could become ideal bases for an advanced exploration of the Solar System; they could even become a source of resources," concludes Santana-Ros.

Read more at Science Daily

People with less memory loss in old age gain more knowledge

Do cognitive abilities change together, or do they change independently of each other? An international research team from the USA, Sweden, and Germany involving the Max Planck Institute for Human Development has presented new findings now published in Science Advances.

At the age of 20, people usually find it easier to learn something new than at the age of 70. People aged 70, however, typically know more about the world than those aged 20. In lifespan psychology this is known as the difference between "fluid" and "crystallized" cognitive abilities. Fluid abilities primarily capture individual differences in brain integrity at the time of measurement, whereas crystallized abilities primarily capture individual differences in accumulated knowledge.

Accordingly, fluid and crystallized abilities differ in their average age trajectories. Fluid abilities like memory already start to decline in middle adulthood. In contrast, crystallized abilities such as vocabulary keep increasing into later adulthood and only begin to decline in advanced old age.

This divergence in the average trajectories of fluid and crystallized abilities has led to the assumption that people can compensate for fluid losses with crystallized gains. For instance, if an individual's memory declines, this loss, it is assumed, can be compensated for by an increase in knowledge.

A study by a research team from Germany, Sweden and the USA now shows that this compensation hypothesis has more limits than previously claimed. The researchers analyzed data from two longitudinal studies, the Virginia Cognitive Aging Project (VCAP) study from the USA and the Betula study from Sweden. In the VCAP study, 3,633 female and 1,933 male participants aged 18-99 years at the first measurement occasion were followed for a period of up to 18 years and assessed up to eight times. The Betula study involved 1,803 women and 1,517 men who were between 25 and 95 years old at the first measurement occasion and were examined up to four times over 18 years.

The research team used multivariate methods of change measurement to examine the extent to which individual differences in changes in crystallized abilities are related to individual differences in fluid changes. The findings are clear: The correlations between the two types of changes observed in both studies were very high. Thus, individual differences in cognitive development are, to a large extent, domain-general and do not follow the fluid-crystallized divide. What this means is that individuals who show greater losses in fluid abilities simultaneously show smaller gains in crystallized abilities, and persons whose fluid abilities hardly decline show large gains in crystallized abilities.
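
The core quantity is a correlation between individual change scores in the two ability domains. The toy sketch below illustrates the idea with synthetic data in which a single "general change" factor drives both domains; the actual study used multivariate latent change models rather than raw difference scores.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Synthetic example: a shared "general change" factor drives both domains.
general_change = rng.normal(0.0, 1.0, n)
fluid_change = general_change + rng.normal(0.0, 0.5, n)               # e.g. memory change
crystallized_change = 0.8 * general_change + rng.normal(0.0, 0.5, n)  # e.g. vocabulary change

r = np.corrcoef(fluid_change, crystallized_change)[0, 1]
print(f"correlation between fluid and crystallized change: r = {r:.2f}")
# A high positive r means that people who lose more fluid ability also gain
# less crystallized knowledge -- the pattern reported in the study.
```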

These findings are in accordance with the everyday observation that some people remain mentally fit in many areas into very old age while others' cognitive functioning declines across the board.

"In intelligence research, people often talk about a general factor or g-factor of intelligence that expresses the commonality of different cognitive abilities," says the lead author of the study, Elliot Tucker-Drob of the Department of Psychology and the Population Research Center at the University of Texas at Austin, USA. "In previous work, we have already demonstrated that not only individual differences in cognitive abilities at a given point in time can be captured by a general factor, but also changes of cognitive abilities. Our new results confirm this finding and demonstrate that changes in crystallized abilities can indeed be subsumed under a general factor of common change."

"Our findings call for a revision of textbook knowledge," adds Ulman Lindenberger, Director of the Center for Lifespan Psychology at the Max Planck Institute for Human Development in Berlin. "If those who show the largest fluid losses also show the smallest crystallized gains, then this places tighter limits on the compensatory power of knowledge than previously believed." For example, people whose memory is declining, also show a low gain in knowledge, even though they are in most need of such gains. Conversely, individuals with small fluid losses and strong crystallized gains are less likely to be in need of relying on compensatory processes to begin with.

Read more at Science Daily

How a SARS-CoV-2 infection can become severe COVID-19

Infection with SARS-CoV-2 leaves some people almost unaffected, while others develop life-threatening COVID-19 symptoms. So far, we do not understand exactly why symptoms and disease severity, especially in infections with the original variant, vary so significantly. A team of scientists has now discovered that severe courses of the disease are not only marked by strong immune activation and inflammatory reactions, but also by a dysfunctional endothelium, in other words, the vascular system: If this barrier between blood flow and tissue is damaged, the patient's condition deteriorates.

"In our study, we investigated which immune cells are activated in severe cases and in what way the endothelium, in other words the blood vessels, and their activation play a role in the disease progress," explains Prof. Christine Falk, scientist at the Hannover Medical School (MHH) and the German Center for Infection Research (DZIF). Many clinical symptoms, such as the destruction of blood vessels in the lungs and acute respiratory distress syndrome, pointed to an impact on the endothelium.

The endothelium is a thin layer of cells that line blood vessels, forming a barrier between blood flow and the surrounding tissues. Infection with SARS-CoV-2 appears to cause strong activation of immune and endothelial cells in the lungs, resulting in the release of various soluble plasma proteins. Severe COVID-19 cases are associated with a dysfunction of the endothelium, wherein the barrier between the alveoli and the surrounding vessels is no longer intact.

The scientists studied 25 patients with severe COVID-19 and 17 recovered patients in the intensive care unit (ICU). They were able to prove that the severity of the disease is linked to disruption of the endothelial barrier and can be measured by looking at inflammatory and endothelial plasma proteins. A pattern of seven plasma proteins appears to be associated with a severe form of the disease, which is characterised by strong inflammatory processes and in which the endothelium is permanently damaged. Furthermore, recovery from severe COVID-19 cases seems to be related to the regeneration of this endothelial barrier.

Which immune cells were detected in the COVID-19 ICU patients? The study showed excessive activation of T-lymphocytes and natural killer cells as well as development of memory T-cells and strong proliferation of plasmablasts, cells that can produce large amounts of antibodies. Furthermore, ICU patients infected with SARS-CoV-2 had high titres of spike- and nucleocapsid-specific antibodies. The researchers found it particularly interesting that the immune cell phenotype of these patients mainly changed over time and was less related to progressive severity of the disease. The progression of COVID-19, on the other hand, was closely linked to increased levels of various soluble plasma proteins, namely certain inflammatory mediators and especially endothelial factors.

"We were able to demonstrate that ICU patients with COVID-19 can be divided into different groups based on their plasma protein profiles, which are associated with disease severity," explains lead author Louisa Ruhl, a doctoral student at MHH. This finding is of great importance for the identification of potential biomarkers for severe COVID-19 courses, as well as for the development and use of new therapeutic concepts.
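
Conceptually, dividing patients into groups based on a plasma protein profile is a clustering problem. The sketch below is purely illustrative, using synthetic data and an off-the-shelf k-means step; the study's actual marker panel and statistical method are not specified in this summary beyond the seven-protein pattern.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Hypothetical matrix: 42 ICU patients x 7 plasma proteins (arbitrary units).
severe = rng.normal(loc=5.0, scale=1.0, size=(25, 7))      # stand-in severe profiles
recovering = rng.normal(loc=2.0, scale=1.0, size=(17, 7))  # stand-in recovering profiles
X = np.vstack([severe, recovering])

# Standardize each protein, then group patients by profile similarity.
X_scaled = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_scaled)
print(labels)  # patients grouped by plasma protein profile
```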

Christine Falk's team now wants to investigate which elements of the immune system lead to activation and damage of the endothelium, and whether the strong activation of the immune system also leads to the development of virus-specific T-lymphocytes that can recognise and destroy infected cells and thus contribute to the overreaction. Moreover, the study has shown that there are also shifts in the immune cell repertoire of recovered ICU patients with COVID-19, which could be related to the development of long COVID. These aspects are currently being pursued as part of the COFONI initiative of the state of Lower Saxony, with a Fasttrack and a Flexfund project. In collaboration with partners from pneumology (Prof. Tobias Welte, MHH and DZL) and neurology (Prof. Günter Höglinger, MHH), the team is investigating not only whether the endothelial inflammation, together with the overreaction of the T-lymphocytes and natural killer cells, causes lasting damage, but also to what extent the regeneration of the lung is impaired and the nervous system is affected.

Read more at Science Daily

Scientists develop insect-sized flying robots with flapping wings

A new drive system for flapping wing autonomous robots has been developed by a University of Bristol team, using a new method of electromechanical zipping that does away with the need for conventional motors and gears.

This new advance, published today in the journal Science Robotics, could pave the way for smaller, lighter and more effective micro flying robots for environmental monitoring, search and rescue, and deployment in hazardous environments.

Until now, typical micro flying robots have used motors, gears and other complex transmission systems to achieve the up-and-down motion of the wings. This has added complexity, weight and undesired dynamic effects.

Taking inspiration from bees and other flying insects, researchers from Bristol's Faculty of Engineering, led by Professor of Robotics Jonathan Rossiter, have successfully demonstrated a direct-drive artificial muscle system, called the Liquid-amplified Zipping Actuator (LAZA), that achieves wing motion using no rotating parts or gears.

The LAZA system greatly simplifies the flapping mechanism, enabling future miniaturization of flapping robots down to the size of insects.

In the paper, the team show how a pair of LAZA-powered flapping wings can provide more power compared with insect muscle of the same weight, enough to fly a robot across a room at 18 body lengths per second.

They also demonstrated how the LAZA can deliver consistent flapping over more than one million cycles, important for making flapping robots that can undertake long-haul flights.

The team expect the LAZA to be adopted as a fundamental building block for a range of autonomous insect-like flying robots.

Dr Tim Helps, lead author and developer of the LAZA system, said: "With the LAZA, we apply electrostatic forces directly on the wing, rather than through a complex, inefficient transmission system. This leads to better performance, simpler design, and will unlock a new class of low-cost, lightweight flapping micro-air vehicles for future applications, like autonomous inspection of off-shore wind turbines."

Read more at Science Daily

Feb 2, 2022

Moons may yield clues to what makes planets habitable

Earth's moon is vitally important in making Earth the planet we know today: the moon controls the length of the day and ocean tides, which affect the biological cycles of lifeforms on our planet. The moon also contributes to Earth's climate by stabilizing Earth's spin axis, offering an ideal environment for life to develop and evolve.

Because the moon is so important to life on Earth, scientists conjecture that a moon may be a potentially beneficial feature in harboring life on other planets. Most planets have moons, but Earth's moon is distinct in that it is large compared with Earth itself: the moon's radius is more than a quarter of Earth's radius, a much larger ratio than that of most moons to their planets.

Miki Nakajima, an assistant professor of earth and environmental sciences at the University of Rochester, finds that distinction significant. And in a new study that she led, published in Nature Communications, she and her colleagues at the Tokyo Institute of Technology and the University of Arizona examine moon formations and conclude that only certain types of planets can form moons that are large in respect to their host planets.

"By understanding moon formations, we have a better constraint on what to look for when searching for Earth-like planets," Nakajima says. "We expect that exomoons [moons orbiting planets outside our solar system] should be everywhere, but so far we haven't confirmed any. Our constraints will be helpful for future observations."

The origin of Earth's moon


Many scientists have historically believed Earth's large moon was generated by a collision between proto-Earth -- Earth at its early stages of development -- and a large, Mars-sized impactor, approximately 4.5 billion years ago. The collision resulted in the formation of a partially vaporized disk around Earth, which eventually formed into the moon.

In order to find out whether other planets can form similarly large moons, Nakajima and her colleagues conducted impact simulations on the computer, with a number of hypothetical Earth-like rocky planets and icy planets of varying masses. They hoped to identify whether the simulated impacts would result in partially vaporized disks, like the disk that formed Earth's moon.

The researchers found that rocky planets larger than six times the mass of Earth (6M) and icy planets larger than one Earth mass (1M) produce fully -- rather than partially -- vaporized disks, and these fully-vaporized disks are not capable of forming fractionally large moons.

"We found that if the planet is too massive, these impacts produce completely vapor disks because impacts between massive planets are generally more energetic than those between small planets," Nakajima says.

After an impact that results in a vaporized disk, over time, the disk cools and liquid moonlets -- a moon's building blocks -- emerge. In a fully-vaporized disk, the growing moonlets in the disk experience strong gas drag from vapor, falling onto the planet very quickly. In contrast, if the disk is only partially vaporized, moonlets do not feel such strong gas drag.

"As a result, we conclude that a completely vapor disk is not capable of forming fractionally large moons," Nakajima says. "Planetary masses need to be smaller than those thresholds we identified in order to produce such moons."

The search for Earth-like planets

The constraints outlined by Nakajima and her colleagues are important for astronomers investigating our universe; researchers have detected thousands of exoplanets and possible exomoons, but have yet to definitively spot a moon orbiting a planet outside our solar system.

Read more at Science Daily

Climate change has likely begun to suffocate the world’s fisheries

By 2080, around 70% of the world's oceans could be suffocating from a lack of oxygen as a result of climate change, potentially impacting marine ecosystems worldwide, according to a new study. The new models find mid-ocean depths that support many fisheries worldwide are already losing oxygen at unnatural rates and passed a critical threshold of oxygen loss in 2021.

Oceans carry dissolved oxygen as a gas, and just like land animals, aquatic animals need that oxygen to breathe. But as the oceans warm due to climate change, their water can hold less oxygen. Scientists have been tracking the oceans' steady decline in oxygen for years, but the new study provides new, pressing reasons to be concerned sooner rather than later.

The new study is the first to use climate models to predict how and when deoxygenation, which is the reduction of dissolved oxygen content in water, will occur throughout the world's oceans outside its natural variability.

It finds that significant, potentially irreversible deoxygenation of the ocean's middle depths that support much of the world's fished species began occurring in 2021, likely affecting fisheries worldwide. The new models predict that deoxygenation is expected to begin affecting all zones of the ocean by 2080.

The results were published in the AGU journal Geophysical Research Letters, which publishes high-impact, short-format reports with immediate implications spanning all Earth and space sciences.

The ocean's middle depths (from about 200 to 1,000 meters deep), called mesopelagic zones, will be the first zones to lose significant amounts of oxygen due to climate change, the new study finds. Globally, the mesopelagic zone is home to many of the world's commercially fished species, making the new finding a potential harbinger of economic hardship, seafood shortages and environmental disruption.

Rising temperatures lead to warmer waters that can hold less dissolved oxygen, and warming also reduces circulation between the ocean's layers. The middle layer of the ocean is particularly vulnerable to deoxygenation because it is not enriched with oxygen by the atmosphere and photosynthesis like the top layer, and because most of the decomposition of algae -- a process that consumes oxygen -- occurs in this layer.

"This zone is actually very important to us because a lot of commercial fish live in this zone," says Yuntao Zhou, an oceanographer at Shanghai Jiao Tong University and lead study author. "Deoxygenation affects other marine resources as well, but fisheries [are] maybe most related to our daily life."

The new findings are deeply concerning and add to the urgency of engaging meaningfully in mitigating climate change, says Matthew Long, an oceanographer at the National Center for Atmospheric Research (NCAR) who was not involved in the study.

"Humanity is currently changing the metabolic state of the largest ecosystem on the planet, with really unknown consequences for marine ecosystems," he said. "That may manifest in significant impacts on the ocean's ability to sustain important fisheries."

Evaluating vulnerability

The researchers identified the beginning of the deoxygenation process in three ocean depth zones -- shallow, middle and deep -- by modeling when the loss of oxygen from the water exceeds natural fluctuations in oxygen levels. The study predicted when deoxygenation would occur in global ocean basins using data from two climate model simulations: one representing a high emissions scenario and the other representing a low emissions scenario.
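
"Exceeding natural fluctuations" can be framed as a time-of-emergence calculation: the first year in which the forced oxygen decline falls outside the envelope of natural variability. The sketch below uses synthetic numbers and an assumed two-sigma criterion, not the study's data or exact definition.

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1950, 2101)

# Synthetic oxygen anomaly (arbitrary units): natural noise plus a slow decline
# that begins around the year 2000.
natural_noise = rng.normal(0.0, 1.0, years.size)
forced_trend = -0.03 * np.clip(years - 2000, 0, None)
oxygen_anomaly = natural_noise + forced_trend

# Estimate natural variability from the unforced pre-2000 period.
sigma = oxygen_anomaly[years < 2000].std()

# Time of emergence: first year the 10-year smoothed anomaly drops below -2 sigma.
smoothed = np.convolve(oxygen_anomaly, np.ones(10) / 10, mode="same")
emerged = years[smoothed < -2 * sigma]
print("time of emergence:", emerged[0] if emerged.size else "not emerged by 2100")
```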

In both simulations, the mesopelagic zone lost oxygen at the fastest rate and across the largest area of the global oceans, although the process begins about 20 years later in the low emissions scenario. This indicates that lowering carbon dioxide and other greenhouse gas emissions could help delay the degradation of global marine environments.

The researchers also found that oceans closer to the poles, like the west and north Pacific and the southern oceans, are particularly vulnerable to deoxygenation. They're not yet sure why, although accelerated warming could be the culprit. Areas in the tropics known for having low levels of dissolved oxygen, called oxygen minimum zones, also seem to be spreading, according to Zhou.

Read more at Science Daily

Number of Earth's tree species estimated to be 14% higher than currently known, with some 9,200 species yet to be discovered

A new study involving more than 100 scientists from across the globe and the largest forest database yet assembled estimates that there are about 73,000 tree species on Earth, including about 9,200 species yet to be discovered.

The global estimate is about 14% higher than the current number of known tree species. Most of the undiscovered species are likely to be rare, with very low populations and limited spatial distribution, the study shows.

That makes the undiscovered species especially vulnerable to human-caused disruptions such as deforestation and climate change, according to the study authors, who say the new findings will help prioritize forest conservation efforts.

"These results highlight the vulnerability of global forest biodiversity to anthropogenic changes, particularly land use and climate, because the survival of rare taxa is disproportionately threatened by these pressures," said University of Michigan forest ecologist Peter Reich, one of two senior authors of a paper scheduled for publication Jan. 31 in Proceedings of the National Academy of Sciences.

"By establishing a quantitative benchmark, this study could contribute to tree and forest conservation efforts and the future discovery of new trees and associated species in certain parts of the world," said Reich, director of the Institute for Global Change Biology at U-M's School for Environment and Sustainability.

For the study, the researchers combined tree abundance and occurrence data from two global datasets -- one from the Global Forest Biodiversity Initiative and the other from TREECHANGE -- that use ground-sourced forest-plot data. The combined databases yielded a total of 64,100 documented tree species worldwide, a total similar to a previous study that found about 60,000 tree species on the planet.

"We combined individual datasets into one massive global dataset of tree-level data," said the study's other senior author, Jingjing Liang of Purdue University, coordinator of the Global Forest Biodiversity Initiative.

"Each set comes from someone going out to a forest stand and measuring every single tree -- collecting information about the tree species, sizes and other characteristics. Counting the number of tree species worldwide is like a puzzle with pieces spread all over the world."

After combining the datasets, the researchers used novel statistical methods to estimate the total number of unique tree species at biome, continental and global scales -- including species yet to be discovered and described by scientists. A biome is a major ecological community type, such as a tropical rainforest, a boreal forest or a savanna.

Their conservative estimate of the total number of tree species on Earth is 73,274, which means there are likely about 9,200 tree species yet to be discovered, according to the researchers, who say their new study uses a vastly more extensive dataset and more advanced statistical methods than previous attempts to estimate the planet's tree diversity. The researchers used modern developments of techniques first devised by mathematician Alan Turing during World War II to crack Nazi codes, Reich said.
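
One standard estimator in the Turing lineage is the Chao1 lower bound on total richness, which infers unseen species from how many species were observed exactly once or twice. The sketch below is only an illustration of that family of methods, with made-up counts; it is not the estimator or data used in the study.

```python
from collections import Counter

def chao1_estimate(observations):
    """Chao1 lower-bound estimate of total species richness.

    S_est = S_obs + f1^2 / (2 * f2), where f1 and f2 are the numbers of species
    observed exactly once and exactly twice. A classic unseen-species estimator
    related to Good-Turing ideas, shown here purely as an illustration.
    """
    counts = Counter(observations)
    s_obs = len(counts)
    f1 = sum(1 for c in counts.values() if c == 1)
    f2 = sum(1 for c in counts.values() if c == 2)
    if f2 == 0:
        return s_obs + f1 * (f1 - 1) / 2  # bias-corrected variant when f2 = 0
    return s_obs + f1 ** 2 / (2 * f2)

# Toy sample of tree observations (species labels are hypothetical)
sample = ["oak"] * 30 + ["pine"] * 12 + ["beech"] * 2 + ["yew", "elm", "ash"]
print(chao1_estimate(sample))  # estimated richness exceeds the 6 observed species
```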

Roughly 40% of the undiscovered tree species -- more than on any other continent -- are likely to be in South America, which is mentioned repeatedly in the study as being of special significance for global tree diversity.

South America is also the continent with the highest estimated number of rare tree species (about 8,200) and the highest estimated percentage (49%) of continentally endemic tree species -- meaning species found only on that continent.

Hot spots of undiscovered South American tree species likely include the tropical and subtropical moist forests of the Amazon basin, as well as Andean forests at elevations between 1,000 meters (about 3,300 feet) and 3,500 meters (about 11,480 feet).

"Beyond the 27,000 known tree species in South America, there might be as many as another 4,000 species yet to be discovered there. Most of them could be endemic and located in diversity hot spots of the Amazon basin and the Andes-Amazon interface," said Reich, who was recruited by U-M's Biosciences Initiative and joined the faculty last fall from the University of Minnesota, where he maintains a dual appointment.

"This makes forest conservation of paramount priority in South America, especially considering the current tropical forest crisis from anthropogenic impacts such as deforestation, fires and climate change," he said.

Worldwide, roughly half to two-thirds of all already known tree species occur in tropical and subtropical moist forests, which are both species-rich and poorly studied by scientists. Tropical and subtropical dry forests likely hold high numbers of undiscovered tree species, as well.

"Extensive knowledge of tree richness and diversity is key to preserving the stability and functioning of ecosystems," said study lead author Roberto Cazzolla Gatti of the University of Bologna in Italy.

Read more at Science Daily

A map for the sense of smell

The distinctive smell of a flower… the unmistakable aroma of coffee… the dangers linked with inhaling smoke fumes. Sensory systems have evolved to provide us with immediate, finely tuned information about the world around us, whether they are colors processed through our visual system or certain pitches interpreted through our hearing.

This barrage of information is processed by our sensory systems. Scientists have uncovered maps that depict how sensory neurons are arranged based on their function to effectively process such information. This kind of functional map, however, had not yet been identified for the sense of smell. University of California San Diego researchers have now described such a smell sensory map in fruit flies. On the surface of fly antennae, where odorous chemicals are detected, the scientists have discovered how the fly olfactory system is organized, and why.

This new map was published in the Proceedings of the National Academy of Sciences by a team led by graduate student Shiuan-Tze Wu from the laboratory of Biological Sciences Associate Professor Chih-Ying Su. The study details how the fly's olfactory receptor neurons, the components that sense smell, are organized within the sensory hairs.

"We are constantly being bombarded by hundreds of odorous chemicals in our environment," said Su, the corresponding author of the study. "We have described a peripheral mechanism that has allowed the fly to make sense of such overwhelmingly complex stimuli."

The researchers provide evidence that the fruit fly's olfactory system, which Su described as simple yet elegant, is structured to give the insect the ability to make quick assessments of odors in an unusual way that circumvents synaptic communication, which is metabolically expensive. Rather, the insect's olfactory receptor neurons (ORNs) communicate through electrical interactions with nearby ORNs. This offers an energy-saving, "metabolically cheap" way to process "meaningful odor blends without involving costly synaptic computation," the researchers note in the paper.

The study describes how compartments housing two ORNs are arranged to detect cues with opposite meanings for the fly, cues that either promote or inhibit certain behaviors, allowing the insect to quickly and efficiently assess complex odors in its environment.

"This arrangement provides a means to both evaluate and shape the countervailing sensory signals relayed to higher brain centers for further processing," according to the paper.

In this study, the Su lab collaborated with UC San Diego Neurobiology Assistant Professor Johnatan Aljadeff, who built a mathematical model which explains how electrical interactions help in extracting relevant information.

"In asking questions about the functional meaning of this organization, we found that nature has chosen a specific way of structuring this sensory assay," said Aljadeff. "If we can understand the principle of this type of processing, there could be future engineering applications." Aljadeff is funded by a Defense Advanced Research Projects Agency (DARPA) Young Faculty Award to investigate such questions.

Read more at Science Daily

Feb 1, 2022

Even dying stars can still give birth to planets

Planets are usually not much older than the stars around which they revolve. Take the Sun: it was born 4.6 billion years ago, and not long after that, Earth came into the world. But KU Leuven astronomers have discovered that a completely different scenario is also possible. Even if they are near death, some types of stars can possibly still form planets. If this is confirmed, theories on planet formation will need to be adjusted.

Planets such as Earth, and all other planets in our solar system, were formed not long after the Sun. Our Sun started to burn 4.6 billion years ago, and in the next million years, the matter around it clumped into protoplanets. The birth of the planets in that protoplanetary disc, a gigantic pancake made of dust and gas, so to speak, with the Sun in the middle, explains why they all orbit in the same plane.

But such discs of dust and gas need not surround only newborn stars. They can also develop independently of star formation, for example around binary stars in which one member is dying (binary stars are two stars that orbit each other, also called a binary system). When the end approaches for a medium-sized star (like the Sun), it catapults the outer part of its atmosphere into space, after which it slowly dies out as a so-called white dwarf. However, in the case of binary stars, the gravitational pull of the second star causes the matter ejected by the dying star to form a flat, rotating disc. Moreover, this disc strongly resembles the protoplanetary discs that astronomers observe around young stars elsewhere in the Milky Way.

This much we already knew. What is new is that the discs surrounding so-called evolved binary stars not uncommonly show signs that could point to planet formation, as an international team of astronomers led by KU Leuven researchers has discovered. In fact, their observations show that this is the case for one in ten of these binary stars. "In ten per cent of the evolved binary stars with discs we studied, we see a large cavity in the disc," says KU Leuven astronomer Jacques Kluska, first author of the article in the journal Astronomy & Astrophysics in which the discovery is described. "This is an indication that something is floating around there that has collected all matter in the area of the cavity."

Second-generation planets

The clean-up of the matter could be the work of a planet. That planet may not have formed at the very beginning of the binary stars' life, but at the very end of it. The astronomers also found further strong indications for the presence of such planets. "In the evolved binary stars with a large cavity in the disc, we saw that heavy elements such as iron were very scarce on the surface of the dying star," says Kluska. "This observation leads one to suspect that dust particles rich in these elements were trapped by a planet." The Leuven astronomer does not rule out the possibility that several planets could form around these binary stars in this way.

The discovery was made when the astronomers were drawing up an inventory of evolved binary stars in our Milky Way. They did so on the basis of existing, publicly available observations. Kluska and his colleagues counted 85 such binary pairs. In ten of them, the researchers came across a disc with a large cavity on the infrared images.

Current theories put to the test


If new observations confirm the existence of planets around evolved binary stars, and if it turns out the planets were only formed after one of the stars had reached the end of its life, the theories on planet formation will need to be adjusted. "The confirmation or refutation of this extraordinary way of planet formation will be an unprecedented test for the current theories," according to Professor Hans Van Winckel, head of the KU Leuven Institute of Astronomy.

Read more at Science Daily

What the rise of oxygen on early Earth tells us about life on other planets

When did the Earth reach oxygen levels sufficient to support animal life? Researchers from McGill University have discovered that a rise in oxygen levels occurred in step with the evolution and expansion of complex, eukaryotic ecosystems. Their findings represent the strongest evidence to date that extremely low oxygen levels exerted an important limitation on evolution for billions of years.

"Until now, there was a critical gap in our understanding of environmental drivers in early evolution. The early Earth was marked by low levels of oxygen, till surface oxygen levels rose to be sufficient for animal life. But projections for when this rise occurred varied by over a billion years -- possibly even well before animals had evolved," says Maxwell Lechte, a postdoctoral researcher in the Department of Earth and Planetary Sciences under the supervision of Galen Halverson at McGill University.

Ironstones provide insights into early life

To find answers, the researchers examined iron-rich sedimentary rocks from around the world deposited in ancient coastal environments. In analyzing the chemistry of the iron in these rocks, the researchers were able to estimate the amount of oxygen present when the rocks formed, and the impact it would have had on early life like eukaryotic microorganisms -- the precursors to modern animals.

"These ironstones offer insights into the oxygen levels of shallow marine environments, where life was evolving. The ancient ironstone record indicates around less than 1 % of modern oxygen levels, which would have had an immense impact on ecological complexity," says Changle Wang, a researcher at the Chinese Academy of Sciences who co-led the study with Lechte.

"These low oxygen conditions persisted until about 800 million years ago, right when we first start to see evidence of the rise of complex ecosystems in the rock record. So if complex eukaryotes were around before then, their habitats would have been restricted by low oxygen," says Lechte.

Earth remains the only place in the universe known to harbor life. Today, Earth's atmosphere and oceans are rich with oxygen, but this wasn't always the case. The oxygenation of the Earth's ocean and atmosphere was the result of photosynthesis, a process used by plants and other organisms to convert light into energy -- releasing oxygen into the atmosphere and creating the necessary conditions for respiration and animal life.

Searching for signs of life beyond our solar system

According to the researchers, the new findings suggest that Earth's atmosphere was capable of maintaining low levels of atmospheric oxygen for billions of years. This has important implications for the exploration of signs of life beyond our solar system, because searching for traces of atmospheric oxygen is one way to look for evidence of past or present life on another planet -- or what scientists call a biosignature.

Scientists use Earth's history to gauge the oxygen levels under which terrestrial planets can stabilize. If terrestrial planets can stabilize at low atmospheric oxygen levels, as suggested by the findings, the best chance for oxygen detection will be searching for its photochemical byproduct ozone, say the researchers.

"Ozone strongly absorbs ultraviolet light, making ozone detection possible even at low atmospheric oxygen levels. This work stresses that ultraviolet detection in space-based telescopes will significantly increase our chances of finding likely signs of life on planets outside our solar system," says Noah Planavsky, a biogeochemist at Yale University.

Read more at Science Daily

The two types of climate coping and what they mean for your health

When it comes to coping with climate change, there may be two types of people: those who take action to try to improve the environment and those who don't bother because they don't believe their actions will make a difference.

Knowing who's who could help public policymakers better target their messaging around climate change, suggests a new study led by University of Arizona researcher Sabrina Helm.

Helm, an associate professor in the College of Agriculture and Life Sciences' Norton School of Family and Consumer Sciences, studies climate anxiety and consumer behavior.

In her latest research, published in the journal Anxiety, Stress and Coping, Helm set out to identify how different people cope, psychologically and behaviorally, with the stressor of a changing climate.

She and her collaborators surveyed 334 parents who had children between the ages of 3 and 10 living with them. They were asked about their general climate change beliefs, how stressed they feel about environmental issues, how they cope with that stress and how effective they think consumers can be in combating climate change. They also were asked how often they engage in certain behaviors, such as eating meat, traveling by air or making efforts to conserve energy and water. And they were asked questions about their mental and overall health.

Based on the survey responses, the researchers identified two prevailing climate change coping profiles: adaptive approach coping and maladaptive avoidance coping.

About 70% of survey respondents belonged to the first group -- the adaptive approach coping profile. They tended to have higher levels of environmental concern and related stress, and believed more strongly in consumer effectiveness. They expressed more wishful thinking and a desire to problem-solve, and were more likely to engage in pro-environmental behaviors.

The remaining 30% were in the maladaptive avoidance coping group. They were less likely than those in the first group to feel guilt or personal responsibility for climate change. They also had less wishful thinking and were less likely to engage in pro-environmental behaviors or believe that their actions would make a difference.
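
The study's exact profiling method isn't spelled out here, so the following is only a rough sketch of how standardized survey scale scores can be separated into two coping profiles. It uses k-means clustering as a simple stand-in, with hypothetical variable names and randomly generated data in place of the 334 respondents.

```python
# A minimal sketch, not the study's analysis: k-means on standardized
# survey scores as a stand-in for a formal coping-profile analysis.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical per-respondent scores: environmental concern, perceived
# consumer effectiveness, wishful thinking, pro-environmental behavior
scores = rng.normal(size=(334, 4))

z = StandardScaler().fit_transform(scores)                # z-score each scale
profiles = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(z)

for k in range(2):
    share = (profiles == k).mean()
    means = z[profiles == k].mean(axis=0).round(2)
    print(f"profile {k}: {share:.0%} of respondents, mean z-scores {means}")
```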

Helm and her collaborators wondered whether people in the adaptive approach group -- who tend to feel more climate-related stress -- would have worse mental health overall, since previous studies have linked environmental stress to negative mental health outcomes.

Surprisingly, Helm said, they found no differences between the two groups with regard to general health, anxiety or depressive symptoms.

"Overall, we know that climate change-related anxiety is on the rise, and that may be the case for both of these profiles," Helm said. "We didn't look at climate anxiety specifically, but we looked at depressive and anxiety symptoms in general; the two groups didn't differ in their level of anxiety or mental health outcomes."

There also were not significant differences in the demographic makeup of the two groups when it came to factors such as race, income, education level or employment status. However, women were more likely to be in the adaptive approach coping group, which is consistent with the findings of prior research, Helm said.

"There's a whole host of literature suggesting that females have more environmental concern," she said.

The fact that the demographics of the two groups were so similar suggests that targeting climate change-related messaging based on demographic information alone might not be the most effective strategy, Helm said. While it might be tougher to do, determining a person's climate change coping profile could be more useful for those attempting to communicate about environmental issues and what people can do to make a difference.

"If you think in terms of messaging about climate change or environmental problems, very often we look at social demographic targeting, and according to our findings, that's not very useful because those two profiles should probably be receiving different kinds of messaging," Helm said. "Those who are already acting pro-environmentally need reinforcement of that behavior, versus those who are in the maladaptive avoidance coping profile who don't do much at all and need to be incentivized to start doing something."

Helm said future research should look at whether the same two coping profiles exist in children and teenagers, who may be experiencing greater anxiety about climate change.

Read more at Science Daily

Complex three-dimensional kidney tissue generated in the lab from scratch

A research team based in Kumamoto University (Japan) has created complex 3D kidney tissue in the lab solely from cultured mouse embryonic stem (ES) cells. These organoids could lead the way to better kidney research and, eventually, artificial kidneys for human transplant.

By focusing on an often-overlooked tissue type in organoid generation research, the stroma, a kind of organ tissue made up of various support and connective tissues, Dr. Ryuichi Nishinakamura and his team were able to generate the last piece of a three-part puzzle that they had been working on for several years. Once the three pieces were combined, the resulting structure was found to be kidney-like in its architecture. The researchers believe that their work will advance kidney research and could even lead to a transplantable organ in the future.

The kidney is a very important organ for continued good health because it acts as a filter to extract waste and excess water from blood. It is a complex organ that develops from the combination of three components. Protocols have already been established by various research teams, including Dr. Nishinakamura's team at the Institute of Molecular Embryology and Genetics (IMEG) at Kumamoto University, to induce two of the components (the nephron progenitor and the ureteric bud) from mouse ES cells.

In this, their most recent work, the IMEG team has developed a method to induce the third and final component, kidney-specific stromal progenitor, in mice. Furthermore, by combining these three components in vitro, the researchers were able to generate a kidney-like 3D tissue, consisting of extensively branched tubules and several other kidney-specific structures.

The researchers believe that this is the first ever report on the in-lab generation of such a complex kidney structure from scratch. The IMEG team has already succeeded in inducing the first two components from human iPS cells. If this last component can also be generated from human cells, a similarly complex human kidney should be achievable.

Read more at Science Daily

Jan 31, 2022

Low volcanic temperature ushered in global cooling and the thriving of dinosaurs

Researchers in Japan, Sweden, and the US have unearthed evidence that low volcanic temperatures led to the fourth mass extinction, enabling dinosaurs to flourish during the Jurassic period.

Large volcanic eruptions create climatic fluctuations, ushering in evolutionary changes. Yet it is the volcanic temperature of the eruption that determines whether the climate cools or warms.

Since the emergence of early animals, five mass extinctions have taken place. The fourth mass extinction occurred at the end of the Triassic Period -- roughly 201 million years ago. This mass extinction saw many marine and land animals go extinct, especially large-bodied, crocodilian-line reptiles known as pseudosuchians. Approximately 60-70% of animal species disappeared. As a result, small-bodied dinosaurs were able to grow and prosper.

Scientists think the fourth mass extinction was triggered by the eruptions in the Central Atlantic Magmatic Province -- one of the largest regions of volcanic rock. But the correlation between the eruption and mass extinction has not yet been clarified.

Using analysis of sedimentary organic molecules and a heating experiment, Kunio Kaiho, professor emeritus at Tohoku University, and his team demonstrated how low-temperature magma slowly heated sedimentary rocks, causing high sulfur dioxide (SO2) and low carbon dioxide (CO2) emissions.

The SO2 gas was distributed throughout the stratosphere, converting to sulfuric acid aerosols. The instantaneous increase of global albedo caused short-term cooling, which could have contributed to the mass extinction.

Kaiho and his team took marine sedimentary rock samples from Austria and the United Kingdom and analyzed the organic molecules and mercury (Hg) in them. They found four discrete benzo[e]pyrene + benzo[ghi]perylene + coronene-Hg enrichments.

The discovery of low coronene in the first enrichment was particularly revealing. The second, third, and fifth mass extinctions had high coronene concentrations. A low concentration indicates that low-temperature heating caused high SO2 release and global cooling.

"We believe the extinction was the product of large volcanic eruptions because the benzo[e]pyrene + benzo[ghi]perylene + coronene anomaly could only be seen around the time frame of the mass extinctions," said Kaiho.

Read more at Science Daily

Locations of ancient Maya sacred groves of cacao trees discovered

For as much as modern society worships chocolate, cacao -- the plant chocolate comes from -- was believed to be even more divine to ancient Mayas. The Maya considered cacao beans to be a gift from the gods and even used them as currency because of their value.

As such, cacao bean production was carefully controlled by the Maya leaders of northern Yucatan, with cacao trees only grown in sacred groves. But no modern researcher has ever been able to pinpoint where these ancient sacred groves were located -- until now.

Researchers at Brigham Young University, including professor emeritus Richard Terry and graduate students Bryce Brown and Christopher Balzotti, worked closely with archaeologists from the U.S. and Mexico to identify locations the Maya used to provide the perfect blend of humidity, calm and shade required by cacao trees. While the drier climate of the Yucatan peninsula is inhospitable to cacao growth, the team realized that the many sinkholes common to the peninsula have microclimates with just the right conditions.

As detailed in a study newly published in the Journal of Archaeological Science: Reports, the team conducted soil analyses on 11 of those sinkholes and found that the soil of nine of them contained evidence of theobromine and caffeine -- combined biomarkers unique to cacao. Archaeologists also found evidence of ancient ceremonial rituals -- such as staircase ramps for processions, stone carvings, altars and offerings like jade and ceramics (including tiny ceramic cacao pods) -- in several sinkholes.

"We looked for theobromine for several years and found cacao in some places we didn't expect," said Terry, who recently retired from BYU. "We were also amazed to see the ceremonial artifacts. My students rappelled into one of these sinkholes and said, 'Wow! There is a structure in here!' It was a staircase that filled one-third of the sinkhole with stone."

To extract and analyze the sinkhole soil for cacao biomarkers -- specifically theobromine and caffeine -- the team developed a new method of soil extraction. This involved drying the soil samples and passing them through a sieve, covering them with hot water, having them centrifuged and passed through extraction disks, and analyzing the extracts by mass spectrometry. To increase the sensitivity of their testing, the research team compared the results of the soil samples to seven control samples with no history of exposure to the biomarkers.
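
As a rough illustration of how a comparison against the control soils could be framed (the numbers and the threshold rule below are hypothetical, not the team's published values), a sinkhole sample might be counted as cacao-positive only when both biomarkers clearly exceed the background seen in the controls:

```python
# Hypothetical sketch: flag a soil sample as cacao-positive only if both
# theobromine and caffeine exceed the control background by a wide margin.
import statistics

controls = {  # made-up mass-spec responses from the seven control soils
    "theobromine": [0.8, 1.1, 0.9, 1.0, 0.7, 1.2, 0.9],
    "caffeine":    [0.4, 0.6, 0.5, 0.5, 0.3, 0.6, 0.4],
}

def threshold(values, k=3):
    """Control mean plus k standard deviations."""
    return statistics.mean(values) + k * statistics.stdev(values)

def is_cacao_positive(sample):
    """Require both combined biomarkers to clear their control thresholds."""
    return all(sample[marker] > threshold(controls[marker]) for marker in controls)

print(is_cacao_positive({"theobromine": 6.2, "caffeine": 3.1}))  # True
print(is_cacao_positive({"theobromine": 1.3, "caffeine": 0.5}))  # False
```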

The findings of the BYU study indicate that cacao groves played an important role in ancient rituals and trade routes of the ancient Maya, impacting the entirety of the Mesoamerican economy. A 70-mile Maya "highway" in the area that was the main artery for trade passes near hundreds of sinkholes, so it is likely that the leaders who commissioned the highway development also controlled cacao production. The evidence of cacao cultivation alongside archaeological findings also supports the idea that cacao was important in the ideological move from a maize god to a sun god.

In one sinkhole near Coba, Mexico, a village 45 minutes from modern day Tulum, the research team found the arm and bracelet of a figurine attached to an incense jar and several ceramic modeled cacao pods. They also found remnant cacao trees growing there, making it quite possible that this sinkhole, named "Dzadz Ion," was the location of a sacred cacao grove during the Late Postclassic period (about A.D. 1000 to 1400).

"Now we have these links between religious structures and the religious crops grown in these sinkholes," Terry said. "Knowing that the cacao beans were used as currency, it means the sinkholes were a place where the money could be grown and controlled. This new understanding creates a rich historical narrative of a highly charged Maya landscape with economic, political and spiritual value."

Read more at Science Daily

2D material in three dimensions

The carbon material graphene has no well-defined thickness; it consists of just a single layer of atoms. It is therefore often referred to as a "two-dimensional material." Trying to make a three-dimensional structure out of it may sound contradictory at first, but it is an important goal: if the properties of the graphene layer are to be exploited to the fullest, then as much active surface area as possible must be integrated within a limited volume.

The best way to achieve this goal is to produce graphene on complex branched nanostructures. This is exactly what a cooperation between CNR Nano in Pisa, TU Wien (Vienna) and the University of Antwerp has now achieved. This could help, for example, to increase hydrogen storage capacity per unit volume or to build chemical sensors with higher sensitivity.

From solid to porous

In Prof. Ulrich Schmid's group (Institute for Sensor and Actuator Systems, TU Wien), research has been conducted for years on how to transform solid materials such as silicon carbide into extremely fine, porous structures in a precisely controlled way. "If you can control the porosity, then many different material properties can be influenced as a result," explains Georg Pfusterschmied, one of the authors of the current paper.

The technological procedures required to achieve this goal are challenging: "It is an electrochemical process that consists of several steps," says Markus Leitgeb, a chemist who also works in Ulrich Schmid's research group at TU Wien. "We work with very specific etching solutions, and apply tailored electric current characteristics in combination with UV irradiation." This makes it possible to etch tiny holes and channels into certain materials.

Because of this expertise in the realization of porous structures, Stefan Heun's team from the Nanoscience Institute of the Italian National Research Council CNR turned to their colleagues at TU Wien. The Pisa team was looking for a way to produce graphene on branched nanostructures in order to obtain larger graphene surface areas. And the technology developed at TU Wien is perfectly suited for this task.

"The starting material is silicon carbide -- a crystal of silicon and carbon," says Stefano Veronesi who performed the graphene growth at CNR Nano in Pisa. "If you heat this material, the silicon evaporates, the carbon remains and if you do it right, it can form a graphene layer on the surface."

An electrochemical etching process was therefore developed at TU Wien that turns solid silicon carbide into the desired porous nanostructure. About 42% of the volume is removed in this process. The remaining nanostructure was then heated in high vacuum in Pisa so that graphene formed on the surface. The result was then examined in detail in Antwerp. This confirmed the success of the new process: a large number of graphene flakes form on the intricately shaped surface of the 3D nanostructure.

Read more at Science Daily

Small group of genetic variants found in extremely ill patients with COVID may help explain big differences in how sick people get

The search to better understand the tremendous range of responses to infection with the COVID-19 virus -- from symptom-free to critically ill -- has uncovered, in some of the sickest patients, a handful of rare structural gene variants involved in body processes, like inflammation, that the virus needs to be successful.

"The virus has to attach to our cells, it has to get inside our cells and it has to multiply inside our cells. It also has to attract inflammation," says Dr. Ravindra Kolhe, director of the Georgia Esoteric and Molecular Laboratory at the Medical College of Georgia at Augusta University. "We have identified genes with structural changes in very sick individuals that are part of all four of these essential processes."

In apparently the first study of its kind, investigators used optical genome mapping to get a thorough, three-dimensional assessment of the genome of 52 severely ill patients with COVID-19.

In nine of the sickest patients, they identified seven rare structural variants affecting a total of 31 genes involved in key pathways mediating the response between a person, or host, and a virus. These include innate immunity, our frontline immune defense against invaders like viruses; the inflammatory response, a key response to an infection that, gone awry, can also destroy the lungs of some of the sickest patients; and the ability of a virus to replicate and spread. As an example, one variant they identified can lead to overexpression of keratin genes. Keratins are proteins that are the structural components of things like our hair and nails, but that also have been identified as key to the transmission of both flu viruses and the COVID-19 virus between cells and are known to be upregulated in the respiratory tract during an infection.

"It's a hyperactivation of the normal systems," says Kolhe, corresponding author of the study, published by the international collaborative COVID-19 Host Genome Research consortium in the journal iScience.

"Millions of people get infected, and fortunately only a very small percentage become symptomatic, and a very small percentage of the symptomatic individuals require oxygen and a small percentage of those individuals are hospitalized and die," Kolhe says. "But even a small percentage amounts to millions of people and that is too many."

"Our data show that large (structural variants) identified using optical genome mapping might further explain the inter-individual clinical variability in response to COVID-19," the investigators write.

Large structural variants account for much of the genetic diversity among us, including changes that are just unique to the individual and those that can increase their risk of problems like cancer. Optical genome mapping is an emerging technology that can detect these larger variants with multiple changes, like deletion or insertion of genetic material and/or when a section of chromosome is reversed.

The investigators say that while more work needs to be done, their findings about the potential role of structural variants in the host-virus interaction point toward the need to look for these genetic variations, ideally with a simple-to-use blood assay. Once identified, the goal would be to initiate proactive moves for these individuals, like ensuring vaccination and boosting and potentially more aggressive treatment early on, such as monoclonal antibody therapy, to help them better combat COVID, Kolhe says.

Clinical studies have identified factors like older age, being male, hypertension, diabetes and other chronic conditions as risk factors associated with the degree of illness from COVID-19. The nine sickest patients in this study shared common comorbid conditions; 32 of the patients required mechanical ventilation to support their breathing, and a total of 13 of the 52 patients died while in intensive care.

But in their studies, which also included individuals who were negative for the COVID-19 virus and those who were positive but asymptomatic, there were again outliers, including individuals with comorbid conditions who remained asymptomatic when infected with SARS-CoV-2 and those who were perfectly healthy but became extremely ill when infected, another indicator of a role for genetics in determining the degree of response, Kolhe says.

Kolhe notes that the large structural variants they found in the sickest patients were not caused by the virus but rather exploited by it, and may not increase susceptibility to other, even similar, conditions.

Overall, the individuals in this study had about 40 rare structural variants, which other studies have indicated is about average.

The COVID-19 Host Genome Research consortium currently has a membership of 34 institutions, including Duke and Columbia universities, the National Cancer Institute and the New York Genome Center, exploring different aspects of how structural variants impact the divergent individual responses to infection with the COVID-19 virus.

The group began to emerge after more commonplace gene sequencing studies on thousands of patients -- which essentially lay out the DNA in a straight line to look for smaller, problematic variations in the usual order of its four bases: adenine, thymine, guanine and cytosine -- yielded little information to help explain, and ideally predict, the wide variation in how sick people will get. More than 30% of known disease-causing variants are larger than the single base-pair changes that sequencing can identify, according to the Human Gene Mutation Database.

Even the amount of virus in an individual does not directly correlate with how sick the individual gets, Kolhe says. "We had individuals with very high viral loads who did not even know they were positive," he says. "It is something in the host genome that is different."

Some studies have found that blood type might be a factor in predicting risk, specifically type A, and there have been some specific gene findings as well that predispose to immune deficiencies that may make people more susceptible.

Read more at Science Daily

Jan 30, 2022

Climate change in the Early Holocene

New insight into how our early ancestors dealt with major shifts in climate is revealed in research, published today [27 Jan] in Nature Ecology & Evolution, by an international team, led by Professor Rick Schulting from Oxford University's School of Archaeology.

The research reveals that new radiocarbon dates show the large Early Holocene cemetery of Yuzhniy Oleniy Ostrov, at Lake Onega some 500 miles north of Moscow, previously thought to have been in use for many centuries, was in fact used for only one to two centuries. Moreover, this seems to have been a response to a period of climate stress.

The team believes the creation of the cemetery reveals a social response to the stresses caused by regional resource depression. At a time of climate change, Lake Onega, as the second largest lake in Europe, had its own ecologically resilient microclimate. This would have attracted game, including elk, to its shores while the lake itself would have provided a productive fishery. Because of the fall in temperature, many of the region's shallower lakes could have been susceptible to the well-known phenomenon of winter fish kills, caused by depleted oxygen levels under the ice.

The creation of the cemetery at the site would have helped define group membership for what would have been previously dispersed bands of hunter-gatherers -- mitigating potential conflict over access to the lake's resources.

But when the climate improved, the team found, the cemetery largely went out of use, as the people presumably returned to a more mobile way of life and the lake became less central.

The behavioural changes -- to what could be seen as a more 'complex' social system, with abundant grave offerings -- were situation-dependent. But they suggest the presence of important decision makers and, say the team, the findings also imply that early hunting and gathering communities were highly flexible and resilient.

The results have implications for understanding the context for the emergence and dissolution of socioeconomic inequality and territoriality under conditions of socio-ecological stress.

Radiocarbon dating of the human remains and associated animal remains at the site reveals that the main use of the cemetery spanned only 100 to 300 years, centring on ca. 8,250 to 8,000 BP. This coincides remarkably closely with the dramatic 8.2 ka cooling event, so the site could provide evidence for how these humans responded to a climate-driven environmental change.

The Holocene (the current geological epoch, which began approximately 11,700 years before present) has been relatively stable in comparison to current events. But there are a number of climate fluctuations recorded in the Greenland ice cores. The best known of these is the cooling event of 8,200 years ago, the largest climatic downturn in the Holocene, which lasted one to two centuries. But there is little evidence that the hunter-gatherers who occupied most of Europe at this time were much affected, and if they were, in what specific ways.

Yuzhniy Oleniy Ostrov is one of the largest Early Holocene cemeteries in northern Eurasia, with up to 400 possible graves, 177 of which were excavated in the 1930s by a team of Russian archaeologists. Based on that work, the cemetery site holds an important position in European Mesolithic studies, in part because of the variation in the accompanying grave offerings, which ranges from graves that lack offerings entirely to those with abundant and elaborate ones.

Read more at Science Daily

Scientists regrow frog's lost leg

For millions of patients who have lost limbs for reasons ranging from diabetes to trauma, the possibility of regaining function through natural regeneration remains out of reach. Regrowth of legs and arms remains the province of salamanders and superheroes.

But in a study published in the journal Science Advances, scientists at Tufts University and Harvard University's Wyss Institute have brought us a step closer to the goal of regenerative medicine.

On adult frogs, which are naturally unable to regenerate limbs, the researchers were able to trigger regrowth of a lost leg using a five-drug cocktail applied in a silicone wearable bioreactor dome that seals in the elixir over the stump for just 24 hours. That brief treatment sets in motion an 18-month period of regrowth that restores a functional leg.

Many creatures have the capability of full regeneration of at least some limbs, including salamanders, starfish, crabs, and lizards. Flatworms can even be cut up into pieces, with each piece reconstructing an entire organism. Humans are capable of closing wounds with new tissue growth, and our livers have a remarkable, almost flatworm-like capability of regenerating to full size after a 50% loss.

But loss of a large and structurally complex limb -- an arm or leg -- cannot be restored by any natural process of regeneration in humans or mammals. In fact, we tend to cover major injuries with an amorphous mass of scar tissue, which protects the wound from further blood loss and infection but also prevents further growth.

Kickstarting Regeneration

The Tufts researchers triggered the regenerative process in African clawed frogs by enclosing the wound in a silicone cap, which they call a BioDome, containing a silk protein gel loaded with the five-drug cocktail.

Each drug fulfilled a different purpose, including tamping down inflammation, inhibiting the production of collagen which would lead to scarring, and encouraging the new growth of nerve fibers, blood vessels, and muscle. The combination and the bioreactor provided a local environment and signals that tipped the scales away from the natural tendency to close off the stump, and toward the regenerative process.

The researchers observed dramatic growth of tissue in many of the treated frogs, re-creating an almost fully functional leg. The new limbs had a bone structure that extended with features similar to a natural limb's, a richer complement of internal tissues (including neurons), and several "toes" growing from the end of the limb, although without the support of underlying bone.

The regrown limb moved and responded to stimuli such as a touch from a stiff fiber, and the frogs were able to make use of it for swimming through water, moving much like a normal frog would.

"It's exciting to see that the drugs we selected were helping to create an almost complete limb," said Nirosha Murugan, research affiliate at the Allen Discovery Center at Tufts and first author of the paper. "The fact that it required only a brief exposure to the drugs to set in motion a months-long regeneration process suggests that frogs and perhaps other animals may have dormant regenerative capabilities that can be triggered into action."

The researchers explored the mechanisms by which the brief intervention could lead to long-term growth. Within the first few days after treatment, they detected the activation of known molecular pathways that are normally used in a developing embryo to help the body take shape.

Activation of these pathways could allow the burden of growth and organization of tissue to be handled by the limb itself, similar to how it occurs in an embryo, rather than require ongoing therapeutic intervention over the many months it takes to grow the limb.

How the BioDome Works

Animals naturally capable of regeneration live mostly in an aquatic environment. The first stage of growth after loss of a limb is the formation of a mass of stem cells at the end of the stump called a blastema, which is used to gradually reconstruct the lost body part. The wound is rapidly covered by skin cells within the first 24 hours after the injury, protecting the reconstructing tissue underneath.

"Mammals and other regenerating animals will usually have their injuries exposed to air or making contact with the ground, and they can take days to weeks to close up with scar tissue," said David Kaplan, Stern Family Professor of Engineering at Tufts and co-author of the study. "Using the BioDome cap in the first 24 hours helps mimic an amniotic-like environment which, along with the right drugs, allows the rebuilding process to proceed without the interference of scar tissue."

Next Steps in Frogs and Mammals

Previous work by the Tufts team showed a significant degree of limb growth triggered by a single drug, progesterone, with the BioDome. However, the resulting limb grew as a spike and was far from the more normally shaped, functional limb achieved in the current study.

The five-drug cocktail represents a significant milestone toward the restoration of fully functional frog limbs and suggests further exploration of drug and growth factor combinations could lead to regrown limbs that are even more functionally complete, with normal digits, webbing, and more detailed skeletal and muscular features.

"We'll be testing how this treatment could apply to mammals next," said corresponding author Michael Levin, Vannevar Bush Professor of Biology in the School of Arts & Sciences, director of the Allen Discovery Center at Tufts, and associate faculty member of the Wyss Institute.

Read more at Science Daily