May 22, 2021

An inconstant Hubble constant? Research suggests fix to cosmological cornerstone

More than 90 years ago, astronomer Edwin Hubble made the first measurements hinting at the rate at which the universe expands, a rate now called the Hubble constant.

Almost immediately, astronomers began arguing about the actual value of this constant, and over time, realized that there was a discrepancy in this number between early universe observations and late universe observations.

Early in the universe's existence, light moved through a plasma -- there were no stars yet -- and from oscillations in that plasma, similar to sound waves, scientists deduced that the Hubble constant was about 67 kilometers per second per megaparsec. This means the universe expands about 67 kilometers per second faster for every additional 3.26 million light-years (one megaparsec) of distance.

But the value they measure differs when scientists look at the universe's later life, after stars were born and galaxies formed. The gravity of these objects causes what's called gravitational lensing, which distorts light traveling between a distant source and its observer.

Other phenomena in this late universe include extreme explosions and events related to the end of a star's life. Based on these later-life observations, scientists calculated a different value, around 74 kilometers per second per megaparsec. This discrepancy is called the Hubble tension.

Now, an international team including a University of Michigan physicist has analyzed a database of more than 1,000 supernova explosions, supporting the idea that the Hubble constant might not actually be constant.

Instead, it may change based on the expansion of the universe, growing as the universe expands. This explanation likely requires new physics to explain the increasing rate of expansion, such as a modified version of Einstein's gravity.

The team's results are published in the Astrophysical Journal.

"The point is that there seems to be a tension between the larger values for late universe observations and lower values for early universe observation," said Enrico Rinaldi, a research fellow in the U-M Department of Physics. "The question we asked in this paper is: What if the Hubble constant is not constant? What if it actually changes?"

The researchers used a dataset of supernovae -- spectacular explosions that mark the final stage of a star's life. Specifically, they were looking at Type Ia supernovae, which shine with a characteristic, well-understood light output.

These supernovae were used to discover that the expansion of the universe is accelerating, Rinaldi said, and they are known as "standard candles," like a series of lighthouses fitted with the same lightbulb. If scientists know their intrinsic luminosity, they can calculate their distance by measuring how bright they appear in the sky.

Next, the astronomers use what's called "redshift" to track how the universe's rate of expansion has changed over time. Redshift is the phenomenon in which light's wavelength stretches as the universe expands.

The essence of Hubble's original observation is that the farther a light source is from the observer, the more its wavelength is stretched -- as if you tacked a Slinky to a wall and walked away from it while holding the other end. Redshift and distance are therefore related.

In their analysis, the researchers separated these supernovae into "bins" based on intervals of redshift: the objects in one interval of distance went into one bin, an equal number at the next interval of distance into the next bin, and so on. The closer a bin is to Earth, the more recent the stage of cosmic history it samples.

Each bin has a fixed reference value of redshift, and by comparing the bins the researchers could extract a value of the Hubble constant for each one.
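As a rough illustration of that binning step (not the team's actual analysis, which fits a full cosmological model to the supernova data), the sketch below groups a made-up catalog by redshift and estimates a Hubble-constant-like value in each bin using the low-redshift approximation v ≈ cz ≈ H0·d. All numbers and variable names are invented for illustration.

```python
import numpy as np

C_KM_S = 299_792.458  # speed of light in km/s

def h0_per_bin(redshift, distance_mpc, n_bins=4):
    """Toy per-bin estimate of a Hubble-constant-like value.

    Uses the low-redshift approximation v ~ c*z ~ H0*d, which breaks down
    at higher redshift; the published analysis fits a full cosmological
    model rather than this shortcut.
    """
    order = np.argsort(redshift)
    z, d = redshift[order], distance_mpc[order]
    # Split the redshift-sorted sample into bins holding equal numbers of supernovae.
    for i, (zb, db) in enumerate(zip(np.array_split(z, n_bins),
                                     np.array_split(d, n_bins))):
        h0 = np.median(C_KM_S * zb / db)  # km/s per Mpc within this bin
        print(f"bin {i}: mean z ~ {zb.mean():.3f}, H0-like value ~ {h0:.1f} km/s/Mpc")

# Invented demo catalog: 1,000 low-redshift supernovae obeying H0 = 70 with scatter.
rng = np.random.default_rng(0)
z_demo = rng.uniform(0.01, 0.1, 1000)
d_demo = C_KM_S * z_demo / 70.0 * rng.normal(1.0, 0.05, 1000)
h0_per_bin(z_demo, d_demo)
```

With a truly constant expansion rate, every bin returns the same value to within the scatter; a systematic drift from bin to bin is the kind of signature the team reports.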

"If it's a constant, then it should not be different when we extract it from bins of different distances. But our main result is that it actually changes with distance," Rinaldi said. "The tension of the Hubble constant can be explained by some intrinsic dependence of this constant on the distance of the objects that you use."

Additionally, the researchers found that their analysis of the Hubble constant changing with redshift allows them to smoothly "connect" the value of the constant from the early universe probes and the value from the late universe probes, Rinaldi said.

"The extracted parameters are still compatible with the standard cosmological understanding that we have," he said. "But this time they just shift a little bit as we change the distance, and this small shift is enough to explain why we have this tension."

Read more at Science Daily

Epigenetic mechanism can explain how chemicals in plastic may cause lower IQ levels

The chemical bisphenol F (found in plastics) can induce changes in a gene that is vital for neurological development. This discovery was made by researchers at the universities of Uppsala and Karlstad, Sweden. The mechanism could explain why exposure to this chemical during the fetal stage may be connected with a lower IQ at seven years of age -- an association previously seen by the same research group. The study is published in the scientific journal Environment International.

"We've previously shown that bisphenol F (BPF for short) may be connected with children's cognitive development. However, with this study, we can now begin to understand which biological mechanisms may explain such a link, which is unique for an epidemiological study." The speaker is Carl Gustaf Bornehag, Professor and head of Public Health Sciences at Karlstad University. He is the project manager of the Swedish Environmental Longitudinal Mother and Child, Asthma and Allergy (SELMA) study, from which the data were taken.

External factors can cause changes in gene activity through an "epigenetic" mechanism. This means that individual genes are modified by means of "methylation." Increased methylation in a DNA segment makes it more difficult for the cellular machinery to read that specific part. As a result, expression of methylated genes is often impaired.

The scientists measured BPF levels in urine from pregnant women in the first trimester and subsequently monitored their children after birth. DNA methylation was measured in the children at age seven, and their cognitive ability was investigated. Since the fetus comes into contact with the mother's blood via the placenta, it is also exposed to substances in the mother's body.

The analyses demonstrated that in fetuses exposed to higher levels of BPF, methylation increases in a specific part of the GRIN2B gene, which has a key neurological role. Further, higher methylation was associated with lower IQ in the children. However, the study also found that there appears to be a sex difference in these children's susceptibility to BPF. The epigenetic link between BPF and cognition was observed only in boys.

"The fact that we've been able to identify DNA methylation as a potential mechanism behind BPF's effect on IQ adds an important piece of evidence in work to understand how environmental chemicals affect us on a molecular level," says Elin Engdahl, a researcher in environmental toxicology at Uppsala University and the article's lead author.

Read more at Science Daily

May 21, 2021

Not all theories can explain the black hole M87*

As first pointed out by the German astronomer Karl Schwarzschild, black holes bend space-time to an extreme degree due to their extraordinary concentration of mass, and heat up the matter in their vicinity so that it begins to glow. New Zealand physicist Roy Kerr showed rotation can change the black hole's size and the geometry of its surroundings. The "edge" of a black hole is known as the event horizon, the boundary around the concentration of mass beyond which light and matter cannot escape and which makes the black hole "black." Black holes, theory predicts, can be described by a handful of properties: mass, spin, and a variety of possible charges.

In addition to black holes predicted from Einstein's theory of general relativity, one can consider those from models inspired by string theories, which describe matter and all particles as modes of tiny vibrating strings. String-inspired theories of black holes predict the existence of an additional field in the description of fundamental physics, which leads to observable modifications in the sizes of black holes as well as in the curvature in their vicinity.

Physicists Dr Prashant Kocherlakota and Professor Luciano Rezzolla from the Institute for Theoretical Physics at Goethe University Frankfurt, have now investigated for the first time how the different theories fit with the observational data of the black hole M87* at the centre of the galaxy Messier 87. The image of M87*, taken in 2019 by the international Event Horizon Telescope (EHT) collaboration, was the first experimental proof of the actual existence of black holes after the measurement of gravitational waves in 2015.

The result of these investigations: The data from M87* are in excellent agreement with the Einstein-based theories and to a certain extent with the string-based theories. Dr Prashant Kocherlakota explains: "With the data recorded by the EHT collaboration, we can now test different theories of physics with black hole images. Currently, we cannot reject these theories when describing the shadow size of M87*, but our calculations constrain the range of validity of these black hole models."

Professor Luciano Rezzolla says: "The idea of black holes for us theoretical physicists is at the same time a source of concern and of inspiration. While we still struggle with some of the consequences of black holes -- such as the event horizon or the singularity -- we seem always keen to find new black hole solutions also in other theories. It is therefore very important to obtain results like ours, which determine what is plausible and what is not. This was an important first step and our constraints will be improved as new observations are made."

Read more at Science Daily

Hubble tracks down fast radio bursts to galaxies' spiral arms

Astronomers using NASA's Hubble Space Telescope have traced the locations of five brief, powerful radio blasts to the spiral arms of five distant galaxies.

Called fast radio bursts (FRBs), these extraordinary events generate as much energy in a thousandth of a second as the Sun does in a year. Because these transient radio pulses disappear in much less than the blink of an eye, researchers have had a hard time tracking down where they come from, much less determining what kind of object is causing them. Therefore, most of the time, astronomers don't know exactly where to look.

Locating where these blasts are coming from, and in particular, what galaxies they originate from, is important in determining what kinds of astronomical events trigger such intense flashes of energy. The new Hubble survey of eight FRBs helps researchers narrow the list of possible FRB sources.

Flash in the Night


The first FRB was discovered in archived data recorded by the Parkes radio observatory on July 24, 2001. Since then astronomers have uncovered up to 1,000 FRBs, but they have only been able to associate roughly 15 of them with particular galaxies.

"Our results are new and exciting. This is the first high-resolution view of a population of FRBs, and Hubble reveals that five of them are localized near or on a galaxy's spiral arms," said Alexandra Mannings of the University of California, Santa Cruz, the study's lead author. "Most of the galaxies are massive, relatively young, and still forming stars. The imaging allows us to get a better idea of the overall host-galaxy properties, such as its mass and star-formation rate, as well as probe what's happening right at the FRB position because Hubble has such great resolution."

In the Hubble study, astronomers not only pinned all of them to host galaxies, but they also identified the kinds of locations they originated from. Hubble observed one of the FRB locations in 2017 and the other seven in 2019 and 2020.

"We don't know what causes FRBs, so it's really important to use context when we have it," said team member Wen-fai Fong of Northwestern University in Evanston, Illinois. "This technique has worked very well for identifying the progenitors of other types of transients, such as supernovae and gamma-ray bursts. Hubble played a big role in those studies, too."

The galaxies in the Hubble study existed billions of years ago. Astronomers, therefore, are seeing the galaxies as they appeared when the universe was about half its current age.

Many of them are as massive as our Milky Way. The observations were made in ultraviolet and near-infrared light with Hubble's Wide Field Camera 3.

Ultraviolet light traces the glow of young stars strung along a spiral galaxy's winding arms. The researchers used the near-infrared images to calculate the galaxies' mass and find where older populations of stars reside.

Location, Location, Location

The images display a diversity of spiral-arm structure, from tightly wound to more diffuse, revealing how the stars are distributed along these prominent features. A galaxy's spiral arms trace the distribution of young, massive stars. However, the Hubble images reveal that the FRBs found near the spiral arms do not come from the very brightest regions, which blaze with the light from hefty stars. The images help support a picture that the FRBs likely do not originate from the youngest, most massive stars.

These clues helped the researchers rule out some of the possible triggers of types of these brilliant flares, including the explosive deaths of the youngest, most massive stars, which generate gamma-ray bursts and some types of supernovae. Another unlikely source is the merger of neutron stars, the crushed cores of stars that end their lives in supernova explosions. These mergers take billions of years to occur and are usually found far from the spiral arms of older galaxies that are no longer forming stars.

Magnetic Monsters

The team's Hubble results, however, are consistent with the leading model that FRBs originate from young magnetar outbursts. Magnetars are a type of neutron star with powerful magnetic fields. They're called the strongest magnets in the universe, possessing a magnetic field that is 10 trillion times more powerful than a refrigerator door magnet. Astronomers last year linked observations of an FRB spotted in our Milky Way galaxy with a region where a known magnetar resides.
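To put that comparison in concrete units: taking a refrigerator door magnet to be roughly 0.005 tesla (an assumed typical value), a field 10 trillion times stronger works out to about 5 x 10^10 tesla, which is indeed the magnetar regime. A one-line check:

```python
fridge_magnet_tesla = 5e-3                     # assumed typical fridge-magnet field strength
magnetar_tesla = 1e13 * fridge_magnet_tesla    # "10 trillion times more powerful"
print(f"Implied magnetar field: {magnetar_tesla:.0e} tesla")  # ~5e+10 T
```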

"Owing to their strong magnetic fields, magnetars are quite unpredictable," Fong explained. "In this case, the FRBs are thought to come from flares from a young magnetar. Massive stars go through stellar evolution and becomes neutron stars, some of which can be strongly magnetized, leading to flares and magnetic processes on their surfaces, which can emit radio light. Our study fits in with that picture and rules out either very young or very old progenitors for FRBs."

The observations also helped the researchers strengthen the association of FRBs with massive, star-forming galaxies. Previous ground-based observations of some possible FRB host galaxies did not as clearly detect underlying structure, such as spiral arms, in many of them. Astronomers, therefore, could not rule out the possibility that FRBs originate from a dwarf galaxy hiding underneath a massive one. In the new Hubble study, careful image processing and analysis of the images allowed researchers to rule out underlying dwarf galaxies, according to co-author Sunil Simha of the University of California, Santa Cruz.

Read more at Science Daily

'No level of smoke exposure is safe'

Nearly a quarter of pregnant women say they've been around secondhand smoke -- in their homes, at work, or around a friend or relative. According to new research, that exposure is linked to epigenetic changes in their babies -- changes to how genes are regulated rather than to the genetic code itself -- that could raise the risk of developmental disorders and cancer.

The study, published today in Environmental Health Perspectives by researchers at Virginia Commonwealth University Massey Cancer Center, is the first to connect secondhand smoke during pregnancy with epigenetic modifications to disease-related genes measured at birth. The finding supports the idea that many adult diseases have their origins in environmental exposures -- such as stress, poor nutrition, pollution or tobacco smoke -- during early development.

"What we recommend to mothers in general is that no level of smoke exposure is safe," said study lead author Bernard Fuemmeler, Ph.D., M.P.H., associate director for population science and interim co-leader of the Cancer Prevention and Control program at VCU Massey Cancer Center. "Even low levels of smoke from secondhand exposure affect epigenetic marks in disease-related pathways. That doesn't mean everyone who is exposed will have a child with some disease outcome, but it contributes to a heightened risk."

Fuemmeler and colleagues analyzed data from 79 pregnant women enrolled in the Newborn Epigenetics Study (NEST) between 2005 and 2011. During the first trimester, all had a concentration of cotinine -- a nicotine byproduct -- in their blood consistent with low levels of smoke exposure, ranging from essentially none to levels consistent with secondhand smoke.

After these women gave birth, the researchers sampled the umbilical cord blood, which is the same blood that circulates through the fetus in utero, and performed what's referred to as an epigenome-wide association study (EWAS) to search for correlations between blood cotinine levels of the mothers during pregnancy and epigenetic patterns in the babies at birth.
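In general terms, an epigenome-wide association study tests each methylation site, one at a time, for association with the exposure while adjusting for covariates and correcting for the huge number of tests. The sketch below shows that generic recipe on simulated data with invented variable names; it is not the study's actual pipeline or covariate set.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.multitest import multipletests

def simple_ewas(methylation, cotinine, covariates):
    """Per-site linear regression of methylation (n_samples x n_sites) on
    log-transformed cotinine, adjusting for covariates; returns FDR-adjusted
    p-values and a boolean mask of sites passing a 5% false discovery rate."""
    X = sm.add_constant(np.column_stack([np.log1p(cotinine), covariates]))
    pvals = [sm.OLS(methylation[:, site], X).fit().pvalues[1]  # cotinine term
             for site in range(methylation.shape[1])]
    reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
    return p_adj, reject

# Invented demo data: 100 children, 500 CpG sites, 3 covariates (e.g. age, sex, batch).
rng = np.random.default_rng(1)
p_adj, hits = simple_ewas(rng.uniform(0, 1, (100, 500)),
                          rng.lognormal(0.0, 1.0, 100),
                          rng.normal(size=(100, 3)))
print("sites passing FDR 5%:", hits.sum())
```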

When cotinine levels were higher, the newborns were more likely to have epigenetic "marks" on genes that control the development of brain function, as well as genes related to diabetes and cancer.

These marks could mean either unusually many or unusually few molecules bound to the DNA strand, which affects how accessible a particular gene is. If a gene is bound up tightly by lots of marks, then it's harder for molecular machinery to access and less likely to be expressed. On the other hand, if a gene is relatively unencumbered, then it might be expressed at higher levels than normal. Tipping the scale in either direction could lead to disease.

To solidify their results, the team repeated the analysis in a separate sample of 115 women and found that changes to two of the same disease-related epigenetic regions -- one that regulates genes involved in inflammation and diabetes and another that regulates cardiovascular and nervous system functions -- were correlated with cotinine levels in the mothers.

In all cases, the analyses controlled for race, ethnicity, age, prior number of children and maternal education.

Read more at Science Daily

A complex link between body mass index and Alzheimer's

Though obesity in midlife is linked to an increased risk for Alzheimer's disease, new research suggests that a high body mass index later in life doesn't necessarily translate to greater chances of developing the brain disease.

In the study, researchers compared data from two groups of people who had been diagnosed with mild cognitive impairment -- half whose disease progressed to Alzheimer's in 24 months and half whose condition did not worsen.

The researchers zeroed in on two risk factors: body mass index (BMI) and a cluster of genetic variants associated with higher risk for Alzheimer's disease.

Their analysis showed that a higher genetic risk combined with a lower BMI was associated with a higher likelihood for progression to Alzheimer's, and that the association was strongest in men.

The finding does not suggest people should consider gaining weight in their later years as a preventive effort -- instead, researchers speculate that lower BMI in these patients was likely a consequence of neurodegeneration, the progressive damage to the brain that is a hallmark of Alzheimer's. Brain regions affected by Alzheimer's are also involved in controlling eating behaviors and weight regulation.

"We don't want people to think they can eat everything they want because of this lower BMI association," said senior study author Jasmeet Hayes, assistant professor of psychology at The Ohio State University.

"We know that maintaining a healthy weight and having a healthy diet are extremely important to keeping inflammation and oxidative stress down -- that's a risk factor that is modifiable, and it's something you can do to help improve your life and prevent neurodegenerative processes as much as possible," she said. "If you start to notice rapid weight loss in an older individual, that could actually be a reflection of a potential neurodegenerative disease process."

The study was published online recently in the Journals of Gerontology: Series A.

Previous research has found a link between obesity and negative cognitive outcomes, but in older adults closer to the age at which Alzheimer's disease is diagnosed, the results have been mixed, Hayes said. And though the gene variant known as APOE4 is the strongest single genetic risk factor for Alzheimer's, it explains only about 10 to 15% of overall risk, she said.

Hayes has focused her research program on looking at multiple risk factors at the same time to see how they might interact to influence risk -- and to identify health behaviors that may help reduce the risk.

"We're trying to add more and more factors. That is my goal, to one day build a more precise and better model of the different combinations of risk factors," said Hayes, also an investigator in Ohio State's Chronic Brain Injury Initiative. "Genetic risk is important, but it really explains only a small part of Alzheimer's disease, so we're really interested in looking at other factors that we can control."

For this study, the research team obtained data from the Alzheimer's Disease Neuroimaging Initiative, compiling a sample of 104 people for whom BMI and polygenic risk scores were available. Fifty-two individuals whose mild cognitive impairment (MCI) had progressed to Alzheimer's in 24 months were matched against demographically similar people whose MCI diagnosis did not change over two years. Their average age was 73.

Statistical analysis showed that individuals with mild cognitive impairment who had both a lower BMI and higher genetic risk for Alzheimer's were more likely to progress to Alzheimer's disease within 24 months compared to people with a higher BMI.

"We think there's interaction between the genetics and lower BMI, and having both of these risk factors causes more degeneration in certain brain regions to increase the likelihood of developing Alzheimer's disease," said Jena Moody, a graduate student in psychology at Ohio State and first author of the paper.

The effect of the BMI-genetic risk interaction was significant even after taking into account the presence of beta-amyloid and tau proteins in the patients' cerebrospinal fluid -- the core biomarkers of Alzheimer's disease.
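A generic way to probe the kind of interaction described here is a logistic regression of progression status on BMI, polygenic risk score and their product, with the cerebrospinal-fluid biomarkers included as covariates. The sketch below shows the form of such a model on simulated data; the column names, numbers and exact specification are illustrative assumptions, not the study's.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for the matched sample: 104 people with MCI,
# half of whom progressed to Alzheimer's within 24 months.
rng = np.random.default_rng(2)
n = 104
df = pd.DataFrame({
    "progressed": np.repeat([1, 0], n // 2),  # 1 = MCI progressed to AD in 24 months
    "bmi": rng.normal(27, 4, n),
    "prs": rng.normal(0, 1, n),               # polygenic risk score (standardized)
    "amyloid": rng.normal(0, 1, n),           # CSF beta-amyloid level (standardized)
    "tau": rng.normal(0, 1, n),               # CSF tau level (standardized)
})

# Logistic model with a BMI x polygenic-risk interaction, adjusted for the biomarkers.
model = smf.logit("progressed ~ bmi * prs + amyloid + tau", data=df).fit(disp=0)
print(model.summary().tables[1])  # the 'bmi:prs' row is the interaction of interest
```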

The relationship between low BMI and high genetic risk and progression to Alzheimer's was stronger in males than in females, but a larger sample size and additional biological data would be needed to expand on that finding, the researchers said.

Because brain changes can begin long before cognitive symptoms surface, a better understanding of the multiple risk factors for Alzheimer's could open the door to better prevention options, Moody said.

"If you can identify people at higher risk before symptoms manifest, you could implement interventions and prevention techniques to either slow or prevent that progression from happening altogether," she said.

To date, scientists have suggested that preventive steps include maintaining a healthy weight and diet and taking part in activities that reduce inflammation and support brain function, such as exercise and mentally stimulating pursuits.

"We're finding again and again how important inflammation is in the process," Hayes said. "Especially in midlife, trying to keep that inflammation down is such an important aspect of maintaining a healthy lifestyle and preventing accelerated aging."

Read more at Science Daily

Walking in their shoes: Using virtual reality to elicit empathy in healthcare providers

Research has shown empathy gives healthcare workers the ability to provide appropriate supports and make fewer mistakes. This helps increase patient satisfaction and enhance patient outcomes, resulting in better overall care. In an upcoming issue of the Journal of Medical Imaging and Radiation Sciences, published by Elsevier, multidisciplinary clinicians and researchers from Dalhousie University performed an integrative review to synthesize the findings regarding virtual reality (VR) as a pedagogical tool for eliciting empathetic behavior in medical radiation technologists (MRTs).

Informally, empathy is often described as the capacity to put oneself in the shoes of another. Empathy is essential to patient-centered care and crucial to the development of therapeutic relationships between carers (healthcare providers, healthcare students, and informal caregivers such as parents, spouses, friends, family, clergy, social workers, and fellow patients) and care recipients. Currently, there is a need for the development of effective tools and approaches that are standardizable, low-risk, safe-to-fail, easily repeatable, and could assist in eliciting empathetic behavior.

This research synthesis looked at studies of VR experiences ranging from a single eight-minute session to 20-25-minute sessions delivered on two separate days. Some used immersive VR environments in which participants assumed the role of a care recipient; others used non-immersive VR environments in which participants assumed the role of a care provider in a simulated care setting. Together, the two types of studies helped researchers understand what it is like to have a specific disease or need, and gave practice interacting with virtual care recipients.

"Although the studies we looked at don't definitively show VR can help sustain empathy behaviors over time, there is a lot of promise for research and future applications in this area," explained lead author Megan Brydon, MSc, BHSc, RTNM, IWK Health Centre, Halifax, Nova Scotia, Canada.

The authors conclude that VR may provide an effective and wide-ranging tool for learning care recipients' perspectives and that future studies should seek to determine which VR experiences are most effective in evoking empathetic behaviors. They recommend that these studies employ higher-order designs that are better able to control for biases.

From Science Daily

May 20, 2021

The 'Great Dying'

The Paleozoic era culminated 251.9 million years ago in the most severe mass extinction in the geologic record. Known as the "great dying," this event saw the loss of up to 96% of all marine species and around 70% of terrestrial species, including plants and insects.

The consensus view of scientists is that volcanic activity at the end of the Permian period, associated with the Siberian Traps Large Igneous Province, emitted massive quantities of greenhouse gases into the atmosphere over a short time interval. This caused a spike in global temperatures and a cascade of other deleterious environmental effects.

An international team of researchers from the United States, Sweden, and Australia studied sedimentary deposits in eastern Australia, which span the extinction event and provide a record of changing conditions along a coastal margin that was located in the high latitudes of the southern hemisphere. Here, the extinction event is evident as the abrupt disappearance of Glossopteris forest-mire ecosystems that had flourished in the region for millions of years. Data collected from eight sites in New South Wales and Queensland, Australia were combined with the results of climate models to assess the nature and pace of climate change before, during, and after the extinction event.

Results show that Glossopteris forest-mire ecosystems thrived through the final stages of the Permian period, a time when the climate in the region was gradually warming and becoming increasingly seasonal. The collapse of these lush environments was abrupt, coinciding with a rapid spike in temperatures recorded throughout the region. The post-extinction climate was 10-14°C warmer, and landscapes were no longer persistently wet, but results point to overall higher but more seasonal precipitation consistent with an intensification of a monsoonal climate regime in the high southern latitudes.

Because many areas of the globe experienced abrupt aridification in the wake of the "great dying," results suggest that high-southern latitudes may have served as important refugia for moisture-loving terrestrial groups.

The rate of present-day global warming rivals that experienced during the "great dying," but its signature varies regionally, with some areas of the planet experiencing rapid change while other areas remain relatively unaffected. The future effects of climate change on ecosystems will likely be severe. Thus, understanding global patterns of environmental change at the end of the Paleozoic can provide important insights as we navigate rapid climate change today.

From Science Daily

Alien radioactive element prompts creation rethink

The first-ever discovery of an extraterrestrial radioactive isotope on Earth has scientists rethinking the origins of the elements on our planet.

The tiny traces of plutonium-244 were found in ocean crust alongside radioactive iron-60. The two isotopes are evidence of violent cosmic events in the vicinity of Earth millions of years ago.

Star explosions, or supernovae, create many of the heavy elements in the periodic table, including those vital for human life, such as iron, potassium and iodine.

To form even heavier elements, such as gold, uranium and plutonium, it was thought that a more violent event may be needed, such as two neutron stars merging.

However, a study led by Professor Anton Wallner from The Australian National University (ANU) suggests a more complex picture.

"The story is complicated -- possibly this plutonium-244 was produced in supernova explosions or it could be left over from a much older, but even more spectacular event such as a neutron star detonation," lead author of the study, Professor Wallner said.

Any plutonium-244 and iron-60 that existed when the Earth formed from interstellar gas and dust over four billion years ago has long since decayed, so current traces of them must have originated from recent cosmic events in space.

The dating of the sample confirms two or more supernova explosions occurred near Earth.

"Our data could be the first evidence that supernovae do indeed produce plutonium-244," Professor Wallner said

"Or perhaps it was already in the interstellar medium before the supernova went off, and it was pushed across the solar system together with the supernova ejecta."

Professor Wallner also holds joint positions at the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) and Technical University Dresden in Germany, and conducted this work with researchers from Australia, Israel, Japan, Switzerland and Germany.

Read more at Science Daily

Unexpected 'Black Swan' defect discovered in soft matter

In new research, Texas A&M University scientists have for the first time revealed a single microscopic defect called a "twin" in a soft-block copolymer using an advanced electron microscopy technique. This defect may be exploited in the future to create materials with novel acoustic and photonic properties.

"This defect is like a black swan -- something special going on that isn't typical," said Dr. Edwin Thomas, professor in the Department of Materials Science and Engineering. "Although we chose a certain polymer for our study, I think the twin defect will be fairly universal across a bunch of similar soft matter systems, like oils, surfactants, biological materials and natural polymers. Therefore, our findings will be valuable to diverse research across the soft matter field."

The results of the study are detailed in the Proceedings of the National Academy of Sciences (PNAS).

Materials can be broadly classified as hard or soft matter. Hard materials, like metal alloys and ceramics, generally have a very regular and symmetric arrangement of atoms. Further, in hard matter, ordered groups of atoms arrange themselves into nanoscopic building blocks, called unit cells. Typically, these unit cells contain only a few atoms and stack together to form the periodic crystal. Soft matter can also form crystals consisting of unit cells, but the periodic pattern is not at the atomic level; it occurs at a much larger scale, from assemblies of large molecules.

In particular, for an A-B diblock copolymer, a type of soft matter, the periodic molecular motif comprises two linked chains: one chain of A units and one chain of B units. Each chain, called a block, has thousands of units linked together, and a soft crystal forms by the selective aggregation of the A units into one set of domains and the B units into another, producing unit cells that are huge compared with those of hard matter.

Another notable difference between soft and hard crystals is that structural defects have been much more extensively studied in hard matter. These imperfections can occur at a single atomic location within material, called a point defect. For example, point defects in the periodic arrangement of carbon atoms in a diamond due to nitrogen impurities create the exquisite "canary" yellow diamond. In addition, imperfections in crystals can be elongated as a line defect or spread across an area as a surface defect.

By and large, defects within hard materials have been extensively investigated using advanced electron imaging techniques. But to locate and identify defects in their block copolymer soft crystals, Thomas and his colleagues used a new technique called slice-and-view scanning electron microscopy. This method let the researchers use a fine ion beam to trim off a very thin slice of the soft material and an electron beam to image the surface below the slice -- then slice again, image again, over and over. The slices were then digitally stacked together to get a 3D view.
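The reconstruction half of slice-and-view imaging boils down to stacking the sequence of 2D images into a 3D array (plus drift correction between slices, omitted here). A minimal sketch of that stacking step, assuming the slice images have already been acquired and saved as equally spaced image files with hypothetical filenames:

```python
import glob
import numpy as np
from PIL import Image

def stack_slices(pattern="slices/slice_*.tif"):
    """Load a series of equally spaced slice images and stack them into a 3D volume.

    Axis 0 is the slicing (depth) direction; axes 1 and 2 are the image plane.
    Real slice-and-view data would also need drift correction and registration
    between slices, which is omitted here.
    """
    files = sorted(glob.glob(pattern))
    slices = [np.asarray(Image.open(f), dtype=np.float32) for f in files]
    return np.stack(slices, axis=0)

# volume = stack_slices()
# print(volume.shape)  # (n_slices, height, width)
```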

For their analysis, they investigated a diblock copolymer made of a polystyrene block and a polydimethylsiloxane block. At the microscopic level, a unit cell of this material exhibits a spatial pattern of the so-called "double gyroid" shape, a complex, periodic structure consisting of two intertwined molecular networks of which one has a left-handed rotation and the other, a right-handed rotation.

While the researchers were not actively looking for any particular defect in the material, the advanced imaging technique uncovered a surface defect, called a twin boundary. At either side of the twin juncture, the molecular networks abruptly transformed their handedness.

"I like to call this defect a topological mirror, and it's a really neat effect," said Thomas. "When you have a twin boundary, it's like looking at a reflection into a mirror, as each network crosses the boundary, the networks switch handedness, right becomes left and vice versa."

Thomas added that a twin boundary in a periodic structure that does not by itself have any inherent mirror symmetry could give rise to novel optical and acoustic properties, opening new doors in materials engineering and technology.

Read more at Science Daily

A safer, greener way to make solar cells: Toxic solvent replaced

Scientists at SPECIFIC Innovation and Knowledge Centre, Swansea University, have found a way to replace the toxic, unsustainable solvents currently needed to make the next generation of solar technology.

Printed carbon perovskite solar cells have been described as a likely front runner to the market because they are extremely efficient at converting light to electricity, cheap and easy to make.

A major barrier to the large-scale manufacture and commercialisation of these cells is the solvents used to control crystallisation of the perovskite during fabrication: this is because they are made from unsustainable materials and are banned in many countries due to their toxicity and psychoactive effects.

SPECIFIC's researchers have discovered that a non-toxic biodegradable solvent called γ-Valerolactone (GVL) could replace these solvents without impacting cell performance.

GVL's list of advantages could improve the commercial viability of carbon perovskite solar devices:

  • It is made from sustainable feedstocks
  • There are no legal issues in its use around the world
  • It is suitable for use in large-scale manufacturing processes
  • It is non-toxic and biodegradable


Carys Worsley, who led the research as part of her doctorate, said:

"To be truly environmentally sustainable, the way that solar cells are made must be as green as the energy they produce. As the next generation of solar technologies approaches commercial viability, research to reduce the environmental impact of large-scale production will become increasingly important."

Professor Trystan Watson, research group leader, added:

"Many problems need to be resolved before these technologies become a commercial reality. This solvent problem was a major barrier, not only restricting large-scale manufacture but holding back research in countries where the solvents are banned.

We hope our discovery will enable countries that have previously been unable to participate in this research to become part of the community and accelerate the development of cleaner, greener energy."

Read more at Science Daily

May 19, 2021

Did Earth's early rise in oxygen help multicellular life evolve?

Scientists have long thought that there was a direct connection between the rise in atmospheric oxygen, which started with the Great Oxygenation Event 2.5 billion years ago, and the rise of large, complex multicellular organisms.

That theory, the "Oxygen Control Hypothesis," suggests that the size of these early multicellular organisms was limited by the depth to which oxygen could diffuse into their bodies. The hypothesis makes a simple prediction that has been highly influential within both evolutionary biology and geosciences: Greater atmospheric oxygen should always increase the size to which multicellular organisms can grow.
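That prediction can be made concrete with a textbook diffusion estimate: for a spherical organism that consumes oxygen uniformly and receives it only by diffusion from its surface, the centre runs out of oxygen once the radius exceeds roughly sqrt(6·D·C0/Q), where D is the oxygen diffusivity, C0 the oxygen concentration at the surface and Q the consumption rate per unit volume. The sketch below evaluates that estimate with illustrative values; all numbers are assumptions, not figures from the study.

```python
import math

def max_diffusive_radius(D, C0, Q):
    """Largest radius (in meters) of a sphere whose centre stays oxygenated when
    oxygen arrives only by diffusion from the surface: R_max = sqrt(6*D*C0/Q),
    obtained by solving D * laplacian(C) = Q with C = C0 at the surface."""
    return math.sqrt(6 * D * C0 / Q)

# Illustrative values (all assumed): O2 diffusivity in water, an air-saturated
# surface concentration, and a modest volumetric oxygen consumption rate.
D = 2e-9    # m^2/s
C0 = 0.25   # mol/m^3 (roughly air-saturated water)
Q = 1e-2    # mol/(m^3 s)
print(f"Diffusion-limited radius ~ {max_diffusive_radius(D, C0, Q) * 1e3:.2f} mm")
# Lowering C0 (less atmospheric oxygen) shrinks the maximum size -- the intuition
# behind the Oxygen Control Hypothesis.
```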

It's a hypothesis that's proven difficult to test in a lab. Yet a team of Georgia Tech researchers found a way -- using directed evolution, synthetic biology, and mathematical modeling -- all brought to bear on a simple multicellular lifeform called a 'snowflake yeast'. The results? Significant new information on the correlations between oxygenation of the early Earth and the rise of large multicellular organisms -- and it's all about exactly how much O2 was available to some of our earliest multicellular ancestors.

"The positive effect of oxygen on the evolution of multicellularity is entirely dose-dependent -- our planet's first oxygenation would have strongly constrained, not promoted, the evolution of multicellular life," explains G. Ozan Bozdag, research scientist in the School of Biological Sciences and the study's lead author. "The positive effect of oxygen on multicellular size may only be realized when it reaches high levels."

"Oxygen suppression of macroscopic multicellularity" is published in the May 14, 2021 edition of the journal Nature Communications. Bozdag's co-authors on the paper include Georgia Tech researchers Will Ratcliff, associate professor in the School of Biological Sciences; Chris Reinhard, associate professor in the School of Earth and Atmospheric Sciences; Rozenn Pineau, Ph.D. student in the School of Biological Sciences and the Interdisciplinary Graduate Program in Quantitative Biosciences (QBioS); along with Eric Libby, assistant professor at Umea University in Sweden and the Santa Fe Institute in New Mexico.

Directing yeast to evolve in record time

"We show that the effect of oxygen is more complex than previously imagined. The early rise in global oxygen should in fact strongly constrain the evolution of macroscopic multicellularity, rather than selecting for larger and more complex organisms," notes Ratcliff.

"People have long believed that the oxygenation of Earth's surface was helpful -- some going so far as to say it is a precondition -- for the evolution of large, complex multicellular organisms," he adds. "But nobody has ever tested this directly, because we haven't had a model system that is both able to undergo lots of generations of evolution quickly, and able to grow over the full range of oxygen conditions," from anaerobic conditions up to modern levels.

The researchers were able to do that, however, with snowflake yeast, simple multicellular organisms capable of rapid evolutionary change. By varying their growth environment, they evolved snowflake yeast for over 800 generations in the lab with selection for larger size.

The results surprised Bozdag. "I was astonished to see that multicellular yeast doubled their size very rapidly when they could not use oxygen, while populations that evolved in the moderately oxygenated environment showed no size increase at all," he says. "This effect is robust -- even over much longer timescales."

Size -- and oxygen levels -- matter for multicellular growth

In the team's research, "large size easily evolved either when our yeast had no oxygen or plenty of it, but not when oxygen was present at low levels," Ratcliff says. "We did a lot more work to show that this is actually a totally predictable and understandable outcome of the fact that oxygen, when limiting, acts as a resource -- if cells can access it, they get a big metabolic benefit. When oxygen is scarce, it can't diffuse very far into organisms, so there is an evolutionary incentive for multicellular organisms to be small -- allowing most of their cells access to oxygen -- a constraint that is not there when oxygen simply isn't present, or when there's enough of it around to diffuse more deeply into tissues."

Ratcliff says not only does his group's work challenge the Oxygen Control Hypothesis, it also helps science understand why so little apparent evolutionary innovation was happening in the world of multicellular organisms in the billion years after the Great Oxygenation Event. Ratcliff explains that geologists call this period the "Boring Billion" in Earth's history -- also known as the Dullest Time in Earth's History, and Earth's Middle Ages -- a period when oxygen was present in the atmosphere, but at low levels, and multicellular organisms stayed relatively small and simple.

Read more at Science Daily

What happens in the brain when we imagine the future?

In quiet moments, the brain likes to wander -- to the events of tomorrow, an unpaid bill, an upcoming vacation.

Despite little external stimulation in these instances, a part of the brain called the default mode network (DMN) is hard at work. "These regions seem to be active when people aren't asked to do anything in particular, as opposed to being asked to do something cognitively," says Penn neuroscientist Joseph Kable.

Though the field has long suspected that this neural network plays a role in imagining the future, precisely how it works hadn't been fully understood. Now, research from Kable and two former graduate students in his lab, Trishala Parthasarathi, associate director of scientific services at OrtleyBio, and Sangil Lee, a postdoc at University of California, Berkeley, sheds light on the matter.

In a paper published in the Journal of Neuroscience, the research team discovered that, when it comes to imagining the future, the default mode network actually splits into two complementary parts. One helps create and predict the imagined event, what the researchers call the "constructive" function. The other assesses whether that newly constructed event is positive or negative, what they call the "evaluative" function.

"It's a neat division," says Kable. "When psychologists talk about why humans have the ability to imagine the future, usually it's so we can decide what to do, plan, make decisions. But a critical function is the evaluative function; it's not just about coming up with a possibility but also evaluating it as good or bad."

Building on previous work

The DMN itself includes the ventromedial prefrontal cortex, posterior cingulate cortex, and regions in the medial temporal and parietal lobes, such as the hippocampus. It's aptly named, Kable says. "When you put people into a brain scanner and ask them to not do anything, to just sit there, these are the brain regions that seem to be active," he says.

Previous research had revealed which areas make up the DMN and that constructing and evaluating imagined events activates different components. Kable wanted to test that idea further, to better pinpoint the implicated regions and what's happening in each.

To do so, he and his team created a study in which 13 females and 11 males received prompts while in a functional magnetic resonance imaging (fMRI) machine. Participants had seven seconds to read one of 32 cues such as, "Imagine you're sitting on a warm beach on a tropical island," or "Imagine you win the lottery next year." They then had 12 seconds to think about the scenario, followed by 14 seconds to rate vividness and valence.

"Vividness is the degree to which the image that comes to mind has a lot of details and how much those details subjectively pop as opposed to being vague," Kable says. "Valence is an emotional evaluation. How positive or negative is the event? Is this something you want to have happen or not?"

Participants went through the process four times. Each time, the Penn researchers watched brain activity from the fMRI. The work confirmed two sub-networks at play.

"One network, which we'll call the dorsal default mode network, was influenced by valence. In other words, it was more active for positive events than for negative events, but it was not influenced at all by vividness. It seems to be involved in the evaluative function," Kable says.

The other sub-network, the ventral default mode network, was more active for highly vivid events than for events with no detail. "But it wasn't influenced by valence," he says. "It was equally active for both positive and negative events, showing that network really is involved in the construction piece of imagination."

Next steps

According to Kable, the findings offer a first step toward understanding the basis of imaginative abilities. This research asked participants to evaluate the positivity or negativity of an imagined event, but more complex assessments -- moving beyond the simple good-versus-bad dimension, for instance -- might offer further clues about this neural process.

That kind of analysis will likely comprise future work for the Kable lab, which has already begun using these findings to parse why people don't value future outcomes as much as immediate outcomes.

"One theory is that the future isn't as vivid, isn't as tangible and detailed and concrete as something right in front of your face," he says. "We've started to use our identification of the sub-network involved in construction to ask the question, how active is this network when people are thinking about future outcomes compared to the same outcome in the present."

And although the research was completed before COVID-19, Kable sees pandemic-related implications for these findings. "If you described what someone's life was going to be like to them before the pandemic hit -- you're going to work from home and wear a mask every time you go outside and not engage in any social contact -- it would blow their mind. And yet, once we have the actual experiences, it's no longer so strange. For me, this demonstrates that we still have far to go in understanding our imaginative capabilities."

Read more at Science Daily

New framework incorporating renewables and flexible carbon capture

As the global energy demand continues to grow along with atmospheric levels of carbon dioxide (CO2), there has been a major push to adopt more sustainable and more carbon-neutral energy sources. Solar/wind power and CO2 capture -- the process of capturing waste CO2 so it is not introduced into the atmosphere -- are two promising pathways for decarbonization, but both have significant drawbacks.

Solar and wind power is intermittent and cannot be deployed everywhere; CO2 capture processes are incredibly energy-intensive. Both of these pathways have benefits, but neither on its own presents a viable strategy at the moment. However, a research team led by Dr. Faruque Hasan, Kim Tompkins McDivitt '88 and Phillip McDivitt '87 Faculty Fellow and associate professor in the Artie McFerrin Department of Chemical Engineering at Texas A&M University, has uncovered a way to combine the two processes so that each makes the other more efficient.

Much of Hasan's research deals with synergy and synergistic effects in complex systems. Synergy is the combined effect of cooperative interactions between two or more organizations, substances or other agents that is greater than the sum of their separate effects. To this end, Hasan examined the synergistic integration of renewables and flexible carbon capture with individual fossil power plants.

"We are addressing three things that each have pros and cons: fossil fuels are cheap, but they release a lot of CO2; CO2 capture is very beneficial for the environment, but it is prohibitively expensive; renewable energy sources such as wind or solar power are good for the environment, but the energy output is intermittent and variable," Hasan said.

While each area presents significant challenges individually, Hasan and his research team have found a significant benefit when all the components are used in tandem. In a research paper published in Energy & Environmental Science, Hasan and his doctoral students Manali Zantye and Akhil Arora examined the use of synergistic integration of renewables and flexible carbon capture and found a significant benefit to efficiency and cost reduction.

"Despite the growing interest in sustainable renewable energy sources, their intermittent availability would make it difficult to completely replace the dispatchable fossil-based energy generators in the near future," said Zantye, who is the first author of the paper.

CO2 capture is an energy-intensive process. Normally, this process runs alongside standard energy generation at power plants. As energy is generally priced on a demand basis, the use of CO2 capture processes during peak energy demand can quickly drive up operational costs to an unsustainable level. In this research, Hasan also found that utilizing a flexible CO2 capture system can greatly offset operational costs.

Normally, CO2 is captured into a large solvent tank and then removed in an energy-intensive process. In a flexible system, rather than removing the CO2 as it is introduced to the solvent, it can be stored for short periods of time and removed at non-peak times when the cost of power is lower. Further, by incorporating a renewable energy source, the cost of CO2 capture is offset even more.
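The scheduling logic can be illustrated with a toy calculation: if the solvent tank lets the plant defer regeneration, the energy-hungry step is simply shifted into the cheapest hours of the day. The sketch below does this greedily with an invented hourly price profile; the paper's framework solves a much richer dynamic optimization than this.

```python
import numpy as np

def greedy_regeneration_schedule(prices, hours_needed):
    """Pick the cheapest hours of the day to run solvent regeneration.

    prices: hourly electricity prices (e.g. $/MWh) for a 24-hour day
    hours_needed: hours of regeneration that must run some time during the day
    Returns the scheduled-hour mask plus the cost of the off-peak schedule and
    the cost of running during the most expensive hours instead, for comparison.
    """
    cheapest = np.argsort(prices)[:hours_needed]
    schedule = np.zeros(len(prices), dtype=bool)
    schedule[cheapest] = True
    off_peak_cost = prices[schedule].sum()
    peak_cost = np.sort(prices)[-hours_needed:].sum()
    return schedule, off_peak_cost, peak_cost

# Invented price profile: cheap overnight, expensive during the evening peak.
prices = np.array([20] * 6 + [35] * 6 + [50] * 6 + [80] * 4 + [35] * 2, dtype=float)
schedule, off_peak, peak = greedy_regeneration_schedule(prices, hours_needed=8)
print("Regenerate during hours:", np.flatnonzero(schedule))
print(f"Cost {off_peak:.0f} vs {peak:.0f} at peak ({100 * (1 - off_peak / peak):.0f}% saving)")
```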

According to Hasan, the synergistic framework presented in the research can dramatically improve the system beyond the component parts. "We have developed a computational framework to utilize dynamic operational schedules to manage all these very complex decisions," he said. "Developing carbon capture technology is very important, but equally important is how you integrate them. The operational aspect of integration is very important. Our study shows that this can be done in such a way that renewables, fossil fuels and carbon capture are all working together."

Read more at Science Daily

Closer to gene therapy that would restore hearing for the congenitally deaf

Researchers at Oregon State University have found a key new piece of the puzzle in the quest to use gene therapy to enable people born deaf to hear.

The work centers around a large gene responsible for an inner-ear protein, otoferlin. Mutations in otoferlin are linked to severe congenital hearing loss, a common type of deafness in which patients can hear almost nothing.

"For a long time otoferlin seemed to be a one-trick pony of a protein," said Colin Johnson, associate professor of biochemistry and biophysics in the OSU College of Science. "A lot of genes will find various things to do, but the otoferlin gene had appeared only to have one purpose and that was to encode sound in the sensory hair cells in the inner ear. Small mutations in otoferlin render people profoundly deaf."

In its regular form, the otoferlin gene is too big to package into a delivery vehicle for molecular therapy, so Johnson's team is looking at using a truncated version instead.

Research led by graduate student Aayushi Manchanda showed the shortened version needs to include a part of the gene known as the transmembrane domain, and one of the reasons for that was unexpected: Without the transmembrane domain, the sensory cells were slow to mature.

"That was surprising since otoferlin was known to help encode hearing information but had not been thought to be involved in sensory cell development," Johnson said.

Findings were published today in Molecular Biology of the Cell.

Scientists in Johnson's lab have been working for years with the otoferlin molecule and in 2017 they identified a truncated form of the gene that can function in the encoding of sound.

To test whether the transmembrane domain of otoferlin needed to be part of the shortened version of the gene, Manchanda introduced a mutation that truncated the transmembrane domain in zebrafish.

Zebrafish, a small freshwater species that go from a cell to a swimming fish in about five days, share a remarkable similarity to humans at the molecular, genetic and cellular levels, meaning many zebrafish findings are immediately relevant to humans. Embryonic zebrafish are transparent and can be easily maintained in small amounts of water.

"The transmembrane domain tethers otoferlin to the cell membrane and intracellular vesicles but it was not clear if this was essential and had to be included in a shortened form of otoferlin," Manchanda said. "We found that the loss of the transmembrane domain results in the sensory hair cells producing less otoferlin as well as deficits in hair cell activity. The mutation also caused a delay in the maturation of the sensory cells, which was a surprise. Overall the results argue that the transmembrane domain must be included in any gene therapy construct."

At the molecular level, Manchanda found that a lack of transmembrane domain led to otoferlin failing to properly link the synaptic vesicles filled with neurotransmitter to the cell membrane, causing less neurotransmitter to be released.

Read more at Science Daily

May 18, 2021

New evidence of how and when the Milky Way came together

New research provides the best evidence to date into the timing of how our early Milky Way came together, including the merger with a key satellite galaxy.

Using relatively new methods in astronomy, the researchers were able to identify the most precise ages currently possible for a sample of about a hundred red giant stars in the galaxy.

With this and other data, the researchers were able to show what was happening when the Milky Way merged with an orbiting satellite galaxy, known as Gaia-Enceladus, about 10 billion years ago.

Their results were published today (May 17, 2021) in the journal Nature Astronomy.

"Our evidence suggests that when the merger occurred, the Milky Way had already formed a large population of its own stars," said Fiorenzo Vincenzo, co-author of the study and a fellow in The Ohio State University's Center for Cosmology and Astroparticle Physics.

Many of those "homemade" stars ended up in the thick disc in the middle of the galaxy, while most that were captured from Gaia-Enceladus are in the outer halo of the galaxy.

"The merging event with Gaia-Enceladus is thought to be one of the most important in the Milky Way's history, shaping how we observe it today," said Josefina Montalban, with the School of Physics and Astronomy at the University of Birmingham in the U.K., who led the project.

By calculating the age of the stars, the researchers were able to determine, for the first time, that the stars captured from Gaia-Enceladus have similar or slightly younger ages compared to the majority of stars that were born inside the Milky Way.

A violent merger between two galaxies can't help but shake things up, Vincenzo said. Results showed that the merger changed the orbits of the stars already in the galaxy, making them more eccentric.

Vincenzo compared the stars' movements to a dance, where the stars from the former Gaia-Enceladus move differently than those born within the Milky Way. The stars even "dress" differently, Vincenzo said, with stars from outside showing different chemical compositions from those born inside the Milky Way.

The researchers used several different approaches and data sources to conduct their study.

One way the researchers were able to get such precise ages of the stars was through the use of asteroseismology, a relatively new field that probes the internal structure of stars.

Asteroseismologists study oscillations in stars, which are sound waves that ripple through their interiors, said Mathieu Vrard, a postdoctoral research associate in Ohio State's Department of Astronomy.

"That allows us to get very precise ages for the stars, which are important in determining the chronology of when events happened in the early Milky Way," Vrard said.

The study also used a spectroscopic survey, called APOGEE, which provides the chemical composition of stars -- another aid in determining their ages.

"We have shown the great potential of asteroseismology, in combination with spectroscopy, to age-date individual stars," Montalban said.

This study is just the first step, according to the researchers.

"We now intend to apply this approach to larger samples of stars, and to include even more subtle features of the frequency spectra," Vincenzo said.

"This will eventually lead to a much sharper view of the Milky Way's assembly history and evolution, creating a timeline of how our galaxy developed."

Read more at Science Daily

Stunning simulation of stars being born is most realistic ever

A team including Northwestern University astrophysicists has developed the most realistic, highest-resolution 3D simulation of star formation to date. The result is a visually stunning, mathematically driven marvel that allows viewers to float around a colorful gas cloud in 3D space while watching twinkling stars emerge.

Called STARFORGE (Star Formation in Gaseous Environments), the computational framework is the first to simulate an entire gas cloud -- 100 times more massive than previously possible and full of vibrant colors -- where stars are born.

It also is the first simulation to simultaneously model star formation, evolution and dynamics while accounting for stellar feedback, including jets, radiation, wind and nearby supernova activity. While other simulations have incorporated individual types of stellar feedback, STARFORGE puts them all together to simulate how these various processes interact to affect star formation.

Using this beautiful virtual laboratory, the researchers aim to explore longstanding questions, including why star formation is slow and inefficient, what determines a star's mass and why stars tend to form in clusters.

The researchers have already used STARFORGE to discover that protostellar jets -- high-speed streams of gas that accompany star formation -- play a vital role in determining a star's mass. By calculating a star's exact mass, researchers can then determine its brightness and internal mechanisms as well as make better predictions about its death.

Newly accepted by the Monthly Notices of the Royal Astronomical Society, an advance copy of the manuscript detailing the research behind the new model appeared online today. An accompanying paper, describing how jets influence star formation, was published in the same journal in February 2021.

"People have been simulating star formation for a couple decades now, but STARFORGE is a quantum leap in technology," said Northwestern's Michael Grudi?, who co-led the work. "Other models have only been able to simulate a tiny patch of the cloud where stars form -- not the entire cloud in high resolution. Without seeing the big picture, we miss a lot of factors that might influence the star's outcome."

"How stars form is very much a central question in astrophysics," said Northwestern's Claude-André Faucher-Giguère, a senior author on the study. "It's been a very challenging question to explore because of the range of physical processes involved. This new simulation will help us directly address fundamental questions we could not definitively answer before."

Grudić is a postdoctoral fellow at Northwestern's Center for Interdisciplinary Exploration and Research in Astrophysics (CIERA). Faucher-Giguère is an associate professor of physics and astronomy at Northwestern's Weinberg College of Arts and Sciences and a member of CIERA. Grudić co-led the work with Dávid Guszejnov, a postdoctoral fellow at the University of Texas at Austin.

From start to finish, star formation takes tens of millions of years. So even as astronomers observe the night sky to catch a glimpse of the process, they can only view a brief snapshot.

"When we observe stars forming in any given region, all we see are star formation sites frozen in time," Grudi? said. "Stars also form in clouds of dust, so they are mostly hidden."

For astrophysicists to view the full, dynamic process of star formation, they must rely on simulations. To develop STARFORGE, the team incorporated computational code for multiple phenomena in physics, including gas dynamics, magnetic fields, gravity, heating and cooling and stellar feedback processes. Sometimes taking a full three months to run one simulation, the model requires one of the largest supercomputers in the world, a facility supported by the National Science Foundation and operated by the Texas Advanced Computing Center.

The resulting simulation shows a mass of gas -- tens to millions of times the mass of the sun -- floating in the galaxy. As the gas cloud evolves, it forms structures that collapse and break into pieces, which eventually form individual stars. Once the stars form, they launch jets of gas outward from both poles, piercing through the surrounding cloud. The process ends when there is no gas left to form any more stars.

Already, STARFORGE has helped the team discover a crucial new insight into star formation. When the researchers ran the simulation without accounting for jets, the stars ended up much too large -- 10 times the mass of the sun. After adding jets to the simulation, the stars' masses became much more realistic -- less than half the mass of the sun.

"Jets disrupt the inflow of gas toward the star," Grudi? said. "They essentially blow away gas that would have ended up in the star and increased its mass. People have suspected this might be happening, but, by simulating the entire system, we have a robust understanding of how it works."

Beyond understanding more about stars, Grudić and Faucher-Giguère believe STARFORGE can help us learn more about the universe and even ourselves.

"Understanding galaxy formation hinges on assumptions about star formation," Grudi? said. "If we can understand star formation, then we can understand galaxy formation. And by understanding galaxy formation, we can understand more about what the universe is made of. Understanding where we come from and how we're situated in the universe ultimately hinges on understanding the origins of stars."

Read more at Science Daily

Supermassive black holes devour gas just like their petite counterparts

On Sept. 9, 2018, astronomers spotted a flash from a galaxy 860 million light years away. The source was a supermassive black hole about 50 million times the mass of the sun. Normally quiet, the gravitational giant suddenly awoke to devour a passing star in a rare instance known as a tidal disruption event. As the stellar debris fell toward the black hole, it released an enormous amount of energy in the form of light.

Researchers at MIT, the European Southern Observatory, and elsewhere used multiple telescopes to keep watch on the event, labeled AT2018fyk. To their surprise, they observed that as the supermassive black hole consumed the star, it exhibited properties similar to those of much smaller, stellar-mass black holes.

The results, published today in the Astrophysical Journal, suggest that accretion, or the way black holes evolve as they consume material, is independent of their size.

"We've demonstrated that, if you've seen one black hole, you've seen them all, in a sense," says study author Dheeraj "DJ" Pasham, a research scientist in MIT's Kavli Institute for Astrophysics and Space Research. "When you throw a ball of gas at them, they all seem to do more or less the same thing. They're the same beast in terms of their accretion."

Pasham's co-authors include principal research scientist Ronald Remillard and former graduate student Anirudh Chiti at MIT, along with researchers at the European Southern Observatory, Cambridge University, Leiden University, New York University, the University of Maryland, Curtin University, the University of Amsterdam, and the NASA Goddard Space Flight Center.

A stellar wake-up

When small stellar-mass black holes with a mass about 10 times that of our sun emit a burst of light, it's often in response to an influx of material from a companion star. This outburst of radiation sets off a specific evolution of the region around the black hole. From quiescence, a black hole transitions into a "soft" phase dominated by an accretion disk as stellar material is pulled into the black hole. As the amount of material influx drops, it transitions again to a "hard" phase where a white-hot corona takes over. The black hole eventually settles back into a steady quiescence, and this entire accretion cycle can last a few weeks to months.

Physicists have observed this characteristic accretion cycle in multiple stellar-mass black holes for several decades. But for supermassive black holes, it was thought that this process would take too long to capture entirely, as these goliaths are normally grazers, feeding slowly on gas in the central regions of a galaxy.

"This process normally happens on timescales of thousands of years in supermassive black holes," Pasham says. "Humans cannot wait that long to capture something like this."

But this entire process speeds up when a black hole experiences a sudden, huge influx of material, such as during a tidal disruption event, when a star comes close enough that a black hole can tidally rip it to shreds.

"In a tidal disruption event, everything is abrupt," Pasham says. "You have a sudden chunk of gas being thrown at you, and the black hole is suddenly woken up, and it's like, 'whoa, there's so much food -- let me just eat, eat, eat until it's gone.' So, it experiences everything in a short timespan. That allows us to probe all these different accretion stages that people have known in stellar-mass black holes."

A supermassive cycle

In September 2018, the All-Sky Automated Survey for Supernovae (ASAS-SN) picked up signals of a sudden flare. Scientists subsequently determined that the flare was the result of a tidal disruption event involving a supermassive black hole, which they labeled TDE AT2018fyk. Wevers, Pasham, and their colleagues jumped at the alert and were able to steer multiple telescopes, each trained to map different bands of the ultraviolet and X-ray spectrum, toward the system.

The team collected data over two years, using X-ray space telescopes XMM-Newton and the Chandra X-Ray Observatory, as well as NICER, the X-ray-monitoring instrument aboard the International Space Station, and the Swift Observatory, along with radio telescopes in Australia.

"We caught the black hole in the soft state with an accretion disk forming, and most of the emission in ultraviolet, with very few in the X-ray," Pasham says. "Then the disk collapses, the corona gets stronger, and now it's very bright in X-rays. Eventually there's not much gas to feed on, and the overall luminosity drops and goes back to undetectable levels."

The researchers estimate that the black hole tidally disrupted a star about the size of our sun. In the process, it generated an enormous accretion disk, about 12 billion kilometers wide, and emitted gas that they estimated to be about 40,000 Kelvin, or more than 70,000 degrees Fahrenheit. As the disk became weaker and less bright, a corona of compact, high-energy X-rays took over as the dominant phase around the black hole before eventually fading away.
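
As a rough sanity check on these scales (not figures from the study), the standard order-of-magnitude estimate for where a star is torn apart is the tidal radius, r_t ≈ R_star × (M_BH / M_star)^(1/3), which can be compared with the black hole's Schwarzschild radius. The sketch below assumes a sun-like star and the roughly 50-million-solar-mass black hole quoted above.

    # Back-of-the-envelope estimate only; these are not numbers from the paper.
    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    C = 2.998e8          # speed of light, m/s
    M_SUN = 1.989e30     # kg
    R_SUN = 6.957e8      # m

    m_bh = 5.0e7 * M_SUN           # assumed ~50 million solar masses
    m_star, r_star = M_SUN, R_SUN  # assumed sun-like star

    r_tidal = r_star * (m_bh / m_star) ** (1.0 / 3.0)  # where tides overwhelm self-gravity
    r_schwarz = 2.0 * G * m_bh / C**2                  # event-horizon scale

    print(f"tidal radius         ~ {r_tidal / 1e9:.0f} million km")
    print(f"Schwarzschild radius ~ {r_schwarz / 1e9:.0f} million km")

On these assumptions the star is shredded within about two Schwarzschild radii of the black hole, while the quoted 12-billion-kilometer disk is dozens of times larger than the tidal radius, consistent with the debris spreading well beyond the point of disruption.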

"People have known this cycle to happen in stellar-mass black holes, which are only about 10 solar masses. Now we are seeing this in something 5 million times bigger," Pasham says.

"The most exciting prospect for the future is that such tidal disruption events provide a window into the formation of complex structures very close to the supermassive black hole such as the accretion disk and the corona," says lead author Thomas Wevers, a fellow at the European Southern Observatory. "Studying how these structures form and interact in the extreme environment following the destruction of a star, we can hopefully start to better understand the fundamental physical laws that govern their existence."

In addition to showing that black holes experience accretion in the same way, regardless of their size, the results represent only the second time that scientists have captured the formation of a corona from beginning to end.

"A corona is a very mysterious entity, and in the case of supermassive black holes, people have studied established coronas but don't know when or how they formed," Pasham says. "We've demonstrated you can use tidal disruption events to capture corona formation. I'm excited about using these events in the future to figure out what exactly is the corona."

Read more at Science Daily

Engineers harvest WiFi signals to power small electronics

With the rise of the digital age, the number of WiFi sources used to transmit information wirelessly between devices has grown exponentially. This results in the widespread use of the 2.4GHz radio frequency that WiFi uses, with excess signals available to be tapped for alternative uses.

To harness this under-utilised source of energy, a research team from the National University of Singapore (NUS) and Japan's Tohoku University (TU) has developed a technology that uses tiny smart devices known as spin-torque oscillators (STOs) to harvest and convert wireless radio frequencies into energy to power small electronics. In their study, the researchers successfully harvested energy using WiFi-band signals to power a light-emitting diode (LED) wirelessly, without using any battery.

"We are surrounded by WiFi signals, but when we are not using them to access the Internet, they are inactive, and this is a huge waste. Our latest result is a step towards turning readily-available 2.4GHz radio waves into a green source of energy, hence reducing the need for batteries to power electronics that we use regularly. In this way, small electric gadgets and sensors can be powered wirelessly by using radio frequency waves as part of the Internet of Things. With the advent of smart homes and cities, our work could give rise to energy-efficient applications in communication, computing, and neuromorphic systems," said Professor Yang Hyunsoo from the NUS Department of Electrical and Computer Engineering, who spearheaded the project.

The research was carried out in collaboration with the research team of Professor Guo Yong Xin, who is also from the NUS Department of Electrical and Computer Engineering, as well as Professor Shunsuke Fukami and his team from TU. The results were published in Nature Communications on 18 May 2021.

Converting WiFi signals into usable energy

Spin-torque oscillators are a class of emerging devices that generate microwaves and have applications in wireless communication systems. However, their application is hindered by low output power and broad linewidth.

While mutual synchronisation of multiple STOs is a way to overcome this problem, current schemes, such as short-range magnetic coupling between multiple STOs, have spatial restrictions. On the other hand, long-range electrical synchronisation using vortex oscillators is limited to frequency responses of only a few hundred MHz. It also requires dedicated current sources for the individual STOs, which can complicate the overall on-chip implementation.

To overcome the spatial and low-frequency limitations, the research team came up with an array in which eight STOs are connected in series. Using this array, the 2.4 GHz electromagnetic radio waves that WiFi uses were converted into a direct voltage signal, which was then transmitted to a capacitor to light up a 1.6-volt LED. When the capacitor was charged for five seconds, it was able to light up the same LED for one minute after the wireless power was switched off.
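
The article does not give the storage capacitance or the LED current, but the energy bookkeeping is simple to sketch: the capacitor stores E = ½CV², and that budget has to cover the minute of glow. The component values below are assumptions chosen purely for illustration.

    # Illustration only: capacitance and LED behaviour are assumed, not reported.
    C_FARADS = 100e-6   # assumed storage capacitor, 100 microfarads
    V_CHARGED = 1.6     # volts, matching the 1.6-volt LED in the article
    T_CHARGE = 5.0      # seconds of wireless charging (from the article)
    T_GLOW = 60.0       # seconds the LED stayed lit (from the article)

    energy_stored = 0.5 * C_FARADS * V_CHARGED ** 2   # joules, E = 1/2 C V^2
    avg_harvest_power = energy_stored / T_CHARGE      # delivered while charging
    avg_glow_power = energy_stored / T_GLOW           # if drained evenly (rough)

    print(f"stored energy        ~ {energy_stored * 1e6:.0f} microjoules")
    print(f"avg harvested power  ~ {avg_harvest_power * 1e6:.1f} microwatts")
    print(f"avg power to the LED ~ {avg_glow_power * 1e6:.1f} microwatts")

With these assumed values the harvester only needs to supply a few tens of microwatts, which illustrates why the approach targets very low-power sensors and gadgets rather than larger devices.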

In their study, the researchers also highlighted the importance of electrical topology for designing on-chip STO systems, and compared the series design with the parallel one. They found that the parallel configuration is more useful for wireless transmission due to better time-domain stability, spectral noise behaviour, and control over impedance mismatch. On the other hand, series connections have an advantage for energy harvesting due to the additive effect of the diode-voltage from STOs.

Commenting on the significance of their results, Dr Raghav Sharma, the first author of the paper, shared, "Aside from coming up with an STO array for wireless transmission and energy harvesting, our work also demonstrated control over the synchronising state of coupled STOs using injection locking from an external radio-frequency source. These results are important for prospective applications of synchronised STOs, such as fast-speed neuromorphic computing."

Next steps


To enhance the energy harvesting ability of their technology, the researchers are looking to increase the number of STOs in the array they had designed. In addition, they are planning to test their energy harvesters for wirelessly charging other useful electronic devices and sensors.

Read more at Science Daily

Proteins that predict future dementia, Alzheimer's risk, identified

The development of dementia, often from Alzheimer's disease, late in life is associated with abnormal blood levels of dozens of proteins up to five years earlier, according to a new study led by researchers at the Johns Hopkins Bloomberg School of Public Health. Most of these proteins were not known to be linked to dementia before, suggesting new targets for prevention therapies.

The findings are based on new analyses of blood samples from over ten thousand middle-aged and elderly people -- samples that were taken and stored during large-scale studies decades ago as part of an ongoing study. The researchers linked abnormal blood levels of 38 proteins to higher risks of developing Alzheimer's within five years. Of those 38 proteins, 16 appeared to predict Alzheimer's risk two decades in advance.

Although most of these risk markers may be only incidental byproducts of the slow disease process that leads to Alzheimer's, the analysis pointed to high levels of one protein, SVEP1, as a likely causal contributor to that disease process.

The study was published May 14 in Nature Aging.

"This is the most comprehensive analysis of its kind to date, and it sheds light on multiple biological pathways that are connected to Alzheimer's," says study senior author Josef Coresh, MD, PhD, MHS, George W. Comstock Professor in the Department of Epidemiology at the Bloomberg School. "Some of these proteins we uncovered are just indicators that disease might occur, but a subset may be causally relevant, which is exciting because it raises the possibility of targeting these proteins with future treatments."

More than six million Americans are estimated to have Alzheimer's, the most common type of dementia, an irreversible fatal condition that leads to loss of cognitive and physical function. Despite decades of intensive study, there are no treatments that can slow the disease process, let alone stop or reverse it. Scientists widely assume that the best time to treat Alzheimer's is before dementia symptoms develop.

Efforts to gauge people's Alzheimer's risk before dementia arises have focused mainly on the two most obvious features of Alzheimer's brain pathology: clumps of amyloid beta protein known as plaques, and tangles of tau protein. Scientists have shown that brain imaging of plaques, and blood or cerebrospinal fluid levels of amyloid beta or tau, have some value in predicting Alzheimer's years in advance.

But humans have tens of thousands of other distinct proteins in their cells and blood, and techniques for measuring many of these from a single, small blood sample have advanced in recent years. Would a more comprehensive analysis using such techniques reveal other harbingers of Alzheimer's? That's the question Coresh and colleagues sought to answer in this new study.

The researchers' initial analysis covered blood samples taken during 2011-13 from more than 4,800 late-middle-aged participants in the Atherosclerosis Risk in Communities (ARIC) study, a large epidemiological study of heart disease-related risk factors and outcomes that has been running in four U.S. communities since 1985. Collaborating researchers at a laboratory technology company called SomaLogic used a technology they recently developed, SomaScan, to record levels of nearly 5,000 distinct proteins in the banked ARIC samples.

The researchers analyzed the results and found 38 proteins whose abnormal levels were significantly associated with a higher risk of developing Alzheimer's in the five years following the blood draw.
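
The article does not spell out the study's exact statistical models, but the general shape of such a screen -- test each protein's association with the outcome, then correct for having run thousands of tests -- can be sketched generically. Everything in the snippet below (the data frame, column names, and the use of a simple logistic model with Benjamini-Hochberg correction) is a hypothetical stand-in, not the authors' method.

    # Generic sketch of a per-protein screen with false-discovery-rate control;
    # `proteins` is a hypothetical (samples x proteins) table of standardized
    # levels and `developed_ad` a hypothetical 0/1 outcome within five years.
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.stats.multitest import multipletests

    def screen_proteins(proteins: pd.DataFrame, developed_ad: pd.Series, fdr=0.05):
        pvals = []
        for name in proteins.columns:
            X = sm.add_constant(proteins[[name]])        # intercept + one protein
            fit = sm.Logit(developed_ad, X).fit(disp=0)  # simple unadjusted model
            pvals.append(fit.pvalues[name])
        reject, qvals, _, _ = multipletests(pvals, alpha=fdr, method="fdr_bh")
        return pd.DataFrame({"protein": proteins.columns, "p": pvals,
                             "q": qvals, "significant": reject})

    # hits = screen_proteins(proteins, developed_ad)
    # hits[hits.significant].sort_values("q")

A real analysis of this kind would also adjust for age, sex and other covariates, and would typically use survival models rather than plain logistic regression; the sketch only shows the screen-then-correct structure.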

They then used SomaScan to measure protein levels from more than 11,000 blood samples taken from much younger ARIC participants in 1993-95. They found that abnormal levels of 16 of the 38 previously identified proteins were associated with the development of Alzheimer's in the nearly two decades between that blood draw and a follow-up clinical evaluation in 2011-13.

To verify these findings in a different patient population, the scientists reviewed the results of an earlier SomaScan of blood samples taken in 2002-06 during an Icelandic study. That study had assayed proteins including 13 of the 16 proteins identified in the ARIC analyses. Of those 13 proteins, six were again associated with Alzheimer's risk over a roughly 10-year follow-up period.

In a further statistical analysis, the researchers compared the identified proteins with data from past studies of genetic links to Alzheimer's. The comparison suggested strongly that one of the identified proteins, SVEP1, is not just an incidental marker of Alzheimer's risk but is involved in triggering or driving the disease.

SVEP1 is a protein whose normal functions remain somewhat mysterious, although in a study published earlier this year it was linked to the thickened artery condition, atherosclerosis, which underlies heart attacks and strokes.

Other proteins associated with Alzheimer's risk in the new study included several key immune proteins -- which is consistent with decades of findings linking Alzheimer's to abnormally intense immune activity in the brain.

The researchers plan to continue using techniques like SomaScan to analyze proteins in banked blood samples from long-term studies to identify potential Alzheimer's-triggering pathways -- a potential strategy to suggest new approaches for Alzheimer's treatments.

The scientists have also been studying how protein levels in the ARIC samples are linked to other diseases such as vascular (blood vessel-related) disease in the brain, heart and the kidney.

First author Keenan Walker, PhD, worked on this analysis while on the faculty at the Johns Hopkins University School of Medicine and the Bloomberg School's Welch Center for Prevention, Epidemiology and Clinical Research. He is currently an investigator with the National Institute on Aging's Intramural Research Program.

Read more at Science Daily

May 17, 2021

Trace gases from ocean are source of particles accelerating Antarctic climate change

Scientists exploring the drivers of Antarctic climate change have discovered a new and more efficient pathway for the creation of natural aerosols and clouds which contribute significantly to temperature increases.

The Antarctic Peninsula has shown some of the largest global increases in near-surface air temperature over the last 50 years, but experts have struggled to predict temperatures because little was known about how natural aerosols and clouds affect the amount of sunlight absorbed by the Earth and energy radiated back into space.

Studying data from seas around the Peninsula, experts have discovered that most new particles are formed in air masses arriving from the partially ice-covered Weddell Sea -- a significant source of the sulphur gases and alkylamines responsible for 'seeding' the particles.

A new study shows that increased concentrations of sulphuric acid and alkylamines are essential for the formation of new particles around the northern Antarctic Peninsula. High concentrations of other acids and oxygenated organics coincided with high levels of sulphuric acid, but by themselves did not lead to measurable particle formation and growth.

An international team of researchers from the University of Birmingham; Institute of Marine Science, Barcelona, Spain; and King Abdulaziz University, Jeddah, Saudi Arabia studied summertime open ocean and coastal new particle formation in the region, based on data from ship and land stations, and today published its findings in Nature Geoscience.

The researchers revealed that the newly discovered pathway is more efficient than the ion-induced sulphuric acid-ammonia pathway previously observed in Antarctica and can occur rapidly under neutral conditions.

Study co-author Roy Harrison OBE, Professor of Environmental Health at the University of Birmingham, commented: "New particle formation is globally one of the major sources of aerosol particles and cloud condensation nuclei. This previously overlooked pathway to natural aerosol formation could prove a key tool in predicting the future climate of polar regions.

"The key to unlocking Antarctica's climate change lies in examining particles created in the atmosphere by the chemical reaction of gases. These particles start tiny and grow bigger, becoming cloud condensation nuclei leading to more reflective clouds which direct outgoing terrestrial radiation back to earth and warm the lower atmosphere."

New particle formation is globally one of the major sources of aerosol particles and cloud condensation nuclei. Existing research suggests that natural aerosols contribute disproportionately to global warming, whilst sulphuric acid is thought to be responsible for most aerosol seeding observed in the atmosphere.

The research team identified numerous sulphuric acid-amine cluster peaks during new particle formation events -- providing evidence that alkylamines provided the basis for sulphuric acid nucleation.

Read more at Science Daily

Two biodiversity refugia identified in the Eastern Bering Sea

Scientists from Hokkaido University have used species survey and climate data to identify two marine biodiversity refugia in the Eastern Bering Sea -- regions where species richness, community stability and climate stability are high.

Marine biodiversity, the diversity of life in the seas and oceans, supports ecosystem services of immense societal benefits. However, climate change and human activities have been adversely affecting marine biodiversity for many decades, resulting in population decline, community shifts, and species loss and extinction. Developing effective means to mitigate this rapid biodiversity loss is vital.

Scientists from Hokkaido University have identified and characterised regions in the Eastern Bering Sea where biodiversity has been protected from the effects of climate change. Their work was published in the journal Global Change Biology.

Conservation is one of the many approaches by which we have been able to protect biodiversity in various environments from climate change, pollution and human encroachment. Conservation hinges on the identification of areas where the maximum amount of biodiversity is preserved. Among such areas are refugia, regions that are relatively buffered from the impacts of ongoing climatic changes and that provide favorable habitats for species when the surrounding environment becomes inhospitable.

The scientists tracked the distribution of 159 marine species in the Eastern Bering Sea, off the coast of Alaska, using data collected by the National Oceanic and Atmospheric Administration (NOAA) between 1990 and 2018. Using statistical analysis, they searched for regions with persistently high species richness and a stable marine community over time. They also, separately, mapped out the changes in climate across the Eastern Bering Sea over the same period.

The scientists identified two distinct refugia in the fishery-rich waters of the Eastern Bering Sea. These regions covered less than 10% of the total study area but harbored 91% of the species analyzed. Most significantly, among the species sheltered in these refugia, commercially important fish and crabs were present in high numbers -- indicating that these refugia conserved high-value resources in addition to supporting high species diversity and community stability. Moreover, these refugia overlapped with regions of high climatic stability over time, where trends in seasonal sea surface temperatures and winter sea ice conditions remained largely unchanged.
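
The paper's statistics are more sophisticated than this, but the core idea -- flag areas where richness stays high and the community barely changes across survey years -- can be written down in a few lines. The table layout, column names and quantile cutoffs below are hypothetical, intended only to illustrate the logic.

    # Minimal sketch of the refugia-flagging logic, not the study's actual method.
    # `surveys` is a hypothetical table with columns: cell, year, species.
    import pandas as pd

    def candidate_refugia(surveys: pd.DataFrame, top_quantile=0.9):
        richness = (surveys.groupby(["cell", "year"])["species"]
                           .nunique()                      # species richness per cell-year
                           .rename("richness")
                           .reset_index())
        per_cell = richness.groupby("cell")["richness"].agg(["mean", "std"])
        per_cell["stability"] = per_cell["mean"] / (per_cell["std"] + 1e-9)
        rich = per_cell["mean"] >= per_cell["mean"].quantile(top_quantile)
        stable = per_cell["stability"] >= per_cell["stability"].quantile(top_quantile)
        return per_cell[rich & stable]                     # cells both rich and steady

Cells that pass both filters would then be compared against the separate maps of sea-surface-temperature and sea-ice trends described above.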

Read more at Science Daily

The incredible return of Griffon Vulture to Bulgaria's Eastern Balkan Mountains

Fifty years after presumably becoming extinct as a breeding species in Bulgaria, the Griffon Vulture, one of the largest birds of prey in Europe, is back in the Eastern Balkan Mountains. Since 2009, three local conservation NGOs -- Green Balkans -- Stara Zagora, the Fund for Wild Flora and Fauna and the Birds of Prey Protection Society, have been working on a long-term restoration programme to bring vultures back to their former breeding range in Bulgaria. The programme is supported by the Vulture Conservation Foundation, the Government of Extremadura, Spain, and EuroNatur. Its results have been described in the open-access, peer-reviewed Biodiversity Data Journal.

Two large-scale projects funded by the EU's LIFE programme, one of them ongoing, facilitate the import of captive-bred or recovered vultures from Spain, France and zoos and rehabilitation centres across Europe. Birds are then accommodated in special acclimatization aviaries, individually tagged and released into the wild from five release sites in Bulgaria. Using this method, a total of 153 Griffon Vultures were released between 2009 and 2020 from two adaptation aviaries in the Kotlenska Planina Special Protection Area and the Sinite Kamani Nature Park in the Eastern Balkan Mountains of Bulgaria.

After some 50 years of absence, the very first successful reproduction in the area was reported as early as 2016. Now, as of December 2020, the local population consists of more than 80 permanently present individuals, among them about 25 breeding pairs, and has already produced a total of 31-33 chicks successfully fledged into the wild.

"Why vultures of all creatures? Because they were exterminated, yet provide an amazing service for people and healthy ecosystems," Elena Kmetova-Biro, initial project manager for the Green Balkans NGO explains.

"We have lost about a third of the vultures set free in that site, mostly due to electrocution shortly after release. The birds predominantly forage on feeding sites, where the team provides dead domestic animals collected from local owners and slaughterhouses," the researchers say.

Read more at Science Daily

How plankton hold secrets to preventing pandemics

Whether it's plankton exposed to parasites or people exposed to pathogens, a host's initial immune response plays an integral role in determining whether infection occurs and to what degree it spreads within a population, new University of Colorado Boulder research suggests.

The findings, published May 13 in The American Naturalist, provide valuable insight for understanding and preventing the transmission of disease within and between animal species. From parasitic flatworms transmitted by snails into humans in developing nations, to zoonotic spillover events from mammals and insects to humans -- which have caused global pandemics like COVID-19 and West Nile virus -- an infected creature's immune response is a vital variable to consider in calculating what happens next.

"One of the biggest patterns that we're seeing in disease ecology and epidemiology is the fact that not all hosts are equal," said Tara Stewart Merrill, lead author of the paper and a postdoctoral fellow in ecology. "In infectious disease research, we want to build host immunity into our understanding of how disease spreads."

Invertebrates are common vectors for disease, which means they can transmit infectious pathogens between humans or from animals to humans. Vector-borne diseases, like malaria, account for almost 20% of all infectious diseases worldwide and are responsible for more than 700,000 deaths each year.

Yet epidemiological studies have rarely considered invertebrate immunity and recovery in creatures that are vectors for human disease. They assume that once exposed to a pathogen, the invertebrate host will become infected.

But what if it was possible for invertebrates to fight off these diseases, and break the link in the chain that passes them on to humans?

While observing a tiny species of zooplankton (Daphnia dentifera) throughout its lifecycle and exposure to a fungal parasite (Metschnikowia bicuspidata), the researchers saw this potential in action. Some of the plankton were good at stopping fungal spores from entering their bodies, and others cleared the infection within a limited window of time after ingesting the spores.

"Our results show that there are several defenses that invertebrates can use to reduce the likelihood of infection, and that we really need to understand those immune defenses to understand infection patterns," said Stewart Merrill.

Unexpected recovery

Stewart Merrill started this work in her first year as a doctoral student at the University of Illinois, studying this little plankton and its collection of defenses. It's a gruesome process if the plankton fails to ward off the parasite: Its fungal spores attack the plankton's gut, fill its body and grow until they are released when the host finally dies.

But she noticed something that had not been recorded before: Some of the doomed plankton recovered. Several years later, she has found that when faced with identical levels of exposure, the success or failure of these infections depends on the strength of the host's internal defenses during this early limited window of opportunity.

Based on their observations of these individual outcomes, the researchers developed a simple probabilistic model for measuring host immunity that can be applied across wildlife systems, with important applications for diseases transmitted to humans by invertebrates.

"When immune responses are good, they act as a filter that reduces transmission," said Stewart Merrill. "But any environmental change that degrades immunity can actually amplify transmission, because it will let all of that exposure go through and ultimately become infectious."

It's a model that can also apply to COVID-19, as research from CU Boulder has shown that not all hosts are the same in transmitting the coronavirus, and exposure does not directly determine infection.

COVID-19 is also believed to be the result of a zoonotic spillover, an infection that moved from animals into people, and similar probabilistic models could be advantageous in predicting the occurrence and spread of future spillover events, said Stewart Merrill.

Understanding prevention of infection

Stewart Merrill hopes that a better understanding of infections in a simple animal like plankton can be applied more broadly to invertebrates that matter for human health.

In Africa, Southeast Asia, and South and Central America, 200 million people suffer from infections caused by schistosomes -- invertebrates more commonly known as parasitic flatworms. They cause illness and death, and significant economic and public health consequences, so much so that the World Health Organization considers schistosomiasis the second-most socioeconomically devastating parasitic disease after malaria.

They're just one of many neglected tropical diseases transmitted to people by invertebrate hosts such as snails, mosquitoes and biting flies. These diseases infect a large portion of a population but occur in areas with low levels of sanitation that don't have the economic resources to address those diseases, said Stewart Merrill.

Schistosomes live in freshwater environments that people use for their drinking water, laundry and bathing. So even though there are treatments, the next day a person can easily get reinfected just by accessing the water they need. By better understanding how the flatworms themselves succumb to or fight off infection, scientists like Stewart Merrill help us get closer to stopping the chain of transmission into humans.

"We really need to work on understanding prevention of infection, and what that risk is in those aquatic systems, rather than just cures for infection," she said.

The good news is that we can learn from the very invertebrates that pass these infections on to us. Invertebrate hosts that suffer or die from their infections have a strong incentive to mount an immune response and fight the parasite off. Some snails have even shown the ability to retain an immunological memory: if they get infected once and survive, they might never get infected again.

Read more at Science Daily

May 16, 2021

Quantum machine learning hits a limit

A new theorem from the field of quantum machine learning has poked a major hole in the accepted understanding about information scrambling.

"Our theorem implies that we are not going to be able to use quantum machine learning to learn typical random or chaotic processes, such as black holes. In this sense, it places a fundamental limit on the learnability of unknown processes," said Zoe Holmes, a post-doc at Los Alamos National Laboratory and coauthor of the paper describing the work published today in Physical Review Letters.

"Thankfully, because most physically interesting processes are sufficiently simple or structured so that they do not resemble a random process, the results don't condemn quantum machine learning, but rather highlight the importance of understanding its limits," Holmes said.

In the classic Hayden-Preskill thought experiment, a fictitious Alice tosses information such as a book into a black hole that scrambles the text. Her companion, Bob, can still retrieve it using entanglement, a unique feature of quantum physics. However, the new work proves that fundamental constraints on Bob's ability to learn the particulars of a given black hole's physics mean that reconstructing the information in the book is going to be very difficult or even impossible.

"Any information run through an information scrambler such as a black hole will reach a point where the machine learning algorithm stalls out on a barren plateau and thus becomes untrainable. That means the algorithm can't learn scrambling processes," said Andrew Sornborger a computer scientist at Los Alamos and coauthor of the paper. Sornborger is Director of Quantum Science Center at Los Alamos and leader of the Center's algorithms and simulation thrust. The Center is a multi-institutional collaboration led by Oak Ridge National Laboratory.

Barren plateaus are regions in the mathematical space of optimization algorithms where the ability to solve the problem becomes exponentially harder as the size of the system being studied increases. This phenomenon, which severely limits the trainability of large scale quantum neural networks, was described in a recent paper by a related Los Alamos team.
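
The article does not give the formula, but the standard statement of the barren-plateau result for sufficiently random parameterized circuits on n qubits has the form

    \mathbb{E}\left[\partial_{\theta_k} C\right] = 0,
    \qquad
    \mathrm{Var}\left[\partial_{\theta_k} C\right] \in O\!\left(b^{-n}\right), \quad b > 1,

so the gradient of the cost function C is zero on average and its fluctuations shrink exponentially with the number of qubits; resolving it against measurement noise therefore requires a number of shots growing roughly like b^n, which is what makes the landscape untrainable at scale.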

"Recent work has identified the potential for quantum machine learning to be a formidable tool in our attempts to understand complex systems," said Andreas Albrecht, a co-author of the research. Albrecht is Director of the Center for Quantum Mathematics and Physics (QMAP) and Distinguished Professor, Department of Physics and Astronomy, at UC Davis. "Our work points out fundamental considerations that limit the capabilities of this tool."

In the Hayden-Preskill thought experiment, Alice attempts to destroy a secret, encoded in a quantum state, by throwing it into nature's fastest scrambler, a black hole. Bob and Alice are the fictitious quantum dynamic duo typically used by physicists to represent agents in a thought experiment.

"You might think that this would make Alice's secret pretty safe," Holmes said, "but Hayden and Preskill argued that if Bob knows the unitary dynamics implemented by the black hole, and share a maximally entangled state with the black hole, it is possible to decode Alice's secret by collecting a few additional photons emitted from the black hole. But this prompts the question, how could Bob learn the dynamics implemented by the black hole? Well, not by using quantum machine learning, according to our findings."

A key piece of the new theorem developed by Holmes and her coauthors assumes no prior knowledge of the quantum scrambler, a situation unlikely to occur in real-world science.

Read more at Science Daily