Jun 19, 2021

Scientists detect signatures of life remotely

Left hands and right hands are almost perfect mirror images of each other. But whatever way they are twisted and turned, they cannot be superimposed onto each other. This is why the left glove simply won't fit the right hand as well as it fits the left. In science, this property is referred to as chirality.

Just like hands are chiral, molecules can be chiral, too. In fact, most molecules in the cells of living organisms, such as DNA, are chiral. Unlike hands, however, which usually come in pairs of left and right, the molecules of life almost exclusively occur in either their "left-handed" or their "right-handed" version. They are homochiral, as researchers say. Why that is remains unclear. But this molecular homochirality is a characteristic property of life, a so-called biosignature.

As part of the MERMOZ project, an international team led by the University of Bern and the National Centre of Competence in Research NCCR PlanetS has now succeeded in detecting this signature from a distance of 2 kilometers and at a velocity of 70 kph. Jonas Kühn, MERMOZ project manager of the University of Bern and co-author of the study that has just been published in the journal Astronomy and Astrophysics, says: "The significant advance is that these measurements have been performed in a platform that was moving, vibrating and that we still detected these biosignatures in a matter of seconds."

An instrument that recognizes living matter

"When light is reflected by biological matter, a part of the light's electromagnetic waves will travel in either clockwise or counterclockwise spirals. This phenomenon is called circular polarization and is caused by the biological matter's homochirality. Similar spirals of light are not produced by abiotic non-living nature," says the first author of the study Lucas Patty, who is a MERMOZ postdoctoral researcher at the University of Bern and member of the NCCR PlanetS,

Measuring this circular polarization, however, is challenging. The signal is quite faint and typically makes up less than one percent of the light that is reflected. To measure it, the team developed a dedicated device called a spectropolarimeter. It consists of a camera equipped with special lenses and receivers capable of separating the circular polarization from the rest of the light.
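
To make the quantity concrete, here is a minimal, hedged illustration of what such an instrument ultimately reports: the degree of circular polarization, i.e. the Stokes V component relative to the total intensity. The numbers below are hypothetical detector readings, not FlyPol data or the MERMOZ pipeline.

```python
# Hedged illustration: fractional circular polarization from two hypothetical
# intensity readings taken through right- and left-circular analyzers.
I_rcp = 1.004   # intensity through a right-circular analyzer (arbitrary units)
I_lcp = 0.996   # intensity through a left-circular analyzer (arbitrary units)

V = I_rcp - I_lcp      # circular (Stokes V) component
I = I_rcp + I_lcp      # total intensity
print(f"fractional circular polarization: {V / I:.2%}")   # ~0.40%, i.e. well below one percent
```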

Yet even with this elaborate device, the new results would have been impossible until recently. "Just 4 years ago, we could detect the signal only from a very close distance, around 20 cm, and needed to observe the same spot for several minutes to do so," as Lucas Patty recalls. But the upgrades he and his colleagues made to the instrument allow a much faster and more stable detection, and the strength of the signature in circular polarization persists even with distance. This rendered the instrument fit for the first ever aerial circular polarization measurements.

Useful measurements on Earth and in space


Using this upgraded instrument, dubbed FlyPol, they demonstrated that within mere seconds of measurements they could differentiate between grass fields, forests and urban areas from a fast-moving helicopter. The measurements readily show living matter exhibiting the characteristic polarization signals, while roads, for example, do not show any significant circular polarization signals. With the current setup, they are even capable of detecting signals coming from algae in lakes.

After their successful tests, the scientists now look to go even further. "The next step we hope to take is to perform similar detections from the International Space Station (ISS), looking down at the Earth. That will allow us to assess the detectability of planetary-scale biosignatures. This step will be decisive to enable the search for life in and beyond our Solar System using polarization," says MERMOZ principal investigator and co-author Brice-Olivier Demory, professor of astrophysics at the University of Bern and member of the NCCR PlanetS.

Read more at Science Daily

Unraveling the origin of Alzheimer's disease

Case Western Reserve University researchers studying prions -- misfolded proteins that cause lethal incurable diseases -- have identified for the first time surface features of human prions responsible for their replication in the brain.

The ultimate goal of the research is to help design a strategy to stop prion disease in humans -- and, ultimately, to translate new approaches to work on Alzheimer's and other neurodegenerative diseases.

Scientists have yet to discover the exact cause of Alzheimer's disease, but largely agree that protein issues play a role in its emergence and progression. Alzheimer's disease afflicts more than 6 million people in the U.S., and the Alzheimer's Association estimates that their care will cost about $355 billion this year.

Research was done at the Safar Laboratory in the Department of Pathology and the Center for Proteomics and Bioinformatics at Case Western Reserve University School of Medicine, and at Case Western Reserve's Center for Synchrotron Bioscience at Brookhaven Laboratories in New York. Jiri Safar, professor of pathology, neurology and neurosciences at the Case Western Reserve School of Medicine, leads the work. The report, "Structurally distinct external domains drive replication of major human prions," was published in the June 17 issue of PLOS Pathogens.

Prions were first discovered in the late 1980s as protein-containing biological agents that could replicate themselves in living cells without nucleic acid. The public health impact of medically transmitted human prion diseases -- and also animal transmissions of bovine spongiform encephalopathy (BSE, "mad cow disease") prions -- dramatically accelerated the development of a new scientific concept of self-replicating protein.

Human prions can bind to neighboring normal proteins in the brain, and cause microscopic holes. In essence, they turn brains into sponge-like structures and lead to dementia and death. These discoveries led to the ongoing scientific debate on whether prion-like mechanisms may be involved in the origin and spread of other neurodegenerative disorders in humans.

"Human prion diseases are conceivably the most heterogenous neurodegenerative disorders, and a growing body of research indicates that they are caused by distinct strains of human prions," Safar said. "However, the structural studies of human prions have lagged behind the recent progress in rodent laboratory prions, in part because of their complex molecular characteristics and prohibitive biosafety requirements necessary for investigating disease which is invariably fatal and has no treatment."

The researchers developed a new three-step process to study human prions:

  • Human brain-derived prions were first exposed to a high-intensity synchrotron X-ray beam. The beam's short bursts of light created hydroxyl radicals that selectively and progressively changed the prions' surface chemical composition. Among the unique properties of this type of light source is its enormous intensity: it can be millions of times brighter than the sunlight reaching Earth.
  • The rapid chemical modifications of the prions were monitored with anti-prion antibodies, which recognize prion surface features, and with mass spectrometry, which identifies the exact sites of prion-specific, strain-based differences, providing an even more precise description of the prions' defects.
  • The illuminated prions were then allowed to replicate in a test tube. The progressive loss of their replication activity as the synchrotron light modified them helped identify the key structural elements responsible for prions' replication and propagation in the brain.


"The work is a critical first step for identifying sites of structural importance that reflect differences between prions of different diagnosis and aggressiveness," said Mark Chance, vice dean for research at the School of Medicine and a co-investigator on the work. "Thus, we can now envision designing small molecules to bind to these sites of nucleation and replication and block progression of human prion disease in patients."

Read more at Science Daily

First evidence that medieval plague victims were buried individually with 'considerable care'

In the mid-14th century Europe was devastated by a major pandemic -- the Black Death -- which killed between 40 and 60 per cent of the population. Later waves of plague then continued to strike regularly over several centuries.

Plague kills so rapidly it leaves no visible traces on the skeleton, so archaeologists have previously been unable to identify individuals who died of plague unless they were buried in mass graves.

Whilst it has long been suspected that most plague victims received individual burial, this has been impossible to confirm until now.

By studying DNA from the teeth of individuals who died at this time, researchers from the After the Plague project, based at the Department of Archaeology, University of Cambridge, have identified the presence of Yersinia pestis, the pathogen that causes plague.

These individuals include people who received normal individual burials at a parish cemetery and a friary in Cambridge, and in the nearby village of Clopton.

Lead author Craig Cessford of the University of Cambridge said, "These individual burials show that even during plague outbreaks individual people were being buried with considerable care and attention. This is shown particularly at the friary where at least three such individuals were buried within the chapter house. Cambridge Archaeological Unit conducted excavations on this site on behalf of the University in 2017."

"The individual at the parish of All Saints by the Castle in Cambridge was also carefully buried; this contrasts with the apocalyptic language used to describe the abandonment of this church in 1365 when it was reported that the church was partly ruinous and 'the bones of dead bodies are exposed to beasts'."

The study also shows that some plague victims in Cambridge did, indeed, receive mass burials.

Yersinia pestis was identified in several parishioners from St Bene't's, who were buried together in a large trench in the churchyard excavated by the Cambridge Archaeological Unit on behalf of Corpus Christi College.

This part of the churchyard was soon afterwards transferred to Corpus Christi College, which was founded by the St Bene't's parish guild to commemorate the dead including the victims of the Black Death. For centuries, the members of the College would walk over the mass burial every day on the way to the parish church.

Read more at Science Daily

Jun 18, 2021

Hubble data confirms galaxies lacking dark matter

The most accurate distance measurement yet of ultra-diffuse galaxy (UDG) NGC1052-DF2 (DF2) confirms beyond any shadow of a doubt that it is lacking in dark matter. The newly measured distance of 22.1 ± 1.2 megaparsecs was obtained by an international team of researchers led by Zili Shen and Pieter van Dokkum of Yale University and Shany Danieli, a NASA Hubble Fellow at the Institute for Advanced Study.

"Determining an accurate distance to DF2 has been key in supporting our earlier results," stated Danieli. "The new measurement reported in this study has crucial implications for estimating the physical properties of the galaxy, thus confirming its lack of dark matter."

The results, published in Astrophysical Journal Letters on June 9, 2021, are based on 40 orbits of NASA's Hubble Space Telescope, with imaging by the Advanced Camera for Surveys and a "tip of the red giant branch" (TRGB) analysis, the gold standard for such refined measurements. In 2019, the team published results measuring the distance to neighboring UDG NGC1052-DF4 (DF4) based on 12 Hubble orbits and TRGB analysis, which provided compelling evidence of missing dark matter. This preferred method expands on the team's 2018 studies that relied on "surface brightness fluctuations" to gauge distance. Both galaxies were discovered with the Dragonfly Telephoto Array at the New Mexico Skies observatory.

"We went out on a limb with our initial Hubble observations of this galaxy in 2018," van Dokkum said. "I think people were right to question it because it's such an unusual result. It would be nice if there were a simple explanation, like a wrong distance. But I think it's more fun and more interesting if it actually is a weird galaxy."

In addition to confirming earlier distance findings, the Hubble results indicated that the galaxies were located slightly farther away than previously thought, strengthening the case that they contain little to no dark matter. If DF2 were closer to Earth, as some astronomers claim, it would be intrinsically fainter and less massive, and the galaxy would need dark matter to account for the observed effects of the total mass.

Dark matter is widely considered to be an essential ingredient of galaxies, but this study lends further evidence that its presence may not be inevitable. While dark matter has yet to be directly observed, its gravitational influence is like a glue that holds galaxies together and governs the motion of visible matter. In the case of DF2 and DF4, researchers were able to account for the motion of stars based on stellar mass alone, suggesting a lack or absence of dark matter. Ironically, the detection of galaxies deficient in dark matter will likely help to reveal its puzzling nature and provide new insights into galactic evolution.

While DF2 and DF4 are both comparable in size to the Milky Way galaxy, their total masses are only about one percent of the Milky Way's mass. These ultra-diffuse galaxies were also found to have a large population of especially luminous globular clusters.

This research has generated a great deal of scholarly interest, as well as energetic debate among proponents of alternative theories to dark matter, such as Modified Newtonian dynamics (MOND). However, with the team's most recent findings -- including the relative distances of the two UDGs to NGC1052 -- such alternative theories seem less likely. Additionally, there is now little uncertainty in the team's distance measurements given the use of the TRGB method. Based on fundamental physics, this method relies on red giant stars that, at the end of the red giant phase, undergo a brief helium flash that always occurs at essentially the same intrinsic brightness, making them reliable distance markers.
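
To illustrate why an accurate TRGB measurement pins down the distance, here is a minimal sketch of the standard distance-modulus arithmetic. The absolute magnitude of the TRGB (about -4 in the I band) and the apparent magnitude used below are assumed, illustrative values, not the paper's calibration.

```python
# Hedged sketch of TRGB distance arithmetic (illustrative values only).
M_trgb = -4.05    # assumed absolute magnitude of the red giant branch tip (I band)
m_trgb = 27.67    # illustrative apparent magnitude of the tip for DF2

mu = m_trgb - M_trgb              # distance modulus m - M
d_pc = 10 ** ((mu + 5.0) / 5.0)   # distance in parsecs
print(f"distance ≈ {d_pc / 1e6:.1f} Mpc")   # ≈ 22 Mpc, in line with the reported value
```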

"There's a saying that extraordinary claims require extraordinary evidence, and the new distance measurement strongly supports our previous finding that DF2 is missing dark matter," stated Shen. "Now it's time to move beyond the distance debate and focus on how such galaxies came to exist."

Read more at Science Daily

A new rapid assessment to promote climate-informed conservation and nature-based solutions

A new article, published as a Perspective in the journal Conservation Science and Practice, introduces a rapid assessment framework that can be used as a guide to make conservation and nature-based solutions more robust to future climate.

Climate change poses risks to conservation efforts if practitioners assume a future climate similar to the past or present. For example, more frequent and intense disturbances, such as wildfire or drought-induced tree mortality, can threaten projects that are designed to enhance habitat for forest-dependent species and sequester carbon. Overlooking such climate-related risks can result in failed conservation investments and negative outcomes for people, biodiversity, and ecosystem integrity, and can even lead to carbon-sink reversal. Drawing from lessons learned from a decade of funding over 100 adaptation initiatives through the WCS Climate Adaptation Fund, the authors offer a simple framework that enables users to rapidly assess how -- and by what means -- climate change will require innovation beyond business-as-usual conservation practice.

This tractable assessment encourages practitioners and funders to use the "what, when, where, why, and who" -- or the "5Ws" -- of climate-informed action as a tool in project design and implementation. The "what," for example, means considering whether climate variability and projected changes will require taking new actions or modifying existing actions. The "who" asks users to consider: by whom, with whom, who benefits and who might bear potential harm or tradeoffs from project implementation and anticipated outcomes.

Using the 5Ws in practice can result in doing conservation differently in a warming world and help practitioners achieve their desired objectives. Practitioners use available science and local knowledge to address climate risks to traditional investments in reforestation, fire management, watershed restoration, and habitat protection. Take reforestation as an example: a traditional approach might aim to enhance habitat and carbon sequestration using seed or seedlings from historically dominant tree species. Tree mortality due to unsuitable climate conditions could then lead to unexpected habitat degradation and reductions in carbon sequestration. A climate-informed approach instead favors native species that are expected to thrive under future climate, with seed or seedlings sourced from warmer and/or drier locations to assist migration to climatically suitable areas. The 5Ws framework facilitates this process of figuring out what, if anything, should be done differently from the status quo.

Read more at Science Daily

First months decisive for immune system development

Many diseases caused by a dysregulated immune system, such as allergies, asthma and autoimmunity, can be traced back to events in the first few months after birth. To date, the mechanisms behind the development of the immune system have not been fully understood. Now, researchers at Karolinska Institutet show a connection between breast milk, beneficial gut bacteria and the development of the immune system. The study is published in Cell.

"A possible application of our results is a preventative method for reducing the risk of allergies, asthma and autoimmune disease later in life by helping the immune system to establish its regulatory mechanisms," says the paper's last author Petter Brodin, paediatrician and researcher at the Department of Women's and Children's Health, Karolinska Institutet. "We also believe that certain mechanisms that the study identifies can eventually lead to other types of treatment for such diseases, not just a prophylactic."

The incidence of autoimmune diseases such as asthma, type 1 diabetes and Crohn's disease is increasing in children and adolescents in parts of the world. These diseases are debilitating, but not as common in low-income countries as they are in Europe and the USA.

It has long been known that the risk of developing these diseases is largely determined by early life events; for instance, there is a correlation between the early use of antibiotics and a higher risk of asthma. It is also known that breastfeeding protects against most of these disorders.

There is a link between specific, protective bacteria on the skin and in the airways and gut and a lower risk of immunological diseases. However, there is still much to learn about how these bacteria form the immune system.

Researchers at Karolinska Institutet, Evolve Biosystems, Inc, the University of California Davis, University of Nebraska, Lincoln, and University of Nevada, Reno studied how the neonatal immune system adapts to and is shaped by the many bacteria, viruses, nutrients and other environmental factors to which the baby is exposed during the first few months of life.

Earlier research has shown that bifidobacteria are common in breastfed babies in countries with a low incidence of autoimmune diseases.

Breast milk is rich in HMOs (human milk oligosaccharides), which babies are unable to metabolise on their own. The production of these complex sugars is instead thought to provide the evolutionary advantage of nourishing specific gut bacteria that play an important part in the baby's immune system. Bifidobacteria are one such group of bacteria.

"We found that babies whose intestinal flora can break down HMOs have less inflammation in the blood and gut," says professor Brodin. "This is probably because of the uniquely good ability of the bifidobacteria to break down HMOs, to expand in nursing babies and to have a beneficial effect on the developing immune system early in life."

Babies who were breastfed and received additional bifidobacteria had higher intestinal levels of the molecules ILA and Galectin-1. ILA (indole-3-lactic acid) is needed to convert HMO molecules into nutrition; Galectin-1 is central to the activation of the immune response to threats and attacks.

According to the researchers, Galectin-1 is a newly discovered and critical mechanism for preserving bacteria with beneficial, anti-inflammatory properties in the intestinal flora.

The results are based on 208 breastfed babies born at Karolinska University Hospital between 2014 and 2019. The researchers also used novel methods to analyse the immune system even from small blood samples. Additionally, a second cohort, developed by the University of California, in which all infants were exclusively breastfed and half received a B. infantis supplement, was analyzed for enteric inflammation.

One limitation of the study is that the researchers were unable to study the immune system directly in the gut and had to resort to blood samples. Not all aspects of the gut immune system can be seen in the blood, but it is not ethically defensible to take intestinal biopsies from healthy neonates.

The researchers now hope to follow the participant babies for a longer time to see which ones develop atopic eczema, asthma and allergies.

Read more at Science Daily

Most cancer cells grown in a dish have little in common with cancer cells in people, research finds

In a bid to find or refine laboratory research models for cancer that better compare with what happens in living people, Johns Hopkins Medicine scientists report they have developed a new computer-based technique showing that human cancer cells grown in culture dishes are the least genetically similar to their human sources.

The finding, they say, should help focus more resources on cancer research models such as genetically engineered mice and 3D balls of human tissue known as "tumoroids" to better evaluate human cancer biology and treatments, and the genetic errors responsible for cancer growth and progress.

"It may not be a surprise to scientists that cancer cell lines are genetically inferior to other models, but we were surprised that genetically engineered mice and tumoroids performed so very well by comparison," says Patrick Cahan, Ph.D., associate professor of biomedical engineering at The Johns Hopkins University and the Johns Hopkins University School of Medicine and lead investigator of the new study.

The new technique, dubbed CancerCellNet, uses computer models to compare the RNA sequences of a research model with data from a cancer genome atlas to determine how closely the two sets match up.

The researchers found that, on average, genetically engineered mice and tumoroids have RNA sequences most closely aligned with the genome atlas baseline data in 4 out of every 5 tumor types they tested, including breast, lung and ovarian cancers.

The investigators say their work adds to evidence that cancer cell lines grown in the laboratory have less parity with their human source because of the complex differences between a human cell's natural environment and a laboratory growth environment. "Once you take tumors out of their natural environment, cell lines start to change," says Cahan.

Scientists worldwide rely on a range of research models to improve their understanding of cancer and other disease biology and develop treatments for conditions. Among the most widely used cancer research models are cell lines created by extracting cells from human tumors and growing them with various nutrients in laboratory flasks.

Researchers also use mice that have been genetically engineered to develop cancer. In other cases, they implant human tumors into mice, a process called xenografting, or use tumoroids.

To evaluate how well any of these research models align with what may be happening in people, scientists often transplant lab-cultured cells or cells from tumoroids or xenografts into mice and see if the cells behave as they should -- that is, grow and spread and retain the genetic hallmarks of cancer. However, the Johns Hopkins researchers say this process is expensive, time-consuming and scientifically challenging.

The goal of the new work was to develop a computational approach to evaluating research models in a less cumbersome and more accurate way. A report on the work was published April 29 in Genome Medicine, and the researchers have filed for a provisional patent on what they named CancerCellNet.

The new technique is based on genetic information about cellular RNA, a molecular string of chemicals similar to DNA and an intermediate set of instructions that cells use to translate DNA into the manufacture of proteins.

"RNA is a pretty good surrogate for cell type and cell identity, which are key to determining whether lab-developed cells resemble their human counterparts," says Cahan. "RNA expression data is very standardized and available to researchers, and less subject to technical variation that can confound a study's results."

First, Cahan and his team had to choose a standard set of data to act as a baseline against which to compare the research models. Data from The Cancer Genome Atlas served as the so-called "training" data, which includes RNA expression information from hundreds of patient tumor samples along with their corresponding stage, grade and other tumor information.

They also tested their CancerCellNet tool by applying it to data where the tumor type was already known, such as from the International Human Genome Sequencing Consortium.

Members of the research team combed through The Cancer Genome Atlas data to determine 22 types of tumors to study. They used the genome atlas data as the baseline for comparing RNA expression data from 657 cancer cell lines grown in labs worldwide, some of which were established decades ago, 415 xenografts, 26 genetically engineered mouse models and 131 tumoroids.
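
As a rough illustration of the kind of comparison CancerCellNet automates, the sketch below trains an off-the-shelf classifier on a stand-in for atlas-style tumor expression data and then scores a cell-line profile against each tumor type. It is a hedged toy example under assumed data, not the published CancerCellNet code or training procedure.

```python
# Toy sketch (not CancerCellNet): classify a research model's RNA expression
# profile against reference tumor types. All data here are randomly generated
# stand-ins for TCGA-style expression matrices.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

X_train = rng.lognormal(size=(300, 500))                           # 300 tumors x 500 genes (hypothetical)
y_train = rng.choice(["prostate", "bladder", "lung"], size=300)    # tumor-type labels

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

model_profile = rng.lognormal(size=(1, 500))   # hypothetical cell-line expression profile
for tumor_type, prob in zip(clf.classes_, clf.predict_proba(model_profile)[0]):
    print(f"{tumor_type}: {prob:.2f}")          # how closely the model resembles each tumor type
```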

In one example from the study, prostate cancer cells from a line called PC3 start to look genetically more like bladder cancer cells, Cahan notes. It's also possible, he says, that the cell line was originally labeled incorrectly or could have actually been derived from bladder cancer. But the bottom line was that, from a genetic standpoint, the prostate cancer cell line was not a representative surrogate for what happens in a typical human with prostate cancer.

Read more at Science Daily

Jun 17, 2021

How a supermassive black hole originates

Supermassive black holes, or SMBHs, are black holes with masses that are several million to several billion times the mass of our sun. The Milky Way hosts an SMBH with a mass a few million times the solar mass. Surprisingly, astrophysical observations show that SMBHs already existed when the universe was very young. For example, black holes of a billion solar masses are found from a time when the universe was just 6% of its current age of 13.7 billion years. How do these SMBHs in the early universe originate?

A team led by a theoretical physicist at the University of California, Riverside, has come up with an explanation: a massive seed black hole that the collapse of a dark matter halo could produce.

A dark matter halo is the halo of invisible matter surrounding a galaxy or a cluster of galaxies. Although dark matter has never been detected in laboratories, physicists remain confident that this mysterious matter, which makes up 85% of the universe's matter, exists. Were the visible matter of a galaxy not embedded in a dark matter halo, this matter would fly apart.

"Physicists are puzzled why SMBHs in the early universe, which are located in the central regions of dark matter halos, grow so massively in a short time," said Hai-Bo Yu, an associate professor of physics and astronomy at UC Riverside, who led the study that appears in Astrophysical Journal Letters. "It's like a 5-year-old child that weighs, say, 200 pounds. Such a child would astonish us all because we know the typical weight of a newborn baby and how fast this baby can grow. Where it comes to black holes, physicists have general expectations about the mass of a seed black hole and its growth rate. The presence of SMBHs suggests these general expectations have been violated, requiring new knowledge. And that's exciting."

A seed black hole is a black hole at its initial stage -- akin to the baby stage in the life of a human.

"We can think of two reasons," Yu added. "The seed -- or 'baby' -- black hole is either much more massive or it grows much faster than we thought, or both. The question that then arises is what are the physical mechanisms for producing a massive enough seed black hole or achieving a fast enough growth rate?"

"It takes time for black holes to grow massive by accreting surrounding matter," said co-author Yi-Ming Zhong, a postdoctoral researcher at the Kavli Institute for Cosmological Physics at the University of Chicago. "Our paper shows that if dark matter has self-interactions then the gravothermal collapse of a halo can lead to a massive enough seed black hole. Its growth rate would be more consistent with general expectations."

In astrophysics, a popular mechanism used to explain SMBHs is the collapse of pristine gas in protogalaxies in the early universe.

"This mechanism, however, cannot produce a massive enough seed black hole to accommodate newly observed SMBHs -- unless the seed black hole experienced an extremely fast growth rate," Yu said. "Our work provides an alternative explanation: a self-interacting dark matter halo experiences gravothermal instability and its central region collapses into a seed black hole."

The explanation Yu and his colleagues propose works in the following way:

Dark matter particles first cluster together under the influence of gravity and form a dark matter halo. During the evolution of the halo, two competing forces -- gravity and pressure -- operate. While gravity pulls dark matter particles inward, pressure pushes them outward. If dark matter particles have no self-interactions, then, as gravity pulls them toward the central halo, they become hotter, that is, they move faster, the pressure increases effectively, and they bounce back. However, in the case of self-interacting dark matter, dark matter self-interactions can transport the heat from those "hotter" particles to nearby colder ones. This makes it difficult for the dark matter particles to bounce back.

Yu explained that the central halo, which would collapse into a black hole, has angular momentum, meaning, it rotates. The self-interactions can induce viscosity, or "friction," that dissipates the angular momentum. During the collapse process, the central halo, which has a fixed mass, shrinks in radius and slows down in rotation due to viscosity. As the evolution continues, the central halo eventually collapses into a singular state: a seed black hole. This seed can grow more massive by accreting surrounding baryonic -- or visible -- matter such as gas and stars.

"The advantage of our scenario is that the mass of the seed black hole can be high since it is produced by the collapse of a dark matter halo," Yu said. "Thus, it can grow into a supermassive black hole in a relatively short timescale."

The new work is novel in that the researchers identify the importance of baryons -- ordinary atomic and molecular particles -- for this idea to work.

"First, we show the presence of baryons, such as gas and stars, can significantly speed up the onset of the gravothermal collapse of a halo and a seed black hole could be created early enough," said Wei-Xiang Feng, Yu's graduate student and a co-author on the paper. "Second, we show the self-interactions can induce viscosity that dissipates the angular momentum remnant of the central halo. Third, we develop a method to examine the condition for triggering general relativistic instability of the collapsed halo, which ensures a seed black hole could form if the condition is satisfied."

Over the past decade, Yu has explored novel predictions of dark matter self-interactions and their observational consequences. His work has shown that self-interacting dark matter can provide a good explanation for the observed motion of stars and gas in galaxies.

Read more at Science Daily

Study of young chaotic star system reveals planet formation secrets

A team of scientists using the Atacama Large Millimeter/submillimeter Array (ALMA) to study the young star Elias 2-27 have confirmed that gravitational instabilities play a key role in planet formation, and have for the first time directly measured the mass of protoplanetary disks using gas velocity data, potentially unlocking one of the mysteries of planet formation. The results of the research are published today in two papers in The Astrophysical Journal.

Protoplanetary disks -- planet-forming disks made of gas and dust that surround newly formed young stars -- are known to scientists as the birthplace of planets. The exact process of planet formation, however, has remained a mystery. The new research, led by Teresa Paneque-Carreño -- a recent graduate of the Universidad de Chile and PhD student at the University of Leiden and the European Southern Observatory, and the primary author on the first of the two papers -- focuses on unlocking the mystery of planet formation.

During observations, scientists confirmed that the Elias 2-27 star system -- a young star located less than 400 light-years away from Earth in the constellation Ophiuchus -- was exhibiting evidence of gravitational instabilities which occur when planet-forming disks carry a large fraction of the system's stellar mass. "How exactly planets form is one of the main questions in our field. However, there are some key mechanisms that we believe can accelerate the process of planet formation," said Paneque-Carreño. "We found direct evidence for gravitational instabilities in Elias 2-27, which is very exciting because this is the first time that we can show kinematic and multi-wavelength proof of a system being gravitationally unstable. Elias 2-27 is the first system that checks all of the boxes."

Elias 2-27's unique characteristics have made it popular with ALMA scientists for more than half a decade. In 2016, a team of scientists using ALMA discovered a pinwheel of dust swirling around the young star. The spirals were believed to be the result of density waves, commonly known to produce the recognizable arms of spiral galaxies -- like the Milky Way Galaxy -- but at the time, had never before been seen around individual stars.

"We discovered in 2016 that the Elias 2-27 disk had a different structure from other already studied systems, something not observed in a protoplanetary disk before: two large-scale spiral arms. Gravitational instabilities were a strong possibility, but the origin of these structures remained a mystery and we needed further observations," said Laura Pérez, Assistant Professor at the Universidad de Chile and the principal investigator on the 2016 study. Together with collaborators, she proposed further observations in multiple ALMA bands that were analyzed with Paneque-Carreño as a part of her M.Sc. thesis at Universidad de Chile.

In addition to confirming gravitational instabilities, scientists found perturbations -- or disturbances -- in the star system above and beyond theoretical expectations. "There may still be new material from the surrounding molecular cloud falling onto the disk, which makes everything more chaotic," said Paneque-Carreño, adding that this chaos has contributed to interesting phenomena that have never been observed before, and for which scientists have no clear explanation. "The Elias 2-27 star system is highly asymmetric in the gas structure. This was completely unexpected, and it is the first time we've observed such vertical asymmetry in a protoplanetary disk."

Cassandra Hall, Assistant Professor of Computational Astrophysics at the University of Georgia, and a co-author on the research, added that the confirmation of both vertical asymmetry and velocity perturbations -- the first large-scale perturbations linked to spiral structure in a protoplanetary disk -- could have significant implications for planet formation theory. "This could be a 'smoking gun' of gravitational instability, which may accelerate some of the earliest stages of planet formation. We first predicted this signature in 2020, and from a computational astrophysics point of view, it's exciting to be right."

Paneque-Carreño added that while the new research has confirmed some theories, it has also raised new questions. "While gravitational instabilities can now be confirmed to explain the spiral structures in the dust continuum surrounding the star, there is also an inner gap, or missing material in the disk, for which we do not have a clear explanation."

One of the barriers to understanding planet formation was the lack of direct measurement of the mass of planet-forming disks, a problem addressed in the new research. The high sensitivity of ALMA Band 6, paired with Bands 3 and 7, allowed the team to more closely study the dynamical processes, density, and even the mass of the disk. "Previous measurements of protoplanetary disk mass were indirect and based only on dust or rare isotopologues. With this new study, we are now sensitive to the entire mass of the disk," said Benedetta Veronesi -- a graduate student at the University of Milan and postdoctoral researcher at École normale supérieure de Lyon, and the lead author on the second paper. "This finding lays the foundation for the development of a method to measure disk mass that will allow us to break down one of the biggest and most pressing barriers in the field of planet formation. Knowing the amount of mass present in planet-forming disks allows us to determine the amount of material available for the formation of planetary systems, and to better understand the process by which they form."
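
In schematic terms (a hedged outline of the principle, not the authors' full modeling, which also treats pressure support and the disk's structure), a disk carrying a non-negligible fraction of the system's mass adds its own gravity to the star's, so the measured gas rotation deviates from the purely Keplerian curve:

```latex
v_{\mathrm{gas}}^{2}(r) \;\simeq\; \frac{G M_{\star}}{r} \;+\; v_{\mathrm{disk}}^{2}(r)
```

Fitting the observed gas velocities with and without the disk's self-gravity term therefore constrains the disk mass directly from the kinematics.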

Read more at Science Daily

A quarter of adults don't want children -- and they're still happy

Parenting is one of life's greatest joys, right? Not for everyone. New research from Michigan State University psychologists examines characteristics and satisfaction of adults who don't want children.

As more people acknowledge they simply don't want to have kids, Jennifer Watling Neal and Zachary Neal, both associate professors in MSU's department of psychology, are among the first to dive deeper into how these "child-free" individuals differ from others.

"Most studies haven't asked the questions necessary to distinguish 'child-free' individuals -- those who choose not to have children -- from other types of nonparents," Jennifer Watling Neal said. "Nonparents can also include the 'not-yet-parents' who are planning to have kids, and 'childless' people who couldn't have kids due to infertility or circumstance. Previous studies simply lumped all nonparents into a single category to compare them to parents."

The study -- published June 16 in PLOS ONE -- used a set of three questions to identify child-free individuals separately from parents and other types of nonparents. The researchers used data from a representative sample of 1,000 adults who completed MSU's State of the State Survey, conducted by the university's Institute for Public Policy and Social Research.

"After controlling for demographic characteristics, we found no differences in life satisfaction and limited differences in personality traits between child-free individuals and parents, not-yet-parents, or childless individuals," Zachary Neal said. "We also found that child-free individuals were more liberal than parents, and that people who aren't child-free felt substantially less warm toward child-free individuals."

Beyond findings related to life satisfaction and personality traits, the research unveiled additional unexpected findings.

"We were most surprised by how many child-free people there are," Jennifer Watling Neal said. "We found that more than one in four people in Michigan identified as child-free, which is much higher than the estimated prevalence rate in previous studies that relied on fertility to identify child-free individuals. These previous studies placed the rate at only 2% to 9%. We think our improved measurement may have been able to better capture individuals who identify as child-free."

Read more at Science Daily

How long-known genes continue to surprise researchers

Alternative splicing can lead to the formation of numerous protein variants. For the first time, alternative splicing has now been systematically analysed for the family of glutamate receptors.

Proteins are encoded by genes -- however, this information is divided into small coding sections, which are only assembled during a process called splicing. Various combinations are possible, some of which are still unknown. Dr. Robin Herbrechter and Professor Andreas Reiner from the junior research group Cellular Neurobiology at Ruhr-Universität Bochum (RUB) now systematically analysed alternative splicing in the family of ionotropic glutamate receptors (iGluRs), which is essential for signal processing in the brain. These findings were published in the journal Cellular and Molecular Life Sciences on 8 June 2021.

Huge splicing diversity in the brain

The human genome was sequenced around 20 years ago. Since then, the sequence information encoding our proteins has been known -- at least in principle. However, this information is not continuously stored in the individual genes, but is divided into smaller coding sections. These coding sections, also known as exons, are assembled in a process called splicing. Depending on the gene, different exon combinations are possible, which is why they are referred to as different or alternative splicing combinations.

Almost all 20,000 human genes can be alternatively spliced. A particularly huge variety of different splice variants is found in the brain, which creates great diversity and allows the proteins to be adapted to specific requirements. "However, it is not easy to determine which protein variants are actually present," says Andreas Reiner. "Sequencing of already-spliced messenger RNAs (mRNAs), so-called RNA-Seq data, which are now increasingly being obtained with high-throughput approaches, offers a way out." Robin Herbrechter and Andreas Reiner now used such data to obtain an overview of all ionotropic glutamate receptor splice variants.

New glutamate receptor variants detected


Using bioinformatic methods, the researchers aligned billions of mRNA sequence snippets to the genome to reconstruct the frequency of individual splice events. This method also enabled them to detect new, previously unknown splice variants. There were quite a few surprises: the systematic analysis showed that some variants found in the previously studied model organisms mouse and rat do not occur in humans at all, or are much less abundant than previously assumed.
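
As a hedged illustration of how such alignments translate into evidence for a splice variant, the toy function below counts spliced reads that support one specific exon-exon junction in an indexed BAM file. The file name and coordinates are hypothetical, and this is not the authors' pipeline.

```python
# Toy sketch: count RNA-Seq reads whose alignment skips exactly one assumed
# intron (i.e., supports a specific splice junction). Requires pysam and an
# indexed BAM file of spliced alignments; all names/coordinates are hypothetical.
import pysam

def count_junction_reads(bam_path, chrom, intron_start, intron_end):
    """Count reads with an 'N' CIGAR gap spanning [intron_start, intron_end) (0-based)."""
    n = 0
    with pysam.AlignmentFile(bam_path, "rb") as bam:
        for read in bam.fetch(chrom, intron_start, intron_end):
            pos = read.reference_start
            for op, length in read.cigartuples or []:
                if op == 3 and pos == intron_start and pos + length == intron_end:
                    n += 1                      # op 3 = 'N', a skipped region (intron)
                if op in (0, 2, 3, 7, 8):       # operations that consume the reference
                    pos += length
    return n

# Hypothetical usage:
# print(count_junction_reads("brain_rnaseq.bam", "chr5", 153_500_000, 153_502_000))
```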

Read more at Science Daily

New method could reveal what genes we might have inherited from Neanderthals

Thousands of years ago, archaic humans such as Neanderthals and Denisovans went extinct. But before that, they interbred with the ancestors of present-day humans, who still to this day carry genetic mutations from the extinct species.

Over 40 percent of the Neanderthal genome is thought to have survived in different present-day humans of non-African descent, but spread out so that any individual genome is only composed of up to two percent Neanderthal material. Some human populations also carry genetic material from Denisovans -- a mysterious group of archaic humans that may have lived in Eastern Eurasia and Oceania thousands of years ago.

The introduction of beneficial genetic material into our gene pool, a process known as adaptive introgression, often happened because it was advantageous to humans after they expanded across the globe. To name a few examples, scientists believe some of the mutations affected skin development and metabolism. But many such mutations remain undiscovered.

Now, researchers from GLOBE Institute at the University of Copenhagen have developed a new method using deep learning techniques to search the human genome for undiscovered mutations.

"We developed a deep learning method called 'genomatnn' that jointly models introgression, which is the transfer of genetic information between species, and natural selection. The model was developed in order to identify regions in the human genome where this introgression could have happened," says Associate Professor Fernando Racimo, GLOBE Institute, corresponding author of the new study.

"Our method is highly accurate and outcompetes previous approaches in power. We applied it to various human genomic datasets and found several candidate beneficial gene variants that were introduced into the human gene pool," he says.

The new method is based on a so-called convolutional neural network (CNN), which is a type of deep learning framework commonly used in image and video recognition.

Using hundreds of thousands of simulations, the researchers at the University of Copenhagen trained the CNN to identify patterns in images of the genome that would be produced by adaptive introgression with archaic humans.
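
For readers curious what "training a CNN on genome images" can look like in practice, here is a minimal, hedged sketch of a binary classifier over genotype-matrix images. It is an illustration only: the layer sizes, input shape, and random stand-in data are assumptions, not the genomatnn architecture or its simulated training set.

```python
# Toy sketch (not genomatnn): a small CNN that scores genotype-matrix "images"
# for adaptive introgression vs. not. Real training data would come from
# population-genetic simulations; random arrays are used here as stand-ins.
import numpy as np
import tensorflow as tf

n_haplotypes, n_snps = 128, 256   # assumed image dimensions (rows = haplotypes, cols = SNPs)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_haplotypes, n_snps, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(2),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # probability of adaptive introgression
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

X = np.random.randint(0, 2, size=(64, n_haplotypes, n_snps, 1)).astype("float32")  # stand-in genotypes
y = np.random.randint(0, 2, size=(64,)).astype("float32")                          # stand-in labels
model.fit(X, y, epochs=1, batch_size=16, verbose=0)
```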

Besides confirming already suggested genetic mutations from adaptive introgression, the researchers also discovered possible mutations that were not known to be introgressed.

"We recovered previously identified candidates for adaptive introgression in modern humans, as well as several candidates which have not previously been described," says postdoc Graham Gower, first author of the new study.

Some of the previously undescribed mutations are involved in core pathways in human metabolism and immunity.

"In European genomes, we found two strong candidates for adaptive introgression from Neanderthals in regions of the genome that affect phenotypes related to blood, including blood cell counts. In Melanesian genomes, we found candidate variants introgressed from Denisovans that potentially affected a wide range of traits, such as blood-related diseases, tumor suppression, skin development, metabolism, and various neurological diseases. It's not clear how such traits are affected in present-day carriers of the archaic variants, e.g. neutrally, positively or negatively, although historically the introgressed genetic material is assumed to have had a positive effect on those individuals carrying them," he explains.

The next stage for the research team is to adapt the method to more complex demographic and selection scenarios to understand the overall fate of Neanderthal genetic material. Graham Gower points out that the team aims to follow up on the function of the candidate variants in the genome that they found in this study.

Looking forward, it remains a challenge to search the human genome for genetic material from as yet unsampled populations, so-called ghost populations. However, the researchers are hopeful that they can further train the neural network to recognize mutations from these unsampled populations.

Read more at Science Daily

Jun 15, 2021

Lightning impacts edge of space in ways not previously observed

Solar flares jetting out from the sun and thunderstorms generated on Earth impact the planet's ionosphere in different ways, which have implications for the ability to conduct long range communications.

A team of researchers working with data collected by the Incoherent Scatter Radar (ISR) at the Arecibo Observatory, satellites, and lightning detectors in Puerto Rico have for the first time examined the simultaneous impacts of thunderstorms and solar flares on the ionospheric D-region (often referred to as the edge of space).

In the first-of-its-kind analysis, the team determined that solar flares and lightning from thunderstorms trigger unique changes to that edge of space, which is used for long-range communications such as the GPS found in vehicles and airplanes.

The work, led by New Mexico Tech assistant professor of physics Caitano L. da Silva, was published recently in Scientific Reports, a journal of the Nature Publishing Group.

"These are really exciting results," says da Silva. "One of the key things we showed in the paper is that lightning- and solar flare-driven signatures are completely different. The first tends to create electron density depletions, while the second enhancements (or ionization)."

While the AO radar used in the study is no longer available because of the collapse of AO's telescope in December of 2020, scientists believe that the data they collected and other AO historical data will be instrumental in advancing this work.

"This study helps emphasize that, in order to fully understand the coupling of atmospheric regions, energy input from below (from thunderstorms) into the lower ionosphere needs to be properly accounted for," da Silva says. "The wealth of data collected at AO over the years will be a transformative tool to quantify the effects of lightning in the lower ionosphere."

Better understanding the impact on the Earth's ionosphere will help improve communications.

da Silva worked with a team of researchers at the Arecibo Observatory (AO) in Puerto Rico, a National Science Foundation facility managed by the University of Central Florida under a cooperative agreement. The co-authors are AO Senior Scientist Pedrina Terra, Assistant Director of Science Operations Christiano G. M. Brum, and Sophia D. Salazar, a student at NMT who spent her 2019 summer at the AO as part of the NSF-supported Research Experiences for Undergraduates (REU) program. Salazar completed the initial analysis of the data as part of her internship, under the senior scientists' supervision.

"The Arecibo Observatory REU is hands down one of the best experiences I've had so far," says the 21-year-old. "The support and encouragement provided by the AO staff and REU students made the research experience everything that it was. There were many opportunities to network with scientists at AO from all over the world, many of which I would likely never have met without the AO REU."

Read more at Science Daily

Dark matter is slowing the spin of the Milky Way's galactic bar

The spin of the Milky Way's galactic bar, which is made up of billions of clustered stars, has slowed by about a quarter since its formation, according to a new study by researchers at University College London (UCL) and the University of Oxford.

For 30 years, astrophysicists have predicted such a slowdown, but this is the first time it has been measured.

The researchers say it gives a new type of insight into the nature of dark matter, which acts like a counterweight slowing the spin.

In the study, published in the Monthly Notices of the Royal Astronomical Society, researchers analysed Gaia space telescope observations of a large group of stars, the Hercules stream, which are in resonance with the bar -- that is, they revolve around the galaxy at the same rate as the bar's spin.

These stars are gravitationally trapped by the spinning bar. The same phenomenon occurs with Jupiter's Trojan and Greek asteroids, which orbit Jupiter's Lagrange points (ahead and behind Jupiter). If the bar's spin slows down, these stars would be expected to move further out in the galaxy, keeping their orbital period matched to that of the bar's spin.
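
Schematically (a hedged sketch under the simplifying assumption of a flat circular-speed curve, not the paper's analysis), the trapped stars sit near the bar's corotation resonance, where the orbital angular frequency matches the bar's pattern speed:

```latex
\Omega(R_{\mathrm{CR}}) = \Omega_{\mathrm{b}}, \qquad
\Omega(R) \simeq \frac{v_{\mathrm{c}}}{R}
\;\;\Rightarrow\;\;
R_{\mathrm{CR}} \simeq \frac{v_{\mathrm{c}}}{\Omega_{\mathrm{b}}}
```

Under this assumption the resonance radius scales inversely with the pattern speed, so a bar that has slowed by about a quarter would carry its trapped stars roughly a third farther out.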

The researchers found that the stars in the stream carry a chemical fingerprint -- they are richer in heavier elements (called metals in astronomy), proving that they have travelled away from the galactic centre, where stars and star-forming gas are about 10 times as rich in metals compared to the outer galaxy.

Using this data, the team inferred that the bar -- made up of billions of stars and trillions of solar masses -- had slowed down its spin by at least 24% since it first formed.

Co-author Dr Ralph Schoenrich (UCL Physics & Astronomy) said: "Astrophysicists have long suspected that the spinning bar at the centre of our galaxy is slowing down, but we have found the first evidence of this happening.

"The counterweight slowing this spin must be dark matter. Until now, we have only been able to infer dark matter by mapping the gravitational potential of galaxies and subtracting the contribution from visible matter.

"Our research provides a new type of measurement of dark matter -- not of its gravitational energy, but of its inertial mass (the dynamical response), which slows the bar's spin."

Co-author and PhD student Rimpei Chiba, of the University of Oxford, said: "Our finding offers a fascinating perspective for constraining the nature of dark matter, as different models will change this inertial pull on the galactic bar.

"Our finding also poses a major problem for alternative gravity theories -- as they lack dark matter in the halo, they predict no, or significantly too little slowing of the bar."

The Milky Way, like other galaxies, is thought to be embedded in a 'halo' of dark matter that extends well beyond its visible edge.

Dark matter is invisible and its nature is unknown, but its existence is inferred from galaxies behaving as if they were shrouded in significantly more mass than we can see. There is thought to be about five times as much dark matter in the Universe as ordinary, visible matter.

Alternative gravity theories such as modified Newtonian dynamics reject the idea of dark matter, instead seeking to explain discrepancies by tweaking Einstein's theory of general relativity.

The Milky Way is a barred spiral galaxy, with a thick bar of stars in the middle and spiral arms extending through the disc outside the bar. The bar rotates in the same direction as the galaxy.

Read more at Science Daily

Introducing play to higher education reduces stress and forms deeper connection to material

A new study found higher education students are more engaged and motivated when they are taught using playful pedagogy rather than the traditional lecture-based method. The study was conducted by University of Colorado Denver counseling researcher Lisa Forbes and was published in the Journal of Teaching and Learning.

While many educators in higher education believe play is a method that is solely used for elementary education, Forbes argues that play is important in post-secondary education to enhance student learning outcomes.

Throughout the spring 2020 semester, Forbes observed students between the ages of 23 and 43 who were enrolled in three of her courses. To introduce playful pedagogy, Forbes included games and play, not always tied to the content of that day's lesson, at the start of each class. She then provided many opportunities for role-play to practice counseling skills, and designed competitions within class activities.

During the study, students mentioned they saw more opportunities for growth while learning in a highly interactive environment. Students also described that the hands-on nature of learning through play established a means for skill acquisition, and they were able to retain the content more effectively.

"As we grow older, we're conditioned to believe that play is trivial, childish, and a waste of time," said Forbes. "This social script about play leads to it being excluded from higher education. A more interactive learning approach leads to a deeper and more rigorous connection to the material."

To maintain what Forbes described as "rigor" within higher education, the most common approach tends to be lecture-based learning. However, according to Forbes, this mode of education is counter to the very outcomes educators set out to achieve.

The results of the study suggest there is a unique and powerful classroom experience when play is valued and used in the learning process. According to Forbes, students who participated in this study also indicated that play increased positive emotions and connections with other students and the professor in the course.

"I also saw that when I introduced play, it helped students let their guard down and allowed them to reduce their stress, fear, or anxiety," said Forbes. "Play even motivated students to be vulnerably engaged, take risks, and feel more connected to the content."

Read more at Science Daily

One step towards a daily-use deep UV light source for sterilization and disinfection

Researchers from the Graduate School of Engineering and the Center for Quantum Information and Quantum Biology at Osaka University unveiled a new solid state second-harmonic generation (SHG) device that converts infrared radiation into blue light. This work may lead to a practical daily-use deep ultraviolet light source for sterilization and disinfection.

Recently, deep ultraviolet (DUV) light sources have been attracting much attention for sterilization and disinfection. In order to realize a bactericidal effect while ensuring user safety, a wavelength range of 220-230 nm is desirable. But DUV light sources in this wavelength range that are both durable and highly efficient have not yet been developed. Although wavelength conversion devices are promising candidates, conventional ferroelectric wavelength conversion materials cannot be applied to DUV devices because of their absorption edge.

Since nitride semiconductors such as gallium nitride and aluminum nitride have relatively high optical nonlinearity, they can be applied to wavelength conversion devices. Because it is transparent down to 210 nm, aluminum nitride is particularly suitable for DUV wavelength conversion devices. However, realizing structures with periodically inverted polarity, as in conventional ferroelectric wavelength conversion devices, has proven quite difficult.

The researchers proposed a novel monolithic microcavity wavelength conversion device without a polarity-inverted structure. A fundamental wave is enhanced significantly in the microcavity with two distributed Bragg reflectors (DBR), and counter-propagating second harmonic waves are efficiently emitted in phase from one side. As the first step towards a practical DUV light source, a gallium nitride microcavity device was fabricated via microfabrication technology, including dry etching and anisotropic wet etching for vertical and smooth DBR sidewalls. The generation of a blue SH wave successfully demonstrated the effectiveness of the proposed concept.

"Our device can be adapted to use a broader range of materials. They can be applied to deep ultraviolet light emission or even broadband photon pair generation," senior author Masahiro Uemukai says. The researchers hope that because this approach does not rely on materials or periodically inverted structures, it will make future nonlinear optical devices easier to construct.

From Science Daily

New evidence of early SARS-CoV-2 infections in the United States

A new antibody testing study examining samples originally collected through the National Institutes of Health's All of Us Research Program found evidence of SARS-CoV-2 infections in five states earlier than had initially been reported. These findings were published in the journal Clinical Infectious Diseases. The results expand on findings from a Centers for Disease Control and Prevention study that suggested SARS-CoV-2, the virus that causes COVID-19, was present in the U.S. as far back as December 2019.

In the All of Us study, researchers analyzed more than 24,000 stored blood samples contributed by program participants across all 50 states between Jan. 2 and March 18, 2020. Researchers detected antibodies against SARS-CoV-2 using two different serology tests in nine participants' samples. These participants were from outside the major urban hotspots of Seattle and New York City, believed to be key points of entry of the virus in the U.S. The positive samples came as early as Jan. 7 from participants in Illinois, Massachusetts, Mississippi, Pennsylvania and Wisconsin. Most positive samples were collected prior to the first reported cases in those states, demonstrating the importance of expanding testing as quickly as possible in an epidemic setting.

"This study allows us to uncover more information about the beginning of the U.S. epidemic and highlights the real-world value of longitudinal research in understanding dynamics of emerging diseases like COVID-19," said Josh Denny, M.D., M.S., chief executive officer of All of Us and an author of the study. "Our participants come from diverse communities across the U.S. and give generously of themselves to drive a wide range of biomedical discoveries, which are vital for informing public health strategies and preparedness."

In studies like these, false positives are a concern, particularly when the prevalence of viral infections is low, as was the case in the early days of the U.S. epidemic. Researchers in this study followed CDC guidance to use sequential testing on two separate platforms to minimize false positive results.

All of Us worked with Quest Diagnostics to test samples on the Abbott Architect SARS-CoV-2 IgG ELISA and the EUROIMMUN SARS-CoV-2 ELISA (IgG) platforms. For a sample to be considered "positive" by the research team, it had to have positive results on both platforms, which target antibodies that bind to different parts of the virus. Both tests have emergency use authorization from the FDA.
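To see why requiring positives on both platforms matters so much at low prevalence, here is a minimal back-of-envelope sketch in Python. The sensitivity, specificity and prevalence figures are illustrative assumptions, not the actual performance of the Abbott or EUROIMMUN assays, and the two tests are assumed to err independently:

    # Hypothetical figures for illustration only.
    prevalence = 0.001      # assume 0.1% of samples are truly positive
    sensitivity = 0.95      # assumed chance a true positive tests positive
    specificity = 0.99      # assumed chance a true negative tests negative

    def positive_predictive_value(n_tests: int) -> float:
        # Probability a sample is truly positive given n_tests independent positive results.
        true_pos = prevalence * sensitivity ** n_tests
        false_pos = (1 - prevalence) * (1 - specificity) ** n_tests
        return true_pos / (true_pos + false_pos)

    print(positive_predictive_value(1))  # ~0.09: most single-platform positives would be false
    print(positive_predictive_value(2))  # ~0.90: requiring both platforms is far more reliable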

"Antibody testing of blood samples helps us better understand the spread of SARS-CoV-2 in the U.S. in the early days of the U.S. epidemic, when testing was restricted and public health officials could not see that the virus had already spread outside of recognized initial points of entry," said Keri N. Althoff, Ph.D., lead author and associate professor of epidemiology at the Johns Hopkins Bloomberg School of Public Health, Baltimore. "This study also demonstrates the importance of using multiple serology platforms, as recommended by the CDC."

Antibodies are proteins produced in the blood in response to an infection, such as a virus. They play a critical role in fighting infections and are helpful signs that a person may have been exposed to an infection in the past, even if they didn't show symptoms. In the All of Us study, researchers looked in participant samples for a type of antibody called IgG. These antibodies do not appear until about two weeks after a person has been infected, indicating that participants with these antibodies were exposed to the virus at least several weeks before their sample was taken. In this study, the first positive samples came from participants in Illinois and Massachusetts on Jan. 7 and 8, 2020, respectively, suggesting that the virus was present in those states in late December.

The study authors noted several limitations to their study. While the study included samples from across the U.S., the number of samples from many states was low. In addition, the authors do not know whether the participants with positive samples became infected during travel or while in their own communities. Ideally, this study could be replicated in other populations with samples collected in the initial months of the U.S. epidemic and with multiple testing platforms to compare results.

All of Us expects to release more information following further analysis, and will offer participants whose samples were included in the study an opportunity to receive their individual results. The presence of antibodies in one's blood sample does not guarantee that a person is protected from the infection (has immunity), or that any such protection will last.

Read more at Science Daily

Jun 14, 2021

The sun's clock

Not only the prominent 11-year cycle, but also all other periodic fluctuations in solar activity can be clocked by planetary gravitational forces. This is the conclusion drawn by Dr. Frank Stefani and his colleagues from the Institute of Fluid Dynamics at the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) and from the Institute of Continuous Media Mechanics in Perm, Russia. With new model calculations, they are proposing a comprehensive explanation of all important known solar cycles for the first time. They also show that the longest activity fluctuations, over thousands of years, arise from a chaotic process. Despite the planetary timing of short and medium cycles, long-term forecasts of solar activity are therefore impossible, as the researchers report in the scientific journal Solar Physics.

Solar physicists around the world have long been searching for satisfactory explanations for the sun's many cyclical, overlapping activity fluctuations. In addition to the most famous, approximately 11-year "Schwabe cycle," the sun also exhibits longer fluctuations, ranging from hundreds to thousands of years. It follows, for example, the "Gleissberg cycle" (about 85 years), the "Suess-de Vries cycle" (about 200 years) and the quasi-cycle of "Bond events" (about 1500 years), each named after their discoverers. It is undisputed that the solar magnetic field controls these activity fluctuations.

Explanations and models in expert circles partly diverge widely as to why the magnetic field changes at all. Is the sun controlled externally or does the reason for the many cycles lie in special peculiarities of the solar dynamo itself? HZDR researcher Frank Stefani and his colleagues have been searching for answers for years -- mainly to the very controversial question as to whether the planets play a role in solar activity.

Rosette-shaped movement of the sun can produce a 193-year cycle

The researchers have most recently taken a closer look at the sun's orbital movement. The sun does not remain fixed at the center of the solar system: It performs a kind of dance in the common gravitational field with the massive planets Jupiter and Saturn -- with a period of 19.86 years. We know from the Earth that this kind of orbital motion triggers small motions in the planet's liquid core. Something similar also occurs within the sun, but this has so far been neglected with regard to its magnetic field.

The researchers came up with the idea that part of the sun's angular orbital momentum could be transferred to its rotation and thus affect the internal dynamo process that produces the solar magnetic field. Such coupling would be sufficient to change the extremely sensitive magnetic storage capacity of the tachocline, a transition region between different types of energy transport in the sun's interior. "The coiled magnetic fields could then more easily snap to the sun's surface," says Stefani.

The researchers integrated one such rhythmic perturbation of the tachocline into their previous model calculations of a typical solar dynamo, and they were thus able to reproduce several cyclical phenomena that were known from observations. What was most remarkable was that, in addition to the 11.07-year Schwabe cycle they had already modeled in previous work, the strength of the magnetic field now also changed with a period of 193 years -- this could be the sun's Suess-de Vries cycle, which from observations has been reported to be 180 to 230 years. Mathematically, the 193 years arise as what is known as a beat period between the 19.86-year cycle and the twofold Schwabe cycle, also called the Hale cycle. The Suess-de Vries cycle would thus be the result of a combination of two external "clocks": the planets' tidal forces and the sun's own movement in the solar system's gravitational field.
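The beat-period arithmetic is easy to check; the sketch below simply applies the standard beat formula for two nearby periods to the figures quoted above:

    # Beat period between the sun's 19.86-year orbital motion and the 22.14-year
    # Hale cycle (twice the 11.07-year Schwabe cycle).
    p_orbit = 19.86
    p_hale = 2 * 11.07
    beat = 1 / abs(1 / p_orbit - 1 / p_hale)
    print(round(beat))  # -> 193 years, the Suess-de Vries-like period in the model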

Planets as a metronome

For the 11.07-year cycle, Stefani and his colleagues had previously found strong statistical evidence that it must follow an external clock. They linked this "clock" to the tidal forces of the planets Venus, Earth and Jupiter. Their effect is greatest when the planets are aligned: an alignment that occurs every 11.07 years. As with the 193-year cycle, a sensitive physical effect was also decisive here in allowing the planets' weak tidal forces to exert a sufficient influence on the solar dynamo.

After initial skepticism toward the planetary hypothesis, Stefani now assumes that these connections are not coincidental. "If the sun was playing a trick on us here, then it would be with incredible perfection. Or, in fact, we have a first inkling of a complete picture of the short and long solar activity cycles." In fact, the current results also retroactively reaffirm that the 11-year cycle must be a timed process. Otherwise, the occurrence of a beat period would be mathematically impossible.

Tipping into chaos: collapses in activity every 1,000 to 2,000 years cannot be predicted

In addition to the rather shorter activity cycles, the sun also exhibits long-term trends in the thousand-year range. These are characterized by prolonged drops in activity, known as "minima," such as the most recent "Maunder Minimum," which occurred between 1645 and 1715 during the "Little Ice Age." By statistically analyzing the observed minima, the researchers could show that these are not cyclical processes, but that their occurrence at intervals of approximately one to two thousand years follows a mathematical random process.

Read more at Science Daily

Black holes help with star birth

Research combining systematic observations with cosmological simulations has found that, surprisingly, black holes can help certain galaxies form new stars. On scales of galaxies, the role of supermassive black holes for star formation had previously been seen as destructive -- active black holes can strip galaxies of the gas that galaxies need to form new stars. The new results, published in the journal Nature, showcase situations where active black holes can, instead, "clear the way" for galaxies that orbit inside galaxy groups or clusters, keeping those galaxies from having their star formation disrupted as they fly through the surrounding intergalactic gas.

Active black holes are primarily thought to have a destructive influence on their surroundings. As they blast energy into their host galaxy, they heat up and eject that galaxy's gas, making it more difficult for the galaxy to produce new stars. But now, researchers have found that the same activity can actually help with star formation -- at least for the satellite galaxies that orbit the host galaxy.

The counter-intuitive result came out of a collaboration sparked by a lunchtime conversation between astronomers specializing in large-scale computer simulations and observational astronomers. As such, it is a good example of the kind of informal interaction that has become more difficult under pandemic conditions.

Astronomical observations that include taking a distant galaxy's spectrum -- the rainbow-like separation of a galaxy's light into different wavelengths -- allow for fairly direct measurements of the rate at which that galaxy is forming new stars.

Going by such measurements, some galaxies are forming stars at rather sedate rates. In our own Milky Way galaxy, only one or two new stars are born each year. Others undergo brief bursts of excessive star formation activity, called "star bursts," with hundreds of stars born per year. In yet other galaxies, star formation appears to be suppressed, or "quenched," as astronomers say: Such galaxies have virtually stopped forming new stars.

One kind of galaxy that is frequently -- almost half of the time -- found in such a quenched state is the so-called satellite galaxy. These galaxies are part of a group or cluster of galaxies, their mass is comparatively low, and they orbit a much more massive central galaxy, similar to the way satellites orbit the Earth.

Such galaxies typically form very few new stars, if any, and since the 1970s astronomers have suspected that something very much akin to a headwind might be to blame: groups and clusters of galaxies contain not only galaxies but also hot, thin gas filling the intergalactic space.

As a satellite galaxy orbits through the cluster at a speed of hundreds of kilometers per second, the thin gas would make it feel the same kind of "headwind" that someone riding a fast bike, or motor-bike, will feel. The satellite galaxy's stars are much too compact to be affected by the steady stream of oncoming intergalactic gas.

But the satellite galaxy's own gas is not: It would be stripped away by the oncoming hot gas in a process known as "ram pressure stripping." On the other hand, a fast-moving galaxy has no chance of pulling in a sufficient amount of intergalactic gas, to replenish its gas reservoir. The upshot is that such satellite galaxies lose their gas almost completely -- and with it the raw material needed for star formation. As a result, star-formation activity would be quenched.

The processes in question take place over millions or even billions of years, so we cannot watch them happening directly. But even so, there are ways for astronomers to learn more. They can utilize computer simulations of virtual universes, programmed so as to follow the relevant laws of physics -- and compare the results with what we actually observe. And they can look for tell-tale clues in the comprehensive "snapshot" of cosmic evolution that is provided by astronomical observations.

Annalisa Pillepich, a group leader at the Max Planck Institute for Astronomy (MPIA), specializes in simulations of this kind. The IllustrisTNG suite of simulations, which Pillepich has co-led, provides the most detailed virtual universes to date -- universes in which researchers can follow the movement of gas around on comparatively small scales.

IllustrisTNG provides some extreme examples of satellite galaxies that have freshly been stripped by ram pressure: so-called "jellyfish galaxies," that are trailing the remnants of their gas like jellyfish are trailing their tentacles. In fact, identifying all the jellyfish in the simulations is a recently launched citizen science project on the Zooniverse platform, where volunteers can help with the research into that kind of freshly quenched galaxy.

But, while jellyfish galaxies are relevant, they are not where the present research project started. Over lunch in November 2019, Pillepich recounted a different one of her IllustrisTNG results to Ignacio Martín-Navarro, an astronomer specializing in observations, who was at MPIA on a Marie Curie fellowship: a result about the influence of supermassive black holes that reaches beyond the host galaxy into intergalactic space.

Such supermassive black holes can be found in the center of all galaxies. Matter falling onto such a black hole typically becomes part of a rotating so-called accretion disk surrounding the black hole, before falling into the black hole itself. This fall onto the accretion disk liberates an enormous amount of energy in the form of radiation, and oftentimes also in the form of two jets of quickly moving particles, which accelerate away from the black hole at right angles to the accretion disk. A supermassive black hole that is emitting energy in this way is called an Active Galactic Nucleus, AGN for short.

While IllustrisTNG is not detailed enough to include black hole jets, it does contain physical terms that simulate how an AGN is adding energy to the surrounding gas. And as the simulation showed, that energy injection will lead to gas outflows, which in turn will orient themselves along a path of least resistance: in the case of disk galaxies similar to our own Milky Way, perpendicular to the stellar disk; for so-called elliptical galaxies, perpendicular to a suitable preferred plane defined by the arrangement of the galaxy's stars.

Over time, the bipolar gas outflows, perpendicular to the disk or preferred plane, will go so far as to affect the intergalactic environment -- the thin gas surrounding the galaxy. They will push the intergalactic gas away, each outflow creating a gigantic bubble. It was this account that got Pillepich and Martín-Navarro thinking: If a satellite galaxy were to pass through that bubble -- would it be affected by the outflow, and would its star formation activity be quenched even further?

Martín-Navarro took up this question within his own domain. He had extensive experience in working with data from one of the largest systematic surveys to date: the Sloan Digital Sky Survey (SDSS), which provides high-quality images of a large part of the Northern hemisphere. In the publicly available data from that survey's 10th data release, he examined 30,000 galaxy groups and clusters, each containing a central galaxy and on average 4 satellite galaxies.

In a statistical analysis of those thousands of systems, he found a small but marked difference between satellite galaxies that were close to the central galaxy's preferred plane and satellites that were markedly above and below. But the difference was in the opposite direction from what the researchers had expected: Satellites above and below the plane, within the thinner bubbles, were on average not more likely, but about 5% less likely to have had their star formation activity quenched.

With that surprising result, Martín-Navarro went back to Annalisa Pillepich, and the two performed the same kind of statistical analysis in the virtual universe of the IllustrisTNG simulations. In simulations of that kind, after all, cosmic evolution is not put in "by hand" by the researchers. Instead, the software includes rules that model the rules of physics for that virtual universe as naturally as possible, and which also include suitable initial conditions that correspond to the state of our own universe shortly after the Big Bang.

That is why simulations like that leave room for the unexpected -- in this particular case, for re-discovering the on-plane, off-plane distribution of quenched satellite galaxies: The virtual universe showed the same 5% deviation for the quenching of satellite galaxies! Evidently, the researchers were on to something.

In time, Pillepich, Martín-Navarro and their colleagues came up with a hypothesis for the physical mechanism behind the quenching variation. Consider a satellite galaxy travelling through one of the thinned-out bubbles the central black hole has blown into the surrounding intergalactic medium. Due to the lower density, that satellite galaxy experiences less headwind, less ram pressure, and is thus less likely to have its gas stripped away.

Then, it is down to statistics. For satellite galaxies that have orbited the same central galaxies several times already, traversing bubbles but also the higher-density regions in between, the effect will not be noticeable. Such galaxies will have lost their gas long ago.

But for satellite galaxies that have joined the group, or cluster, rather recently, location will make a difference: If those satellites happen to land in a bubble first, they are less likely to lose their gas than if they happen to land outside a bubble. This effect could account for the statistical difference for the quenched satellite galaxies.

Read more at Science Daily

From milk protein, a plastic foam that gets better in a tough environment

A new high-performance plastic foam developed from whey proteins can withstand extreme heat better than many common thermoplastics made from petroleum. A research team in Sweden reports that the material, which may be used for example in catalysts for cars, fuel filters or packaging foam, actually improves its mechanical performance after days of exposure to high temperatures.

Reporting in Advanced Sustainable Systems, researchers from KTH Royal Institute of Technology in Stockholm say the research opens the door to using protein-based foam materials in potentially tough environments, such as filtration, thermal insulation and fluid absorption.

The basic building blocks of the material are protein nanofibrils, or PNFs, which are self-assembled from hydrolyzed whey proteins -- a product from cheese-processing -- under specific temperature and pH conditions.

In tests the foams improved with aging. After one month of exposure to a temperature of 150°C, the material became stiffer, tougher and stronger, says the study's co-author, Mikael Hedenqvist, professor in the Division of Polymeric Materials at KTH.

"This material only gets stronger with time," he says. "If we compare with petroleum-based, commercial foam materials made of polyethylene and polystyrene, they melt instantly and decompose under the same harsh conditions."

Proteins are often water-soluble, which poses a challenge when developing protein-based materials. Despite this, the material proved water-resistant after the aging process, which polymerized the protein, creating new covalent bonds that stabilized the foams. The foam also resisted even more aggressive substances -- such as surfactants and reducing agents -- that normally decompose or dissolve proteins. The crosslinking also made the foam unaffected by diesel fuel or hot oil.

The material also showed better fire resistance than commonly used polyurethane thermoset.

"This biodegradable, sustainable material can be a viable option for use in aggressive environments where fire resistance is important," Hedenqvist says.

Potential applications include providing support for catalytic metals that operate at higher temperatures, such as platinum catalysts for automobiles. The material could conceivably work as a fuel filter, too.

Read more at Science Daily

Earliest memories can start from the age of two-and-a-half

On average the earliest memories that people can recall point back to when they were just two-and-a-half years old, a new study suggests.

The findings, published in the peer-reviewed journal Memory, push back previous conclusions about the average age of earliest memories by a whole year. They are presented in a new 21-year study, which followed on from a review of already-existing data.

"When one's earliest memory occurs, it is a moving target rather than being a single static memory," explains childhood amnesia expert and lead author Dr Carole Peterson, from Memorial University of Newfoundland.

"Thus, what many people provide when asked for their earliest memory is not a boundary or watershed beginning, before which there are no memories. Rather, there seems to be a pool of potential memories from which both adults and children sample.

"And, we believe people remember a lot from age two that they don't realize they do.

"That's for two reasons. First, it's very easy to get people to remember earlier memories simply by asking them what their earliest memory is, and then asking them for a few more. Then they start recalling even earlier memories -- sometimes up to a full year earlier. It's like priming a pump; once you get them started its self-prompting.

"Secondly, we've documented those early memories are systematically misdated. Over and over again we find people think they were older than they actually were in their early memories."

For more than 20 years Dr Peterson has conducted studies on memory, with a particular focus on the ability of children and adults to recall their earliest years.

This latest research reviewed 10 of her research articles on childhood amnesia followed by analyses of both published and unpublished data collected in Dr Peterson's laboratory since 1999. It featured a total of 992 participants, and memories of 697 participants were then compared to the recollections of their parents.

Overall, it shows that children's earliest memories date from earlier than they think they do, as confirmed by their parents.

In some of the research reviewed by Peterson, the evidence for moving our potential memory clock back is "compelling." For example, in one study children were interviewed two years and again eight years after their earliest memory; they were able to recall the same memory at both interviews, but in the later interview they gave an older age for when it had occurred.

"Eight years later many believed they were a full year older. So, the children, as they age, keep moving how old they thought they were at the time of those early memories," says Dr Peterson, from the Department of Psychology at Memorial University.

And she believes that the finding is due to something in memory dating called 'telescoping'.

"When you look at things that happened long ago, it's like looking through a lens.

"The more remote a memory is, the telescoping effect makes you see it as closer. It turns out they move their earliest memory forward a year to about three and a half years of age. But we found that when the child or adult is remembering events from age four and up, this doesn't happen."

She says, after combing through all of the data, it clearly demonstrates people remember a lot more of their early childhood and a lot farther back than they think they do, and it's relatively easy to help them access those memories.

"When you look at one study, sometimes things don't become clear, but when you start putting together study after study and they all come up with the same conclusions, it becomes pretty convincing."

It's this lack of clarity which Dr Peterson states is a limitation of the research and, indeed, all research done to-date in the subject area.

"What is needed now in childhood amnesia research are independently confirmed or documented external dates against which personally derived dates can be compared, as this would prevent telescoping errors and potential dating errors by parents," Dr Peterson says.

Read more at Science Daily

Young adults who lost and then restored heart health had lower risk of heart attack, stroke

Preserving good cardiovascular health during young adulthood is one of the best ways to reduce risks of premature heart attack or stroke, according to new research published today in the American Heart Association's flagship journal Circulation.

The number of premature deaths from cardiovascular disease is increasing in many countries including the U.S. While there is a wealth of information available on maintaining good heart health during and after midlife to reduce the risks of heart attack and stroke, data about cardiovascular health during young adulthood is scarce.

"Most people lose ideal cardiovascular health before they reach midlife, yet few young people have immediate health concerns and many do not usually seek medical care until approaching midlife," says the study's senior author Hyeon Chang Kim, M.D., Ph.D., a professor in the department of preventive medicine at Yonsei University College of Medicine in Seoul, South Korea. "We need strategies to help preserve or restore heart health in this population because we know poor heart health in young adults is linked to premature cardiovascular disease."

Using the Korean National Health Insurance Services, a nationwide health insurer database, Kim and colleagues analyzed information collected from more than 3.5 million adults who completed routine health exams in 2003 and 2004. A subgroup of approximately 2.9 million participants underwent a follow-up health examination between 2005 and 2008. Patients' ages ranged from 20 to 39 at the time of the first exam, and 65.5% of the study participants were male.

Participants were categorized according to ideal cardiovascular health (CVH) scores based on the American Heart Association's Life's Simple 7® metrics. They received one point toward their CVH score for each of the following measures from Life's Simple 7: well-maintained blood pressure, low total cholesterol, acceptable blood sugar levels, an active lifestyle, healthy weight and not smoking. Of note: healthy nutrition and diet, the final measure of Life's Simple 7, was not included in this analysis because dietary information was not collected from participants in this database.
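In other words, the score is simply a count of how many of the six metrics a participant meets at the ideal level. The sketch below illustrates this; it assumes the ideal/not-ideal judgment for each metric has already been made and does not reproduce the study's clinical cut-offs:

    # Minimal sketch: CVH score as a count of ideal metrics (diet excluded, as in the study).
    METRICS = ["blood_pressure", "total_cholesterol", "blood_sugar",
               "physical_activity", "body_weight", "non_smoking"]

    def cvh_score(ideal: dict) -> int:
        # 'ideal' maps each metric name to True if the participant meets the ideal level.
        return sum(1 for metric in METRICS if ideal.get(metric, False))

    example = {"blood_pressure": True, "non_smoking": True, "body_weight": False}
    print(cvh_score(example))  # -> 2, on a scale from 0 to 6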

Researchers evaluated the total number of first hospitalizations or death from a heart attack, stroke or heart failure by December 31, 2019 to define outcomes. The researchers found: 

  • Rates of premature (younger than 55) cardiovascular events were highest among patients with a CVH score of zero.
  • Each one-point increase in CVH score was associated with a 42% lower risk of heart attack, a 30% lower risk of heart failure, a 25% lower risk of cardiovascular death and a 24% lower risk of stroke.
  • While people who improved their CVH score over time reduced their risk of hospitalizations or death from a heart attack, stroke or heart failure, people who began with and maintained a higher CVH score ultimately had the least chance of hospitalization or death from a heart attack or stroke during the study period.
  • Timely and consistent monitoring of heart health among young adults is important to prevent premature onset of heart disease and reduce the risk of cardiovascular events.


The study's findings may be limited because the data came from routine health screenings and therefore may not be as robust as data collected primarily for a specific study. The study also lacks data on the participants' eating patterns, so researchers modified the CVH score metrics to exclude diet. In addition, participants in this study were of Korean ancestry, so the results may not be generalizable to people from other racial or ethnic groups.

From Science Daily

Jun 13, 2021

Asteroid 16 Psyche might not be what scientists expected

The widely studied metallic asteroid known as 16 Psyche was long thought to be the exposed iron core of a small planet that failed to form during the earliest days of the solar system. But new University of Arizona-led research suggests that the asteroid might not be as metallic or dense as once thought, and hints at a much different origin story.

Scientists are interested in 16 Psyche because if its presumed origins are true, it would provide an opportunity to study an exposed planetary core up close. NASA is scheduled to launch its Psyche mission in 2022 and arrive at the asteroid in 2026.

UArizona undergraduate student David Cantillo is lead author of a new paper published in The Planetary Science Journal that proposes 16 Psyche is 82.5% metal, 7% low-iron pyroxene and 10.5% carbonaceous chondrite that was likely delivered by impacts from other asteroids. Cantillo and his collaborators also estimate that 16 Psyche's porosity -- the fraction of empty space within its body -- is around 35%.

These estimates differ from past analyses of 16 Psyche's composition that led researchers to estimate it could contain as much as 95% metal and be much denser.

"That drop in metallic content and bulk density is interesting because it shows that 16 Psyche is more modified than previously thought," Cantillo said.

Rather than being an intact exposed core of an early planet, it might actually be closer to a rubble pile, similar to another thoroughly studied asteroid -- Bennu. UArizona leads the science mission team for NASA's OSIRIS-REx mission, which retrieved a sample from Bennu's surface that is now making its way back to Earth.

"Psyche as a rubble pile would be very unexpected, but our data continues to show low-density estimates despite its high metallic content," Cantillo said.

Asteroid 16 Psyche is about the size of Massachusetts, and scientists estimate it contains about 1% of all asteroid belt material. First spotted by an Italian astronomer in 1852, it was the 16th asteroid ever discovered.

"Having a lower metallic content than once thought means that the asteroid could have been exposed to collisions with asteroids containing the more common carbonaceous chondrites, which deposited a surface layer that we are observing," Cantillo said. This was also observed on asteroid Vesta by the NASA Dawn spacecraft.

Asteroid 16 Psyche has been estimated to be worth $10,000 quadrillion (that's $10,000 followed by 15 more zeroes), but the new findings could slightly devalue the iron-rich asteroid.

"This is the first paper to set some specific constraints on its surface content. Earlier estimates were a good start, but this refines those numbers a bit more," Cantillo said.

The other well-studied asteroid, Bennu, contains a lot of carbonaceous chondrite material and has porosity of over 50%, which is a classic characteristic of a rubble pile.

Such high porosity is common for relatively small and low-mass objects such as Bennu -- which is only as large as the Empire State Building -- because a weak gravitational field prevents the object's rocks and boulders from being packed together too tightly. But for an object the size of 16 Psyche to be so porous is unexpected.

"The opportunity to study an exposed core of a planetesimal is extremely rare, which is why they're sending the spacecraft mission there," Cantillo said, "but our work shows that 16 Psyche is a lot more interesting than expected."

Past estimates of 16 Psyche's composition were done by analyzing the sunlight reflected off its surface. The pattern of light matched that of other metallic objects. Cantillo and his collaborators instead recreated 16 Psyche's regolith -- or loose rocky surface material -- by mixing different materials in a lab and analyzing light patterns until they matched telescope observations of the asteroid. There are only a few labs in the world practicing this technique, including the UArizona Lunar and Planetary Laboratory and the Johns Hopkins Applied Physics Laboratory in Maryland, where Cantillo worked while in high school.

"I've always been interested in space," said Cantillo, who is also president of the UArizona Astronomy Club. "I knew that astronomy studies would be heavy on computers and observation, but I like to do more hands-on kind of work, so I wanted to connect my studies to geology somehow. I'm majoring geology and minoring in planetary science and math."

"David's paper is an example of the cutting-edge research work done by our undergraduate students," said study co-author Vishnu Reddy, an associate professor of planetary sciences who heads up the lab in which Cantillo works. "It is also a fine example of the collaborative effort between undergraduates, graduate students, postdoctoral fellows and staff in my lab."

Read more at Science Daily

Could all your digital photos be stored as DNA?

On Earth right now, there are about 10 trillion gigabytes of digital data, and every day, humans produce emails, photos, tweets, and other digital files that add up to another 2.5 million gigabytes of data. Much of this data is stored in enormous facilities known as exabyte data centers (an exabyte is 1 billion gigabytes), which can be the size of several football fields and cost around $1 billion to build and maintain.

Many scientists believe that an alternative solution lies in the molecule that contains our genetic information: DNA, which evolved to store massive quantities of information at very high density. A coffee mug full of DNA could theoretically store all of the world's data, says Mark Bathe, an MIT professor of biological engineering.

"We need new solutions for storing these massive amounts of data that the world is accumulating, especially the archival data," says Bathe, who is also an associate member of the Broad Institute of MIT and Harvard. "DNA is a thousandfold denser than even flash memory, and another property that's interesting is that once you make the DNA polymer, it doesn't consume any energy. You can write the DNA and then store it forever."

Scientists have already demonstrated that they can encode images and pages of text as DNA. However, an easy way to pick out the desired file from a mixture of many pieces of DNA will also be needed. Bathe and his colleagues have now demonstrated one way to do that, by encapsulating each data file into a 6-micrometer particle of silica, which is labeled with short DNA sequences that reveal the contents.

Using this approach, the researchers demonstrated that they could accurately pull out individual images stored as DNA sequences from a set of 20 images. Given the number of possible labels that could be used, this approach could scale up to 10^20 files.

Bathe is the senior author of the study, which appears today in Nature Materials. The lead authors of the paper are MIT senior postdoc James Banal, former MIT research associate Tyson Shepherd, and MIT graduate student Joseph Berleant.

Stable storage

Digital storage systems encode text, photos, or any other kind of information as a series of 0s and 1s. This same information can be encoded in DNA using the four nucleotides that make up the genetic code: A, T, G, and C. For example, G and C could be used to represent 0 while A and T represent 1.
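As a toy illustration of that convention (and only the convention quoted above, not the encoding actually used in published DNA-storage work), a bit string can be mapped onto bases and back like this:

    import random

    # G or C stands for 0, A or T stands for 1, as in the example above.
    def bits_to_dna(bits: str) -> str:
        return "".join(random.choice("GC" if b == "0" else "AT") for b in bits)

    def dna_to_bits(seq: str) -> str:
        return "".join("0" if base in "GC" else "1" for base in seq)

    payload = "0110100001101001"           # the ASCII bits of "hi"
    strand = bits_to_dna(payload)
    assert dna_to_bits(strand) == payload  # round-trips losslessly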

DNA has several other features that make it desirable as a storage medium: It is extremely stable, and it is fairly easy (but expensive) to synthesize and sequence. Also, because of its high density -- each nucleotide, equivalent to up to two bits, is about 1 cubic nanometer -- an exabyte of data stored as DNA could fit in the palm of your hand.
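A quick back-of-envelope check of that claim, using only the figures quoted above and ignoring any overhead for addressing, error correction or packaging:

    # One exabyte at 2 bits per nucleotide and ~1 cubic nanometre per nucleotide.
    bits = 1e18 * 8                  # 1 exabyte in bits
    nucleotides = bits / 2           # 2 bits per nucleotide
    volume_nm3 = nucleotides * 1.0   # ~1 nm^3 each
    volume_mm3 = volume_nm3 / 1e18   # 1 mm^3 = 1e18 nm^3
    print(volume_mm3)                # -> 4.0 mm^3, a few drops' worth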

One obstacle to this kind of data storage is the cost of synthesizing such large amounts of DNA. Currently it would cost $1 trillion to write one petabyte of data (1 million gigabytes). To become competitive with magnetic tape, which is often used to store archival data, Bathe estimates that the cost of DNA synthesis would need to drop by about six orders of magnitude. Bathe says he anticipates that will happen within a decade or two, similar to how the cost of storing information on flash drives has dropped dramatically over the past couple of decades.

Aside from the cost, the other major bottleneck in using DNA to store data is the difficulty in picking out the file you want from all the others.

"Assuming that the technologies for writing DNA get to a point where it's cost-effective to write an exabyte or zettabyte of data in DNA, then what? You're going to have a pile of DNA, which is a gazillion files, images or movies and other stuff, and you need to find the one picture or movie you're looking for," Bathe says. "It's like trying to find a needle in a haystack."

Currently, DNA files are conventionally retrieved using PCR (polymerase chain reaction). Each DNA data file includes a sequence that binds to a particular PCR primer. To pull out a specific file, that primer is added to the sample to find and amplify the desired sequence. However, one drawback to this approach is that there can be crosstalk between the primer and off-target DNA sequences, leading unwanted files to be pulled out. Also, the PCR retrieval process requires enzymes and ends up consuming most of the DNA that was in the pool.

"You're kind of burning the haystack to find the needle, because all the other DNA is not getting amplified and you're basically throwing it away," Bathe says.

File retrieval

As an alternative approach, the MIT team developed a new retrieval technique that involves encapsulating each DNA file into a small silica particle. Each capsule is labeled with single-stranded DNA "barcodes" that correspond to the contents of the file. To demonstrate this approach in a cost-effective manner, the researchers encoded 20 different images into pieces of DNA about 3,000 nucleotides long, which is equivalent to about 100 bytes. (They also showed that the capsules could fit DNA files up to a gigabyte in size.)

Each file was labeled with barcodes corresponding to labels such as "cat" or "airplane." When the researchers want to pull out a specific image, they remove a sample of the DNA and add primers that correspond to the labels they're looking for -- for example, "cat," "orange," and "wild" for an image of a tiger, or "cat," "orange," and "domestic" for a housecat.

The primers are labeled with fluorescent or magnetic particles, making it easy to pull out and identify any matches from the sample. This allows the desired file to be removed while leaving the rest of the DNA intact to be put back into storage. Their retrieval process allows Boolean logic statements such as "president AND 18th century" to generate George Washington as a result, similar to what is retrieved with a Google image search.
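Conceptually, that retrieval step behaves like a set-intersection query over labels. A minimal sketch with made-up file names and labels (not the study's actual barcode sequences):

    # Illustrative only: files indexed by their label sets, queried with AND logic.
    files = {
        "tiger.jpg":    {"cat", "orange", "wild"},
        "housecat.jpg": {"cat", "orange", "domestic"},
        "airplane.jpg": {"airplane", "metal"},
    }

    def retrieve(query):
        # Return every file whose label set contains all of the query labels.
        return [name for name, labels in files.items() if query <= labels]

    print(retrieve({"cat", "orange", "wild"}))  # -> ['tiger.jpg']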

"At the current state of our proof-of-concept, we're at the 1 kilobyte per second search rate. Our file system's search rate is determined by the data size per capsule, which is currently limited by the prohibitive cost to write even 100 megabytes worth of data on DNA, and the number of sorters we can use in parallel. If DNA synthesis becomes cheap enough, we would be able to maximize the data size we can store per file with our approach," Banal says.

For their barcodes, the researchers used single-stranded DNA sequences from a library of 100,000 sequences, each about 25 nucleotides long, developed by Stephen Elledge, a professor of genetics and medicine at Harvard Medical School. If you put two of these labels on each file, you can uniquely label 10^10 (10 billion) different files, and with four labels on each, you can uniquely label 10^20 files.
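The quoted file counts follow directly from powers of the library size, assuming labels are counted as ordered combinations (which is how the figures above come out exactly):

    library = 100_000        # available barcode sequences
    print(library ** 2)      # 10^10 two-label combinations
    print(library ** 4)      # 10^20 four-label combinations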

Bathe envisions that this kind of DNA encapsulation could be useful for storing "cold" data, that is, data that is kept in an archive and not accessed very often. His lab is spinning out a startup, Cache DNA, that is now developing technology for long-term storage of DNA, both for DNA data storage in the long-term, and clinical and other preexisting DNA samples in the near-term.

Read more at Science Daily