May 1, 2021

Small galaxies likely played important role in evolution of the Universe

A new study led by University of Minnesota astrophysicists shows that high-energy light from small galaxies may have played a key role in the early evolution of the Universe. The research gives insight into how the Universe became reionized, a problem that astronomers have been trying to solve for years.

The research is published in The Astrophysical Journal, a peer-reviewed scientific journal of astrophysics and astronomy.

After the Big Bang, when the Universe was formed billions of years ago, it was in an ionized state. This means that the electrons and protons floated freely throughout space. As the Universe expanded and started cooling down, it changed to a neutral state when the protons and electrons combined into atoms, akin to water vapor condensing into a cloud.

Now, however, scientists have observed that the Universe is back in an ionized state. A major endeavor in astronomy is figuring out how this happened. Astronomers have theorized that the energy for reionization must have come from galaxies themselves. But it's incredibly hard for enough high-energy light to escape a galaxy due to hydrogen clouds within it that absorb the light, much like clouds in the Earth's atmosphere absorb sunlight on an overcast day.

Astrophysicists from the Minnesota Institute for Astrophysics in the University of Minnesota's College of Science and Engineering may have found the answer to that problem. Using data from the Gemini telescope, the researchers have observed the first-ever galaxy in a "blow-away" state, meaning that its hydrogen clouds have been removed, allowing high-energy light to escape. The scientists suspect that the blow-away was caused by many supernovas, or dying stars, exploding in a short period of time.

"The star-formation can be thought of as blowing up the balloon," explained Nathan Eggen, the paper's lead author who recently received his master's degree in astrophysics from the University of Minnesota. "If, however, the star-formation was more intense, then there would be a rupture or hole made in the surface of the balloon to let out some of that energy. In the case of this galaxy, the star-formation was so powerful that the balloon was torn to pieces, completely blown-away."

The galaxy, named Pox 186, is so small that it could fit inside the Milky Way. The researchers suspect that its compact size, coupled with its large population of stars -- which amount to a hundred thousand times the mass of the sun -- made the blow-away possible.

The findings confirm that a blow-away is possible, furthering the idea that small galaxies were primarily responsible for the reionization of the Universe and giving more insight into how the Universe became what it is today.

Read more at Science Daily

Move over CRISPR, the Retrons are coming

Researchers have created a new gene editing tool called Retron Library Recombineering (RLR) that can generate up to millions of mutations simultaneously and "barcode" mutant bacterial cells so that the entire pool can be screened at once. It can be used in contexts where CRISPR is toxic or not feasible, and it results in better editing rates.

While the CRISPR-Cas9 gene editing system has become the poster child for innovation in synthetic biology, it has some major limitations. CRISPR-Cas9 can be programmed to find and cut specific pieces of DNA, but editing the DNA to create desired mutations requires tricking the cell into using a new piece of DNA to repair the break. This bait-and-switch can be complicated to orchestrate, and can even be toxic to cells because Cas9 often cuts unintended, off-target sites as well.

Alternative gene editing techniques called recombineering instead perform this bait-and-switch by introducing an alternate piece of DNA while a cell is replicating its genome, efficiently creating genetic mutations without breaking DNA. These methods are simple enough that they can be used in many cells at once to create complex pools of mutations for researchers to study. Figuring out what the effects of those mutations are, however, requires that each mutant be isolated, sequenced, and characterized: a time-consuming and impractical task.

Researchers at the Wyss Institute for Biologically Inspired Engineering at Harvard University and Harvard Medical School (HMS) have created a new gene editing tool called Retron Library Recombineering (RLR) that makes this task easier. RLR generates up to millions of mutations simultaneously, and "barcodes" mutant cells so that the entire pool can be screened at once, enabling massive amounts of data to be easily generated and analyzed. The achievement, which has been accomplished in bacterial cells, is described in a recent paper in PNAS.

"RLR enabled us to do something that's impossible to do with CRISPR: we randomly chopped up a bacterial genome, turned those genetic fragments into single-stranded DNA in situ, and used them to screen millions of sequences simultaneously," said co-first author Max Schubert, Ph.D., a postdoc in the lab of Wyss Core Faculty member George Church, Ph.D. "RLR is a simpler, more flexible gene editing tool that can be used for highly multiplexed experiments, which eliminates the toxicity often observed with CRISPR and improves researchers' ability to explore mutations at the genome level."

Retrons: from enigma to engineering tool

Retrons are segments of bacterial DNA that undergo reverse transcription to produce fragments of single-stranded DNA (ssDNA). Retrons' existence has been known for decades, but the function of the ssDNA they produce flummoxed scientists from the 1980s until June 2020, when a team finally figured out that retron ssDNA detects whether a virus has infected the cell, forming part of the bacterial immune system.

While retrons were originally seen as simply a mysterious quirk of bacteria, researchers have become more interested in them over the last few years because they, like CRISPR, could be used for precise and flexible gene editing in bacteria, yeast, and even human cells.

"For a long time, CRISPR was just considered a weird thing that bacteria did, and figuring out how to harness it for genome engineering changed the world. Retrons are another bacterial innovation that might also provide some important advances," said Schubert. His interest in retrons was piqued several years ago because of their ability to produce ssDNA in bacteria -- an attractive feature for use in a gene editing process called oligonucleotide recombineering.

Recombination-based gene editing techniques require integrating ssDNA containing a desired mutation into an organism's DNA, which can be done in one of two ways. Double-stranded DNA can be physically cut (with CRISPR-Cas9, for example) to induce the cell to incorporate the mutant sequence into its genome during the repair process, or the mutant DNA strand and a single-stranded annealing protein (SSAP) can be introduced into a cell that is replicating so that the SSAP incorporates the mutant strand into the daughter cells' DNA.

"We figured that retrons should give us the ability to produce ssDNA within the cells we want to edit rather than trying to force them into the cell from the outside, and without damaging the native DNA, which were both very compelling qualities," said co-first author Daniel Goodman, Ph.D., a former Graduate Research Fellow at the Wyss Institute who is now a Jane Coffin Childs Postdoctoral Fellow at UCSF.

Another attraction of retrons is that their sequences themselves can serve as "barcodes" that identify which individuals within a pool of bacteria have received each retron sequence, enabling dramatically faster, pooled screens of precisely created mutant strains.

To see if they could actually use retrons to achieve efficient recombineering, Schubert and his colleagues first created circular plasmids of bacterial DNA that contained antibiotic resistance genes placed within retron sequences, as well as an SSAP gene to enable integration of the retron sequence into the bacterial genome. They inserted these retron plasmids into E. coli bacteria to see if the genes were successfully integrated into their genomes after 20 generations of cell replication. Initially, less than 0.1% of E. coli bearing the retron recombineering system incorporated the desired mutation.

To improve this disappointing initial performance, the team made several genetic tweaks to the bacteria. First, they inactivated the cells' natural mismatch repair machinery, which corrects DNA replication errors and could therefore be "fixing" the desired mutations before they were able to be passed on to the next generation. They also inactivated two bacterial genes that code for exonucleases -- enzymes that destroy free-floating ssDNA. These changes dramatically increased the proportion of bacteria that incorporated the retron sequence, to more than 90% of the population.

Name tags for mutants

Now that they were confident that their retron ssDNA was incorporated into their bacteria's genomes, the team tested whether they could use the retrons as a genetic sequencing "shortcut," enabling many experiments to be performed in a mixture. Because each plasmid had its own unique retron sequence that can function as a "name tag," they reasoned that they should be able to sequence the much shorter retron rather than the whole bacterial genome to determine which mutation the cells had received.

First, the team tested whether RLR could detect known antibiotic resistance mutations in E. coli. They found that it could -- retron sequences containing these mutations were present in much greater proportions in their sequencing data compared with other mutations. The team also determined that RLR was sensitive and precise enough to measure small differences in resistance that result from very similar mutations. Crucially, gathering these data by sequencing barcodes from the entire pool of bacteria, rather than isolating and sequencing individual mutants, dramatically speeds up the process.
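To make the barcode-counting idea concrete, here is a minimal sketch of how enrichment could be computed from pooled sequencing reads taken before and after antibiotic selection. This is our illustration, not the authors' pipeline: the barcode sequences, mutation names, and read format are all hypothetical.

```python
from collections import Counter

# Hypothetical mapping from retron "barcode" sequence to the mutation it
# encodes -- both the sequences and the mutation names are invented here.
barcode_to_mutation = {
    "ACGTACGTACGT": "resistance_mutation_A",
    "TTGACCTTGACC": "resistance_mutation_B",
    "GGCATTGGCATT": "neutral_control",
}

def barcode_frequencies(reads):
    """Count each known barcode among the reads and normalize to frequencies."""
    counts = Counter(r for r in reads if r in barcode_to_mutation)
    total = sum(counts.values()) or 1
    return {bc: n / total for bc, n in counts.items()}

def enrichment(before_reads, after_reads):
    """Frequency ratio after vs. before selection; a ratio well above 1
    suggests the linked mutation confers a survival advantage."""
    before = barcode_frequencies(before_reads)
    after = barcode_frequencies(after_reads)
    return {barcode_to_mutation[bc]: after.get(bc, 0.0) / freq
            for bc, freq in before.items()}

# Toy pools: one barcode dominates after antibiotic selection.
pre = ["ACGTACGTACGT"] * 10 + ["TTGACCTTGACC"] * 10 + ["GGCATTGGCATT"] * 10
post = ["ACGTACGTACGT"] * 25 + ["TTGACCTTGACC"] * 4 + ["GGCATTGGCATT"] * 1
print(enrichment(pre, post))  # resistance_mutation_A enriched ~2.5x
```

Because only the short barcodes are sequenced, the same counting logic scales to pools of millions of mutants without isolating any of them.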

Then, the researchers took RLR one step further to see if it could be used on randomly fragmented DNA, and to find out how many retrons they could use at once. They chopped up the genome of a strain of E. coli highly resistant to another antibiotic, and used those fragments to build a library of tens of millions of genetic sequences contained within retron sequences in plasmids. "The simplicity of RLR really shone in this experiment, because it allowed us to build a much bigger library than what we can currently use with CRISPR, in which we have to synthesize both a guide and a donor DNA sequence to induce each mutation," said Schubert.

This library was then introduced into the RLR-optimized E. coli strain for analysis. Once again, the researchers found that retrons conferring antibiotic resistance could be easily identified by the fact that they were enriched relative to others when the pool of bacteria was sequenced.

"Being able to analyze pooled, barcoded mutant libraries with RLR enables millions of experiments to be performed simultaneously, allowing us to observe the effects of mutations across the genome, as well as how those mutations might interact with each other," said senior author George Church, who leads the Wyss Institute's Synthetic Biology Focus Area and is also a Professor of Genetics at HMS. "This work helps establish a road map toward using RLR in other genetic systems, which opens up many exciting possibilities for future genetic research."

Another feature that distinguishes RLR from CRISPR is that the proportion of bacteria that successfully integrate a desired mutation into their genome increases over time as the bacteria replicate, whereas CRISPR's "one shot" method tends to either succeed or fail on the first try. RLR could potentially be combined with CRISPR to improve its editing performance, or could be used as an alternative in the many systems in which CRISPR is toxic.

More work remains to be done on RLR to improve and standardize the editing rate, but excitement about this new tool is growing. RLR's simple, streamlined nature could enable the study of how multiple mutations interact with each other, and the generation of a large number of data points that could enable the use of machine learning to predict further mutational effects.

"This new synthetic biology tool brings genome engineering to an even higher levels of throughput, which will undoubtedly lead to new, exciting, and unexpected innovations," said Don Ingber, M.D., Ph.D., the Wyss Institute's Founding Director. Ingber is also the Judah Folkman Professor of Vascular Biology at HMS and Boston Children's Hospital, and Professor of Bioengineering at the Harvard John A. Paulson School of Engineering and Applied Sciences.

Read more at Science Daily

Apr 29, 2021

Asteroid that hit Botswana in 2018 likely came from Vesta, scientists say

An international team of researchers searched for pieces of a small asteroid tracked in space and then observed to impact Botswana on June 2, 2018. Guided by SETI Institute meteor astronomer Peter Jenniskens, they found 23 meteorites deep inside the Central Kalahari Game Reserve and now have published their findings online in the journal Meteoritics and Planetary Science.

"Combining the observations of the small asteroid in space with information gleaned from the meteorites shows it likely came from Vesta, second largest asteroid in our Solar System and target of NASA's DAWN mission," said Jenniskens. "Billions of years ago, two giant impacts on Vesta created a family of larger, more dangerous asteroids. The newly recovered meteorites gave us a clue on when those impacts might have happened."

The asteroid

The small asteroid that impacted Botswana, called 2018 LA, was first spotted by the University of Arizona's Catalina Sky Survey as a faint point of light moving among the stars. The Catalina Sky Survey searches for Earth-crossing asteroids as part of NASA's Planetary Defense program.

"Small meter-sized asteroids are no danger to us, but they hone our skills in detecting approaching asteroids," said Eric Christensen, director of the Catalina Sky Survey program.

The team recovered archival data from the SkyMapper Southern Survey program in Australia that showed the asteroid spinning in space, rotating once every 4 minutes and alternately presenting a broad and a narrow side to us while reflecting the sunlight.
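For readers curious how a spin period is pulled out of sparse survey photometry, below is a minimal sketch using a Lomb-Scargle periodogram on synthetic data (not the actual SkyMapper measurements). Note that an elongated rotator brightens twice per spin, so the lightcurve period is half the true rotation period.

```python
import numpy as np
from astropy.timeseries import LombScargle  # pip install astropy

# Synthetic stand-in for sparse survey photometry (NOT the SkyMapper data).
# A 4-minute spin of an elongated body gives a lightcurve repeating every
# 2 minutes (two brightness maxima per rotation).
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 30, 200))      # observation times, minutes
spin_period = 4.0                         # minutes
flux = 1.0 + 0.3 * np.cos(2 * np.pi * t / (spin_period / 2))
flux += rng.normal(0, 0.02, t.size)       # photometric noise

# Lomb-Scargle handles unevenly sampled data, unlike a plain FFT.
freq, power = LombScargle(t, flux).autopower(maximum_frequency=2.0)
best = 1 / freq[np.argmax(power)]         # best-fit lightcurve period
print(f"lightcurve period ~ {best:.2f} min; implied spin ~ {2 * best:.2f} min")
```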

During its journey to Earth, the asteroid was bombarded by cosmic rays, which created radioactive isotopes. By analyzing those isotopes, the researchers determined that 2018 LA was a solid rock about 1.5 m in size that reflected about 25% of sunlight.

The recovery

"This is only the second time we have spotted an asteroid in space before it hit Earth over land," said Jenniskens. "The first was asteroid 2008 TC3 in Sudan ten years earlier." Jenniskens also guided the search for fragments of 2008 TC3.

This time, fewer observations led to more uncertainty in the asteroid's position in its orbit. Davide Farnocchia of NASA JPL's Center for Near-Earth Object Studies combined astronomical observations of the asteroid with US Government satellite data of the fireball to calculate the fall area. Esko Lyytinen of the Ursa Finnish Fireball Network made a parallel effort.

"When Jenniskens first arrived in Maun, he needed our help narrowing down the fall area," says Oliver Moses of the Okavango Research Institute. "We subsequently tracked down more video records in Rakops and Maun and were able to triangulate the position of the fireball."

After confirming the fall area, Moses and Jenniskens joined geologist Alexander Proyer of the Botswana International University of Science and Technology (BIUST) in Palapye and geoscientist Mohutsiwa Gabadirwe of the Botswana Geoscience Institute (BGI) in Lobatse and their colleagues to search for the meteorites.

"On the fifth day, our last day of searching, Lesedi Seitshiro of BIUST found the first meteorite only 30 meters from camp," said Jenniskens. "It was 18 grams and about 3 cm in size."

The search area was in the Central Kalahari Game Reserve, home to diverse wildlife, including leopards and lions. Researchers were kept safe by the staff of the Botswana Department of Wildlife and National Parks. BGI coordinated the search with the Department of National Museum and Monuments in Botswana.

"The meteorite is named 'Motopi Pan' after a local watering hole," said Gabadirwe, now the curator of this rare sample of an asteroid observed in space before impacting Earth. "This meteorite is a national treasure of Botswana."

The meteorite type

Non-destructive analysis at the University of Helsinki, Finland, showed that Motopi Pan belongs to the group of Howardite-Eucrite-Diogenite (HED) meteorites, known to have likely originated from the giant asteroid Vesta, which was recently studied in detail by NASA's DAWN mission.

"We managed to measure metal content as well as secure a reflectance spectrum and X-ray elemental analysis from a thinly crusted part of the exposed meteorite interior," said Tomas Kohout of the University of Helsinki. "All the measurements added well together and pointed to values typical for HED type meteorites."

Dynamical studies show that the orbit of 2018 LA is consistent with an origin from the inner part of the asteroid belt where Vesta is located. The asteroid was delivered into an Earth-impacting orbit via a resonance located on the inner side of the asteroid belt.

"Another HED meteorite fall we investigated in Turkey in 2015, called Sariçiçek, impacted on a similarly short orbit and produced mostly smallish 2 to 5-gram meteorites," said Jenniskens.

When Jenniskens returned to Botswana in October of 2018, the team found 22 more small meteorites. Gabadirwe was the first to spot another out-of-this-world rock. Surprisingly, subsequent meteorite finds showed a lot of diversity in their outward appearance.

"We studied the petrography and mineral chemistry of five of these meteorites and confirmed that they belong to the HED group," said Roger Gibson of Witts University in Johannesburg, South Africa. "Overall, we classified the material that asteroid 2018 LA contained as being Howardite, but some individual fragments had more affinity to Diogenites and Eucrites."

Other studies, such as reflection spectroscopy and measurements of the polyaromatic hydrocarbon content of the samples, also confirmed the surprising diversity of the team's finds. The asteroid was a breccia, a mixture of cemented rock pieces from different parts of Vesta.

Origin of the meteorites

A previous hypothesis proposed that Sariçiçek originated from Vesta in the collision that created the Antonia impact crater imaged by DAWN. Still showing a visible ejecta blanket, that young crater was formed about 22 million years ago. One-third of all HED meteorites that fall on Earth were ejected 22 million years ago. Did Motopi Pan originate from the same crater?

"Noble gas isotopes measurements at ETH in Zürich, Switzerland, and radioactive isotopes measured at Purdue University showed that this meteorite too had been in space as a small object for about 23 million years," said Kees Welten of UC Berkeley, "but give or take 4 million years, so it could be from the same source crater on Vesta."

Researchers found Motopi Pan and Sariçiçek to be similar in some ways but different in others. Like Motopi Pan, Sariçiçek exploded at 27.8 km altitude, but produced less light in that breakup.

"The infrasound shockwave measured in South Africa was not as strong as expected from US Government sensor detections of the bright light," said Peter Brown of the University of Western Ontario, Canada.

From lead isotopes in zircon minerals, researchers found that both Sariçiçek and Motopi Pan solidified at Vesta's surface about 4563 million years ago. But phosphate grains in Motopi Pan experienced another melting event more recently. Sariçiçek did not.

"About 4234 million years ago, the material in Motopi Pan was close to the center of a large impact event," said Qing-zhu Yin of UC Davis, "Sariçiçek was not."

Vesta experienced two significant impact events that created the Rheasilvia impact basin and the underlying, and therefore older, Veneneia impact basin.

"We now suspect that Motopi Pan was heated by the Veneneia impact, while the subsequent Rheasilvia impact scattered this material around," said Jenniskens. " If so, that would date the Veneneia impact to about 4234 million years ago. On top of Rheasilvia impact ejecta is the 10.3-km diameter Rubria impact crater, slightly smaller than the 16.7-km Antonia crater, and slightly younger at 19 +/- 3 million years, but a good candidate for the origin crater of Motopi Pan."

Read more at Science Daily

Molecular biologists travel back in time 3 billion years

A research group working at Uppsala University has succeeded in studying 'translation factors' -- important components of a cell's protein synthesis machinery -- that are several billion years old. By studying these ancient 'resurrected' factors, the researchers were able to establish that they had much broader specificities than their present-day, more specialised counterparts.

In order to survive and grow, all cells contain an in-house protein synthesis factory. This consists of ribosomes and associated translation factors that work together to ensure that the complex protein production process runs smoothly. While almost all components of the modern translational machinery are well known, until now scientists did not know how the process evolved.

The new study, published in the journal Molecular Biology and Evolution, took the research group led by Professor Suparna Sanyal of the Department of Cell and Molecular Biology on an epic journey back into the past. A previously published study used a special algorithm to predict DNA sequences of ancestors of an important translation factor called elongation factor thermo-unstable, or EF-Tu, going back billions of years. The Uppsala research group used these DNA sequences to resurrect the ancient bacterial EF-Tu proteins and then to study their properties.

The researchers looked at several nodes in the evolutionary history of EF-Tu. The oldest proteins they created were approximately 3.3 billion years old.

"It was amazing to see that the ancestral EF-Tu proteins matched the geological temperatures prevailing on Earth in their corresponding time periods. It was much warmer 3 billion years ago and those proteins functioned well at 70°C, while 300 million year old proteins were only able to withstand 50°C," says Suparna Sanyal.

The researchers were able to demonstrate that the ancient elongation factors are compatible with various types of ribosome and therefore can be classified as 'generalists', whereas their modern descendants have evolved to fulfil 'specialist' functions. While this makes them more efficient, they require specific ribosomes in order to function properly. The results also suggest that ribosomes probably evolved their RNA core before the other associated translation factors.

"The fact that we now know how protein synthesis evolved up to this point makes it possible for us to model the future. If the translation machinery components have already evolved to such a level of specialisation, what will happen in future, for example, in the case of new mutations?" ponders Suparna Sanyal.

Read more at Science Daily

New law of physics helps humans and robots grasp the friction of touch

Although robotic devices are used in everything from assembly lines to medicine, engineers have a hard time accounting for the friction that occurs when those robots grip objects -- particularly in wet environments. Researchers have now discovered a new law of physics that accounts for this type of friction, which should advance a wide range of robotic technologies.

"Our work here opens the door to creating more reliable and functional haptic and robotic devices in applications such as telesurgery and manufacturing," says Lilian Hsiao, an assistant professor of chemical and biomolecular engineering at North Carolina State University and corresponding author of a paper on the work.

At issue is something called elastohydrodynamic lubrication (EHL) friction, which is the friction that occurs when two solid surfaces come into contact with a thin layer of fluid between them. This would include the friction that occurs when you rub your fingertips together, with the fluid being the thin layer of naturally occurring oil on your skin. But it could also apply to a robotic claw lifting an object that has been coated with oil, or to a surgical device that is being used inside the human body.

One reason friction is important is that it helps us hold things without dropping them.

"Understanding friction is intuitive for humans -- even when we're handling soapy dishes," Hsiao says. "But it is extremely difficult to account for EHL friction when developing materials that controls grasping capabilities in robots."

To develop materials that help control EHL friction, engineers would need a framework that can be applied uniformly to a wide variety of patterns, materials and dynamic operating conditions. And that is exactly what the researchers have discovered.

"This law can be used to account for EHL friction, and can be applied to many different soft systems -- as long as the surfaces of the objects are patterned," Hsiao says.

In this context, surface patterns could be anything from the slightly raised surfaces on the tips of our fingers to grooves in the surface of a robotic tool.

The new physical principle, developed jointly by Hsiao and her graduate student Yunhu Peng, makes use of four equations to account for all of the physical forces at play in understanding EHL friction. In the paper, the research team demonstrated the law in three systems: human fingers; a bio-inspired robotic fingertip; and a tool called a tribo-rheometer, which is used to measure frictional forces. Peng is first author of the paper.

Read more at Science Daily

How does the brain flexibly process complex information?

Human decision-making depends on the flexible processing of complex information, but how the brain may adapt processing to momentary task demands has remained unclear. In a new article published in the journal Nature Communications, researchers from the Max Planck Institute for Human Development have now outlined several crucial neural processes revealing that our brain networks may rapidly and flexibly shift from a rhythmic to a "noisy" state when the need to process information increases.

Driving a car, deliberating over different financial options, or even pondering different life paths requires us to process an overwhelming amount of information. But not all decisions pose equal demands. In some situations, decisions are easier because we already know which pieces of information are relevant. In other situations, uncertainty about which information is relevant for our decision requires us to get a broader picture of all available information sources. The mechanisms by which the brain flexibly adapts information processing in such situations were previously unknown.

To reveal these mechanisms, researchers from the Lifespan Neural Dynamics Group (LNDG) at the Max Planck Institute for Human Development and the Max Planck UCL Centre for Computational Psychiatry and Ageing Research designed a visual task. Participants were asked to view a moving cloud of small squares that differed from each other along four visual dimensions: color, size, brightness, and movement direction. Participants were then asked a question about one of the four visual dimensions, for example, "Were more squares moving to the left or to the right?" Prior to seeing the squares, the study authors manipulated "uncertainty" by informing participants which feature(s) they could be asked about; the more features that were relevant, the more uncertain participants were expected to become about which features to focus on. Throughout the task, brain activity was measured using electroencephalography (EEG) and functional magnetic resonance imaging (fMRI).

First, the authors found that when participants were more uncertain about the relevant feature in the upcoming choice, participants' EEG signals shifted from a rhythmic mode (present when participants could focus on a single feature) to a more arrhythmic, "noisy" mode. "Brain rhythms may be particularly useful when we need to select relevant over irrelevant inputs, while increased neural 'noise' could make our brains more receptive to multiple sources of information. Our results suggest that the ability to shift back and forth between these rhythmic and 'noisy' states may enable flexible information processing in the human brain," says Julian Q. Kosciessa, LNDG post-doc and the article's first author.
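As a rough illustration of the rhythmic-versus-noisy distinction (our sketch, not the study's analysis), a power spectrum separates the two states: a rhythmic signal shows a narrowband peak standing above the broadband background, while an arrhythmic signal does not. The 10 Hz oscillation, sampling rate, and band limits below are arbitrary choices.

```python
import numpy as np
from scipy.signal import welch

# Synthetic illustration (not the study's EEG): a "rhythmic" state has a
# narrowband 10 Hz oscillation riding on broadband background noise; the
# "noisy" state is the same background with the oscillation removed.
rng = np.random.default_rng(1)
fs, n = 250, 250 * 60                     # 250 Hz sampling, 60 s of signal
t = np.arange(n) / fs
background = np.cumsum(rng.normal(size=n))  # crude Brownian (1/f^2) noise
background /= background.std()
states = {
    "rhythmic": background + 0.8 * np.sin(2 * np.pi * 10 * t),
    "noisy": background,
}

for label, x in states.items():
    f, pxx = welch(x, fs=fs, nperseg=fs * 4)
    band = (f >= 8) & (f <= 12)           # band around the 10 Hz rhythm
    ref = (f >= 3) & (f <= 30)
    ratio = pxx[band].max() / np.median(pxx[ref])
    print(f"{label}: peak-to-background ratio in 8-12 Hz ~ {ratio:.1f}")
```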

Additionally, the authors found that the extent to which participants shifted from a rhythmic to a noisy mode in their EEG signals was dominantly coupled with increased fMRI activity in the thalamus, a deep brain structure largely inaccessible by EEG. The thalamus is often thought of primarily as an interface for sensory and motor signals, while its potential role in flexibility has remained elusive. The findings of the study may thus have broad implications for our current understanding of the brain structures required for us to adapt to an ever-changing world. "When neuroscientists think about how the brain enables behavioral flexibility, we often focus exclusively on networks in the cortex, while the thalamus is traditionally considered a simple relay for sensorimotor information. Instead, our results argue that the thalamus may support neural dynamics in general and could optimize brain states according to environmental demands, allowing us to make better decisions," says Douglas Garrett, senior author of the study and LNDG group leader.

Read more at Science Daily

Apr 28, 2021

Black hole-neutron star collisions may help settle dispute over Universe's expansion

Studying the violent collisions of black holes and neutron stars may soon provide a new measurement of the Universe's expansion rate, helping to resolve a long-standing dispute, suggests a new simulation study led by researchers at UCL (University College London).

Our two current best ways of estimating the Universe's rate of expansion - measuring the brightness and speed of pulsating and exploding stars, and looking at fluctuations in radiation from the early Universe - give very different answers, suggesting our theory of the Universe may be wrong.

A third type of measurement, looking at the explosions of light and ripples in the fabric of space caused by black hole-neutron star collisions, should help to resolve this disagreement and clarify whether our theory of the Universe needs rewriting.

The new study, published in Physical Review Letters, simulated 25,000 scenarios of black holes and neutron stars colliding, aiming to see how many would likely be detected by instruments on Earth in the mid- to late-2020s.

The researchers found that, by 2030, instruments on Earth could sense ripples in space-time caused by up to 3,000 such collisions, and that for around 100 of these events, telescopes would also see accompanying explosions of light.

They concluded that this would be enough data to provide a new, completely independent measurement of the Universe's rate of expansion, precise and reliable enough to confirm or deny the need for new physics.

Lead author Dr Stephen Feeney (UCL Physics & Astronomy) said: "A neutron star is a dead star, created when a very large star explodes and then collapses, and it is incredibly dense - typically 10 miles across but with a mass up to twice that of our Sun. Its collision with a black hole is a cataclysmic event, causing ripples of space-time, known as gravitational waves, that we can now detect on Earth with observatories like LIGO and Virgo.

"We have not yet detected light from these collisions. But advances in the sensitivity of equipment detecting gravitational waves, together with new detectors in India and Japan, will lead to a huge leap forward in terms of how many of these types of events we can detect. It is incredibly exciting and should open up a new era for astrophysics."

To calculate the Universe's rate of expansion, known as the Hubble constant, astrophysicists need to know the distance of astronomical objects from Earth as well as the speed at which they are moving away. Analysing gravitational waves tells us how far away a collision is, leaving only the speed to be determined.

To tell how fast the galaxy hosting a collision is moving away, we look at the "redshift" of light - that is, how the wavelength of light produced by a source has been stretched by its motion. Explosions of light that may accompany these collisions would help us pinpoint the galaxy where the collision happened, allowing researchers to combine measurements of distance and measurements of redshift in that galaxy.
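The arithmetic behind such a "standard siren" measurement is simple at low redshift, as the sketch below shows. The distance and redshift here are invented for illustration; a real analysis combines many events and propagates full uncertainty distributions.

```python
# Minimal "standard siren" arithmetic with invented numbers, valid only at
# low redshift; real analyses combine many events with full uncertainties.
C_KM_S = 299_792.458                  # speed of light, km/s

def hubble_constant(distance_mpc: float, redshift: float) -> float:
    """H0 = recession velocity / distance, with v ~ c * z for small z."""
    velocity = C_KM_S * redshift      # km/s, from the host galaxy's redshift
    return velocity / distance_mpc    # km/s per megaparsec

# Hypothetical event: gravitational waves give the distance; the optical
# counterpart pins down the host galaxy and hence its redshift.
h0 = hubble_constant(distance_mpc=300.0, redshift=0.07)
print(f"H0 ~ {h0:.1f} km/s/Mpc")      # ~70, between the two disputed values
```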

Dr Feeney said: "Computer models of these cataclysmic events are incomplete and this study should provide extra motivation to improve them. If our assumptions are correct, many of these collisions will not produce explosions that we can detect - the black hole will swallow the star without leaving a trace. But in some cases a smaller black hole may first rip apart a neutron star before swallowing it, potentially leaving matter outside the hole that emits electromagnetic radiation."

Co-author Professor Hiranya Peiris (UCL Physics & Astronomy and Stockholm University) said: "The disagreement over the Hubble constant is one of the biggest mysteries in cosmology. In addition to helping us unravel this puzzle, the spacetime ripples from these cataclysmic events open a new window on the universe. We can anticipate many exciting discoveries in the coming decade."

Gravitational waves are detected at two observatories in the United States (the LIGO Labs), one in Italy (Virgo), and one in Japan (KAGRA). A fifth observatory, LIGO-India, is now under construction.

Our two best current estimates of the Universe's expansion are 67 kilometres per second per megaparsec (3.26 million light years) and 74 kilometres per second per megaparsec. The first is derived from analysing the cosmic microwave background, the radiation left over from the Big Bang, while the second comes from comparing stars at different distances from Earth - specifically Cepheids, which have variable brightness, and exploding stars called type Ia supernovae.

Dr Feeney explained: "As the microwave background measurement needs a complete theory of the Universe to be made but the stellar method does not, the disagreement offers tantalising evidence of new physics beyond our current understanding. Before we can make such claims, however, we need confirmation of the disagreement from completely independent observations - we believe these can be provided through black hole-neutron star collisions."

Read more at Science Daily

Major advance enables study of genetic mutations in any tissue

For the first time, scientists are able to study changes in the DNA of any human tissue, following the resolution of long-standing technical challenges by scientists at the Wellcome Sanger Institute. The new method, called nanorate sequencing (NanoSeq), makes it possible to study how genetic changes occur in human tissues with unprecedented accuracy.

The study, published today (28 April) in Nature, represents a major advance for research into cancer and ageing. Using NanoSeq to study samples of blood, colon, brain and muscle, the research also challenges the idea that cell division is the main mechanism driving genetic changes. The new method is also expected to allow researchers to study the effect of carcinogens on healthy cells, and to do so more easily and on a much larger scale than has been possible up until now.

The tissues in our body are composed of dividing and non-dividing cells. Stem cells renew themselves throughout our lifetimes and are responsible for supplying non-dividing cells to keep the body running. The vast majority of cells in our bodies are non-dividing or divide only rarely. They include granulocytes in our blood, which are produced in the billions every day and live for a very short time, or neurons in our brain, which live for much longer.

Genetic changes, known as somatic mutations, occur in our cells as we age. This is a natural process, with cells acquiring around 15-40 mutations per year. Most of these mutations will be harmless, but some of them can start a cell on the path to cancer.

Since the advent of genome sequencing in the late twentieth century, cancer researchers have been able to better understand the formation of cancers and how to treat them by studying somatic mutations in tumour DNA. In recent years, new technologies have also enabled scientists to study mutations in stem cells taken from healthy tissue.

But until now, genome sequencing has not been accurate enough to study new mutations in non-dividing cells, meaning that somatic mutation in the vast majority of our cells has been impossible to observe accurately.

In this new study, researchers at the Wellcome Sanger Institute sought to refine an advanced sequencing method called duplex sequencing. The team searched for errors in duplex sequence data and realised that they were concentrated at the ends of DNA fragments, and had other features suggesting flaws in the process used to prepare DNA for sequencing.

They then implemented improvements to the DNA preparation process, such as using specific enzymes to cut DNA more cleanly, as well as improved bioinformatics methods. Over the course of four years, accuracy was improved until they achieved fewer than five errors per billion letters of DNA.

Dr Robert Osborne, an alumnus of the Wellcome Sanger Institute who led the development of the method, said: "Detecting somatic mutations that are only present in one or a few cells is incredibly technically challenging. You have to find a single letter change among tens of millions of DNA letters and previous sequencing methods were simply not accurate enough. Because NanoSeq makes only a few errors per billion DNA letters, we are now able to accurately study somatic mutations in any tissue."
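A back-of-the-envelope comparison (our illustration, using round numbers rather than figures from the paper) shows why that error rate matters:

```python
# Back-of-the-envelope comparison of expected false calls per genome
# (round illustrative numbers, not figures from the paper).
GENOME_BASES = 3.1e9                  # approximate human genome size

error_rates = {
    "standard sequencing (~1 error per 1,000 bases)": 1e-3,
    "NanoSeq (<5 errors per billion bases)": 5e-9,
}

for method, rate in error_rates.items():
    print(f"{method}: ~{rate * GENOME_BASES:,.0f} false calls per genome")

# A cell acquiring 15-40 true mutations per year carries only a few thousand
# somatic mutations after decades, so millions of artifacts per genome swamp
# the signal while ~15 do not.
```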

The team took advantage of NanoSeq's improved sensitivity to compare the rates and patterns of mutation in both stem cells and non-dividing cells in several human tissue types.

Surprisingly, analysis of blood cells found a similar number of mutations in slowly dividing stem cells and more rapidly dividing progenitor cells. This suggested that cell division is not the dominant process causing mutations in blood cells. Analysis of non-dividing neurons and rarely dividing cells from muscle also revealed that mutations accumulate throughout life in cells without cell division, and at a similar pace to cells in the blood.

Dr Federico Abascal, the first author of the paper from the Wellcome Sanger Institute, said: "It is often assumed that cell division is the main factor in the occurrence of somatic mutations, with a greater number of divisions creating a greater number of mutations. But our analysis found that blood cells that had divided many times more than others featured the same rates and patterns of mutation. This changes how we think about mutagenesis and suggests that other biological mechanisms besides cell division are key."

The ability to observe mutation in all cells opens up new avenues of research into cancer and ageing, such as studying the effects of known carcinogens like tobacco or sun exposure, as well as discovering new carcinogens. Such research could greatly improve our understanding of how lifestyle choices and exposure to carcinogens can lead to cancer.

A further benefit of the NanoSeq method is the relative ease with which samples can be collected. Rather than taking biopsies of tissue, cells can be collected non-invasively, such as by scraping the skin or swabbing the throat.

Read more at Science Daily

Two compounds can make chocolate smell musty and moldy

Chocolate is a beloved treat, but sometimes the cocoa beans that go into bars and other sweets have unpleasant flavors or scents, making the final products taste bad. Surprisingly, only a few compounds associated with these stinky odors are known. Now, researchers reporting in ACS' Journal of Agricultural and Food Chemistry have identified the two compounds that cause musty, moldy scents in cocoa -- work that can help chocolatiers ensure the quality of their products.

Cocoa beans, when fermented correctly, have a pleasant smell with sweet and floral notes. But they can have an off-putting scent when fermentation goes wrong, or when storage conditions aren't quite right and microorganisms grow on them. If these beans make their way into the manufacturing process, the final chocolate can smell unpleasant, leading to consumer complaints and recalls. So, sensory professionals smell fermented cocoa beans before they are roasted, detecting any unwanted musty, moldy, smoky or mushroom-like odors. Even with this testing in place, spoiled beans can evade human noses and ruin batches of chocolate, so a more objective assessment is needed for quality control. In previous studies, researchers used molecular techniques to identify the compounds that contribute to undesirable smoky flavors, but a similar method has not clarified other volatile scent compounds. So, Martin Steinhaus and colleagues wanted to determine the principal compounds that cause musty and moldy odors in tainted cocoa beans.

The researchers identified 57 molecules that made up the scent profiles of both normal and musty/moldy smelling cocoa beans using gas chromatography in combination with olfactometry and mass spectrometry. Of these compounds, four had higher concentrations in off-smelling samples. Then, these four compounds were spiked into unscented cocoa butter, and the researchers conducted smell tests with 15-20 participants. By comparing the results of these tests with the molecular content of nine samples of unpleasant fermented cocoa beans and cocoa liquors, the team determined that (-)-geosmin -- associated with moldy and beetroot odors -- and 3-methyl-1H-indole -- associated with fecal and mothball odors -- are the primary contributors to the musty and moldy scents of cocoa beans. Finally, they found that (-)-geosmin was mostly in the beans' shells, which are removed during processing, while 3-methyl-1H-indole was primarily in the bean nib that is manufactured into chocolate. The researchers say that measuring the amount of these compounds within cocoa beans could be an objective way to detect off-putting scents and flavors, keeping future batches of chocolate smelling sweet.
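One standard way aroma chemists rank such compounds is the odour activity value (OAV), the ratio of a compound's concentration to its odour detection threshold. The sketch below is purely illustrative; the concentrations and thresholds are invented, not taken from the paper.

```python
# Odour activity value (OAV) = concentration / odour detection threshold;
# OAV > 1 suggests a compound is perceptible in the sample. The numbers
# below are invented for illustration, not the paper's data.
compounds = {
    # name: (concentration, threshold), both in ng per g of sample
    "(-)-geosmin (moldy, beetroot)": (8.0, 1.0),
    "3-methyl-1H-indole (fecal, mothball)": (30.0, 10.0),
    "a desirable floral note": (50.0, 100.0),
}

for name, (conc, threshold) in compounds.items():
    oav = conc / threshold
    verdict = "likely perceptible" if oav > 1 else "below threshold"
    print(f"{name}: OAV = {oav:.1f} ({verdict})")
```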

From Science Daily

Stress slows the immune response in sick mice

The neurotransmitter noradrenaline, which plays a key role in the fight-or-flight stress response, impairs immune responses by inhibiting the movements of various white blood cells in different tissues, researchers report April 28th in the journal Immunity. The fast and transient effect occurred in mice with infections and cancer, but for now, it's unclear whether the findings generalize to humans with various health conditions.

"We found that stress can cause immune cells to stop moving and prevents immune cells from protecting against disease," says senior study author University of Melbourne's Scott Mueller (@SMuellerLab) of the Peter Doherty Institute for Infection and Immunity (Doherty Institute). "This is novel because it was not known that stress signals can stop immune cells from moving about in the body and performing their job."

One main function of the sympathetic nervous system (SNS) is to coordinate the fight-or-flight stress response -- a group of changes that prepare the body to fight or take flight in stressful or dangerous situations to protect itself from possible harm. Most tissues, including the lymph nodes and spleen, are innervated by SNS fibers. Stress-induced activation of the SNS can suppress immune responses, but the underlying mechanisms have been poorly characterized. "We hypothesized that SNS signals might modify the movement of T cells in tissues and lead to compromised immunity," Mueller says.

White blood cells, also known as leukocytes, travel constantly throughout the body and are highly motile within tissues, where they locate and eradicate pathogens and tumors. Although the movement of leukocytes is critical for immunity, it has not been clear how these cells integrate various signals to navigate within tissues. "We also speculated that neurotransmitter signals might be a rapid way to modulate leukocyte behavior in tissues, in particular during acute stress that involves increased activation of the SNS," Mueller says.

To test this idea, the researchers used advanced imaging to track the movements of T cells in mouse lymph nodes. Within minutes of being exposed to noradrenaline, T cells that had been rapidly moving stopped in their tracks and retracted their arm-like protrusions. This effect was transient, lasting between 45 and 60 minutes. Localized administration of noradrenaline in the lymph nodes of live mice also rapidly halted the cells. Similar effects were observed in mice that received noradrenaline infusions, which are used to treat patients with septic shock -- a life-threatening condition that occurs when infection leads to dangerously low blood pressure. This finding suggests that therapeutic treatment with noradrenaline might impair leukocyte functions.

"We were very surprised that stress signals had such a rapid and dramatic effect on how immune cells move," Mueller says. "Since movement is central to how immune cells can get to the right parts of the body and fight infections or tumors, this rapid movement off-switch was unexpected."

Other experiments revealed that SNS signals inhibit the migration of distinct immune cells, including B cells and dendritic cells, exerting these effects in different tissues such as skin and liver. Additional results suggest that the effects of SNS activation on cell motility may be mediated by the constriction of blood vessels, reduced blood flow, and oxygen deprivation in tissues, resulting in an increase in calcium signaling in leukocytes.

"Our results reveal that an unanticipated consequence of modulation of blood flow in response to SNS activity is the rapid sensing of changes in oxygen by leukocytes and the inhibition of motility," Mueller says. "Such rapid paralysis of leukocyte behavior identifies a physiological consequence of SNS activity that explains, at least in part, the widely observed relationship between stress and impaired immunity."

Moreover, SNS signals impaired protective immunity against pathogens and tumors in various mouse models, decreasing the proliferation and expansion of T cells in the lymph nodes and spleen. For example, treatment with SNS-stimulating molecules rapidly stopped the movements of T cells and dendritic cells in mice infected with herpes simplex virus 1 and reduced virus-specific T cell recruitment to the site of the skin infection. Similar effects were observed in mice with melanoma and in mice infected with a malarial parasite.

"Our data suggest that SNS activity in tissues could impact immune outcomes in diverse diseases," Mueller says. "Further insight into the impact of adrenergic receptor signals on cellular functions in tissues may inform the development of improved treatments for infections and cancer."

The degree to which SNS activation affects leukocyte behavior or disease outcomes in humans remains to be determined. Notably, increased SNS activity is prominent in patients with obesity and heart failure, while psychological stress can cause blood vessel constriction in patients with heart disease. An unappreciated impact of increased SNS activity, particularly in individuals with underlying health conditions, might be impaired leukocyte behavior and functions. The findings may also have important health implications for patients who use SNS-activating drugs to treat diseases such as heart failure, sepsis, asthma, and allergic reactions.

Read more at Science Daily

Apr 27, 2021

Icy clouds could have kept early Mars warm enough for rivers and lakes

One of the great mysteries of modern space science is neatly summed up by the view from NASA's Perseverance, which just landed on Mars: Today it's a desert planet, and yet the rover is sitting right next to an ancient river delta.

The apparent contradiction has puzzled scientists for decades, especially because at the same time that Mars had flowing rivers, it was getting less than a third as much sunshine as we enjoy today on Earth.

But a new study led by University of Chicago planetary scientist Edwin Kite, an assistant professor of geophysical sciences and an expert on climates of other worlds, uses a computer model to put forth a promising explanation: Mars could have had a thin layer of icy, high-altitude clouds that caused a greenhouse effect.

"There's been an embarrassing disconnect between our evidence, and our ability to explain it in terms of physics and chemistry," said Kite. "This hypothesis goes a long way toward closing that gap."

Of the multiple explanations scientists had previously put forward, none have ever quite worked. For example, some suggested that a collision from a huge asteroid could have released enough kinetic energy to warm the planet. But other calculations showed this effect would only last for a year or two -- and the tracks of ancient rivers and lakes show that the warming likely persisted for at least hundreds of years.

Kite and his colleagues wanted to revisit an alternate explanation: high-altitude clouds, like cirrus on Earth. Even a thin layer of cloud in the atmosphere can significantly raise a planet's temperature, a greenhouse effect similar to that of carbon dioxide.
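A textbook single-layer greenhouse model (far cruder than the study's 3D simulation, and our illustration only) shows the scale of the problem: under a faint-young-Sun energy budget, even a strong gaseous greenhouse struggles to lift early Mars above freezing.

```python
# Textbook single-layer greenhouse model (illustrative only). Surface
# temperature from energy balance:
#   T_s = [S * (1 - albedo) / (4 * sigma * (1 - emissivity / 2))] ** 0.25
SIGMA = 5.670e-8                      # Stefan-Boltzmann constant, W m^-2 K^-4
S_MARS_TODAY = 586.0                  # solar flux at Mars today, W/m^2
ALBEDO = 0.25                         # assumed planetary albedo

def surface_temp_k(flux, emissivity):
    return (flux * (1 - ALBEDO) / (4 * SIGMA * (1 - emissivity / 2))) ** 0.25

faint_young_sun = 0.7 * S_MARS_TODAY  # early Sun was ~30% dimmer
for eps in (0.0, 0.5, 0.9):           # no, moderate, strong greenhouse
    print(f"emissivity {eps}: ~{surface_temp_k(faint_young_sun, eps):.0f} K")
# Even a strong greenhouse stays well below the 273 K melting point of water,
# which is the gap a high cloud layer could help close.
```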

The idea had first been proposed in 2013, but it had largely been set aside because, Kite said, "It was argued that it would only work if the clouds had implausible properties." For example, the models suggested that water would have to linger for a long time in the atmosphere -- much longer than it typically does on Earth -- so the whole prospect seemed unlikely.

Using a 3D model of the entire planet's atmosphere, Kite and his team went to work. The missing piece, they found, was the amount of ice on the ground. If there was ice covering large portions of Mars, that would create surface humidity that favors low-altitude clouds, which aren't thought to warm planets very much (and can even cool them, because clouds reflect sunlight away from the planet).

But if there are only patches of ice, such as at the poles and at the tops of mountains, the air on the ground becomes much drier. Those conditions favor a high layer of clouds -- clouds that tend to warm planets more easily.

The model results showed that scientists may have to discard some crucial assumptions based on our own particular planet.

"In the model, these clouds behave in a very un-Earth-like way," said Kite. "Building models on Earth-based intuition just won't work, because this is not at all similar to Earth's water cycle, which moves water quickly between the atmosphere and the surface."

Here on Earth, where water covers almost three-quarters of the surface, water moves quickly and unevenly between ocean and atmosphere and land -- moving in swirls and eddies that mean some places are mostly dry (the Sahara) and others are drenched (the Amazon). In contrast, even at the peak of its habitability, Mars had much less water on its surface. When water vapor winds up in the atmosphere, in Kite's model, it lingers.

"Our model suggests that once water moved into the early Martian atmosphere, it would stay there for quite a long time -- closer to a year -- and that creates the conditions for long-lived high-altitude clouds," said Kite.

NASA's newly landed Perseverance rover should be able to test this idea in multiple ways, too, such as by analyzing pebbles to reconstruct past atmospheric pressure on Mars.

Understanding the full story of how Mars gained and lost its warmth and atmosphere can help inform the search for other habitable worlds, the scientists said.

Read more at Science Daily

Astronomers detect hydroxyl molecule signature in an exoplanet atmosphere

An international collaboration of astronomers led by a researcher from the Astrobiology Center and Queen's University Belfast, and including researchers from Trinity College Dublin, has detected a new chemical signature in the atmosphere of an extrasolar planet (a planet that orbits a star other than our Sun).

The hydroxyl radical (OH) was found on the dayside of the exoplanet WASP-33b. This planet is a so-called 'ultra-hot Jupiter', a gas-giant planet orbiting its host star much closer than Mercury orbits the Sun and therefore reaching atmospheric temperatures of more than 2,500°C (hot enough to melt most metals).

The lead researcher based at the Astrobiology Center and Queen's University Belfast, Dr Stevanus Nugroho, said: "This is the first direct evidence of OH in the atmosphere of a planet beyond the Solar System. It shows not only that astronomers can detect this molecule in exoplanet atmospheres, but also that they can begin to understand the detailed chemistry of this planetary population."

In the Earth's atmosphere, OH is mainly produced by the reaction of water vapour with atomic oxygen. It is a so-called 'atmospheric detergent' and plays a crucial role in the Earth's atmosphere, purging pollutant gases that can be dangerous to life (e.g., methane, carbon monoxide).

In a much hotter and bigger planet like WASP-33b (where astronomers have previously detected signs of iron and titanium oxide gas), OH plays a key role in determining the chemistry of the atmosphere through interactions with water vapour and carbon monoxide. Most of the OH in the atmosphere of WASP-33b is thought to have been produced by the destruction of water vapour due to the extremely high temperature.

"We see only a tentative and weak signal from water vapour in our data, which would support the idea that water is being destroyed to form hydroxyl in this extreme environment," explained Dr Ernst de Mooij from Queen's University Belfast, a co-author on this study.

To make this discovery, the team used the InfraRed Doppler (IRD) instrument at the 8.2-meter diameter Subaru Telescope located in the summit area of Maunakea in Hawaiʻi (about 4,200 m above sea level). This new instrument can detect atoms and molecules through their 'spectral fingerprints,' unique sets of dark absorption features superimposed on the rainbow of colours (or spectrum) that are emitted by stars and planets.

As the planet orbits its host star, its velocity relative to the Earth changes with time. Just like the siren of an ambulance or the roar of a racing car's engine changes pitch while speeding past us, the frequencies of light (e.g., colour) of these spectral fingerprints change with the velocity of the planet. This allows us to separate the planet's signal from its bright host star, which normally overwhelms such observations, despite modern telescopes being nowhere near powerful enough to take direct images of such 'hot Jupiter' exoplanets.
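The underlying relation is the classical Doppler shift. The sketch below uses an invented line wavelength and velocities; the actual IRD analysis cross-correlates full model spectra rather than tracking single lines.

```python
# Classical (non-relativistic) Doppler shift of a spectral line; the line
# wavelength and velocities are invented for illustration.
C_KM_S = 299_792.458                  # speed of light, km/s

def shifted_wavelength_nm(rest_nm: float, radial_velocity_km_s: float) -> float:
    """lambda_observed = lambda_rest * (1 + v/c); v > 0 means receding."""
    return rest_nm * (1 + radial_velocity_km_s / C_KM_S)

rest = 1500.0                         # hypothetical infrared line, nm
for v in (-150.0, 0.0, 150.0):        # hot Jupiters swing ~100+ km/s per orbit
    print(f"v = {v:+6.1f} km/s -> {shifted_wavelength_nm(rest, v):.3f} nm")
# The star's lines barely move while the planet's sweep across the detector,
# which is what lets the faint planetary signal be isolated.
```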

Dr Neale Gibson, Assistant Professor at Trinity College Dublin and co-author of this work, said: "The science of extrasolar planets is relatively new, and a key goal of modern astronomy is to explore these planets' atmospheres in detail and eventually to search for 'Earth-like' exoplanets -- planets like our own. Every new atmospheric species discovered further improves our understanding of exoplanets and the techniques required to study their atmospheres, and takes us closer to this goal."

By taking advantage of the unique capabilities of IRD, the astronomers were able to detect the tiny signal from hydroxyl in the planet's atmosphere. "IRD is the best instrument to study the atmosphere of an exoplanet in the infrared," adds Professor Motohide Tamura, one of the principal investigators of IRD, Director of the Astrobiology Center, and co-author of this work.

"These techniques for atmospheric characterisation of exoplanets are still only applicable to very hot planets, but we would like to further develop instruments and techniques that enable us to apply these methods to cooler planets, and ultimately, to a second Earth," says Dr Hajime Kawahara, assistant professor at the University of Tokyo and co-author of this work.

Read more at Science Daily

Anesthesia doesn't simply turn off the brain, it changes its rhythms

In a uniquely deep and detailed look at how the commonly used anesthetic propofol causes unconsciousness, a collaboration of labs at The Picower Institute for Learning and Memory at MIT shows that as the drug takes hold in the brain, a wide swath of regions become coordinated by very slow rhythms that maintain a commensurately languid pace of neural activity. Electrically stimulating a deeper region, the thalamus, restores synchrony of the brain's normal higher frequency rhythms and activity levels, waking the brain back up and restoring arousal.

"There's a folk psychology or tacit assumption that what anesthesia does is simply 'turn off' the brain," said Earl Miller, Picower Professor of Neuroscience and co-senior author of the study in eLife. "What we show is that propofol dramatically changes and controls the dynamics of the brain's rhythms."

Conscious functions, such as perception and cognition, depend on coordinated brain communication, in particular between the thalamus and the brain's surface regions, or cortex, in a variety of frequency bands ranging from 4 to 100 Hz. Propofol, the study shows, seems to bring coordination between the thalamus and cortical regions down to frequencies around just 1 Hz.
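
To make the idea of frequency-specific coordination concrete, here is a minimal Python sketch, under stated assumptions, of measuring at which frequencies two recordings are coordinated, using spectral coherence; the synthetic "thalamus" and "cortex" traces are stand-ins, not the study's data or analysis pipeline.

```python
# Synthetic demo: find the frequencies at which two recordings are
# coordinated, via spectral coherence. The "thalamus" and "cortex"
# traces below are stand-ins sharing a ~1 Hz rhythm plus noise.
import numpy as np
from scipy.signal import coherence

fs = 1000                                   # sampling rate, Hz
t = np.arange(0, 30, 1 / fs)                # 30 s of data
rng = np.random.default_rng(0)

shared_slow = np.sin(2 * np.pi * 1.0 * t)   # shared ~1 Hz rhythm
thalamus = shared_slow + 0.5 * rng.standard_normal(t.size)
cortex = shared_slow + 0.5 * rng.standard_normal(t.size)

f, Cxy = coherence(thalamus, cortex, fs=fs, nperseg=4096)
mask = f > 0.2                              # skip the DC bin
peak = f[mask][np.argmax(Cxy[mask])]
print(f"Peak thalamo-cortical coherence at ~{peak:.2f} Hz")  # ~1 Hz
```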

Miller's lab, led by postdoc Andre Bastos and former graduate student Jacob Donoghue, collaborated with that of co-senior author Emery N. Brown, who is Edward Hood Taplin Professor of Medical Engineering and Computational Neuroscience and an anesthesiologist at Massachusetts General Hospital. The collaboration therefore powerfully unified the Miller lab's expertise on how neural rhythms coordinate the cortex to produce conscious brain function with the Brown lab's expertise in the neuroscience of anesthesia and statistical analysis of neural signals.

Brown said studies that show how anesthetics change brain rhythms can directly improve patient safety because these rhythms are readily visible on the EEG in the operating room. The study's main finding of a signature of very slow rhythms across the cortex offers a model for directly measuring when subjects have entered unconsciousness after propofol administration, how deeply they are being maintained in that state, and how quickly they may wake up once propofol dosing ends.

"Anesthesiologists can use this as a way to better take care of patients," Brown said.

Brown has long studied how brain rhythms are affected in humans under general anesthesia by making and analyzing measurements of rhythms using scalp EEG electrodes and, to a limited extent, cortical electrodes in epilepsy patients. Because the new study was conducted in animal models of those dynamics, the team was able to implant electrodes that could directly measure the activity or "spiking" of many individual neurons and rhythms in the cortex and thalamus. Brown said the results therefore significantly deepen and extend his findings in people.

For instance, the same neurons that they measured chattering away with spikes of voltage 7-10 times a second during wakefulness routinely fired only once a second or less during propofol-induced unconsciousness, a notable slowing called a "down state." In all, the scientists made detailed simultaneous measurements of rhythms and spikes in five regions: two in the front of the cortex, two toward the back, and the thalamus.

"What's so compelling is we are getting data down to the level of spikes," Brown said. "The slow oscillations modulate the spiking activity across large parts of the cortex."

As much as the study explains how propofol generates unconsciousness, it also helps to explain the unified experience of consciousness, Miller said.

"All the cortex has to be on the same page to produce consciousness," Miller said. "One theory about how this works is through thalamo-cortical loops that allow the cortex to synchronize. Propofol may be breaking the normal operation of those loops by hyper synchronizing them in prolonged down states. It disrupts the ability of the cortex to communicate."

For instance, by making measurements in distinct layers of the cortex, the team found that higher frequency "gamma" rhythms, which are normally associated with new sensory information like sights and sounds, were especially reduced in superficial layers. Lower frequency "alpha" and "beta" waves, which Miller has shown tend to regulate the processing of the information carried by gamma rhythms, were especially reduced in deeper layers.

In addition to the prevailing synchrony at very slow frequencies, the team noted other signatures of unconsciousness in the data. As Brown and others have observed in humans before, alpha and beta rhythm power was notably higher in posterior regions of the cortex during wakefulness, but after loss of consciousness power at those rhythms flipped to being much higher in anterior regions.

Read more at Science Daily

Exposure to high heat neutralizes SARS-CoV-2 in less than one second, study finds

Arum Han, professor in the Department of Electrical and Computer Engineering at Texas A&M University, and his collaborators have designed an experimental system that shows exposure of SARS-CoV-2 to a very high temperature, even if applied for less than a second, can be sufficient to neutralize the virus so that it can no longer infect another human host.

Applying heat to neutralize COVID-19 has been demonstrated before, but in previous studies temperatures were applied for anywhere from one to 20 minutes. This length of time is not a practical solution, as applying heat for a long period of time is both difficult and costly. Han and his team have now demonstrated that heat treatment for less than a second completely inactivates the coronavirus -- providing a possible solution to mitigating the ongoing spread of COVID-19, particularly through long-range airborne transmission.

The Medistar Corporation approached leadership and researchers from the College of Engineering in the spring of 2020 to collaborate and explore the possibility of applying heat for a short amount of time to inactivate the virus that causes COVID-19. Soon after, Han and his team got to work and built a system to investigate the feasibility of such a procedure.

Their process works by heating one section of a stainless-steel tube, through which the coronavirus-containing solution is run, to a high temperature and then cooling the section immediately afterward. This experimental setup allows the coronavirus running through the tube to be heated only for a very short period of time. Through this rapid thermal process, the team found the virus to be completely neutralized in a significantly shorter time than previously thought possible. Their initial results were released within two months of proof-of-concept experiments.

Han said if the solution is heated to nearly 72 degrees Celsius for about half a second, it can reduce the virus titer, or quantity of the virus in the solution, by 100,000 times, which is sufficient to neutralize the virus and prevent transmission.
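
That "100,000 times" figure is what virologists call a 5-log10 reduction; a quick Python check of the arithmetic, with hypothetical titer values:

```python
# Sanity-check of the arithmetic: a 100,000-fold drop in titer is a
# 5-log10 reduction. The titer values themselves are hypothetical.
import math

initial_titer = 1e7   # infectious units/mL before heating (hypothetical)
final_titer = 1e2     # after ~0.5 s at ~72 degrees C (hypothetical)

fold = initial_titer / final_titer
print(f"{fold:,.0f}-fold = {math.log10(fold):.0f}-log10 reduction")
# -> 100,000-fold = 5-log10 reduction
```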

"The potential impact is huge," Han said. "I was curious of how high of temperatures we can apply in how short of a time frame and to see whether we can indeed heat-inactivate the coronavirus with only a very short time. And, whether such a temperature-based coronavirus neutralization strategy would work or not from a practical standpoint. The biggest driver was, 'Can we do something that can mitigate the situation with the coronavirus?'"

Their research was featured on the cover of the May issue of the journal Biotechnology and Bioengineering.

Not only is this sub-second heat treatment a more efficient and practical solution to stopping the spread of COVID-19 through the air, but it also allows for the implementation of this method in existing systems, such as heating, ventilation and air conditioning systems.

It also can lead to potential applications with other viruses, such as the influenza virus, that are also spread through the air. Han and his collaborators expect that this heat-inactivation method can be broadly applied and have a true global impact.

"Influenza is less dangerous but still proves deadly each year, so if this can lead to the development of an air purification system, that would be a huge deal, not just with the coronavirus, but for other airborne viruses in general," Han said.

In their future work, the investigators will build a microfluidic-scale testing chip that will allow them to heat-treat viruses for much shorter periods of time, for example, tens of milliseconds, with the hope of identifying a temperature that will allow the virus to be inactivated even with such a short exposure time.

The lead authors of the work are electrical engineering postdoctoral researchers, Yuqian Jiang and Han Zhang. Other collaborators on this project are Professor Julian L. Leibowitz, and Associate Professor Paul de Figueiredo from the College of Medicine; biomedical postdoctoral researcher Jose A. Wippold; Jyotsana Gupta, associate research scientist in microbial pathogenesis and immunology; and Jing Dai, electrical engineering assistant research scientist.

Read more at Science Daily

Analysis of famous fossil helps unlock when humans and apes diverged

 A long-awaited, high-tech analysis of the upper body of famed fossil "Little Foot" opens a window to a pivotal period when human ancestors diverged from apes, new USC research shows.

Little Foot's shoulder assembly proved key to interpreting an early branch of the human evolutionary tree. Scientists at the Keck School of Medicine of USC focused on its so-called pectoral girdle, which includes collarbones, shoulder blades and joints.

Although other parts of Little Foot, especially its legs, show humanlike traits for upright walking, the shoulder components are clearly apelike, supporting arms surprisingly well suited for suspending from branches or shimmying up and down trees rather than throwing a projectile or dangling astride the torso like humans.

The Little Foot fossil provides the best evidence yet of how human ancestors used their arms more than 3 million years ago, said Kristian J. Carlson, lead author of the study and associate professor of clinical integrative anatomical sciences at the Keck School of Medicine.

"Little Foot is the Rosetta stone for early human ancestors," he said. "When we compare the shoulder assembly with living humans and apes, it shows that Little Foot's shoulder was probably a good model of the shoulder of the common ancestor of humans and other African apes like chimpanzees and gorillas."

The apelike characteristics will likely attract considerable intrigue as science teams around the world have been examining different parts of the skeleton to find clues to human origins. The USC-led study, which also involved researchers at the University of Wisconsin, the University of Liverpool and the University of the Witwatersrand in South Africa, among others, was published today in the Journal of Human Evolution.

The journal devoted a special issue to Little Foot analyses from a global research group, which looked at other parts of the creature's skeleton. The process is somewhat akin to the story of the blind men and the elephant, each examining one part in coordination with others to explain the whole of something that's not fully understood.

The Little Foot fossil is a rare specimen because it's a near-complete skeleton of an Australopithecus individual much older than most other human ancestors. The creature, probably an old female, stood about 4 feet tall with long legs suitable for bipedal motion when it lived some 3.67 million years ago. Called "Little Foot" because the first bones recovered consisted of a few small foot bones, the remains were discovered in a cave in South Africa in the 1990s. Researchers have spent years excavating it from its rock encasement and subjecting it to high-tech analysis.

While Little Foot is not as widely known as Lucy, another Australopithecus individual unearthed in East Africa in the 1970s, Carlson said it is older and more complete.

The USC-led research team zeroed in on the shoulder assemblies because Little Foot provides the oldest and most intact example of this anatomy ever found. Those bones provide telltale clues of how an animal moves. In human evolution, he said, these parts had to change form before our ancestors could live life free of trees, walk the open savannah and use their arms for functions other than supporting the weight of the individual.

The scientists compared the creature's shoulder parts to those of apes, hominins and humans. Little Foot was clearly adapted to living in trees: its pectoral girdle suggests a creature that climbed trees, hung below branches and used its hands overhead to support its weight.

For example, the scapula, or shoulder blade, has a big, high ridge to attach heavy muscles similar to those of gorillas and chimpanzees. The shoulder joint, where the humerus connects, sits at an oblique angle, useful for stabilizing the body and lessening tensile loads on shoulder ligaments when an ape hangs beneath branches. The shoulder also has a sturdy, apelike reinforcing structure, the ventral bar. And the collarbone has a distinctive S-shaped curve commonly found in apes.

Those conclusions mean that the structural similarities in the shoulder between humans and African apes are much more recent, and persisted much longer, than has been proposed, Carlson said.

"We see incontrovertible evidence in Little Foot that the arm of our ancestors at 3.67 million years ago was still being used to bear substantial weight during arboreal movements in trees for climbing or hanging beneath branches," he said. "In fact, based on comparisons with living humans and apes, we propose that the shoulder morphology and function of Little Foot is a good model for that of the common ancestor of humans and chimpanzees 7 million to 8 million years ago."

The scientists were able to achieve remarkably clear images of the fossils. That's because the bones, painstakingly excavated for many years, are in good condition and uniquely complete. The scientists examined them using micro-CT scans, which can detect minute features on the surface of an object, peer deep inside a bone, measure the density of an object and generate a 3D model without harming the fossil.

Read more at Science Daily

Apr 26, 2021

Seismicity on Mars full of surprises, in first continuous year of data

The SEIS seismometer package from the Mars InSight lander has collected its first continuous Martian year of data, revealing some surprises among the more than 500 marsquakes detected so far.

At the Seismological Society of America (SSA)'s 2021 Annual Meeting, Savas Ceylan of ETH Zürich discussed some of the findings from The Marsquake Service, the part of the InSight ground team that detects marsquakes and curates the planet's seismicity catalog.

Marsquakes differ from earthquakes in a number of ways, Ceylan explained. To begin with, they are much smaller than earthquakes, with the largest event recorded at teleseismic distances around magnitude 3.6. SEIS is able to detect these small events because the background seismic noise on Mars can be much lower than on Earth, without the constant tremor produced by ocean waves.

"For much of a Martian year, from around sunset until early hours, the Martian atmosphere becomes very quiet, so there is no local noise either," he said. "Additionally, our sensors are optimized and shielded for operating under severe Martian conditions, such as extremely low temperatures and the extreme diurnal temperature fluctuations on the red planet."

Marsquakes also come in two distinct varieties: low-frequency events with seismic waves propagating at various depths in the planet's mantle, and high-frequency events with waves that appear to propagate through the crust. "In terms of how the seismic energy decays over time, the low-frequency events appear to be more like earthquakes" in which the shaking dies away relatively quickly, Ceylan said, "while the high-frequency events are resembling moonquakes" in persisting for longer periods.

The vast majority of the events are high-frequency and occur at hundreds of kilometers of distance from the lander. "It is not quite clear to us how these events could be confined to only high frequency energy while they occur at such large distances," he said. "On top of that, the frequency of those events seems to vary over the Martian year, which is a pattern that we do not know at all from Earth."

Only a handful of marsquakes have clear seismic phase arrivals -- the order in which the different types of seismic waves arrive at a location -- which allows researchers to calculate the direction and distance the waves come from. All these marsquakes originate from a sunken area of the surface called Cerberus Fossae, about 1,800 kilometers away from the InSight lander.
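
The standard single-station trick alluded to here is that S waves lag P waves by an amount that grows with distance. A hedged Python sketch, using assumed illustrative wave speeds rather than InSight's actual velocity model of Mars:

```python
# Single-station distance estimate from the S-minus-P arrival delay,
# assuming straight rays and constant velocities. The velocities and
# delay below are illustrative assumptions, not InSight's Mars model.

def epicentral_distance_km(sp_delay_s, vp_km_s=8.0, vs_km_s=4.5):
    """Distance implied by an S-P delay for constant wave speeds."""
    return sp_delay_s / (1.0 / vs_km_s - 1.0 / vp_km_s)

# A delay that would land an event roughly at Cerberus Fossae range:
print(f"~{epicentral_distance_km(175.0):.0f} km")  # ~1800 km
```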

Cerberus Fossae is one of the youngest geological structures on Mars, and may have formed from extensional faulting or subsidence due to dike emplacement. Recent studies suggest an extensional mechanism may be the source of the Cerberus Fossae quakes, Ceylan noted, "however, we have a long way in front of us to be able to explain the main tectonic mechanisms behind these quakes."

The biggest challenge for The Marsquake Service and InSight science team has been "adapting to unexpected signals in the data from a new planet," Ceylan said.

Although there were significant efforts to shield SEIS from non-seismic noise by covering it and placing it directly on the Martian surface, its data are still contaminated by weather and lander noise.

"We needed to understand the noise on Mars from scratch, discover how our seismometers behave, how the atmosphere of Mars affects seismic recordings, and find alternative methods to interpret the data properly," said Ceylan.

It took the Service a while to be "confident in identifying the different event types," he added, "discriminating these weak signals from the rich and varied background noise, and being able to characterize these novel signals in a systematic manner to provide a self-consistent catalog."

Read more at Science Daily

New research uncovers continental crust emerged 500 million years earlier than thought

 The first emergence and persistence of continental crust on Earth during the Archaean (4 billion to 2.5 billion years ago) has important implications for plate tectonics, ocean chemistry, and biological evolution, and it happened about half a billion years earlier than previously thought, according to new research being presented at the EGU General Assembly 2021.

Once land becomes established through dynamic processes like plate tectonics, it begins to weather and add crucial minerals and nutrients to the ocean. A record of these nutrients is preserved in the ancient rock record. Previous research used strontium isotopes in marine carbonates, but such carbonates are usually scarce or altered in rocks older than 3 billion years.

Now, researchers are presenting a new approach to trace the first emergence of land, using a different mineral: barite.

Barite forms when sulfate from ocean water combines with barium released from hydrothermal vents. Barite holds a robust record of ocean chemistry within its structure, useful for reconstructing ancient environments. "The composition of the piece of barite we pick up in the field now, that has been on Earth for three and a half billion years, is exactly the same as it was when it actually precipitated," says Desiree Roerdink, a geochemist at the University of Bergen, Norway, and team leader of the new research. "So in essence, it is really a great recorder to look at processes on the early Earth."

Roerdink and her team tested six different deposits on three different continents, ranging from about 3.2 billion to 3.5 billion years old. They calculated the ratio of strontium isotopes in the barite, and from there inferred when weathered continental rock made its way to the ocean and was incorporated into the barite. Based on the data captured in the barite, they found that weathering started about 3.7 billion years ago -- about 500 million years earlier than previously thought.
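
The reasoning rests on a mixing argument: seawater's 87Sr/86Sr ratio drifts away from the mantle (hydrothermal) value once rivers begin delivering strontium weathered from continental crust. A toy two-endmember model in Python, with all numbers as hypothetical placeholders rather than the study's measurements:

```python
# Toy two-endmember mixing model: seawater 87Sr/86Sr as a function of
# how much of its strontium comes from weathered continental crust.
# Endmember ratios are hypothetical placeholders.

def seawater_sr_ratio(f_continental, mantle=0.7030, continental=0.7120):
    """Linear mix of mantle (hydrothermal) and continental Sr inputs."""
    return (1 - f_continental) * mantle + f_continental * continental

for f in (0.0, 0.1, 0.3):
    print(f"continental fraction {f:.0%} -> 87Sr/86Sr = {seawater_sr_ratio(f):.4f}")
```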

"That is a huge time period," Roerdink says. "It essentially has implications for the way that we think about how life evolved." She added that scientists usually think about life starting in deep sea, hydrothermal settings, but the biosphere is complex. "We don't really know if it is possible that life could have developed at the same time on land," she noted, adding "but then that land has to be there."

Lastly, the emergence of land says something about plate tectonics and the early emergence of a geodynamic Earth. "To get land, you need processes operating to form that continental crust, and form a crust that is chemically different from the oceanic crust," Roerdink says.

From Science Daily

3D holographic head-up display could improve road safety

Researchers have developed the first LiDAR-based augmented reality head-up display for use in vehicles. Tests on a prototype version of the technology suggest that it could improve road safety by 'seeing through' objects to alert drivers to potential hazards without distracting them.

The technology, developed by researchers from the University of Cambridge, the University of Oxford and University College London (UCL), is based on LiDAR (light detection and ranging), and uses LiDAR data to create ultra-high-definition holographic representations of road objects, which are beamed directly to the driver's eyes, instead of the 2D windscreen projections used in most head-up displays.

While the technology has not yet been tested in a car, early tests, based on data collected from a busy street in central London, showed that the holographic images appear in the driver's field of view according to their actual position, creating an augmented reality. This could be particularly useful where objects such as road signs are hidden by large trees or trucks, for example, allowing the driver to 'see through' visual obstructions. The results are reported in the journal Optics Express.

"Head-up displays are being incorporated into connected vehicles, and usually project information such as speed or fuel levels directly onto the windscreen in front of the driver, who must keep their eyes on the road," said lead author Jana Skirnewskaja, a PhD candidate from Cambridge's Department of Engineering. "However, we wanted to go a step further by representing real objects in as panoramic 3D projections."

Skirnewskaja and her colleagues based their system on LiDAR, a remote sensing method which works by sending out a laser pulse to measure the distance between the scanner and an object. LiDAR is commonly used in agriculture, archaeology and geography, but it is also being trialled in autonomous vehicles for obstacle detection.

Using LiDAR, the researchers scanned Malet Street, a busy street on the UCL campus in central London. Co-author Phil Wilkes, a geographer who normally uses LiDAR to scan tropical forests, scanned the whole street using a technique called terrestrial laser scanning. Millions of pulses were sent out from multiple positions along Malet Street. The LiDAR measurements were then combined into point cloud data, building up a 3D model.

"This way, we can stitch the scans together, building a whole scene, which doesn't only capture trees, but cars, trucks, people, signs, and everything else you would see on a typical city street," said Wilkes. "Although the data we captured was from a stationary platform, it's similar to the sensors that will be in the next generation of autonomous or semi-autonomous vehicles."

When the 3D model of Malet St was completed, the researchers then transformed various objects on the street into holographic projections. The LiDAR data, in the form of point clouds, was processed by separation algorithms to identify and extract the target objects. Another algorithm was used to convert the target objects into computer-generated diffraction patterns. These data points were implemented into the optical setup to project 3D holographic objects into the driver's field of view.
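
One classical way to turn a point cloud into a diffraction pattern, which may or may not match the team's far more sophisticated algorithms, is the point-source method: each 3D point contributes a spherical wavefront, and their superposition on the display plane gives the hologram. A heavily simplified Python sketch, with illustrative grid size, wavelength and sample points:

```python
# Point-source computer-generated holography, heavily simplified:
# sum the spherical wavefronts of a few 3D points on a 2D grid and
# keep the phase for a spatial light modulator. Grid, wavelength and
# points are illustrative assumptions.
import numpy as np

wavelength = 532e-9                  # green laser, metres (assumed)
k = 2 * np.pi / wavelength
n, pitch = 256, 8e-6                 # 256x256 grid of 8 um pixels

xs = (np.arange(n) - n / 2) * pitch
X, Y = np.meshgrid(xs, xs)

# Hypothetical object points (x, y, z in metres), e.g. from a point cloud:
points = [(0.0, 0.0, 0.10), (2e-4, -1e-4, 0.12)]

field = np.zeros((n, n), dtype=complex)
for px, py, pz in points:
    r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
    field += np.exp(1j * k * r) / r  # spherical wave from each point

hologram_phase = np.angle(field)     # phase-only diffraction pattern
print(hologram_phase.shape)          # (256, 256)
```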

The optical setup is capable of projecting multiple layers of holograms with the help of advanced algorithms. The holographic projection can appear at different sizes and is aligned with the position of the represented real object on the street. For example, a hidden street sign would appear as a holographic projection relative to its actual position behind the obstruction, acting as an alert mechanism.

In future, the researchers hope to refine their system by personalising the layout of the head-up displays; they have already created an algorithm capable of projecting several layers of different objects. These layered holograms can be freely arranged in the driver's vision space. For example, in the first layer, a traffic sign at a further distance can be projected at a smaller size. In the second layer, a warning sign at a closer distance can be displayed at a larger size.

"This layering technique provides an augmented reality experience and alerts the driver in a natural way," said Skirnewskaja. "Every individual may have different preferences for their display options. For instance, the driver's vital health signs could be projected in a desired location of the head-up display.

"Panoramic holographic projections could be a valuable addition to existing safety measures by showing road objects in real time. Holograms act to alert the driver but are not a distraction."

Read more at Science Daily

Can a newborn's brain discriminate speech sounds?

People's ability to perceive speech sounds has been studied in depth, especially during the first year of life, but what happens during the first hours after birth? Are babies born with innate abilities to perceive speech sounds, or do these neural encoding processes need time to mature?

Researchers from the Institute of Neurosciences of the University of Barcelona (UBNeuro) and the Sant Joan de Déu Research Institute (IRSJD) have created a new methodology to try to answer this basic question on human development.

The results, published in Nature's open-access journal Scientific Reports, confirm that newborn neural encoding of voice pitch is comparable to adults' abilities after three years of exposure to language. However, there are differences regarding the perception of the spectral and temporal fine structure of sounds, that is, the ability to distinguish between vowel sounds such as /o/ and /a/. Therefore, according to the authors, neural encoding of this aspect of sound, recorded for the first time in this study, is not yet mature at birth; it requires exposure to language, as well as stimulation and time, to develop.

According to the researchers, knowing the typical level of development of these neural encoding processes at birth will enable them to make an "early detection of language impairments, which would provide an early intervention or stimulus to reduce future negative consequences."

The study is led by Carles Escera, professor of Cognitive Neuroscience at the Department of Clinical Psychology and Psychobiology of the UB, and has been carried out at the IRSJD in collaboration with Maria Dolores Gómez Roig, head of the Department of Obstetrics and Gynecology of Hospital Sant Joan de Déu. The study is co-authored by Sonia Arenillas Alcón, first author of the article, Jordi Costa Faidella and Teresa Ribas Prats, all members of the Cognitive Neuroscience Research Group (Brainlab) of the UB.

Decoding the spectral and temporal fine structure of sound

In order to distinguish the neural response to speech stimuli in newborns, one of the main challenges was to record, using the baby's electroencephalogram, a specific brain response: the frequency-following response (FFR). The FFR provides information on the neural encoding of two specific features of sound: fundamental frequency, responsible for the perception of voice pitch (high or low), and the spectral and temporal fine structure. The precise encoding of both features is, according to the study, "fundamental for the proper perception of speech, a requirement in future language acquisition."
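
The first of those two features, fundamental frequency, can be estimated from an averaged response by locating the largest spectral peak. A minimal Python sketch of that idea, using a synthetic 113 Hz "response" as a stand-in for a real averaged EEG recording:

```python
# Estimate the fundamental frequency (voice pitch) of a response by
# locating the largest spectral peak. The 113 Hz synthetic signal is
# a stand-in for a real averaged EEG recording.
import numpy as np

fs = 16000                                  # sampling rate, Hz
t = np.arange(0, 0.25, 1 / fs)              # 250 ms epoch
f0 = 113.0                                  # hypothetical voice pitch, Hz
response = np.sin(2 * np.pi * f0 * t) + 0.3 * np.sin(2 * np.pi * 2 * f0 * t)

spectrum = np.abs(np.fft.rfft(response))
freqs = np.fft.rfftfreq(response.size, d=1 / fs)
band = (freqs > 60) & (freqs < 400)         # plausible pitch range
print(f"Estimated F0: {freqs[band][np.argmax(spectrum[band])]:.1f} Hz")
```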

To date, the available tools for studying this neural encoding let researchers determine whether a newborn was able to encode inflections in voice pitch, but not the spectral and temporal fine structure. "Inflections in the voice pitch contour are very important, especially in tonal languages like Mandarin, as well as for perceiving the prosody of speech, which transmits the emotional content of what is said. However, the spectral and temporal fine structure of sound is the most relevant aspect of language acquisition in non-tonal languages like ours, and the few existing studies on the issue do not inform about the precision with which a newborn's brain encodes it," note the authors.

The main cause of this gap is the technical limitation imposed by the type of sounds used in these tests. The authors therefore developed a new stimulus (/oa/) whose internal structure (a rising voice pitch and two different vowels) allows them to evaluate the precision of the neural encoding of both features of the sound simultaneously using FFR analysis.

A test adapted to the limitations of the hospital environment

One of the most notable aspects of the study is that the stimulus and the methodology are compatible with the typical limitations of the hospital environment in which the tests are carried out. "Time is essential in FFR research with newborns. On the one hand, recording time limitations determine the stimuli that can be used. On the other hand, there are the actual conditions of newborns in hospitals, where frequent and continuous access to the baby and the mother is needed so they receive the required care and undergo evaluations and routine tests to rule out health problems," the authors add. Considering these restrictions, the responses of the 34 newborns that took part in the study were recorded in sessions lasting between twenty and thirty minutes, almost half the time used in typical sessions in studies on speech sound discrimination.

A potential biomarker of learning problems

After this study, the objective of the researchers is to characterize the development of neural encoding of the spectral and temporal fine structure of speech sounds over time. To do so, they are currently recording the frequency-following response in the babies that took part in the present study, who are now 21 months old. "Given that the first two years of life are a critical period of stimulation for language acquisition, this longitudinal evaluation of development will enable us to have a global view of how these encoding skills mature over the first months of life," note the researchers.

Read more at Science Daily

Apr 25, 2021

Experimental drug shows potential against Alzheimer's disease

Researchers at Albert Einstein College of Medicine have designed an experimental drug that reversed key symptoms of Alzheimer's disease in mice. The drug works by reinvigorating a cellular cleaning mechanism that gets rid of unwanted proteins by digesting and recycling them. The study was published online today in the journal Cell.

"Discoveries in mice don't always translate to humans, especially in Alzheimer's disease," said co-study leader Ana Maria Cuervo, M.D., Ph.D., the Robert and Renée Belfer Chair for the Study of Neurodegenerative Diseases, professor of developmental and molecular biology, and co-director of the Institute for Aging Research at Einstein. "But we were encouraged to find in our study that the drop-off in cellular cleaning that contributes to Alzheimer's in mice also occurs in people with the disease, suggesting that our drug may also work in humans." In the 1990s, Dr. Cuervo discovered the existence of this cell-cleaning process, known as chaperone-mediated autophagy (CMA) and has published 200 papers on its role in health and disease.

CMA becomes less efficient as people age, increasing the risk that unwanted proteins will accumulate into insoluble clumps that damage cells. In fact, Alzheimer's and all other neurodegenerative diseases are characterized by the presence of toxic protein aggregates in patients' brains. The Cell paper reveals a dynamic interplay between CMA and Alzheimer's disease, with loss of CMA in neurons contributing to Alzheimer's and vice versa. The findings suggest that drugs for revving up CMA may offer hope for treating neurodegenerative diseases.

Establishing CMA's Link to Alzheimer's

Dr. Cuervo's team first looked at whether impaired CMA contributes to Alzheimer's. To do so, they genetically engineered a mouse to have excitatory brain neurons that lacked CMA. The absence of CMA in one type of brain cell was enough to cause short-term memory loss, impaired walking, and other problems often found in rodent models of Alzheimer's disease. In addition, the absence of CMA profoundly disrupted proteostasis -- the cells' ability to regulate the proteins they contain. Normally soluble proteins had shifted to being insoluble and at risk for clumping into toxic aggregates.

Dr. Cuervo suspected the converse was also true: that early Alzheimer's impairs CMA. So she and her colleagues studied a mouse model of early Alzheimer's in which brain neurons were made to produce defective copies of the protein tau. Evidence indicates that abnormal copies of tau clump together to form neurofibrillary tangles that contribute to Alzheimer's. The research team focused on CMA activity within neurons of the hippocampus -- the brain region crucial for memory and learning. They found that CMA activity in those neurons was significantly reduced compared to control animals.

What about early Alzheimer's in people -- does it block CMA too? To find out, the researchers looked at single-cell RNA-sequencing data from neurons obtained postmortem from the brains of Alzheimer's patients and from a comparison group of healthy individuals. The sequencing data revealed CMA's activity level in patients' brain tissue. Sure enough, CMA activity was somewhat inhibited in people who had been in the early stages of Alzheimer's, followed by much greater CMA inhibition in the brains of people with advanced Alzheimer's.
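
One common way to proxy a pathway's activity from single-cell RNA-seq, which is not necessarily the authors' exact method, is to average the expression of the pathway's genes per cell and compare groups. A hypothetical Python sketch, with a made-up expression matrix and just two CMA-linked genes:

```python
# Proxy a pathway's activity per cell as the mean expression of its
# genes, then compare groups. Matrix and effect size are made up;
# LAMP2 and HSPA8 are real CMA-associated genes used illustratively.
import numpy as np

all_genes = ["LAMP2", "HSPA8", "GAPDH", "ACTB"]
cma_genes = ["LAMP2", "HSPA8"]
idx = [all_genes.index(g) for g in cma_genes]

rng = np.random.default_rng(1)
healthy = rng.lognormal(mean=1.0, sigma=0.3, size=(100, 4))  # 100 cells
alzheimers = healthy.copy()
alzheimers[:, idx] *= 0.7            # simulate reduced CMA expression

for label, cells in [("healthy", healthy), ("Alzheimer's", alzheimers)]:
    print(f"{label:12s} CMA activity score: {cells[:, idx].mean():.2f}")
```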

"By the time people reach the age of 70 or 80, CMA activity has usually decreased by about 30% compared to when they were younger," said Dr. Cuervo. "Most peoples' brains can compensate for this decline. But if you add neurodegenerative disease to the mix, the effect on the normal protein makeup of brain neurons can be devastating. Our study shows that CMA deficiency interacts synergistically with Alzheimer's pathology to greatly accelerate disease progression."

A New Drug Cleans Neurons and Reverses Symptoms

In an encouraging finding, Dr. Cuervo and her team developed a novel drug that shows potential for treating Alzheimer's. "We know that CMA is capable of digesting defective tau and other proteins," said Dr. Cuervo. "But the sheer amount of defective protein in Alzheimer's and other neurodegenerative diseases overwhelms CMA and essentially cripples it. Our drug revitalizes CMA efficiency by boosting levels of a key CMA component."

In CMA, proteins called chaperones bind to damaged or defective proteins in cells of the body. The chaperones ferry their cargo to the cells' lysosomes -- membrane-bound organelles filled with enzymes, which digest and recycle waste material. To successfully get their cargo into lysosomes, however, chaperones must first "dock" the material onto a protein receptor called LAMP2A that sprouts from the membranes of lysosomes. The more LAMP2A receptors on lysosomes, the greater the level of CMA activity possible. The new drug, called CA, works by increasing the number of those LAMP2A receptors.

"You produce the same amount of LAMP2A receptors throughout life," said Dr. Cuervo. "But those receptors deteriorate more quickly as you age, so older people tend to have less of them available for delivering unwanted proteins into lysosomes. CA restores LAMP2A to youthful levels, enabling CMA to get rid of tau and other defective proteins so they can't form those toxic protein clumps." (Also this month, Dr. Cuervo's team reported in Nature Communications that, for the first time, they had isolated lysosomes from the brains of Alzheimer's disease patients and observed that reduction in the number of LAMP2 receptors causes loss of CMA in humans, just as it does in animal models of Alzheimer's.)

The researchers tested CA in two different mouse models of Alzheimer's disease. In both disease models, oral doses of CA administered over 4 to 6 months led to improvements in memory, depression, and anxiety that made the treated animals closely resemble healthy control mice. Walking ability significantly improved in the animal model in which it was a problem. And in brain neurons of both animal models, the drug significantly reduced levels of tau protein and protein clumps compared with untreated animals.

"Importantly, animals in both models were already showing symptoms of disease, and their neurons were clogged with toxic proteins before the drugs were administered," said Dr. Cuervo. "This means that the drug may help preserve neuron function even in the later stages of disease. We were also very excited that the drug significantly reduced gliosis -- the inflammation and scarring of cells surrounding brain neurons. Gliosis is associated with toxic proteins and is known to play a major role in perpetuating and worsening neurodegenerative diseases."

Treatment with CA did not appear to harm other organs even when given daily for extended periods of time. The drug was designed by Evripidis Gavathiotis, Ph.D., professor of biochemistry and of medicine and a co-leader of the study.

Read more at Science Daily

Fearsome tyrannosaurs were social animals, study shows

The fearsome tyrannosaur dinosaurs that ruled the northern hemisphere during the Late Cretaceous period (66-100 million years ago) may not have been solitary predators as popularly envisioned, but social carnivores similar to wolves, according to a new study.

The finding, based on research at a unique fossil bone site inside Utah's Grand Staircase-Escalante National Monument containing the remains of several dinosaurs of the same species, was made by a team of scientists including Celina Suarez, University of Arkansas associate professor of geosciences.

"This supports our hypothesis that these tyrannosaurs died in this site and were all fossilized together; they all died together, and this information is key to our interpretation that the animals were likely gregarious in their behavior," Suarez said.

The research team also includes scientists from the U.S. Bureau of Land Management, Denver Museum of Nature and Science, Colby College of Maine and James Cook University in Australia. The study examines a unique fossil bone site inside Grand Staircase-Escalante National Monument called the "Rainbows and Unicorns Quarry" that they say exceeded even the expectations raised by its lofty nickname.

"Localities [like Rainbows and Unicorns Quarry] that produce insights into the possible behavior of extinct animals are especially rare, and difficult to interpret," said tyrannosaur expert Philip Currie in a press release from the BLM. "Traditional excavation techniques, supplemented by the analysis of rare earth elements, stable isotopes and charcoal concentrations convincingly show a synchronous death event at the Rainbows site of four or five tyrannosaurids. Undoubtedly, this group died together, which adds to a growing body of evidence that tyrannosaurids were capable of interacting as gregarious packs."

In 2014, BLM paleontologist Alan Titus discovered the Rainbows and Unicorns Quarry site in Grand Staircase-Escalante National Monument and led the subsequent research on the site, which is the first tyrannosaur mass death site found in the southern United States. Researchers ran a battery of tests and analyses on the vestiges of the original site, now preserved as small rock fragments and fossils in their final resting place, and sandbar deposits from the ancient river.

"We realized right away this site could potentially be used to test the social tyrannosaur idea. Unfortunately, the site's ancient history is complicated," Titus said. "With bones appearing to have been exhumed and reburied by the action of a river, the original context within which they lay has been destroyed. However, all has not been lost." As the details of the site's history emerged, the research team concluded that the tyrannosaurs died together during a seasonal flooding event that washed their carcasses into a lake, where they sat, largely undisturbed until the river later churned its way through the bone bed.

"We used a truly multidisciplinary approach (physical and chemical evidence) to piece the history of the site together, with the end-result being that the tyrannosaurs died together during a seasonal flooding event," said Suarez.

Using analysis of stable carbon and oxygen isotopes and concentrations of rare earth elements within the bones and rock, Suarez and her then-doctoral student, Daigo Yamamura, were able to provide a chemical fingerprint of the site. Based on the geochemical work, they were able to conclusively determine that the remains from the site all fossilized in the same environment and were not the result of an attritional assemblage of fossils washed in from a variety of areas.
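
The underlying logic of the fingerprint comparison: bones that fossilized in the same pore waters should show near-parallel rare earth element (REE) patterns. A Python sketch of one way to quantify that, with hypothetical element concentrations:

```python
# Compare the *shape* of two rare earth element patterns (log-space
# correlation), not absolute abundance. Concentrations are hypothetical.
import numpy as np

elements = ["La", "Ce", "Nd", "Sm", "Gd", "Dy", "Yb"]
bone_a = np.array([42.0, 90.0, 38.0, 8.1, 7.9, 6.2, 2.9])  # ppm, assumed
bone_b = np.array([39.0, 85.0, 36.0, 7.6, 7.5, 5.8, 2.7])  # ppm, assumed

r = np.corrcoef(np.log10(bone_a), np.log10(bone_b))[0, 1]
print(f"REE pattern correlation: r = {r:.3f}")  # near 1 -> shared history
```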

"None of the physical evidence conclusively suggested that these organisms came to be fossilized together, so we turned to geochemistry to see if that could help us. The similarity of rare earth element patterns is highly suggestive that these organisms died and were fossilized together," said Suarez.

Excavation of the quarry site has been ongoing since its discovery in 2014, and due to the size of the site and the volume of bones found there, the excavation will probably continue into the foreseeable future. In addition to tyrannosaurs, the site has also yielded seven species of turtles, multiple fish and ray species, two other kinds of dinosaurs, and a nearly complete skeleton of a juvenile (12-foot-long) Deinosuchus, an alligator relative, although these animals do not appear to have all died together like the tyrannosaurs.

"The new Utah site adds to the growing body of evidence showing that tyrannosaurs were complex, large predators capable of social behaviors common in many of their living relatives, the birds," said project contributor, Joe Sertich, curator of dinosaurs at the Denver Museum of Nature & Science. "This discovery should be the tipping point for reconsidering how these top carnivores behaved and hunted across the northern hemisphere during the Cretaceous."

Future research plans for the Rainbows and Unicorns Quarry fossils include additional trace element and isotopic analysis of the tyrannosaur bones, which paleontologists hope will determine with a greater degree of certainty the mystery of Teratophoneus' social behavior.

Read more at Science Daily