Dec 29, 2020

Fluvial mapping of Mars

 It took fifteen years of imaging and nearly three years of stitching the pieces together to create the largest image ever made, the 8-trillion-pixel mosaic of Mars' surface. Now, the first study to utilize the image in its entirety provides unprecedented insight into the ancient river systems that once covered the expansive plains in the planet's southern hemisphere. These three billion-year-old sedimentary rocks, like those in Earth's geologic record, could prove valuable targets for future exploration of past climates and tectonics on Mars.

The work, published this month in Geology, complements existing research into Mars' hydrologic history by mapping ancient fluvial (river) ridges, which are essentially the inverse of a riverbed. "If you have a river channel, that's the erosion part of a river. So, by definition, there aren't any deposits there for you to study," Jay Dickson, lead author on the paper, explains. "You have rivers eroding rocks, so where did those rocks go? These ridges are the other half of the puzzle." Using the mosaic, as opposed to more localized imagery, let the researchers solve that puzzle on a global scale.

Mars used to be a wet world, as evidenced by rock records of lakes, rivers, and glaciers. The river ridges were formed between 3 and 4 billion years ago, when large, flat-lying rivers deposited sediments in their channels (rather than merely cutting into the surface). Similar systems today can be found in places like southern Utah and Death Valley in the U.S., and the Atacama Desert in Chile. Over time, sediment built up in the channels; once the water dried up, those ridges were all that was left of some rivers.

The ridges are present only in the southern hemisphere, where some of Mars' oldest and most rugged terrain is, but this pattern is likely a preservation artifact. "These ridges probably used to be all over the entire planet, but subsequent processes have buried them or eroded them away," Dickson says. "The northern hemisphere is very smooth because it's been resurfaced, primarily by lava flows." Additionally, the southern highlands are "some of the flattest surfaces in the solar system," says Woodward Fischer, who was involved in this work. That exceptional flatness made for good sedimentary deposition, allowing the creation of the records being studied today.

Whether or not a region has fluvial ridges is a basic observation that wasn't possible until this high-resolution image of the planet's surface was assembled. Each of the 8 trillion pixels represents 5 to 6 square meters, and coverage is nearly 100 percent, thanks to the "spectacular engineering" of NASA's Context Camera, which has operated continuously for well over a decade. An earlier attempt to map these ridges was published in 2007 by Rebecca Williams, a co-author on the new study, but that work was limited by the coverage and quality of the imagery then available.

"The first inventory of fluvial ridges using meter-scale images was conducted on data acquired between 1997 and 2006," Williams says. "These image strips sampled the planet and provided tantalizing snapshots of the surface, but there was lingering uncertainty about missing fluvial ridges in the data gaps."

The resolution and coverage of Mars' surface in the mosaic has eliminated much of the team's uncertainty, filling in gaps and providing context for the features. The mosaic allows researchers to explore questions at global scales, rather than being limited to patchier, localized studies and extrapolating results to the whole hemisphere. Much previous research on Mars hydrology has been limited to craters or single systems, where both the sediment source and destination are known. That's useful, but more context is better in order to really understand a planet's environmental history and to be more certain in how an individual feature formed.

In addition to identifying 18 new fluvial ridges, using the mosaic image allowed the team to re-examine features that had previously been identified as fluvial ridges. Upon closer inspection, some weren't formed by rivers after all, but rather lava flows or glaciers. "If you only see a small part of [a ridge], you might have an idea of how it formed," Dickson says. "But then you see it in a larger context -- like, oh, it's the flank of a volcano, it's a lava flow. So now we can more confidently determine which are fluvial ridges, versus ridges formed by other processes."

Now that we have a global understanding of the distribution of ancient rivers on Mars, future explorations -- whether by rover or by astronauts -- could use these rock records to investigate what past climates and tectonics were like. "One of the biggest breakthroughs in the last twenty years is the recognition that Mars has a sedimentary record, which means we're not limited to studying the planet today," Fischer says. "We can ask questions about its history." And in doing so, he says, we learn not only about a single planet's past, but also find "truths about how planets evolved... and why the Earth is habitable."

Read more at Science Daily

Brain imaging predicts PTSD after brain injury

 Posttraumatic stress disorder (PTSD) is a complex psychiatric disorder brought on by physical and/or psychological trauma. How its symptoms, including anxiety, depression, and cognitive disturbances, arise remains incompletely understood, which makes the disorder hard to predict. Treatments and outcomes could potentially be improved if doctors could better predict who would develop PTSD. Now, researchers using magnetic resonance imaging (MRI) have found potential brain biomarkers of PTSD in people with traumatic brain injury (TBI).

The study appears in Biological Psychiatry: Cognitive Neuroscience and Neuroimaging, published by Elsevier.

"The relationship between TBI and PTSD has garnered increased attention in recent years as studies have shown considerable overlap in risk factors and symptoms," said lead author Murray Stein, MD, MPH, FRCPC, a Distinguished Professor of Psychiatry and Family Medicine & Public Health at the University of California San Diego, La Jolla, CA, USA. "In this study, we were able to use data from TRACK-TBI, a large longitudinal study of patients who present to the Emergency Department with TBIs serious enough to warrant CT (computed tomography) scans."

The researchers followed over 400 such TBI patients, assessing them for PTSD at 3 and 6 months after their brain injury. At 3 months, 77 participants, or 18 percent, had likely PTSD; at 6 months, 70 participants, or 16 percent, did. All subjects underwent brain imaging after injury.

"MRI studies conducted within two weeks of injury were used to measure volumes of key structures in the brain thought to be involved in PTSD," said Dr. Stein. "We found that the volumes of several of these structures were predictive of PTSD 3 months post-injury."

Specifically, smaller volume in brain regions called the cingulate cortex, the superior frontal cortex, and the insula predicted PTSD at 3 months. The regions are associated with arousal, attention and emotional regulation. The structural imaging did not predict PTSD at 6 months.

The findings are in line with previous studies showing smaller volume in several of these brain regions in people with PTSD and studies suggesting that the reduced cortical volume may be a risk factor for developing PTSD. Together, the findings suggest that a "brain reserve," or higher cortical volumes, may provide some resilience against PTSD.

Although the biomarker of brain volume differences is not yet robust enough to provide clinical guidance, Dr. Stein said, "it does pave the way for future studies to look even more closely at how these brain regions may contribute to (or protect against) mental health problems such as PTSD."

Read more at Science Daily

Flag leaves could help top off photosynthetic performance in rice

 The flag leaf is the last to emerge, indicating the transition from crop growth to grain production. Photosynthesis in this leaf provides the majority of the carbohydrates needed for grain filling -- so it is the most important leaf for yield potential. A team from the University of Illinois and the International Rice Research Institute (IRRI) found that some flag leaves of different varieties of rice transform light and carbon dioxide into carbohydrates better than others. This finding could potentially open new opportunities for breeding higher yielding rice varieties.

Published in the Journal of Experimental Botany, this study explores flag leaf induction -- which is the process that the leaf goes through to "start up" photosynthesis again after a transition from low to high light. This is important because the wind, clouds, and movement of the sun across the sky cause frequent fluctuations in light levels. How quickly photosynthesis adjusts to these changes has a major influence on productivity.

For the first time, these researchers revealed considerable differences between rice varieties in the ability of flag leaves to adjust to fluctuating light. They also showed that the ability to adjust differs between the flag leaf and leaves formed before flowering. Six rice varieties, chosen to represent the breadth of genetic variation across a diverse collection of more than 3,000, were analyzed as a first step in establishing whether there was variation in the ability to cope with fluctuations in light.

In this study, they discovered that the flag leaf of one rice variety began photosynthesizing nearly twice (185%) as fast as that of the slowest. Another top-performing flag leaf fixed 152% more sugar. They also found large differences (77%) in how much water the plants' flag leaves exchanged for the carbon dioxide that fuels photosynthesis. Additionally, they found that water-use efficiency in flag leaves correlated with water-use efficiency earlier in the development of these rice varieties, suggesting that water-use efficiency in dynamic conditions could be screened for at younger stages of rice development.

"What's more, we found no correlation between the flag leaf and other leaves on the plant, aside from water-use efficiency, which indicates that both kinds of leaves may need to be optimized for induction," said Stephen Long, Illinois' Ikenberry Endowed University Chair of Crop Sciences and Plant Biology. "While this means more work for plant scientists and breeders, it also means more opportunities to improve the plant's photosynthetic efficiency and water use. Improving water use is of increasing importance, as agriculture already accounts for over 70% of human water use, and rice is perhaps the largest single part of this."

Confirming their previous study in New Phytologist, they found no correlation between data collected in fluctuating and steady-state conditions, where the rice plants were exposed to constant high light levels. This finding adds to a growing consensus that researchers should move away from research dependent on steady-state measurements.

Read more at Science Daily

Big bumblebees learn locations of best flowers

 Big bumblebees take time to learn the locations of the best flowers, new research shows.

Meanwhile smaller bumblebees -- which have a shorter flight range and less carrying capacity -- don't pay special attention to flowers with the richest nectar.

University of Exeter scientists examined the "learning flights" which most bees perform after leaving flowers.

Honeybees are known to perform such flights -- and the study shows bumblebees do the same, repeatedly looking back to memorise a flower's location.

"It might not be widely known that pollinating insects learn and develop individual flower preferences, but in fact bumblebees are selective," said Natalie Hempel de Ibarra, Associate Professor at Exeter's Centre for Research in Animal Behaviour.

"On leaving a flower, they can actively decide how much effort to put into remembering its location.

"The surprising finding of our study is that a bee's size determines this decision making and the learning behaviour."

In the study, captive bees visited artificial flowers containing sucrose (sugar) solution of varying concentrations.

The larger the bee, the more its learning behaviour varied depending on the richness of the sucrose solution.

Smaller bees invested the same amount of effort in learning the locations of the artificial flowers, regardless of whether sucrose concentration was high or low.

"The differences we found reflect the different roles of bees in their colonies," said Professor Hempel de Ibarra.

"Large bumblebees can carry larger loads and explore further from the nest than smaller ones.

"Small ones with a smaller flight range and carrying capacity cannot afford to be as selective, so they accept a wider range of flowers.

"These small bees tend to be involved more with tasks inside the nest -- only going out to forage if food supplies in the colony are running low."

The study was conducted in collaboration with scientists from the University of Sussex.

Read more at Science Daily

Dec 28, 2020

Discovery boosts theory that life on Earth arose from RNA-DNA mix

 Chemists at Scripps Research have made a discovery that supports a surprising new view of how life originated on our planet.

In a study published in the chemistry journal Angewandte Chemie, they demonstrated that a simple compound called diamidophosphate (DAP), which was plausibly present on Earth before life arose, could have chemically knitted together tiny DNA building blocks called deoxynucleosides into strands of primordial DNA.

The finding is the latest in a series of discoveries, over the past several years, pointing to the possibility that DNA and its close chemical cousin RNA arose together as products of similar chemical reactions, and that the first self-replicating molecules -- the first life forms on Earth -- were mixes of the two.

The discovery may also lead to new practical applications in chemistry and biology, but its main significance is that it addresses the age-old question of how life on Earth first arose. In particular, it paves the way for more extensive studies of how self-replicating DNA-RNA mixes could have evolved and spread on the primordial Earth and ultimately seeded the more mature biology of modern organisms.

"This finding is an important step toward the development of a detailed chemical model of how the first life forms originated on Earth," says study senior author Ramanarayanan Krishnamurthy, PhD, associate professor of chemistry at Scripps Research.

The finding also nudges the field of origin-of-life chemistry away from the hypothesis that has dominated it in recent decades: The "RNA World" hypothesis posits that the first replicators were RNA-based, and that DNA arose only later as a product of RNA life forms.

Is RNA too sticky?

Krishnamurthy and others have doubted the RNA World hypothesis in part because RNA molecules may simply have been too "sticky" to serve as the first self-replicators.

A strand of RNA can attract other individual RNA building blocks, which stick to it to form a sort of mirror-image strand -- each building block in the new strand binding to its complementary building block on the original, "template" strand. If the new strand can detach from the template strand, and, by the same process, start templating other new strands, then it has achieved the feat of self-replication that underlies life.

But while RNA strands may be good at templating complementary strands, they are not so good at separating from these strands. Modern organisms make enzymes that can force twinned strands of RNA -- or DNA -- to go their separate ways, thus enabling replication, but it is unclear how this could have been done in a world where enzymes didn't yet exist.

A chimeric workaround

Krishnamurthy and colleagues have shown in recent studies that "chimeric" molecular strands that are part DNA and part RNA may have been able to get around this problem, because they can template complementary strands in a less-sticky way that permits them to separate relatively easily.

The chemists also have shown in widely cited papers in the past few years that the simple ribonucleoside and deoxynucleoside building blocks, of RNA and DNA respectively, could have arisen under very similar chemical conditions on the early Earth.

Moreover, in 2017 they reported that the organic compound DAP could have played the crucial role of modifying ribonucleosides and stringing them together into the first RNA strands. The new study shows that DAP under similar conditions could have done the same for DNA.

"We found, to our surprise, that using DAP to react with deoxynucleosides works better when the deoxynucleosides are not all the same but are instead mixes of different DNA 'letters' such as A and T, or G and C, like real DNA," says first author Eddy Jiménez, PhD, a postdoctoral research associate in the Krishnamurthy lab.

"Now that we understand better how a primordial chemistry could have made the first RNAs and DNAs, we can start using it on mixes of ribonucleoside and deoxynucleoside building blocks to see what chimeric molecules are formed -- and whether they can self-replicate and evolve," Krishnamurthy says.

Read more at Science Daily

Carbon capture: Faster, greener way of producing carbon spheres

 A fast, green and one-step method for producing porous carbon spheres, which are a vital component for carbon capture technology and for new ways of storing renewable energy, has been developed by Swansea University researchers.

The method produces spheres that have good capacity for carbon capture, and it works effectively at a large scale.

Carbon spheres range in size from nanometers to micrometers. Over the past decade they have begun to play an important role in areas such as energy storage and conversion, catalysis, gas adsorption and storage, drug and enzyme delivery, and water treatment.

They are also at the heart of carbon capture technology, which locks up carbon rather than emitting it into the atmosphere, thereby helping to tackle climate change.

The problem is that existing methods of making carbon spheres have drawbacks. They can be expensive or impractical, or they produce spheres that perform poorly in capturing carbon. Some use biomass, making them more environmentally friendly, but they require a chemical to activate them.

This is where the work of the Swansea team, based in the University's Energy Safety Research Institute, represents a major advance. It points the way towards a better, cleaner and greener way of producing carbon spheres.

The team adapted an existing method known as CVD (chemical vapour deposition), which uses heat to apply a coating to a material. Using pyromellitic acid as both the carbon and oxygen source, they applied the CVD method at different temperatures, from 600 to 900 °C. They then studied how efficiently the spheres captured CO2 at different pressures and temperatures.

They found that:

  • 800 °C was the optimum temperature for forming carbon spheres
  • The ultramicropores in the spheres that were produced gave them a high carbon capture capacity at both atmospheric and lower pressures
  • Specific surface area and total pore volume were influenced by the deposition temperature, leading to an appreciable change in overall carbon dioxide capture capacity
  • At atmospheric pressure, the highest CO2 adsorption capacities for the best carbon spheres, measured in millimoles per gram (mmol/g), were around 4.0 at 0 °C and 2.9 at 25 °C.
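
For a sense of scale, those capacities can be converted from mmol/g into grams of CO2 captured per gram of sorbent. A quick sanity check (the conversion and the 44.01 g/mol molar mass of CO2 are standard chemistry; the capacity values are the ones quoted above):

```python
# Convert a CO2 adsorption capacity from mmol per gram of sorbent
# to grams of CO2 per gram of sorbent.
M_CO2 = 44.01  # g/mol, molar mass of CO2

def uptake_g_per_g(capacity_mmol_per_g: float) -> float:
    """Mass of CO2 captured per gram of carbon spheres."""
    return capacity_mmol_per_g * 1e-3 * M_CO2

print(round(uptake_g_per_g(4.0), 3))  # 0 degC capacity  -> 0.176 g CO2 per g
print(round(uptake_g_per_g(2.9), 3))  # 25 degC capacity -> 0.128 g CO2 per g
```

In other words, the best spheres hold roughly a sixth of their own weight in CO2 at 0 °C.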


This new approach brings several advantages over existing methods of producing carbon spheres. It is alkali-free and it doesn't need a catalyst to trigger the shaping of the spheres. It uses a cheap and safe feedstock which is readily available in the market. There is no need for solvents to purify the material. It is also a rapid and safe procedure.

Dr Saeid Khodabakhshi of the Energy Safety Research Institute at Swansea University, who led the research, said:

"Carbon spheres are fast becoming vital products for a green and sustainable future. Our research shows a green and sustainable way of making them.

"We demonstrated a safe, clean and rapid way of producing the spheres. Crucially, the micropores in our spheres mean they perform very well in capturing carbon. Unlike other CVD methods, our procedure can produce spheres at large scale without relying on hazardous gas and liquid feedstocks."

Read more at Science Daily

Primordial black holes and the search for dark matter from the multiverse

 The Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU) is home to many interdisciplinary projects which benefit from the synergy of a wide range of expertise available at the institute. One such project is the study of black holes that could have formed in the early universe, before stars and galaxies were born.

Such primordial black holes (PBHs) could account for all or part of dark matter, be responsible for some of the observed gravitational-wave signals, and seed the supermassive black holes found in the centers of our Galaxy and other galaxies. They could also play a role in the synthesis of heavy elements when they collide with neutron stars and destroy them, releasing neutron-rich material. In particular, there is an exciting possibility that the mysterious dark matter, which accounts for most of the matter in the universe, is composed of primordial black holes. The 2020 Nobel Prize in Physics was awarded to a theorist, Roger Penrose, and two astronomers, Reinhard Genzel and Andrea Ghez, for their discoveries that confirmed the existence of black holes. Since black holes are known to exist in nature, they make a very appealing candidate for dark matter.

The recent progress in fundamental theory, astrophysics, and astronomical observations in search of PBHs has been made by an international team of particle physicists, cosmologists and astronomers, including Kavli IPMU members Alexander Kusenko, Misao Sasaki, Sunao Sugiyama, Masahiro Takada and Volodymyr Takhistov.

To learn more about primordial black holes, the research team looked at the early universe for clues. The early universe was so dense that any positive density fluctuation of more than 50 percent would create a black hole. However, cosmological perturbations that seeded galaxies are known to be much smaller. Nevertheless, a number of processes in the early universe could have created the right conditions for the black holes to form.
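
To make that threshold concrete: a standard order-of-magnitude estimate (not stated in the article) is that a black hole formed from a horizon-scale overdensity at cosmic time t inherits roughly the mass contained within the horizon,

```latex
M_{\rm PBH} \sim \frac{c^{3} t}{G} \approx 10^{5}\, M_{\odot} \left(\frac{t}{1\ \mathrm{s}}\right),
```

so the earlier a primordial black hole forms, the lighter it is, ranging from asteroid-scale masses in the first fractions of a second to stellar masses and beyond at later times.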

One exciting possibility is that primordial black holes could form from the "baby universes" created during inflation, a period of rapid expansion that is believed to be responsible for seeding the structures we observe today, such as galaxies and clusters of galaxies. During inflation, baby universes can branch off of our universe. A small baby (or "daughter") universe would eventually collapse, but the large amount of energy released in the small volume causes a black hole to form.

An even more peculiar fate awaits a bigger baby universe. If it is bigger than some critical size, Einstein's theory of gravity allows the baby universe to exist in a state that appears different to an observer on the inside than to one on the outside. An internal observer sees it as an expanding universe, while an outside observer (such as us) sees it as a black hole. In either case, the big and the small baby universes are seen by us as primordial black holes, which conceal the underlying structure of multiple universes behind their "event horizons." The event horizon is a boundary within which everything, even light, is trapped and cannot escape the black hole.

In their paper, the team described a novel scenario for PBH formation and showed that black holes from the "multiverse" scenario can be found using the Hyper Suprime-Cam (HSC), a gigantic digital camera on the 8.2-m Subaru Telescope near the 4,200-meter summit of Mauna Kea in Hawaii, in whose management Kavli IPMU has played a crucial role. Their work is an exciting extension of the HSC search for PBHs that Masahiro Takada, a Principal Investigator at the Kavli IPMU, and his team are pursuing. The HSC team has recently reported leading constraints on the existence of PBHs in Niikura, Takada et al. (Nature Astronomy 3, 524-534, 2019).

Why was the HSC indispensable in this research? The HSC has a unique capability to image the entire Andromeda galaxy every few minutes. If a black hole passes through the line of sight to one of the stars, the black hole's gravity bends the light rays and makes the star appear brighter than before for a short period of time. The duration of the star's brightening tells the astronomers the mass of the black hole. With HSC observations, one can simultaneously observe one hundred million stars, casting a wide net for primordial black holes that may be crossing one of the lines of sight.
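
The mass-duration link follows from standard gravitational microlensing relations (textbook results, not spelled out in the article): an event lasts roughly the time the lens takes to cross its own Einstein radius,

```latex
t_{E} \simeq \frac{R_{E}}{v}, \qquad
R_{E} = \sqrt{\frac{4 G M}{c^{2}} \,\frac{d_{L}\,(d_{S} - d_{L})}{d_{S}}},
```

where M is the lens mass, v its transverse velocity, and d_L and d_S the distances to the lens and the source star. Because t_E scales as the square root of M, a longer brightening implies a more massive black hole.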

Read more at Science Daily

Discovery about how cancer cells evade immune defenses inspires new treatment approach

 Cancer cells are known for spreading genetic chaos. As cancer cells divide, DNA segments and even whole chromosomes can be duplicated, mutated, or lost altogether. This is called chromosomal instability, and scientists at Memorial Sloan Kettering have learned that it is associated with cancer's aggressiveness. The more unstable chromosomes are, the more likely that bits of DNA from these chromosomes will end up where they don't belong: outside of a cell's central nucleus and floating in the cytoplasm.

Cells interpret these rogue bits of DNA as evidence of viral invaders, which sets off their internal alarm bells and leads to inflammation. Immune cells travel to the site of the tumor and churn out defensive chemicals. A mystery has been why this immune reaction, triggered by the cancer cells, does not spell their downfall.

"The elephant in the room is that we didn't really understand how cancer cells were able to survive and thrive in this inflammatory environment," says Samuel Bakhoum, a physician-scientist at MSK and a member of the Human Oncology and Pathogenesis Program.

According to a new study from Dr. Bakhoum's lab published December 28 in the journal Cancer Discovery, the reason has to do, in part, with a molecule sitting on the outside of the cancer cells that destroys the warning signals before they ever reach neighboring immune cells.

The findings help to explain why some tumors do not respond to immunotherapy, and -- equally important -- suggest ways to sensitize them to immunotherapy.

Detecting Dangerous DNA

The warning system Dr. Bakhoum studies is called cGAS-STING. When DNA from a virus (or an unstable cancer chromosome) lands in a cell's cytoplasm, cGAS binds to it, forming a compound molecule called cGAMP, which serves as a warning signal. Inside the cell, this warning signal activates an immune response called STING, which addresses the immediate problem of a potential viral invader.

In addition, much of the cGAMP also travels outside the cell where it serves as a warning signal to neighboring immune cells. It activates their STING pathway and unleashes an immune attack against the virally infected cell.

Previous work from the Bakhoum lab had shown that cGAS-STING signaling inside of cancer cells causes them to adopt features of immune cells -- in particular, the capacity to crawl and migrate -- which aids their ability to metastasize. This provided part of the answer to the question of how cancer cells survive inflammation and aid metastasis in the process. The new research shows how the cancer cells cope with the warning signals that activated cGAS-STING releases into the environment. A scissor-like protein shreds the signals, providing a second way the cells can thwart the threat of immune destruction.

Examples of human triple negative breast cancer staining negative (left) and positive (right) for ENPP1 expression.

The scissor-like protein that coats cancer cells is called ENPP1. When cGAMP finds its way outside the cell, ENPP1 chops it up and prevents the signal from reaching immune cells. At the same time, this chopping releases an immune-suppressing molecule called adenosine, which also quells inflammation.

Through a battery of experiments conducted in mouse models of breast, lung, and colorectal cancers, Dr. Bakhoum and his colleagues showed that ENPP1 acts like a control switch for immune suppression and metastasis. Turning it on suppresses immune responses and increases metastasis; turning it off enables immune responses and reduces metastasis.

The scientists also looked at ENPP1 in samples of human cancers. ENPP1 expression correlated with both increased metastasis and resistance to immunotherapy.

Empowering Immunotherapy

From a treatment perspective, perhaps the most notable finding of the study is that flipping the ENPP1 switch off could increase the sensitivity of several different cancer types to immunotherapy drugs called checkpoint inhibitors. The researchers showed that this approach was effective in mouse models of cancer.

Several companies -- including one that Dr. Bakhoum and colleagues founded -- are now developing drugs to inhibit ENPP1 on cancer cells.

Dr. Bakhoum says it's fortunate that ENPP1 is located on the surface of cancer cells since this makes it an easier target for drugs designed to block it.

It's also relatively specific. Since most other tissues in a healthy individual are not inflamed, drugs targeting ENPP1 primarily affect cancer.

Finally, targeting ENPP1 undercuts cancer in two separate ways: "You're simultaneously increasing cGAMP levels outside the cancer cells, which activates STING in neighboring immune cells, while you're also preventing the production of the immune-suppressive adenosine. So, you're hitting two birds with one stone," Dr. Bakhoum explains.

Read more at Science Daily

Dec 23, 2020

How nearby galaxies form their stars

 Stars are born in dense clouds of molecular hydrogen gas that permeate the interstellar space of most galaxies. While the physics of star formation is complex, recent years have seen substantial progress toward understanding how stars form in a galactic environment. What ultimately determines the level of star formation in galaxies, however, remains an open question.

In principle, two main factors influence star formation activity: the amount of molecular gas present in galaxies, and the timescale over which that gas reservoir is depleted by being converted into stars. While the gas mass of galaxies is regulated by a competition between gas inflows, outflows and consumption, the physics of the gas-to-star conversion is currently not well understood. Given its potentially critical role, many efforts have been undertaken to determine the gas depletion timescale observationally. However, these efforts have produced conflicting findings, partly because of the challenge of measuring gas masses reliably given current detection limits.

Typical star formation is linked to the overall gas reservoir

The present study from the Institute for Computational Science of the University of Zurich uses a new statistical method based on Bayesian modeling to properly account for galaxies with undetected amounts of molecular or atomic hydrogen to minimize observational bias. This new analysis reveals that, in typical star-forming galaxies, molecular and atomic hydrogen are converted into stars over approximately constant timescales of 1 and 10 billion years, respectively. However, extremely active galaxies ("starbursts") are found to have much shorter gas depletion timescales. "These findings suggest that star formation is indeed directly linked to the overall gas reservoir and thus set by the rate at which gas enters or leaves a galaxy," says Robert Feldmann, professor at the Center for Theoretical Astrophysics and Cosmology. In contrast, the dramatically higher star-formation activity of starbursts likely has a different physical origin, such as galaxy interactions or instabilities in galactic disks.
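
The depletion timescale in question is simply the gas mass divided by the star-formation rate. A minimal sketch with illustrative numbers (the 1-billion-year molecular timescale matches the result quoted above; the specific galaxy values are assumptions, not from the study):

```python
def depletion_time_gyr(gas_mass_msun: float, sfr_msun_per_yr: float) -> float:
    """Gas depletion timescale t_dep = M_gas / SFR, expressed in Gyr."""
    return gas_mass_msun / sfr_msun_per_yr / 1e9

# A typical star-forming galaxy: ~1e9 Msun of molecular gas, SFR ~1 Msun/yr
print(depletion_time_gyr(1e9, 1.0))    # -> 1.0 Gyr, the molecular timescale above
# A starburst burns through a similar reservoir ~100 times faster
print(depletion_time_gyr(1e9, 100.0))  # -> 0.01 Gyr
```

Because both quantities are observables, comparing t_dep across galaxy samples is how the constant timescales for typical galaxies, and the much shorter ones for starbursts, were identified.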

This analysis is based on observational data of nearby galaxies. Observations with the Atacama Large Millimeter/submillimeter Array, the Square Kilometre Array and other observatories promise to probe the gas content of large numbers of galaxies across cosmic history. It will be paramount to continue developing statistical and data-science methods that can accurately extract the physical content of these new observations and fully uncover the mysteries of star formation in galaxies.

From Science Daily

Why an early start is key to developing musical skill later in life

 Among the many holiday traditions scuttled by pandemic restrictions this year are live concerts featuring skilled musicians. These gifted performers can often play with such ease that it is easy to underestimate the countless hours of practice that went into honing their craft.

But could there be more to mastering music? Is there, as some have suggested, a developmental period early in life when the brain is especially receptive to musical training? The answer, according to new research published in the journal Psychological Science, is probably not.

"It is a common observation that successful musicians often start their musical training early," said Laura Wesseldijk, a researcher at the Karolinska Institute in Sweden and first author on the paper. "One much-discussed explanation is that there may be a period in early childhood during which the brain is particularly susceptible to musical stimulation. We found, however, that the explanation for why an early start matters may be more complicated and interesting than previously believed."

While the new study supports the idea that an early start is associated with higher levels of musical skills and achievement in adulthood, the underlying reasons for this may have more to do with familial influences -- such as genetic factors and an encouraging musical family environment -- along with accumulating more total practice time than those who start later in life.

To untangle these effects, Wesseldijk and her colleagues recruited 310 professional musicians from various Swedish music institutions, such as orchestras and music schools. The researchers also used data from an existing research project, the Study of Twin Adults: Genes and Environment (STAGE). Participants from both studies were tested on musical aptitude and achievement. They also answered a series of questions that gauged how often they practiced and the age of onset of musical training. The STAGE data also provided genetic information on its participants.

By comparing the results from these two independent studies, the researchers were able to show that an earlier start age is associated with musical aptitude, both in amateurs and professional musicians, even after controlling for accumulated practice time. They then evaluated starting age in a manner that accounted for the genetic data from the STAGE study.

The results indicate that genetic factors -- possibly related to musical interest and talent -- have a substantial influence on the age individuals start music practice and their future musical aptitude. When controlling for familial factors, namely shared genetic and environmental influences, such as a home environment that is steeped in music, there was no additional association between an earlier start age and musicality.
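The logic of "controlling for" a familial factor can be illustrated with a partial correlation on synthetic data. This is only a sketch of the statistical idea, not the study's actual twin-model analysis; the variable names and effect sizes are invented:

```python
# Sketch: a raw association between start age and aptitude can vanish once a
# shared familial factor is controlled for. Synthetic data, illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
familial = rng.normal(size=n)                       # shared genes + musical home
start_age = -1.0 * familial + rng.normal(size=n)    # predisposed kids start earlier
aptitude = 1.0 * familial + rng.normal(size=n)      # the same factor drives aptitude

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

def partial_corr(a, b, control):
    # Residualize both variables on the control, then correlate the residuals.
    X = np.column_stack([np.ones(len(control)), control])
    resid = lambda y: y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return corr(resid(a), resid(b))

raw = corr(start_age, aptitude)                      # clearly negative
controlled = partial_corr(start_age, aptitude, familial)  # near zero
print(round(raw, 2), round(controlled, 2))
```

In this toy setup the earlier-start/higher-aptitude association is entirely carried by the shared familial factor, mirroring the study's finding that the association disappears once familial influences are accounted for.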

A possible explanation for these results could be that children who display more talent in a particular field, such as music, are encouraged to start practicing earlier. Another possibility is that a musically active, interested, and talented family provides a musical environment for the child, while also passing on their genetic predispositions to engage in music.

Read more at Science Daily

Increased meat consumption associated with symptoms of childhood asthma

 Substances present in cooked meats are associated with increased wheezing in children, Mount Sinai researchers report. Their study, published in Thorax, highlights pro-inflammatory compounds called advanced glycation end-products (AGEs) as an example of early dietary risk factors that may have broad clinical and public health implications for the prevention of inflammatory airway disease.

Asthma prevalence among children in the United States has risen over the last few decades. Researchers found that dietary habits established earlier in life may be associated with wheezing and potentially the future development of asthma.

Researchers examined 4,388 children between 2 and 17 years old from the 2003-2006 National Health and Nutrition Examination Survey (NHANES), a program of the National Center for Health Statistics, which is part of the U.S. Centers for Disease Control and Prevention. It is designed to evaluate the health and nutritional status of adults and children in the United States through interviews and physical examinations.

The researchers used NHANES survey data to evaluate associations between dietary AGE and meat consumption frequencies, and respiratory symptoms. They found that higher AGE intake was significantly associated with increased odds of wheezing, importantly including wheezing that disrupted sleep and exercise, and that required prescription medication. Similarly, higher intake of non-seafood meats was associated with wheeze-disrupted sleep and wheezing that required prescription medication.

"We found that higher consumption of dietary AGEs, which are largely derived from intake of non-seafood meats, was associated with increased risk of wheezing in children, regardless of overall diet quality or an established diagnosis of asthma," said Jing Gennie Wang, MD, lead author of the study, and a former fellow in Pulmonary, Critical Care and Sleep Medicine at the Icahn School of Medicine at Mount Sinai.

Read more at Science Daily

Climate change -- not Genghis Khan -- caused the demise of Central Asia's river civilizations, research shows

Ruins of Otrar, Kazakhstan

A new study challenges the long-held view that the destruction of Central Asia's medieval river civilizations was a direct result of the Mongol invasion in the early 13th century CE.

The Aral Sea basin in Central Asia and the major rivers flowing through the region were once home to advanced river civilizations which used floodwater irrigation to farm.

The region's decline is often attributed to the devastating Mongol invasion of the early 13th century, but new research into long-term river dynamics and ancient irrigation networks shows that a changing climate and drier conditions may have been the real cause.

Research led by the University of Lincoln, UK, reconstructed the effects of climate change on floodwater farming in the region and found that decreasing river flow was equally, if not more, important for the abandonment of these previously flourishing city states.

Mark Macklin, author and Distinguished Professor of River Systems and Global Change, and Director of the Lincoln Centre for Water and Planetary Health at the University of Lincoln said: "Our research shows that it was climate change, not Genghis Khan, that was the ultimate cause for the demise of Central Asia's forgotten river civilizations.

"We found that Central Asia recovered quickly following Arab invasions in the 7th and 8th centuries CE because of favourable wet conditions. But prolonged drought during and following the later Mongol destruction reduced the resilience of the local population and prevented the re-establishment of large-scale irrigation-based agriculture."

The research focused on the archaeological sites and irrigation canals of the Otrar oasis, a UNESCO World Heritage site that was once a Silk Road trade hub located at the meeting point of the Syr Darya and Arys rivers in present-day southern Kazakhstan.

The researchers investigated the region to determine when the irrigation canals were abandoned and studied the past dynamics of the Arys river, whose waters fed the canals. The abandonment of irrigation systems matches a phase of riverbed erosion between the 10th and 14th centuries CE that coincided with a dry period with low river flows, rather than corresponding with the Mongol invasion.

Read more at Science Daily

Climate change: Threshold for dangerous warming will likely be crossed between 2027 and 2042

 

Photo concept, hourglass on beach

The threshold for dangerous global warming will likely be crossed between 2027 and 2042 -- a much narrower window than the Intergovernmental Panel on Climate Change's estimate of between now and 2052. In a study published in Climate Dynamics, researchers from McGill University introduce a new and more precise way to project the Earth's temperature. Based on historical data, it considerably reduces uncertainties compared to previous approaches.

Scientists have been making projections of future global warming using climate models for decades. These models play an important role in understanding the Earth's climate and how it will likely change. But how accurate are they?

Dealing with uncertainty

Climate models are mathematical simulations of different factors that interact to affect Earth's climate, such as the atmosphere, ocean, ice, land surface and the sun. While they are based on the best understanding of the Earth's systems available, when it comes to forecasting the future, uncertainties remain.

"Climate skeptics have argued that global warming projections are unreliable because they depend on faulty supercomputer models. While these criticisms are unwarranted, they underscore the need for independent and different approaches to predicting future warming," says co-author Bruno Tremblay, a professor in the Department of Atmospheric and Oceanic Sciences at McGill University.

Until now, wide ranges in overall temperature projections have made it difficult to pinpoint outcomes under different mitigation scenarios. For instance, if atmospheric CO2 concentrations are doubled, the General Circulation Models (GCMs) used by the Intergovernmental Panel on Climate Change (IPCC) predict a very likely global average temperature increase of between 1.9 and 4.5C -- a vast range covering moderate climate change at the lower end and catastrophic change at the upper.
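The quoted range can be framed with the standard logarithmic relation between CO2 concentration and equilibrium warming, dT = S * log2(C/C0), where the climate sensitivity S is the warming per CO2 doubling. A minimal sketch of that bookkeeping (illustrative, not the SCRF model itself):

```python
# Equilibrium warming for a given CO2 concentration ratio, using the standard
# logarithmic forcing relation. Sensitivity values are the IPCC range quoted above.
import math

def warming(sensitivity_c: float, co2_ratio: float) -> float:
    """Equilibrium temperature change (C) for a CO2 ratio C/C0."""
    return sensitivity_c * math.log2(co2_ratio)

# A doubling of CO2 returns the sensitivity itself:
print(warming(1.9, 2.0))  # 1.9  (low end of the 'very likely' range)
print(warming(4.5, 2.0))  # 4.5  (high end)
# A 50% increase in CO2 with a mid-range sensitivity of 3.0 C:
print(round(warming(3.0, 1.5), 2))  # 1.75
```

The factor-of-two-plus spread in sensitivity is exactly why the same emissions scenario yields such different warming projections across GCMs.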

A new approach

"Our new approach to projecting the Earth's temperature is based on historical climate data, rather than the theoretical relationships that are imperfectly captured by the GCMs. Our approach allows climate sensitivity and its uncertainty to be estimated from direct observations with few assumptions," says co-author Raphael Hebert, a former graduate researcher at McGill University, now working at the Alfred-Wegener-Institut in Potsdam, Germany.

In a study for Climate Dynamics, the researchers introduced the new Scaling Climate Response Function (SCRF) model to project the Earth's temperature to 2100. Grounded in historical data, it reduces prediction uncertainties by about half compared to the approach currently used by the IPCC. In analyzing the results, the researchers found that the threshold for dangerous warming (+1.5C) will likely be crossed between 2027 and 2042. This is a much narrower window than GCM estimates of between now and 2052. On average, the researchers also found that expected warming was a little lower, by about 10 to 15 percent. They also found, however, that the "very likely warming ranges" of the SCRF fell within those of the GCMs, lending the latter support.

Read more at Science Daily

Dec 22, 2020

Neuroscientists isolate promising mini antibodies against COVID-19 from a llama

 

Llama

National Institutes of Health researchers have isolated a set of promising, tiny antibodies, or "nanobodies," against SARS-CoV-2 that were produced by a llama named Cormac. Preliminary results published in Scientific Reports suggest that at least one of these nanobodies, called NIH-CoVnb-112, could prevent infections and detect virus particles by grabbing hold of SARS-CoV-2 spike proteins. In addition, the nanobody appeared to work equally well in either liquid or aerosol form, suggesting it could remain effective after inhalation. SARS-CoV-2 is the virus that causes COVID-19.

The study was led by a pair of neuroscientists, Thomas J. "T.J." Esparza, B.S., and David L. Brody, M.D., Ph.D., who work in a brain imaging lab at the NIH's National Institute of Neurological Disorders and Stroke (NINDS).

"For years TJ and I had been testing out how to use nanobodies to improve brain imaging. When the pandemic broke, we thought this was a once in a lifetime, all-hands-on-deck situation and joined the fight," said Dr. Brody, who is also a professor at the Uniformed Services University of the Health Sciences and the senior author of the study. "We hope that these anti-COVID-19 nanobodies may be highly effective and versatile in combating the coronavirus pandemic."

A nanobody is a special type of antibody naturally produced by the immune systems of camelids, a group of animals that includes camels, llamas, and alpacas. On average, these proteins are about a tenth the weight of most human antibodies. This is because nanobodies isolated in the lab are essentially free-floating versions of the tips of the arms of heavy chain proteins, which form the backbone of a typical Y-shaped human IgG antibody. These tips play a critical role in the immune system's defenses by recognizing proteins on viruses, bacteria, and other invaders, also known as antigens.

Because nanobodies are more stable, less expensive to produce, and easier to engineer than typical antibodies, a growing number of researchers, including Mr. Esparza and Dr. Brody, have been using them for medical research. For instance, a few years ago scientists showed that humanized nanobodies may be more effective at treating an autoimmune form of thrombotic thrombocytopenic purpura, a rare blood disorder, than current therapies.

Since the pandemic broke, several researchers have produced llama nanobodies against the SARS-CoV-2 spike protein that may be effective at preventing infections. In the current study, the researchers used a slightly different strategy than others to find nanobodies that may work especially well.

"The SARS-CoV-2 spike protein acts like a key. It does this by opening the door to infections when it binds to a protein called the angiotensin converting enzyme 2 (ACE2) receptor, found on the surface of some cells," said Mr. Esparza, the lead author of the study. "We developed a method that would isolate nanobodies that block infections by covering the teeth of the spike protein that bind to and unlock the ACE2 receptor."

To do this, the researchers immunized Cormac five times over 28 days with a purified version of the SARS-CoV-2 spike protein. After testing hundreds of nanobodies they found that Cormac produced 13 nanobodies that might be strong candidates.

Initial experiments suggested that one candidate, called NIH-CoVnb-112, could work very well. Test tube studies showed that this nanobody bound to the spike protein 2 to 10 times more strongly than nanobodies produced by other labs. Other experiments suggested that the NIH nanobody stuck directly to the ACE2 receptor binding portion of the spike protein.

Then the team showed that the NIH-CoVnb-112 nanobody could be effective at preventing coronavirus infections. To mimic the SARS-CoV-2 virus, the researchers genetically mutated a harmless "pseudovirus" so that it could use the spike protein to infect cells that have human ACE2 receptors. The researchers saw that relatively low levels of the NIH-CoVnb-112 nanobodies prevented the pseudovirus from infecting these cells in petri dishes.

Importantly, the researchers showed that the nanobody was equally effective in preventing the infections in petri dishes when it was sprayed through the kind of nebulizer, or inhaler, often used to help treat patients with asthma.

"One of the exciting things about nanobodies is that, unlike most regular antibodies, they can be aerosolized and inhaled to coat the lungs and airways," said Dr. Brody.

Read more at Science Daily

The aroma of distant worlds

Spices

Asian spices such as turmeric and fruits like the banana had already reached the Mediterranean more than 3000 years ago, much earlier than previously thought. A team of researchers working alongside archaeologist Philipp Stockhammer at Ludwig-Maximilians-Universität in Munich (LMU) has shown that even in the Bronze Age, long-distance trade in food was already connecting distant societies.

A market in the city of Megiddo in the Levant 3700 years ago: The market traders are hawking not only wheat, millet or dates, which grow throughout the region, but also carafes of sesame oil and bowls of a bright yellow spice that has recently appeared among their wares. This is how Philipp Stockhammer imagines the bustle of the Bronze Age market in the eastern Mediterranean.

Working with an international team to analyze food residues in tooth tartar, the LMU archaeologist has found evidence that people in the Levant were already eating turmeric, bananas and even soy in the Bronze and Early Iron Ages. "Exotic spices, fruits and oils from Asia had thus reached the Mediterranean several centuries, in some cases even millennia, earlier than had been previously thought," says Stockhammer. "This is the earliest direct evidence to date of turmeric, banana and soy outside of South and East Asia." It is also direct evidence that as early as the second millennium BCE there was already a flourishing long-distance trade in exotic fruits, spices and oils, which is believed to have connected South Asia and the Levant via Mesopotamia or Egypt. While substantial trade across these regions is amply documented later on, tracing the roots of this nascent globalization has proved to be a stubborn problem. The findings of this study confirm that long-distance trade in culinary goods has connected these distant societies since at least the Bronze Age. People obviously had a great interest in exotic foods from very early on.

For their analyses, Stockhammer's international team examined 16 individuals from the Megiddo and Tel Erani excavations, which are located in present-day Israel. The region in the southern Levant served as an important bridge between the Mediterranean, Asia and Egypt in the 2nd millennium BCE. The aim of the research was to investigate the cuisines of Bronze Age Levantine populations by analyzing traces of food remnants, including ancient proteins and plant microfossils, that have remained preserved in human dental calculus over thousands of years.

The human mouth is full of bacteria, which continually petrify and form calculus. Tiny food particles become entrapped and preserved in the growing calculus, and it is these minute remnants that can now be accessed for scientific research thanks to cutting-edge methods. For the purposes of their analysis, the researchers took samples from a variety of individuals at the Bronze Age site of Megiddo and the Early Iron Age site of Tel Erani. They analyzed which food proteins and plant residues were preserved in the calculus on their teeth. "This enables us to find traces of what a person ate," says Stockhammer. "Anyone who does not practice good dental hygiene will still be telling us archaeologists what they have been eating thousands of years from now!"

Palaeoproteomics is the name of this growing new field of research. The method could develop into a standard procedure in archaeology, or so the researchers hope. "Our high-resolution study of ancient proteins and plant residues from human dental calculus is the first of its kind to study the cuisines of the ancient Near East," says Christina Warinner, a molecular archaeologist at Harvard University and the Max Planck Institute for the Science of Human History and co-senior author of the article. "Our research demonstrates the great potential of these methods to detect foods that otherwise leave few archaeological traces. Dental calculus is such a valuable source of information about the lives of ancient peoples."

"Our approach breaks new scientific ground," explains LMU biochemist and lead author Ashley Scott. That is because assigning individual protein remnants to specific foodstuffs is no small task. Beyond the painstaking work of identification, the protein itself must also survive for thousands of years. "Interestingly, we find that allergy-associated proteins appear to be the most stable in human calculus," says Scott, a finding she believes may be due to the known thermostability of many allergens. For instance, the researchers were able to detect wheat via wheat gluten proteins, says Stockhammer. The team was then able to independently confirm the presence of wheat using a type of plant microfossil known as phytoliths. Phytoliths were also used to identify millet and date palm in the Levant during the Bronze and Iron Ages, but phytoliths are not abundant or even present in many foods, which is why the new protein findings are so groundbreaking -- paleoproteomics enables the identification of foods that have left few other traces, such as sesame. Sesame proteins were identified in dental calculus from both Megiddo and Tel Erani. "This suggests that sesame had become a staple food in the Levant by the 2nd millennium BCE," says Stockhammer.

Two additional protein findings are particularly remarkable, explains Stockhammer. In one individual's dental calculus from Megiddo, turmeric and soy proteins were found, while in another individual from Tel Erani banana proteins were identified. All three foods are likely to have reached the Levant via South Asia. Bananas were originally domesticated in Southeast Asia, where they had been used since the 5th millennium BCE, and they arrived in West Africa 4000 years later, but little is known about their intervening trade or use. "Our analyses thus provide crucial information on the spread of the banana around the world. No archaeological or written evidence had previously suggested such an early spread into the Mediterranean region," says Stockhammer, although the sudden appearance of banana in West Africa just a few centuries later has hinted that such a trade might have existed. "I find it spectacular that food was exchanged over long distances at such an early point in history."

Stockhammer notes that they cannot rule out the possibility, of course, that one of the individuals spent part of their life in South Asia and consumed the corresponding food only while they were there. Even if the extent to which spices, oils and fruits were imported is not yet known, there is much to indicate that trade was indeed taking place, since there is also other evidence of exotic spices in the Eastern Mediterranean -- Pharaoh Ramses II was buried with peppercorns from India in 1213 BCE. They were found in his nose.

Read more at Science Daily

The upside of volatile space weather

 

Giant solar flare illustration.

Although violent and unpredictable, stellar flares emitted by a planet's host star do not necessarily prevent life from forming, according to a new Northwestern University study.

Stellar flares are sudden releases of magnetic energy from a star. On Earth, the sun's flares sometimes damage satellites and disrupt radio communications. Elsewhere in the universe, robust stellar flares also have the ability to deplete and destroy atmospheric gases, such as ozone. Without ozone, harmful levels of ultraviolet (UV) radiation can penetrate a planet's atmosphere, diminishing its chances of harboring surface life.

By combining 3D atmospheric chemistry and climate modeling with observed flare data from distant stars, a Northwestern-led team discovered that stellar flares could play an important role in the long-term evolution of a planet's atmosphere and habitability.

"We compared the atmospheric chemistry of planets experiencing frequent flares with planets experiencing no flares. The long-term atmospheric chemistry is very different," said Northwestern's Howard Chen, the study's first author. "Continuous flares actually drive a planet's atmospheric composition into a new chemical equilibrium."

"We've found that stellar flares might not preclude the existence of life," added Daniel Horton, the study's senior author. "In some cases, flaring doesn't erode all of the atmospheric ozone. Surface life might still have a fighting chance."

The study will be published on Dec. 21 in the journal Nature Astronomy. It is a joint effort among researchers at Northwestern, University of Colorado at Boulder, University of Chicago, Massachusetts Institute of Technology and NASA Nexus for Exoplanet System Science (NExSS).

Horton is an assistant professor of Earth and planetary sciences in Northwestern's Weinberg College of Arts and Sciences. Chen is a Ph.D. candidate in Horton's Climate Change Research Group and a NASA future investigator.

Importance of flares

All stars -- including our very own sun -- flare, or randomly release stored energy. Fortunately for Earthlings, the sun's flares typically have a minimal impact on the planet.

"Our sun is more of a gentle giant," said Allison Youngblood, an astronomer at the University of Colorado and co-author of the study. "It's older and not as active as younger and smaller stars. Earth also has a strong magnetic field, which deflects the sun's damaging winds."

Unfortunately, most potentially habitable exoplanets aren't as lucky. For planets to potentially harbor life, they must be close enough to a star that their water won't freeze -- but not so close that water vaporizes.

"We studied planets orbiting within the habitable zones of M and K dwarf stars -- the most common stars in the universe," Horton said. "Habitable zones around these stars are narrower because the stars are smaller and less powerful than stars like our sun. On the flip side, M and K dwarf stars are thought to have more frequent flaring activity than our sun, and their tidally locked planets are unlikely to have magnetic fields helping deflect their stellar winds."

Chen and Horton previously conducted a study of M dwarf stellar systems' long-term climate averages. Flares, however, occur on hours- or days-long timescales. Although these brief timescales can be difficult to simulate, incorporating the effects of flares is important to forming a more complete picture of exoplanet atmospheres. The researchers accomplished this by incorporating flare data from NASA's Transiting Exoplanet Survey Satellite (TESS), launched in 2018, into their model simulations.

Using flares to detect life


If there is life on these M and K dwarf exoplanets, previous work hypothesizes that stellar flares might make it easier to detect. For example, stellar flares can increase the abundance of life-indicating gases (such as nitrogen dioxide, nitrous oxide and nitric acid) from imperceptible to detectable levels.

"Space weather events are typically viewed as a detriment to habitability," Chen said. "But our study quantitatively shows that some space weather can actually help us detect signatures of important gases that might signify biological processes."

This study involved researchers from a wide range of backgrounds and expertise, including climate scientists, exoplanet scientists, astronomers, theorists and observers.

Read more at Science Daily

Volcanic eruptions directly triggered ocean acidification during Early Cretaceous

 

Volcanic eruption

Around 120 million years ago, the earth experienced an extreme environmental disruption that choked oxygen from its oceans.

Known as oceanic anoxic event (OAE) 1a, the oxygen-deprived water led to a minor -- but significant -- mass extinction that affected the entire globe. During this age in the Early Cretaceous Period, an entire family of sea-dwelling nannoplankton virtually disappeared.

By measuring calcium and strontium isotope abundances in nannoplankton fossils, Northwestern earth scientists have concluded the eruption of the Ontong Java Plateau large igneous province (LIP) directly triggered OAE1a. Roughly the size of Alaska, the Ontong Java LIP erupted for seven million years, making it one of the largest known LIP events ever. During this time, it spewed tons of carbon dioxide (CO2) into the atmosphere, pushing Earth into a greenhouse period that acidified seawater and suffocated the oceans.

"We go back in time to study greenhouse periods because Earth is headed toward another greenhouse period now," said Jiuyuan Wang, a Northwestern Ph.D. student and first author of the study. "The only way to look into the future is to understand the past."

The study was published online last week (Dec. 16) in the journal Geology. It is the first study to apply stable strontium isotope measurements to the study of ancient ocean anoxic events.

Andrew Jacobson, Bradley Sageman and Matthew Hurtgen -- all professors of earth and planetary sciences at Northwestern's Weinberg College of Arts and Sciences -- coauthored the paper. Wang is co-advised by all three professors.

Clues inside cores

Nannoplankton and many other marine organisms build their shells out of calcium carbonate, which is the same mineral found in chalk, limestone and some antacid tablets. When atmospheric CO2 dissolves in seawater, it forms a weak acid that can inhibit calcium carbonate formation and may even dissolve preexisting carbonate.
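The chemistry in the paragraph above can be written as two standard reactions (textbook carbonate chemistry, not equations from the study):

```latex
% CO2 dissolving in seawater forms carbonic acid, which dissociates and acidifies the water:
\mathrm{CO_2 + H_2O \rightleftharpoons H_2CO_3 \rightleftharpoons H^+ + HCO_3^-}
% The added H+ can dissolve existing calcium carbonate shells:
\mathrm{CaCO_3 + H^+ \rightleftharpoons Ca^{2+} + HCO_3^-}
```

Pushing more CO2 into the atmosphere drives both equilibria to the right, which is the mechanism linking the Ontong Java eruptions to shell dissolution.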

To study the earth's climate during the Early Cretaceous, the Northwestern researchers examined a 1,600-meter-long sediment core taken from the mid-Pacific Mountains. The carbonates in the core formed in a shallow-water, tropical environment approximately 127 to 100 million years ago and are presently found in the deep ocean.

"When you consider the Earth's carbon cycle, carbonate is one of the biggest reservoirs for carbon," Sageman said. "When the ocean acidifies, it basically melts the carbonate. We can see this process impacting the biomineralization process of organisms that use carbonate to build their shells and skeletons right now, and it is a consequence of the observed increase in atmospheric CO2 due to human activities."

Strontium as corroborating evidence

Several previous studies have analyzed the calcium isotope composition of marine carbonate from the geologic past. The data can be interpreted in a variety of ways, however, and calcium carbonate can change throughout time, obscuring signals acquired during its formation. In this study, the Northwestern researchers also analyzed stable isotopes of strontium -- a trace element found in carbonate fossils -- to gain a fuller picture.

"Calcium isotope data can be interpreted in a variety of ways," Jacobson said. "Our study exploits observations that calcium and strontium isotopes behave similarly during calcium carbonate formation, but not during alteration that occurs upon burial. In this study, the calcium-strontium isotope 'multi-proxy' provides strong evidence that the signals are 'primary' and relate to the chemistry of seawater during OAE1a."

"Stable strontium isotopes are less likely to undergo physical or chemical alteration over time," Wang added. "Calcium isotopes, on the other hand, can be easily altered under certain conditions."

The team analyzed calcium and strontium isotopes using high-precision techniques in Jacobson's clean laboratory at Northwestern. The methods involve dissolving carbonate samples and separating the elements, followed by analysis with a thermal ionization mass spectrometer. Researchers have long suspected that LIP eruptions cause ocean acidification. "There is a direct link between ocean acidification and atmospheric CO2 levels," Jacobson said. "Our study provides key evidence linking eruption of the Ontong Java Plateau LIP to ocean acidification. This is something people expected should be the case based on clues from the fossil record, but geochemical data were lacking."

Modeling future warming

By understanding how oceans responded to extreme warming and increased atmospheric CO2, researchers can better understand how earth is responding to current, human-caused climate change. Humans are currently pushing the earth into a new climate, which is acidifying the oceans and likely causing another mass extinction.

"The difference between past greenhouse periods and current human-caused warming is in the timescale," Sageman said. "Past events have unfolded over tens of thousands to millions of years. We're making the same level of warming (or more) happen in less than 200 years."

Read more at Science Daily

Dec 21, 2020

Looking for dark matter near neutron stars with radio telescopes

 In the 1970s, physicists uncovered a problem with the Standard Model of particle physics -- the theory that describes three of the four fundamental forces of nature (electromagnetic, weak, and strong interactions; the fourth is gravity). They found that, while the theory predicts that a symmetry between particles and forces in our Universe and a mirror version should be broken, experiments say otherwise. This mismatch between theory and observations is dubbed "the strong CP problem" -- CP stands for Charge and Parity. What is the strong CP problem, and why has it puzzled scientists for almost half a century?

In the Standard Model, electromagnetism is symmetric under C (charge conjugation), which replaces particles with antiparticles; P (parity), which replaces all particles with their mirror-image counterparts; and T (time reversal), which replaces interactions going forwards in time with ones going backwards in time -- as well as under the combined operations CP, CT, PT, and CPT. This means that experiments sensitive to the electromagnetic interaction should not be able to distinguish the original systems from ones that have been transformed by any of these symmetry operations.

In the case of the electromagnetic interaction, the theory matches the observations very well. As anticipated, the problem lies in one of the two nuclear forces -- "the strong interaction." As it turns out, the theory allows violations of the combined symmetry operation CP (reflecting particles in a mirror and then swapping particles for antiparticles) for both the weak and the strong interaction. However, CP violation has so far been observed only for the weak interaction.

More specifically, for the weak interaction, CP violation occurs at approximately the 1-in-1,000 level, and many scientists expected a similar level of violation for the strong interaction. Yet experimentalists have searched for it extensively, to no avail. If CP violation does occur in the strong interaction, it is suppressed by more than a factor of one billion (10⁹).

In 1977, theoretical physicists Roberto Peccei and Helen Quinn proposed a possible solution: they hypothesized a new symmetry that suppresses CP-violating terms in the strong interaction, thus making the theory match the observations. Shortly after, Steven Weinberg and Frank Wilczek -- who went on to win the Nobel Prize in Physics in 1979 and 2004, respectively -- realized that this mechanism predicts an entirely new particle. Wilczek ultimately dubbed the new particle the "axion," after a brand of laundry detergent, for its ability to "clean up" the strong CP problem.

The axion should be an extremely light particle, be extraordinarily abundant in number, and have no charge. Due to these characteristics, axions are excellent dark matter candidates. Dark matter makes up about 85 percent of the mass content of the Universe, but its fundamental nature remains one of the biggest mysteries of modern science. Establishing that dark matter is made of axions would therefore be one of the greatest discoveries of our time.

In 1983, theoretical physicist Pierre Sikivie found that axions have another remarkable property: in the presence of an electromagnetic field, they should sometimes spontaneously convert to easily detectable photons. What was once thought to be completely undetectable turned out to be potentially detectable, provided there is a high enough concentration of axions and a strong enough magnetic field.

Some of the Universe's strongest magnetic fields surround neutron stars. Since these objects are also very massive, they could also attract copious numbers of axion dark matter particles. So physicists have proposed searching for axion signals in the surrounding regions of neutron stars. Now, an international research team, including the Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU) postdoc Oscar Macias, has done exactly that with two radio telescopes -- the Robert C. Byrd Green Bank Telescope in the US, and the Effelsberg 100-m Radio Telescope in Germany.

Read more at Science Daily

Meteoric evidence for a previously unknown asteroid

 A Southwest Research Institute-led team of scientists has identified a potentially new meteorite parent asteroid by studying a small shard of a meteorite that arrived on Earth a dozen years ago. The composition of a piece of the meteorite Almahata Sitta (AhS) indicates that its parent body was an asteroid roughly the size of Ceres, the largest object in the main asteroid belt, and formed in the presence of water under intermediate temperatures and pressures.

"Carbonaceous chondrite (CC) meteorites record the geological activity during the earliest stages of the Solar System and provide insight into their parent bodies' histories," said SwRI Staff Scientist Dr. Vicky Hamilton, first author of a paper published in Nature Astronomy outlining this research. "Some of these meteorites are dominated by minerals providing evidence for exposure to water at low temperatures and pressures. The composition of other meteorites points to heating in the absence of water. Evidence for metamorphism in the presence of water at intermediate conditions has been virtually absent, until now."

Asteroids -- and the meteors and meteorites that sometimes come from them -- are leftovers from the formation of our Solar System 4.6 billion years ago. Most reside in the main asteroid belt between the orbits of Mars and Jupiter, but collisions and other events have broken them up and ejected remnants into the inner Solar System. In 2008, a 9-ton, 13-foot diameter asteroid entered Earth's atmosphere, exploding into some 600 meteorites over the Sudan. The event marked the first time scientists predicted an asteroid impact prior to atmospheric entry, and it allowed the recovery of 23 pounds of samples.

"We were allocated a 50-milligram sample of AhS to study," Hamilton said. "We mounted and polished the tiny shard and used an infrared microscope to examine its composition. Spectral analysis identified a range of hydrated minerals, in particular amphibole, which points to intermediate temperatures and pressures and a prolonged period of aqueous alteration on a parent asteroid at least 400, and up to 1,100, miles in diameter."

Amphiboles are rare in CC meteorites, having only been identified previously as a trace component in the Allende meteorite. "AhS is a serendipitous source of information about early Solar System materials that are not represented by CC meteorites in our collections," Hamilton said.

Orbital spectroscopy of asteroids Ryugu and Bennu visited by Japan's Hayabusa2 and NASA's OSIRIS-REx spacecraft this year is consistent with aqueously altered CC meteorites and suggests that both asteroids differ from most known meteorites in terms of their hydration state and evidence for large-scale, low-temperature hydrothermal processes. These missions have collected samples from the surfaces of the asteroids for return to Earth.

Read more at Science Daily

Plants can be larks or night owls just like us

 Plants have the same variation in body clocks as that found in humans, according to new research that explores the genes governing circadian rhythms in plants.

The research shows that a single-letter change in their DNA code can potentially decide whether a plant is a lark or a night owl. The findings may help farmers and crop breeders to select plants with clocks that are best suited to their location, helping to boost yield and even the ability to withstand climate change.

The circadian clock is the molecular metronome which guides organisms through day and night -- cockadoodledooing the arrival of morning and drawing the curtains closed at night. In plants, it regulates a wide range of processes, from priming photosynthesis at dawn through to regulating flowering time.

These rhythmic patterns can vary depending on geography, latitude, climate and seasons -- with plant clocks having to adapt to cope best with the local conditions.

Researchers at the Earlham Institute and John Innes Centre in Norwich wanted to better understand how much circadian variation exists naturally, with the ultimate goal of breeding crops that are more resilient to local changes in the environment -- a pressing threat with climate change.

To investigate the genetic basis of these local differences, the team examined varying circadian rhythms in Swedish Arabidopsis plants to identify and validate genes linked to the changing tick of the clock.

Dr Hannah Rees, a postdoctoral researcher at the Earlham Institute and author of the paper, said: "A plant's overall health is heavily influenced by how closely its circadian clock is synchronised to the length of each day and the passing of seasons. An accurate body clock can give it an edge over competitors, predators and pathogens.

"We were interested to see how plant circadian clocks would be affected in Sweden; a country that experiences extreme variations in daylight hours and climate. Understanding the genetics behind body clock variation and adaptation could help us breed more climate-resilient crops in other regions."

The team studied the genes in 191 different varieties of Arabidopsis obtained from across the whole of Sweden. They were looking for tiny differences in genes between these plants which might explain the differences in circadian function.

Their analysis revealed that a single DNA base-pair change in a specific gene -- COR28 -- was more likely to be found in plants that flowered late and had a longer period length. COR28 is a known coordinator of flowering time, freezing tolerance and the circadian clock; all of which may influence local adaptation in Sweden.

"It's amazing that just one base-pair change within the sequence of a single gene can influence how quickly the clock ticks," explained Dr Rees.

The scientists also used a pioneering delayed fluorescence imaging method to screen plants with differently tuned circadian clocks. They showed there was more than 10 hours' difference between the clocks of the earliest and latest risers -- akin to the plants working opposite shift patterns. Both geography and the genetic ancestry of the plant appeared to have an influence.

"Arabidopsis thaliana is a model plant system," said Dr Rees. "It was the first plant to have its genome sequenced and it's been extensively studied in circadian biology, but this is the first time anyone has performed this type of association study to find the genes responsible for different clock types."

Read more at Science Daily

The mechanics of the immune system

 Highly complicated processes constantly take place in our body to keep pathogens in check: The T-cells of our immune system are busy searching for antigens -- suspicious molecules that fit exactly into certain receptors of the T-cells like a key into a lock. This activates the T-cell and the defense mechanisms of the immune system are set in motion.

How this process takes place at the molecular level is not yet well understood. What is now clear, however, is that not only chemistry plays a role in the docking of antigens to the T-cell; micromechanical effects are important too. Submicrometer structures on the cell surface act like microscopic tension springs. Tiny forces that occur as a result are likely to be of great importance for the recognition of antigens. At TU Wien, it has now been possible to observe these forces directly using highly developed microscopy methods.

This was made possible by a cooperation between TU Wien, Humboldt-Universität zu Berlin, ETH Zurich and MedUni Vienna. The results have now been published in the scientific journal Nano Letters.

Smelling and feeling

As far as physics is concerned, our human sensory organs work in completely different ways. We can smell, i.e. detect substances chemically, and we can touch, i.e. classify objects by the mechanical resistance they present to us. It is similar with T cells: they can recognize the specific structure of certain molecules, but they can also "feel" antigens in a mechanical way.

"T cells have so-called microvilli, which are tiny structures that look like little hairs," says Prof. Gerhard Schütz, head of the biophysics working group at the Institute of Applied Physics at TU Wien. As the experiments showed, remarkable effects can occur when these microvilli come into contact with an object: the microvilli can encompass the object, much as a curved finger holds a pencil. They can then even elongate, so that the finger-like protrusion becomes an extended cylinder sheathed over the object.

"Tiny forces occur in the process, on the order of less than a nanonewton," says Gerhard Schütz. One nanonewton corresponds roughly to the weight force that a water droplet with a diameter of one-twentieth of a millimeter would exert.
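That comparison is easy to check with a quick back-of-envelope calculation, assuming standard water density and Earth gravity:

```python
import math

# Sanity check of the article's comparison: the weight of a water droplet
# 1/20 mm (50 micrometers) in diameter is roughly one nanonewton.
diameter_m = 50e-6                            # one-twentieth of a millimeter
radius_m = diameter_m / 2
volume_m3 = (4 / 3) * math.pi * radius_m**3   # volume of a sphere
density_water = 1000.0                        # kg/m^3
g = 9.81                                      # m/s^2

weight_N = volume_m3 * density_water * g
print(f"Droplet weight: {weight_N:.2e} N")    # ~6.4e-10 N, i.e. ~0.6 nN
```

The result lands in the sub-nanonewton range, consistent with the quoted figure.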

Force measurement in the hydrogel

Measuring such tiny forces is a challenge. "We succeed by placing the cell together with tiny test beads in a specially developed gel. The beads carry molecules on their surface to which the T cell reacts," explains Gerhard Schütz. "If we know the resistance that our gel exerts on the beads and measure exactly how far the beads move in the immediate vicinity of the T-cell, we can calculate the force that acts between the T-cell and the beads."
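The principle described here is essentially Hooke's law: if the gel's resistance to a bead is characterized by an effective spring constant, the force follows directly from the measured bead displacement. A minimal sketch with hypothetical numbers (the actual analysis accounts for the gel's full mechanical response, and neither value below is from the paper):

```python
# Illustrative only: treat the hydrogel's restoring force on a bead as an
# effective spring. Both numbers are hypothetical, chosen to land in the
# sub-nanonewton range quoted in the article.
k_eff = 2.0e-3          # N/m, hypothetical effective gel stiffness per bead
displacement = 150e-9   # m, hypothetical measured bead displacement (150 nm)

force = k_eff * displacement                # Hooke's law: F = k * x
print(f"Inferred force: {force * 1e9:.2f} nN")  # 0.30 nN
```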

Read more at Science Daily

Dec 20, 2020

What's up, Skip? Kangaroos really can 'talk' to us, study finds

 

Animals that have never been domesticated, such as kangaroos, can intentionally communicate with humans, challenging the notion that this behaviour is usually restricted to domesticated animals like dogs, horses or goats, a first of its kind study from the University of Roehampton and the University of Sydney has found.

The research, which involved kangaroos -- marsupials that were never domesticated -- at three locations across Australia, revealed that the animals gazed at a human when trying to access food that had been put in a closed box. The kangaroos used gazes to communicate with the human instead of attempting to open the box themselves, a behaviour usually expected only of domesticated animals.

Ten out of 11 kangaroos tested actively looked at the person who had put the food in the box, apparently to solicit help in getting it (this type of experiment is known as "the unsolvable problem task"). Nine of the 11 kangaroos additionally showed gaze alternation between the box and the person present, a heightened form of communication in which they look back and forth between the box and the human.

The research builds on previous work in the field which has looked at the communication of domesticated animals, such as dogs and goats, and whether intentional communication in animals is a result of domestication. Lead author Dr Alan McElligott, University of Roehampton (now based at City University of Hong Kong), previously led a study which found goats can understand human cues, including pointing, to gather information about their environment. Like dogs and goats, kangaroos are social animals and Dr McElligott's new research suggests they may be able to adapt their usual social behaviours for interacting with humans.

Dr Alan McElligott said: "Through this study, we were able to see that communication between animals can be learnt and that the behaviour of gazing at humans to access food is not related to domestication. Indeed, kangaroos showed a very similar pattern of behaviour to the one we have seen in dogs, horses and even goats when put to the same test.

"Our research shows that the potential for referential intentional communication towards humans by animals has been underestimated, which signals an exciting development in this area. Kangaroos are the first marsupials to be studied in this manner and the positive results should lead to more cognitive research beyond the usual domestic species."

Dr Alexandra Green, School of Life and Environmental Sciences at the University of Sydney, said: "Kangaroos are iconic Australian endemic fauna, adored by many worldwide but also considered as a pest. We hope that this research draws attention to the cognitive abilities of kangaroos and helps foster more positive attitudes towards them."

Read more at Science Daily

Scientists show what loneliness looks like in the brain

 

This holiday season will be a lonely one for many people as social distancing due to COVID-19 continues, and it is important to understand how isolation affects our health. A new study shows a sort of signature in the brains of lonely people that makes them distinct in fundamental ways, based on variations in the volume of different brain regions and on how those regions communicate with one another across brain networks.

A team of researchers examined the magnetic resonance imaging (MRI) data, genetics and psychological self-assessments of approximately 40,000 middle-aged and older adults who volunteered to have their information included in the UK Biobank: an open-access database available to health scientists around the world. They then compared the MRI data of participants who reported often feeling lonely with those who did not.

The researchers found several differences in the brains of lonely people. These brain manifestations were centred on what is called the default network: a set of brain regions involved in inner thoughts such as reminiscing, future planning, imagining and thinking about others. Researchers found the default networks of lonely people were more strongly wired together and surprisingly, their grey matter volume in regions of the default network was greater. Loneliness also correlated with differences in the fornix: a bundle of nerve fibres that carries signals from the hippocampus to the default network. In lonely people, the structure of this fibre tract was better preserved.

We use the default network when remembering the past, envisioning the future or thinking about a hypothetical present. The fact that the structure and function of this network are positively associated with loneliness may be because lonely people are more likely to use imagination, memories of the past or hopes for the future to overcome their social isolation.

"In the absence of desired social experiences, lonely individuals may be biased towards internally-directed thoughts such as reminiscing or imagining social experiences. We know these cognitive abilities are mediated by the default network brain regions," says Nathan Spreng from The Neuro (Montreal Neurological Institute-Hospital) of McGill University, and the study's lead author. "So this heightened focus on self-reflection, and possibly imagined social experiences, would naturally engage the memory-based functions of the default network."

Loneliness is increasingly being recognized as a major health problem, and previous studies have shown older people who experience loneliness have a higher risk of cognitive decline and dementia. Understanding how loneliness manifests itself in the brain could be key to preventing neurological disease and developing better treatments.

"We are just beginning to understand the impact of loneliness on the brain. Expanding our knowledge in this area will help us to better appreciate the urgency of reducing loneliness in today's society," says Danilo Bzdok, a researcher at The Neuro and the Quebec Artificial Intelligence Institute, and the study's senior author.

Read more at Science Daily

Dec 17, 2020

Saturn moon, Enceladus, could support life in its subsurface ocean

 Using data from NASA's Cassini spacecraft, scientists at Southwest Research Institute (SwRI) modeled chemical processes in the subsurface ocean of Saturn's moon Enceladus. The studies indicate the possibility that a varied metabolic menu could support a potentially diverse microbial community in the liquid water ocean beneath the moon's icy facade.

Prior to its deorbit in September of 2017, Cassini sampled the plume of ice grains and water vapor erupting from cracks on the icy surface of Enceladus, discovering molecular hydrogen, a potential food source for microbes. A new paper published in the planetary science journal Icarus explores other potential energy sources.

"The detection of molecular hydrogen (H2) in the plume indicated that there is free energy available in the ocean of Enceladus," said lead author Christine Ray, who works part time at SwRI as she pursues a Ph.D. in physics from The University of Texas at San Antonio. "On Earth, aerobic, or oxygen-breathing, creatures consume energy in the form of organic matter, such as glucose, together with oxygen to create carbon dioxide and water. Anaerobic microbes can metabolize hydrogen to create methane. All life can be distilled to similar chemical reactions associated with a disequilibrium between oxidant and reductant compounds."

This disequilibrium creates a potential energy gradient, where redox chemistry transfers electrons between chemical species, most often with one species undergoing oxidation while another species undergoes reduction. These processes are vital to many basic functions of life, including photosynthesis and respiration. For example, hydrogen is a source of chemical energy supporting anaerobic microbes that live in the Earth's oceans near hydrothermal vents. At Earth's ocean floor, hydrothermal vents emit hot, energy-rich, mineral-laden fluids that allow unique ecosystems teeming with unusual creatures to thrive. Previous research found growing evidence of hydrothermal vents and chemical disequilibrium on Enceladus, which hints at habitable conditions in its subsurface ocean.

"We wondered if other types of metabolic pathways could also provide sources of energy in Enceladus' ocean," Ray said. "Because that would require a different set of oxidants that we have not yet detected in the plume of Enceladus, we performed chemical modeling to determine if the conditions in the ocean and the rocky core could support these chemical processes."

For example, the authors looked at how ionizing radiation from space could create the oxidants O2 and H2O2, and how abiotic geochemistry in the ocean and rocky core could contribute to chemical disequilibria that might support metabolic processes. The team considered whether these oxidants could accumulate over time if reductants are not present in appreciable amounts. They also considered how aqueous reductants or seafloor minerals could convert these oxidants into sulfates and iron oxides.

"We compared our free energy estimates to ecosystems on Earth and determined that, overall, our values for both aerobic and anaerobic metabolisms meet or exceed minimum requirements," Ray said. "These results indicate that oxidant production and oxidation chemistry could contribute to supporting possible life and a metabolically diverse microbial community on Enceladus."
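The bookkeeping behind such free-energy estimates can be sketched as follows. The reaction here is hydrogenotrophic methanogenesis (CO2 + 4 H2 -> CH4 + 2 H2O); the temperature, the standard Gibbs energy, and the dissolved activities are illustrative assumptions, not Enceladus measurements:

```python
import math

# Sketch of a chemical-affinity calculation: the energy actually available
# from a redox reaction depends on ambient conditions via
#   dG = dG0 + R * T * ln(Q).
# All numbers below are assumed for illustration.
R = 8.314        # J/(mol*K), gas constant
T = 275.0        # K, a plausible subsurface-ocean temperature (assumption)
dG0 = -131e3     # J/mol, approx. standard Gibbs energy of methanogenesis

# Hypothetical dissolved activities (dimensionless, vs. standard state)
a_CH4, a_CO2, a_H2 = 1e-4, 1e-3, 1e-4
Q = a_CH4 / (a_CO2 * a_H2**4)   # reaction quotient (water activity ~ 1)

dG = dG0 + R * T * math.log(Q)
print(f"Available energy: {dG / 1e3:.1f} kJ/mol")   # negative => exergonic
```

A negative dG means the reaction can release energy; comparing that release against the minimum a microbe needs is the kind of test the quoted passage describes.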

"Now that we've identified potential food sources for microbes, the next question to ask is 'what is the nature of the complex organics that are coming out of the ocean?'" said SwRI Program Director Dr. Hunter Waite, a coauthor of the new paper, referencing an online Nature paper authored by Postberg et al. in 2018. "This new paper is another step in understanding how a small moon can sustain life in ways that completely exceed our expectations!"

The paper's findings also have great significance for the next generation of exploration.

Read more at Science Daily

Dark storm on Neptune reverses direction, possibly shedding a fragment

 Astronomers using NASA's Hubble Space Telescope watched a mysterious dark vortex on Neptune abruptly steer away from a likely death on the giant blue planet.

The storm, which is wider than the Atlantic Ocean, was born in the planet's northern hemisphere and discovered by Hubble in 2018. Observations a year later showed that it began drifting southward toward the equator, where such storms are expected to vanish from sight. To the surprise of observers, Hubble spotted the vortex change direction by August 2020, doubling back to the north. Though Hubble has tracked similar dark spots over the past 30 years, this unpredictable atmospheric behavior is something new to see.

Equally puzzling, the storm was not alone. Hubble spotted another, smaller dark spot in January this year that temporarily appeared near its larger cousin. It may have been a piece of the giant vortex that broke off, drifted away, and then disappeared in subsequent observations.

"We are excited about these observations because this smaller dark fragment is potentially part of the dark spot's disruption process," said Michael H. Wong of the University of California at Berkeley. "This is a process that's never been observed. We have seen some other dark spots fading away, and they're gone, but we've never seen anything disrupt, even though it's predicted in computer simulations."

The large storm, which is 4,600 miles across, is the fourth dark spot Hubble has observed on Neptune since 1993. Two other dark storms were discovered by the Voyager 2 spacecraft in 1989 as it flew by the distant planet, but they had disappeared before Hubble could observe them. Since then, only Hubble has had the sharpness and sensitivity in visible light to track these elusive features, which have sequentially appeared and then faded away over a duration of about two years each. Hubble uncovered this latest storm in September 2018.

Wicked Weather

Neptune's dark vortices are high-pressure systems that can form at mid-latitudes and may then migrate toward the equator. At first they remain stable, held together by the Coriolis effect of the planet's rotation, which causes northern-hemisphere storms to rotate clockwise. (These storms are unlike hurricanes on Earth, which rotate counterclockwise because they are low-pressure systems.) However, as a storm drifts toward the equator, the Coriolis effect weakens and the storm disintegrates. In computer simulations by several different teams, these storms follow a more-or-less straight path to the equator, until there is no Coriolis effect to hold them together. Unlike the simulations, the latest giant storm didn't migrate into the equatorial "kill zone."
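The weakening can be put in numbers: the Coriolis parameter f = 2 * Omega * sin(latitude) vanishes at the equator. A rough sketch using Neptune's roughly 16.11-hour rotation period (a simplification; Neptune's strong winds complicate the real picture):

```python
import math

# The Coriolis parameter falls to zero at the equator, which is why a
# vortex drifting equatorward loses the rotational support holding it
# together. Rotation period of ~16.11 hours assumed for Neptune.
T_ROT = 16.11 * 3600.0          # rotation period, s
OMEGA = 2.0 * math.pi / T_ROT   # planetary rotation rate, rad/s

def coriolis_parameter(lat_deg: float) -> float:
    """Coriolis parameter f (1/s) at a given latitude in degrees."""
    return 2.0 * OMEGA * math.sin(math.radians(lat_deg))

for lat in (45, 30, 15, 5, 0):
    print(f"lat {lat:2d} deg: f = {coriolis_parameter(lat):.2e} 1/s")
```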

"It was really exciting to see this one act like it's supposed to act and then all of a sudden it just stops and swings back," Wong said. "That was surprising."

Dark Spot Jr.

The Hubble observations also revealed that the dark vortex's puzzling path reversal occurred at the same time that a new spot, informally deemed "dark spot jr.," appeared. The newest spot was slightly smaller than its cousin, measuring about 3,900 miles across. It was near the side of the main dark spot that faces the equator -- the location where some simulations show a disruption would occur.

However, the timing of the smaller spot's emergence was unusual. "When I first saw the small spot, I thought the bigger one was being disrupted," Wong said. "I didn't think another vortex was forming because the small one is farther towards the equator. So it's within this unstable region. But we can't prove the two are related. It remains a complete mystery.

"It was also in January that the dark vortex stopped its motion and started moving northward again," Wong added. "Maybe by shedding that fragment, that was enough to stop it from moving towards the equator."

The researchers are continuing to analyze more data to determine whether remnants of dark spot jr. persisted through the rest of 2020.

Dark Storms Still Puzzling

It's still a mystery how these storms form, but this latest giant dark vortex is the best studied so far. The storm's dark appearance may be due to an elevated dark cloud layer, and it could be telling astronomers about the storm's vertical structure.

Another unusual feature of the dark spot is the absence of bright companion clouds around it, which were present in Hubble images taken when the vortex was discovered in 2018. Apparently, the clouds disappeared when the vortex halted its southward journey. The bright clouds form when the flow of air is perturbed and diverted upward over the vortex, where methane gas likely freezes into ice crystals. The lack of clouds could be revealing information on how spots evolve, say researchers.

Weather Eye on the Outer Planets

Hubble snapped many of the images of the dark spots as part of the Outer Planet Atmospheres Legacy (OPAL) program, a long-term Hubble project, led by Amy Simon of NASA's Goddard Space Flight Center in Greenbelt, Maryland, that annually captures global maps of our solar system's outer planets when they are closest to Earth in their orbits.

OPAL's key goals are to study long-term seasonal changes, as well as capture comparatively transitory events, such as the appearance of dark spots on Neptune or potentially Uranus. These dark storms may be so fleeting that in the past some of them may have appeared and faded during multi-year gaps in Hubble's observations of Neptune. The OPAL program ensures that astronomers won't miss another one.

Read more at Science Daily