Dec 23, 2020

How nearby galaxies form their stars

Stars are born in dense clouds of molecular hydrogen gas that permeate the interstellar space of most galaxies. While the physics of star formation is complex, recent years have seen substantial progress towards understanding how stars form in a galactic environment. What ultimately determines the level of star formation in galaxies, however, remains an open question.

In principle, two main factors influence star-formation activity: the amount of molecular gas present in galaxies and the timescale over which that gas reservoir is depleted by being converted into stars. While the gas mass of galaxies is regulated by a competition between gas inflows, outflows and consumption, the physics of the gas-to-star conversion is currently not well understood. Given its potentially critical role, many efforts have been made to determine the gas depletion timescale observationally. However, these efforts have produced conflicting findings, partly because of the difficulty of measuring gas masses reliably given current detection limits.
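For concreteness, the gas depletion timescale referred to here is conventionally defined as the current gas mass divided by the current star-formation rate,

    \[
    t_{\mathrm{dep}} \;\equiv\; \frac{M_{\mathrm{gas}}}{\mathrm{SFR}},
    \]

i.e. the time it would take to exhaust the existing reservoir if stars kept forming at the present rate.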

Typical star formation is linked to the overall gas reservoir

The present study from the Institute for Computational Science of the University of Zurich uses a new statistical method based on Bayesian modeling to properly account for galaxies with undetected amounts of molecular or atomic hydrogen to minimize observational bias. This new analysis reveals that, in typical star-forming galaxies, molecular and atomic hydrogen are converted into stars over approximately constant timescales of 1 and 10 billion years, respectively. However, extremely active galaxies ("starbursts") are found to have much shorter gas depletion timescales. "These findings suggest that star formation is indeed directly linked to the overall gas reservoir and thus set by the rate at which gas enters or leaves a galaxy," says Robert Feldmann, professor at the Center for Theoretical Astrophysics and Cosmology. In contrast, the dramatically higher star-formation activity of starbursts likely has a different physical origin, such as galaxy interactions or instabilities in galactic disks.
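A minimal sketch, in Python, of the censored-data idea behind such an analysis: non-detections contribute the probability of lying below the detection limit instead of being discarded. The Gaussian model, detection threshold and all numbers here are invented for illustration, and this maximum-likelihood toy stands in for the paper's far richer Bayesian model.

    # Toy censored-likelihood fit: recover the mean and scatter of log gas fractions
    # when some galaxies are only upper limits. All numbers are invented.
    import numpy as np
    from scipy import optimize, stats

    rng = np.random.default_rng(0)
    true_mu, true_sigma = -0.5, 0.3                 # log10 gas fraction and intrinsic scatter
    log_fgas = rng.normal(true_mu, true_sigma, 200)
    limit = -0.6                                    # hypothetical detection threshold (log10)
    detected = log_fgas > limit

    def neg_log_like(params):
        mu, sigma = params
        if sigma <= 0:
            return np.inf
        # Detections contribute the normal pdf; non-detections the probability below the limit.
        ll_det = stats.norm.logpdf(log_fgas[detected], mu, sigma).sum()
        ll_lim = (~detected).sum() * stats.norm.logcdf(limit, mu, sigma)
        return -(ll_det + ll_lim)

    fit = optimize.minimize(neg_log_like, x0=[-0.3, 0.5], method="Nelder-Mead")
    print("recovered mean and scatter:", fit.x)     # close to the true (-0.5, 0.3)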

This analysis is based on observational data of nearby galaxies. Observations with the Atacama Large Millimeter/submillimeter Array, the Square Kilometre Array and other observatories promise to probe the gas content of large numbers of galaxies across cosmic history. It will be paramount to continue developing statistical and data-science methods that can accurately extract the physical content of these new observations and fully uncover the mysteries of star formation in galaxies.

From Science Daily

Why an early start is key to developing musical skill later in life

 Among the many holiday traditions scuttled by pandemic restrictions this year are live concerts featuring skilled musicians. These gifted performers can often play with such ease that it is easy to underestimate the countless hours of practice that went into honing their craft.

But could there be more to mastering music? Is there, as some have suggested, a developmental period early in life when the brain is especially receptive to musical training? The answer, according to new research published in the journal Psychological Science, is probably not.

"It is a common observation that successful musicians often start their musical training early," said Laura Wesseldijk, a researcher at the Karolinska Institute in Sweden and first author on the paper. "One much-discussed explanation is that there may be a period in early childhood during which the brain is particularly susceptible to musical stimulation. We found, however, that the explanation to why an early start matters may be more complicated and interesting than previously believed."

While the new study supports the idea that an early start is associated with higher levels of musical skills and achievement in adulthood, the underlying reasons for this may have more to do with familial influences -- such as genetic factors and an encouraging musical family environment -- along with accumulating more total practice time than those who start later in life.

To untangle these effects, Wesseldijk and her colleagues recruited 310 professional musicians from various Swedish music institutions, such as orchestras and music schools. The researchers also used data from an existing research project, the Study of Twin Adults: Genes and Environment (STAGE). Participants from both studies were tested on musical aptitude and achievement. They also answered a series of questions gauging how often they practiced and the age at which their musical training began. The STAGE data also provided genetic information on its participants.

By comparing the results from these two independent studies, the researchers were able to show that an earlier start age is associated with musical aptitude, both in amateurs and professional musicians, even after controlling for accumulated practice time. They then evaluated starting age in a manner that accounted for the genetic data from the STAGE study.
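To illustrate what "controlling for accumulated practice time" means statistically, here is a minimal regression sketch on simulated data; the variable names, effect sizes and sample are invented and this is not the authors' actual model.

    # Toy example: does start age still predict aptitude once practice hours are held fixed?
    import numpy as np

    rng = np.random.default_rng(1)
    n = 1000
    start_age = rng.uniform(4, 15, n)                           # years
    practice = rng.normal(10000 - 400 * start_age, 1500, n)     # earlier start -> more hours
    aptitude = 100 - 1.0 * start_age + 0.002 * practice + rng.normal(0, 5, n)

    # Regress aptitude on start age and practice jointly (with an intercept).
    X = np.column_stack([np.ones(n), start_age, practice])
    beta, *_ = np.linalg.lstsq(X, aptitude, rcond=None)
    print("effect of start age with practice held fixed:", round(beta[1], 2))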

The results indicate that genetic factors -- possibly related to musical interest and talent -- have a substantial influence on the age individuals start music practice and their future musical aptitude. When controlling for familial factors, namely shared genetic and environmental influences, such as a home environment that is steeped in music, there was no additional association between an earlier start age and musicality.

A possible explanation for these results could be that children who display more talent in a particular field, such as music, are encouraged to start practicing earlier. Another possibility is that a musically active, interested, and talented family provides a musical environment for the child, while also passing on their genetic predispositions to engage in music.

Read more at Science Daily

Increased meat consumption associated with symptoms of childhood asthma

 Substances present in cooked meats are associated with increased wheezing in children, Mount Sinai researchers report. Their study, published in Thorax, highlights pro-inflammatory compounds called advanced glycation end-products (AGEs) as an example of early dietary risk factors that may have broad clinical and public health implications for the prevention of inflammatory airway disease.

Asthma prevalence among children in the United States has risen over the last few decades. Researchers found that dietary habits established earlier in life may be associated with wheezing and potentially the future development of asthma.

Researchers examined 4,388 children between 2 and 17 years old from the 2003-2006 National Health and Nutrition Examination Survey (NHANES), a program of the National Center for Health Statistics, which is part of the U.S. Centers for Disease Control and Prevention. It is designed to evaluate the health and nutritional status of adults and children in the United States through interviews and physical examinations.

The researchers used NHANES survey data to evaluate associations between dietary AGE and meat consumption frequencies, and respiratory symptoms. They found that higher AGE intake was significantly associated with increased odds of wheezing, importantly including wheezing that disrupted sleep and exercise, and that required prescription medication. Similarly, higher intake of non-seafood meats was associated with wheeze-disrupted sleep and wheezing that required prescription medication.
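The "increased odds" language refers to odds ratios of the kind produced by logistic-type regression on the survey data. A toy calculation with made-up counts shows how such a ratio is formed; the actual analysis also applies survey weights and adjusts for covariates such as overall diet quality.

    # Toy odds ratio for wheezing between hypothetical high- and low-AGE-intake groups.
    wheeze_high, n_high = 120, 1000      # invented counts
    wheeze_low, n_low = 80, 1000         # invented counts

    odds_high = wheeze_high / (n_high - wheeze_high)
    odds_low = wheeze_low / (n_low - wheeze_low)
    print(f"odds ratio: {odds_high / odds_low:.2f}")   # > 1 means higher odds with high AGE intake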

"We found that higher consumption of dietary AGEs, which are largely derived from intake of non-seafood meats, was associated with increased risk of wheezing in children, regardless of overall diet quality or an established diagnosis of asthma," said Jing Gennie Wang, MD, lead author of the study, and a former fellow in Pulmonary, Critical Care and Sleep Medicine at the Icahn School of Medicine at Mount Sinai.

Read more at Science Daily

Climate change -- not Genghis Khan -- caused the demise of Central Asia's river civilizations, research shows

Ruins of Otrar, Kazakhstan

A new study challenges the long-held view that the destruction of Central Asia's medieval river civilizations was a direct result of the Mongol invasion in the early 13th century CE.

The Aral Sea basin in Central Asia and the major rivers flowing through the region were once home to advanced river civilizations which used floodwater irrigation to farm.

The region's decline is often attributed to the devastating Mongol invasion of the early 13th century, but new research into long-term river dynamics and ancient irrigation networks shows that a changing climate and drier conditions may have been the real cause.

Research led by the University of Lincoln, UK, reconstructed the effects of climate change on floodwater farming in the region and found that decreasing river flow was equally, if not more, important for the abandonment of these previously flourishing city states.

Mark Macklin, author and Distinguished Professor of River Systems and Global Change, and Director of the Lincoln Centre for Water and Planetary Health at the University of Lincoln said: "Our research shows that it was climate change, not Genghis Khan, that was the ultimate cause for the demise of Central Asia's forgotten river civilizations.

"We found that Central Asia recovered quickly following Arab invasions in the 7th and 8th centuries CE because of favourable wet conditions. But prolonged drought during and following the later Mongol destruction reduced the resilience of local population and prevented the re-establishment of large-scale irrigation-based agriculture."

The research focused on the archaeological sites and irrigation canals of the Otrar oasis, a UNESCO World Heritage site that was once a Silk Road trade hub located at the meeting point of the Syr Darya and Arys rivers in present-day southern Kazakhstan.

The researchers investigated the region to determine when the irrigation canals were abandoned and studied the past dynamics of the Arys river, whose waters fed the canals. The abandonment of the irrigation systems matches a phase of riverbed erosion between the 10th and 14th centuries CE, which coincided with a dry period of low river flows rather than with the Mongol invasion.

Read more at Science Daily

Climate change: Threshold for dangerous warming will likely be crossed between 2027 and 2042

 

Photo concept, hourglass on beach
The threshold for dangerous global warming will likely be crossed between 2027 and 2042 -- a much narrower window than the Intergovernmental Panel on Climate Change's estimate of between now and 2052. In a study published in Climate Dynamics, researchers from McGill University introduce a new and more precise way to project the Earth's temperature. Based on historical data, it considerably reduces uncertainties compared to previous approaches.

Scientists have been making projections of future global warming using climate models for decades. These models play an important role in understanding the Earth's climate and how it will likely change. But how accurate are they?

Dealing with uncertainty

Climate models are mathematical simulations of different factors that interact to affect Earth's climate, such as the atmosphere, ocean, ice, land surface and the sun. While they are based on the best understanding of the Earth's systems available, when it comes to forecasting the future, uncertainties remain.

"Climate skeptics have argued that global warming projections are unreliable because they depend on faulty supercomputer models. While these criticisms are unwarranted, they underscore the need for independent and different approaches to predicting future warming," says co-author Bruno Tremblay, a professor in the Department of Atmospheric and Oceanic Sciences at McGill University.

Until now, wide ranges in overall temperature projections have made it difficult to pinpoint outcomes under different mitigation scenarios. For instance, if atmospheric CO2 concentrations are doubled, the General Circulation Models (GCMs) used by the Intergovernmental Panel on Climate Change (IPCC) predict a very likely global average temperature increase of between 1.9C and 4.5C -- a vast range covering moderate climate change at the lower end and catastrophic change at the upper.

A new approach

"Our new approach to projecting the Earth's temperature is based on historical climate data, rather than the theoretical relationships that are imperfectly captured by the GCMs. Our approach allows climate sensitivity and its uncertainty to be estimated from direct observations with few assumptions," says co-author Raphael Hebert, a former graduate researcher at McGill University, now working at the Alfred-Wegener-Institut in Potsdam, Germany.

In a study for Climate Dynamics, the researchers introduced the new Scaling Climate Response Function (SCRF) model to project the Earth's temperature to 2100. Grounded in historical data, it reduces prediction uncertainties by about half compared with the approach currently used by the IPCC. In analyzing the results, the researchers found that the threshold for dangerous warming (+1.5C) will likely be crossed between 2027 and 2042. This is a much narrower window than the GCM estimate of between now and 2052. On average, the researchers also found that the expected warming was a little lower, by about 10 to 15 percent. They also found, however, that the "very likely warming ranges" of the SCRF fell within those of the GCMs, lending the latter support.
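As a rough illustration of the general idea of a scaling response function (projecting temperature by convolving past radiative forcing with a power-law memory kernel), here is a toy sketch in Python. The forcing series, scaling exponent and gain are all invented; the actual SCRF model is calibrated on historical observations and treats uncertainty explicitly.

    # Toy scaling response: T(t) = gain * sum_{s<t} (t - s)^(H - 1) * F(s)
    import numpy as np

    years = np.arange(1880, 2101)
    # Hypothetical radiative forcing ramp (W/m^2): slow historical rise, faster after 2020.
    forcing = np.where(years < 2020,
                       0.01 * (years - 1880),
                       0.01 * (2020 - 1880) + 0.03 * (years - 2020))

    H, gain = 0.5, 0.05                      # invented scaling exponent and response amplitude

    def scaling_response(forcing, H, gain):
        T = np.zeros(len(forcing))
        for i in range(1, len(forcing)):
            lags = np.arange(i, 0, -1)       # t - s for s = 0 .. i-1
            T[i] = gain * np.sum(lags ** (H - 1.0) * forcing[:i])
        return T

    T = scaling_response(forcing, H, gain)
    print("toy warming in 2100 relative to 1880: %.2f C" % (T[-1] - T[0]))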

Read more at Science Daily

Dec 22, 2020

Neuroscientists isolate promising mini antibodies against COVID-19 from a llama

 

Llama
National Institutes of Health researchers have isolated a set of promising, tiny antibodies, or "nanobodies," against SARS-CoV-2 that were produced by a llama named Cormac. Preliminary results published in Scientific Reports suggest that at least one of these nanobodies, called NIH-CoVnb-112, could prevent infections and detect virus particles by grabbing hold of SARS-CoV-2 spike proteins. In addition, the nanobody appeared to work equally well in either liquid or aerosol form, suggesting it could remain effective after inhalation. SARS-CoV-2 is the virus that causes COVID-19.

The study was led by a pair of neuroscientists, Thomas J. "T.J." Esparza, B.S., and David L. Brody, M.D., Ph.D., who work in a brain imaging lab at the NIH's National Institute of Neurological Disorders and Stroke (NINDS).

"For years TJ and I had been testing out how to use nanobodies to improve brain imaging. When the pandemic broke, we thought this was a once in a lifetime, all-hands-on-deck situation and joined the fight," said Dr. Brody, who is also a professor at Uniformed Services University for the Health Sciences and the senior author of the study. "We hope that these anti-COVID-19 nanobodies may be highly effective and versatile in combating the coronavirus pandemic."

A nanobody is a special type of antibody naturally produced by the immune systems of camelids, a group of animals that includes camels, llamas, and alpacas. On average, these proteins are about a tenth the weight of most human antibodies. This is because nanobodies isolated in the lab are essentially free-floating versions of the tips of the arms of heavy chain proteins, which form the backbone of a typical Y-shaped human IgG antibody. These tips play a critical role in the immune system's defenses by recognizing proteins on viruses, bacteria, and other invaders, also known as antigens.

Because nanobodies are more stable, less expensive to produce, and easier to engineer than typical antibodies, a growing body of researchers, including Mr. Esparza and Dr. Brody, have been using them for medical research. For instance, a few years ago scientists showed that humanized nanobodies may be more effective at treating an autoimmune form of thrombotic thrombocytopenic purpura, a rare blood disorder, than current therapies.

Since the pandemic broke, several researchers have produced llama nanobodies against the SARS-CoV-2 spike protein that may be effective at preventing infections. In the current study, the researchers used a slightly different strategy than others to find nanobodies that may work especially well.

"The SARS-CoV-2 spike protein acts like a key. It does this by opening the door to infections when it binds to a protein called the angiotensin converting enzyme 2 (ACE2) receptor, found on the surface of some cells," said Mr. Esparza, the lead author of the study. "We developed a method that would isolate nanobodies that block infections by covering the teeth of the spike protein that bind to and unlock the ACE2 receptor."

To do this, the researchers immunized Cormac five times over 28 days with a purified version of the SARS-CoV-2 spike protein. After testing hundreds of nanobodies they found that Cormac produced 13 nanobodies that might be strong candidates.

Initial experiments suggested that one candidate, called NIH-CoVnb-112, could work very well. Test tube studies showed that this nanobody bound to the ACE2 receptor 2 to 10 times stronger than nanobodies produced by other labs. Other experiments suggested that the NIH nanobody stuck directly to the ACE2 receptor binding portion of the spike protein.

Then the team showed that the NIH-CoVnB-112 nanobody could be effective at preventing coronavirus infections. To mimic the SARS-CoV-2 virus, the researchers genetically mutated a harmless "pseudovirus" so that it could use the spike protein to infect cells that have human ACE2 receptors. The researchers saw that relatively low levels of the NIH-CoVnb-112 nanobodies prevented the pseudovirus from infecting these cells in petri dishes.

Importantly, the researchers showed that the nanobody was equally effective in preventing the infections in petri dishes when it was sprayed through the kind of nebulizer, or inhaler, often used to help treat patients with asthma.

"One of the exciting things about nanobodies is that, unlike most regular antibodies, they can be aerosolized and inhaled to coat the lungs and airways," said Dr. Brody.

Read more at Science Daily

The aroma of distant worlds

Spices

Asian spices such as turmeric and fruits like the banana had already reached the Mediterranean more than 3000 years ago, much earlier than previously thought. A team of researchers working alongside archaeologist Philipp Stockhammer at Ludwig-Maximilians-Universität in Munich (LMU) has shown that even in the Bronze Age, long-distance trade in food was already connecting distant societies.

A market in the city of Megiddo in the Levant 3700 years ago: The market traders are hawking not only wheat, millet or dates, which grow throughout the region, but also carafes of sesame oil and bowls of a bright yellow spice that has recently appeared among their wares. This is how Philipp Stockhammer imagines the bustle of the Bronze Age market in the eastern Mediterranean.

Working with an international team to analyze food residues in tooth tartar, the LMU archaeologist has found evidence that people in the Levant were already eating turmeric, bananas and even soy in the Bronze and Early Iron Ages. "Exotic spices, fruits and oils from Asia had thus reached the Mediterranean several centuries, in some cases even millennia, earlier than had been previously thought," says Stockhammer. "This is the earliest direct evidence to date of turmeric, banana and soy outside of South and East Asia."

It is also direct evidence that as early as the second millennium BCE there was already a flourishing long-distance trade in exotic fruits, spices and oils, which is believed to have connected South Asia and the Levant via Mesopotamia or Egypt. While substantial trade across these regions is amply documented later on, tracing the roots of this nascent globalization has proved to be a stubborn problem. The findings of this study confirm that long-distance trade in culinary goods has connected these distant societies since at least the Bronze Age. People obviously had a great interest in exotic foods from very early on.

For their analyses, Stockhammer's international team examined 16 individuals from the Megiddo and Tel Erani excavations, which are located in present-day Israel. The region in the southern Levant served as an important bridge between the Mediterranean, Asia and Egypt in the 2nd millennium BCE. The aim of the research was to investigate the cuisines of Bronze Age Levantine populations by analyzing traces of food remnants, including ancient proteins and plant microfossils, that have remained preserved in human dental calculus over thousands of years.

The human mouth is full of bacteria, which continually petrify and form calculus. Tiny food particles become entrapped and preserved in the growing calculus, and it is these minute remnants that can now be accessed for scientific research thanks to cutting-edge methods. For the purposes of their analysis, the researchers took samples from a variety of individuals at the Bronze Age site of Megiddo and the Early Iron Age site of Tel Erani. They analyzed which food proteins and plant residues were preserved in the calculus on their teeth. "This enables us to find traces of what a person ate," says Stockhammer. "Anyone who does not practice good dental hygiene will still be telling us archaeologists what they have been eating thousands of years from now!"

Palaeoproteomics is the name of this growing new field of research. The method could develop into a standard procedure in archaeology, or so the researchers hope. "Our high-resolution study of ancient proteins and plant residues from human dental calculus is the first of its kind to study the cuisines of the ancient Near East," says Christina Warinner, a molecular archaeologist at Harvard University and the Max Planck Institute for the Science of Human History and co-senior author of the article. "Our research demonstrates the great potential of these methods to detect foods that otherwise leave few archaeological traces. Dental calculus is such a valuable source of information about the lives of ancient peoples."

"Our approach breaks new scientific ground," explains LMU biochemist and lead author Ashley Scott. That is because assigning individual protein remnants to specific foodstuffs is no small task. Beyond the painstaking work of identification, the protein itself must also survive for thousands of years. "Interestingly, we find that allergy-associated proteins appear to be the most stable in human calculus," says Scott, a finding she believes may be due to the known thermostability of many allergens. For instance, the researchers were able to detect wheat via wheat gluten proteins, says Stockhammer. The team was then able to independently confirm the presence of wheat using a type of plant microfossil known as phytoliths. Phytoliths were also used to identify millet and date palm in the Levant during the Bronze and Iron Ages, but phytoliths are not abundant or even present in many foods, which is why the new protein findings are so groundbreaking -- paleoproteomics enables the identification of foods that have left few other traces, such as sesame. Sesame proteins were identified in dental calculus from both Megiddo and Tel Erani. "This suggests that sesame had become a staple food in the Levant by the 2nd millennium BCE," says Stockhammer.

Two additional protein findings are particularly remarkable, explains Stockhammer. In one individual's dental calculus from Megiddo, turmeric and soy proteins were found, while in another individual from Tel Erani banana proteins were identified. All three foods are likely to have reached the Levant via South Asia. Bananas were originally domesticated in Southeast Asia, where they had been used since the 5th millennium BCE, and they arrived in West Africa 4000 years later, but little is known about their intervening trade or use. "Our analyses thus provide crucial information on the spread of the banana around the world. No archaeological or written evidence had previously suggested such an early spread into the Mediterranean region," says Stockhammer, although the sudden appearance of banana in West Africa just a few centuries later has hinted that such a trade might have existed. "I find it spectacular that food was exchanged over long distances at such an early point in history."

Stockhammer notes that they cannot rule out the possibility, of course, that one of the individuals spent part of their life in South Asia and consumed the corresponding food only while they were there. Even if the extent to which spices, oils and fruits were imported is not yet known, there is much to indicate that trade was indeed taking place, since there is also other evidence of exotic spices in the Eastern Mediterranean -- Pharaoh Ramses II was buried with peppercorns from India in 1213 BCE. They were found in his nose.

Read more at Science Daily

The upside of volatile space weather

 

Giant solar flare illustration.
Although violent and unpredictable, stellar flares emitted by a planet's host star do not necessarily prevent life from forming, according to a new Northwestern University study.

Emitted by stars, stellar flares are sudden flashes of magnetic energy. On Earth, the sun's flares sometimes damage satellites and disrupt radio communications. Elsewhere in the universe, robust stellar flares also have the ability to deplete and destroy atmospheric gases, such as ozone. Without the ozone, harmful levels of ultraviolet (UV) radiation can penetrate a planet's atmosphere, thereby diminishing its chances of harboring surface life.

By combining 3D atmospheric chemistry and climate modeling with observed flare data from distant stars, a Northwestern-led team discovered that stellar flares could play an important role in the long-term evolution of a planet's atmosphere and habitability.

"We compared the atmospheric chemistry of planets experiencing frequent flares with planets experiencing no flares. The long-term atmospheric chemistry is very different," said Northwestern's Howard Chen, the study's first author. "Continuous flares actually drive a planet's atmospheric composition into a new chemical equilibrium."

"We've found that stellar flares might not preclude the existence of life," added Daniel Horton, the study's senior author. "In some cases, flaring doesn't erode all of the atmospheric ozone. Surface life might still have a fighting chance."

The study will be published on Dec. 21 in the journal Nature Astronomy. It is a joint effort among researchers at Northwestern, University of Colorado at Boulder, University of Chicago, Massachusetts Institute of Technology and NASA Nexus for Exoplanet System Science (NExSS).

Horton is an assistant professor of Earth and planetary sciences in Northwestern's Weinberg College of Arts and Sciences. Chen is a Ph.D. candidate in Horton's Climate Change Research Group and a NASA future investigator.

Importance of flares

All stars -- including our very own sun -- flare, or randomly release stored energy. Fortunately for Earthlings, the sun's flares typically have a minimal impact on the planet.

"Our sun is more of a gentle giant," said Allison Youngblood, an astronomer at the University of Colorado and co-author of the study. "It's older and not as active as younger and smaller stars. Earth also has a strong magnetic field, which deflects the sun's damaging winds."

Unfortunately, most potentially habitable exoplanets aren't as lucky. For planets to potentially harbor life, they must be close enough to a star that their water won't freeze -- but not so close that water vaporizes.

"We studied planets orbiting within the habitable zones of M and K dwarf stars -- the most common stars in the universe," Horton said. "Habitable zones around these stars are narrower because the stars are smaller and less powerful than stars like our sun. On the flip side, M and K dwarf stars are thought to have more frequent flaring activity than our sun, and their tidally locked planets are unlikely to have magnetic fields helping deflect their stellar winds."

Chen and Horton previously conducted a study of M dwarf stellar systems' long-term climate averages. Flares, however, occur on hours- to days-long timescales. Although such brief timescales can be difficult to simulate, incorporating the effects of flares is important to forming a more complete picture of exoplanet atmospheres. The researchers accomplished this by incorporating flare data from NASA's Transiting Exoplanet Survey Satellite (TESS), launched in 2018, into their model simulations.
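A minimal sketch of how observed flare statistics can be turned into a time-dependent input for an atmosphere model: flare energies drawn from a power-law frequency distribution and deposited at random times. The rate, exponent and energy range are invented; the study itself uses measured TESS flare data rather than synthetic draws.

    # Toy synthetic flare forcing series from a power-law energy distribution.
    import numpy as np

    rng = np.random.default_rng(2)
    days = 365
    rate_per_day = 0.5                                   # hypothetical mean flare rate
    n_flares = rng.poisson(rate_per_day * days)

    # Sample energies from p(E) ~ E^-alpha between e_min and e_max (inverse-CDF method).
    alpha, e_min, e_max = 2.0, 1.0, 1000.0               # invented, arbitrary energy units
    u = rng.uniform(size=n_flares)
    energies = (e_min**(1 - alpha) + u * (e_max**(1 - alpha) - e_min**(1 - alpha)))**(1 / (1 - alpha))

    # Deposit each flare's energy in a random day bin to build the forcing series.
    times = rng.integers(0, days, n_flares)
    forcing = np.zeros(days)
    np.add.at(forcing, times, energies)
    print(n_flares, "flares; peak daily flare energy:", round(forcing.max(), 1))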

Using flares to detect life


If there is life on these M and K dwarf exoplanets, previous work hypothesizes that stellar flares might make it easier to detect. For example, stellar flares can increase the abundance of life-indicating gases (such as nitrogen dioxide, nitrous oxide and nitric acid) from imperceptible to detectable levels.

"Space weather events are typically viewed as a detriment to habitability," Chen said. "But our study quantitatively shows that some space weather can actually help us detect signatures of important gases that might signify biological processes."

This study involved researchers from a wide range of backgrounds and expertise, including climate scientists, exoplanet scientists, astronomers, theorists and observers.

Read more at Science Daily

Volcanic eruptions directly triggered ocean acidification during Early Cretaceous

 

Volcanic eruption
Around 120 million years ago, the earth experienced an extreme environmental disruption that choked oxygen from its oceans.

Known as oceanic anoxic event (OAE) 1a, the oxygen-deprived water led to a minor -- but significant -- mass extinction that affected the entire globe. During this age in the Early Cretaceous Period, an entire family of sea-dwelling nannoplankton virtually disappeared.

By measuring calcium and strontium isotope abundances in nannoplankton fossils, Northwestern earth scientists have concluded the eruption of the Ontong Java Plateau large igneous province (LIP) directly triggered OAE1a. Roughly the size of Alaska, the Ontong Java LIP erupted for seven million years, making it one of the largest known LIP events ever. During this time, it spewed tons of carbon dioxide (CO2) into the atmosphere, pushing Earth into a greenhouse period that acidified seawater and suffocated the oceans.

"We go back in time to study greenhouse periods because Earth is headed toward another greenhouse period now," said Jiuyuan Wang, a Northwestern Ph.D. student and first author of the study. "The only way to look into the future is to understand the past."

The study was published online last week (Dec. 16) in the journal Geology. It is the first study to apply stable strontium isotope measurements to the study of ancient ocean anoxic events.

Andrew Jacobson, Bradley Sageman and Matthew Hurtgen -- all professors of earth and planetary sciences at Northwestern's Weinberg College of Arts and Sciences -- coauthored the paper. Wang is co-advised by all three professors.

Clues inside cores

Nannoplankton shells and many other marine organisms build their shells out of calcium carbonate, which is the same mineral found in chalk, limestone and some antacid tablets. When atmospheric CO2 dissolves in seawater, it forms a weak acid that can inhibit calcium carbonate formation and may even dissolve preexisting carbonate.
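The chemistry behind that sentence can be written out explicitly with the standard carbonate equilibria (textbook reactions, not equations taken from the paper):

    \[
    \mathrm{CO_2 + H_2O \;\rightleftharpoons\; H_2CO_3 \;\rightleftharpoons\; H^+ + HCO_3^-}
    \]
    \[
    \mathrm{CaCO_3 + H^+ \;\rightleftharpoons\; Ca^{2+} + HCO_3^-}
    \]

Dissolving more CO2 pushes the first pair of reactions to the right and lowers the pH; the extra H+ then drives the second reaction toward dissolution, which is why elevated CO2 both hinders shell building and can eat away existing carbonate.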

To study the earth's climate during the Early Cretaceous, the Northwestern researchers examined a 1,600-meter-long sediment core taken from the mid-Pacific Mountains. The carbonates in the core formed in a shallow-water, tropical environment approximately 127 to 100 million years ago and are presently found in the deep ocean.

"When you consider the Earth's carbon cycle, carbonate is one of the biggest reservoirs for carbon," Sageman said. "When the ocean acidifies, it basically melts the carbonate. We can see this process impacting the biomineralization process of organisms that use carbonate to build their shells and skeletons right now, and it is a consequence of the observed increase in atmospheric CO2 due to human activities."

Strontium as corroborating evidence

Several previous studies have analyzed the calcium isotope composition of marine carbonate from the geologic past. The data can be interpreted in a variety of ways, however, and calcium carbonate can change throughout time, obscuring signals acquired during its formation. In this study, the Northwestern researchers also analyzed stable isotopes of strontium -- a trace element found in carbonate fossils -- to gain a fuller picture.

"Calcium isotope data can be interpreted in a variety of ways," Jacobson said. "Our study exploits observations that calcium and strontium isotopes behave similarly during calcium carbonate formation, but not during alteration that occurs upon burial. In this study, the calcium-strontium isotope 'multi-proxy' provides strong evidence that the signals are 'primary' and relate to the chemistry of seawater during OAE1a."

"Stable strontium isotopes are less likely to undergo physical or chemical alteration over time," Wang added. "Calcium isotopes, on the other hand, can be easily altered under certain conditions."

The team analyzed calcium and strontium isotopes using high-precision techniques in Jacobson's clean laboratory at Northwestern. The methods involve dissolving carbonate samples and separating the elements, followed by analysis with a thermal ionization mass spectrometer. Researchers have long suspected that LIP eruptions cause ocean acidification. "There is a direct link between ocean acidification and atmospheric CO2 levels," Jacobson said. "Our study provides key evidence linking eruption of the Ontong Java Plateau LIP to ocean acidification. This is something people expected should be the case based on clues from the fossil record, but geochemical data were lacking."

Modeling future warming

By understanding how oceans responded to extreme warming and increased atmospheric CO2, researchers can better understand how earth is responding to current, human-caused climate change. Humans are currently pushing the earth into a new climate, which is acidifying the oceans and likely causing another mass extinction.

"The difference between past greenhouse periods and current human-caused warming is in the timescale," Sageman said. "Past events have unfolded over tens of thousands to millions of years. We're making the same level of warming (or more) happen in less than 200 years."

Read more at Science Daily

Dec 21, 2020

Looking for dark matter near neutron stars with radio telescopes

In the 1970s, physicists uncovered a problem with the Standard Model of particle physics -- the theory that describes three of the four fundamental forces of nature (electromagnetic, weak, and strong interactions; the fourth is gravity). They found that, while the theory predicts that a certain symmetry between particles and their mirror-image, charge-reversed counterparts should be broken in the strong interaction, experiments show no sign of such a violation. This mismatch between theory and observation is dubbed "the strong CP problem" -- CP stands for Charge+Parity. What is the strong CP problem, and why has it puzzled scientists for almost half a century?

In the Standard Model, electromagnetism is symmetric under C (charge conjugation), which replaces particles with antiparticles; P (parity), which replaces all particles with their mirror-image counterparts; and T (time reversal), which replaces interactions going forwards in time with ones going backwards in time; as well as under the combinations CP, CT, PT, and CPT. This means that experiments sensitive to the electromagnetic interaction should not be able to distinguish the original systems from ones that have been transformed by any of these symmetry operations.

In the case of the electromagnetic interaction, the theory matches the observations very well. As anticipated, the problem lies in one of the two nuclear forces -- "the strong interaction." As it turns out, the theory allows violations of the combined symmetry operation CP (reflecting particles in a mirror and then changing particle for antiparticle) for both the weak and the strong interaction. However, CP violations have so far been observed only for the weak interaction.

More specifically, for the weak interactions, CP violation occurs at approximately the 1-in-1,000 level, and many scientists expected a similar level of violation for the strong interactions. Yet experimentalists have searched extensively for CP violation in the strong interaction, to no avail. If it does occur there, it is suppressed by more than a factor of one billion (10⁹).
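For reference, and as a standard textbook statement rather than something taken from this article, the CP-violating part of the strong interaction can be written as a single term in the QCD Lagrangian, and the non-observation of a neutron electric dipole moment bounds its coefficient:

    \[
    \mathcal{L}_{\theta} \;=\; \bar{\theta}\,\frac{g_s^2}{32\pi^2}\, G^{a}_{\mu\nu}\,\tilde{G}^{a\,\mu\nu},
    \qquad |\bar{\theta}| \lesssim 10^{-10},
    \]

whereas, absent some special mechanism, one would expect a coefficient of order one. That gap of roughly ten orders of magnitude is the suppression "by more than a factor of one billion" mentioned above.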

In 1977, theoretical physicists Roberto Peccei and Helen Quinn proposed a possible solution: they hypothesized a new symmetry that suppresses CP-violating terms in the strong interaction, thus making the theory match the observations. Shortly after, Steven Weinberg and Frank Wilczek -- both of whom went on to win the Nobel Prize in physics in 1979 and 2004, respectively -- realized that this mechanism creates an entirely new particle. Wilczek ultimately dubbed this new particle the "axion," after a popular dish detergent with the same name, for its ability to "clean up" the strong CP problem.

The axion should be an extremely light particle, be extraordinarily abundant in number, and have no charge. Due to these characteristics, axions are excellent dark matter candidates. Dark matter makes up about 85 percent of the mass content of the Universe, but its fundamental nature remains one of the biggest mysteries of modern science. Finding that dark matter is made of axions would be one of the greatest discoveries of modern science.

In 1983, theoretical physicist Pierre Sikivie found that axions have another remarkable property: In the presence of an electromagnetic field, they should sometimes spontaneously convert to easily detectable photons. What was once thought to be completely undetectable, turned out to be potentially detectable as long as there is high enough concentration of axions and strong magnetic fields.
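The conversion Sikivie identified follows from the axion-photon coupling, again quoted in its standard form rather than from the article:

    \[
    \mathcal{L}_{a\gamma\gamma} \;=\; -\tfrac{1}{4}\, g_{a\gamma\gamma}\, a\, F_{\mu\nu}\tilde{F}^{\mu\nu}
    \;=\; g_{a\gamma\gamma}\, a\, \mathbf{E}\cdot\mathbf{B},
    \]

so in a strong external magnetic field an axion can turn into a photon whose frequency is set by the axion mass, which is the narrow radio line such searches look for.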

Some of the Universe's strongest magnetic fields surround neutron stars. Since these objects are also very massive, they could also attract copious numbers of axion dark matter particles. So physicists have proposed searching for axion signals in the surrounding regions of neutron stars. Now, an international research team, including the Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU) postdoc Oscar Macias, has done exactly that with two radio telescopes -- the Robert C. Byrd Green Bank Telescope in the US, and the Effelsberg 100-m Radio Telescope in Germany.

Read more at Science Daily

Meteoric evidence for a previously unknown asteroid

 A Southwest Research Institute-led team of scientists has identified a potentially new meteorite parent asteroid by studying a small shard of a meteorite that arrived on Earth a dozen years ago. The composition of a piece of the meteorite Almahata Sitta (AhS) indicates that its parent body was an asteroid roughly the size of Ceres, the largest object in the main asteroid belt, and formed in the presence of water under intermediate temperatures and pressures.

"Carbonaceous chondrite (CC) meteorites record the geological activity during the earliest stages of the Solar System and provide insight into their parent bodies' histories," said SwRI Staff Scientist Dr. Vicky Hamilton, first author of a paper published in Nature Astronomy outlining this research. "Some of these meteorites are dominated by minerals providing evidence for exposure to water at low temperatures and pressures. The composition of other meteorites points to heating in the absence of water. Evidence for metamorphism in the presence of water at intermediate conditions has been virtually absent, until now."

Asteroids -- and the meteors and meteorites that sometimes come from them -- are leftovers from the formation of our Solar System 4.6 billion years ago. Most reside in the main asteroid belt between the orbits of Mars and Jupiter, but collisions and other events have broken them up and ejected remnants into the inner Solar System. In 2008, a 9-ton, 13-foot diameter asteroid entered Earth's atmosphere, exploding into some 600 meteorites over the Sudan. This marked the first time scientists predicted an asteroid impact prior to entry and allowed recovery of 23 pounds of samples.

"We were allocated a 50-milligram sample of AhS to study," Hamilton said. "We mounted and polished the tiny shard and used an infrared microscope to examine its composition. Spectral analysis identified a range of hydrated minerals, in particular amphibole, which points to intermediate temperatures and pressures and a prolonged period of aqueous alteration on a parent asteroid at least 400, and up to 1,100, miles in diameter."

Amphiboles are rare in CC meteorites, having only been identified previously as a trace component in the Allende meteorite. "AhS is a serendipitous source of information about early Solar System materials that are not represented by CC meteorites in our collections," Hamilton said.

Orbital spectroscopy of asteroids Ryugu and Bennu visited by Japan's Hayabusa2 and NASA's OSIRIS-REx spacecraft this year is consistent with aqueously altered CC meteorites and suggests that both asteroids differ from most known meteorites in terms of their hydration state and evidence for large-scale, low-temperature hydrothermal processes. These missions have collected samples from the surfaces of the asteroids for return to Earth.

Read more at Science Daily

Plants can be larks or night owls just like us

 Plants have the same variation in body clocks as that found in humans, according to new research that explores the genes governing circadian rhythms in plants.

The research shows a single letter change in their DNA code can potentially decide whether a plant is a lark or a night owl. The findings may help farmers and crop breeders to select plants with clocks that are best suited to their location, helping to boost yield and even the ability to withstand climate change.

The circadian clock is the molecular metronome which guides organisms through day and night -- cockadoodledooing the arrival of morning and drawing the curtains closed at night. In plants, it regulates a wide range of processes, from priming photosynthesis at dawn through to regulating flowering time.

These rhythmic patterns can vary depending on geography, latitude, climate and seasons -- with plant clocks having to adapt to cope best with the local conditions.

Researchers at the Earlham Institute and John Innes Centre in Norwich wanted to better understand how much circadian variation exists naturally, with the ultimate goal of breeding crops that are more resilient to local changes in the environment -- a pressing threat with climate change.

To investigate the genetic basis of these local differences, the team examined varying circadian rhythms in Swedish Arabidopsis plants to identify and validate genes linked to the changing tick of the clock.

Dr Hannah Rees, a postdoctoral researcher at the Earlham Institute and author of the paper, said: "A plant's overall health is heavily influenced by how closely its circadian clock is synchronised to the length of each day and the passing of seasons. An accurate body clock can give it an edge over competitors, predators and pathogens.

"We were interested to see how plant circadian clocks would be affected in Sweden; a country that experiences extreme variations in daylight hours and climate. Understanding the genetics behind body clock variation and adaptation could help us breed more climate-resilient crops in other regions."

The team studied the genes in 191 different varieties of Arabidopsis obtained from across the whole of Sweden. They were looking for tiny differences in genes between these plants which might explain the differences in circadian function.

Their analysis revealed that a single DNA base-pair change in a specific gene -- COR28 -- was more likely to be found in plants that flowered late and had a longer period length. COR28 is a known coordinator of flowering time, freezing tolerance and the circadian clock; all of which may influence local adaptation in Sweden.
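A minimal sketch of the kind of association test behind that finding: compare circadian period between plants carrying each variant at a single base-pair site. The genotypes, phenotypes and effect size below are simulated; the actual study is a genome-wide association analysis with corrections for population structure and multiple testing.

    # Toy single-SNP association test on simulated circadian period data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    n = 191                                    # number of accessions, as in the study
    genotype = rng.integers(0, 2, n)           # 0 = reference allele, 1 = alternate allele
    period = 24.0 + 0.8 * genotype + rng.normal(0, 1.0, n)   # hours; invented effect size

    t_stat, p_value = stats.ttest_ind(period[genotype == 1], period[genotype == 0])
    diff = period[genotype == 1].mean() - period[genotype == 0].mean()
    print(f"mean period difference: {diff:.2f} h, p = {p_value:.1e}")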

"It's amazing that just one base-pair change within the sequence of a single gene can influence how quickly the clock ticks," explained Dr Rees.

The scientists also used a pioneering delayed fluorescence imaging method to screen plants with differently-tuned circadian clocks. They showed there was over 10 hours difference between the clocks of the earliest risers and latest phased plants -- akin to the plants working opposite shift patterns. Both geography and the genetic ancestry of the plant appeared to have an influence.

"Arabidopsis thaliana is a model plant system," said Dr Rees. "It was the first plant to have its genome sequenced and it's been extensively studied in circadian biology, but this is the first time anyone has performed this type of association study to find the genes responsible for different clock types.

Read more at Science Daily

The mechanics of the immune system

 Highly complicated processes constantly take place in our body to keep pathogens in check: The T-cells of our immune system are busy searching for antigens -- suspicious molecules that fit exactly into certain receptors of the T-cells like a key into a lock. This activates the T-cell and the defense mechanisms of the immune system are set in motion.

How this process takes place at the molecular level is not yet well understood. What is now clear, however, is that not only chemistry plays a role in the docking of antigens to the T-cell; micromechanical effects are important too. Submicrometer structures on the cell surface act like microscopic tension springs. Tiny forces that occur as a result are likely to be of great importance for the recognition of antigens. At TU Wien, it has now been possible to observe these forces directly using highly developed microscopy methods.

This was made possible by a cooperation between TU Wien, Humboldt-Universität zu Berlin, ETH Zurich and MedUni Vienna. The results have now been published in the scientific journal Nano Letters.

Smelling and feeling

As far as physics is concerned, our human sensory organs work in completely different ways. We can smell, i.e. detect substances chemically, and we can touch, i.e. classify objects by the mechanical resistance they present to us. It is similar with T cells: they can recognize the specific structure of certain molecules, but they can also "feel" antigens in a mechanical way.

"T cells have so-called microvilli, which are tiny structures that look like little hairs," says Prof. Gerhard Schütz, head of the biophysics working group at the Institute of Applied Physics at TU Wien. As the experiments showed, remarkable effects can occur when these microvilli come into contact with an object: The microvilli can encompass the object, similar to a curved finger holding a pencil. They can then even enlarge, so that the finger-like protrusion eventually becomes an elongated cylinder, which is turned over the object.

"Tiny forces occur in the process, on the order of less than a nanonewton," says Gerhard Schütz. One nanonewton corresponds roughly to the weight force that a water droplet with a diameter of one-twentieth of a millimeter would exert.

Force measurement in the hydrogel

Measuring such tiny forces is a challenge. "We succeed by placing the cell together with tiny test beads in a specially developed gel. The beads carry molecules on their surface to which the T cell reacts," explains Gerhard Schütz. "If we know the resistance that our gel exerts on the beads and measure exactly how far the beads move in the immediate vicinity of the T-cell, we can calculate the force that acts between the T-cell and the beads."
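A minimal sketch of that force reconstruction, treating the gel's resistance on a bead as a single effective spring. The stiffness and displacement values are invented; in the real experiment the gel's mechanical response is characterised directly rather than reduced to one spring constant.

    # Toy force estimate from bead displacement in a gel of known effective stiffness.
    stiffness = 2.0e-3        # N/m, hypothetical effective stiffness felt by one bead
    displacement = 150e-9     # m, hypothetical bead displacement next to the T cell

    force = stiffness * displacement
    print(f"estimated force: {force * 1e9:.2f} nN")   # about 0.3 nN, i.e. sub-nanonewton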

Read more at Science Daily

Dec 20, 2020

What's up, Skip? Kangaroos really can 'talk' to us, study finds

 

Kangaroo
Animals that have never been domesticated, such as kangaroos, can intentionally communicate with humans, challenging the notion that this behaviour is usually restricted to domesticated animals like dogs, horses or goats, a first-of-its-kind study from the University of Roehampton and the University of Sydney has found.

The research, which involved kangaroos (marsupials that were never domesticated) at three locations across Australia, revealed that kangaroos gazed at a human when trying to access food that had been put in a closed box. The kangaroos used gazes to communicate with the human instead of attempting to open the box themselves, a behaviour usually expected of domesticated animals.

Ten out of 11 kangaroos tested actively looked at the person who had put the food in a box to get it (this type of experiment is known as "the unsolvable problem task"). Nine of the 11 kangaroos additionally showed gaze alternations between the box and the person present, a heightened form of communication where they look between the box and human.

The research builds on previous work in the field which has looked at the communication of domesticated animals, such as dogs and goats, and whether intentional communication in animals is a result of domestication. Lead author Dr Alan McElligott, University of Roehampton (now based at City University of Hong Kong), previously led a study which found goats can understand human cues, including pointing, to gather information about their environment. Like dogs and goats, kangaroos are social animals and Dr McElligott's new research suggests they may be able to adapt their usual social behaviours for interacting with humans.

Dr Alan McElligott said: "Through this study, we were able to see that communication between animals can be learnt and that the behaviour of gazing at humans to access food is not related to domestication. Indeed, kangaroos showed a very similar pattern of behaviour we have seen in dogs, horses and even goats when put to the same test.

"Our research shows that the potential for referential intentional communication towards humans by animals has been underestimated, which signals an exciting development in this area. Kangaroos are the first marsupials to be studied in this manner and the positive results should lead to more cognitive research beyond the usual domestic species."

Dr Alexandra Green, School of Life and Environmental Sciences at the University of Sydney, said: "Kangaroos are iconic Australian endemic fauna, adored by many worldwide but also considered as a pest. We hope that this research draws attention to the cognitive abilities of kangaroos and helps foster more positive attitudes towards them."

Read more at Science Daily

Scientists show what loneliness looks like in the brain

 

Person sitting alone on bench.
This holiday season will be a lonely one for many people as social distancing due to COVID-19 continues, and it is important to understand how isolation affects our health. A new study shows a sort of signature in the brains of lonely people that makes them distinct in fundamental ways, based on variations in the volume of different brain regions as well as in how those regions communicate with one another across brain networks.

A team of researchers examined the magnetic resonance imaging (MRI) data, genetics and psychological self-assessments of approximately 40,000 middle-aged and older adults who volunteered to have their information included in the UK Biobank: an open-access database available to health scientists around the world. They then compared the MRI data of participants who reported often feeling lonely with those who did not.

The researchers found several differences in the brains of lonely people. These brain manifestations were centred on what is called the default network: a set of brain regions involved in inner thoughts such as reminiscing, future planning, imagining and thinking about others. Researchers found the default networks of lonely people were more strongly wired together and surprisingly, their grey matter volume in regions of the default network was greater. Loneliness also correlated with differences in the fornix: a bundle of nerve fibres that carries signals from the hippocampus to the default network. In lonely people, the structure of this fibre tract was better preserved.

We use the default network when remembering the past, envisioning the future or thinking about a hypothetical present. The fact that the structure and function of this network are positively associated with loneliness may be because lonely people are more likely to use imagination, memories of the past or hopes for the future to overcome their social isolation.

"In the absence of desired social experiences, lonely individuals may be biased towards internally-directed thoughts such as reminiscing or imagining social experiences. We know these cognitive abilities are mediated by the default network brain regions," says Nathan Spreng from The Neuro (Montreal Neurological Institute-Hospital) of McGill University, and the study's lead author. "So this heightened focus on self-reflection, and possibly imagined social experiences, would naturally engage the memory-based functions of the default network."

Loneliness is increasingly being recognized as a major health problem, and previous studies have shown older people who experience loneliness have a higher risk of cognitive decline and dementia. Understanding how loneliness manifests itself in the brain could be key to preventing neurological disease and developing better treatments.

"We are just beginning to understand the impact of loneliness on the brain. Expanding our knowledge in this area will help us to better appreciate the urgency of reducing loneliness in today's society," says Danilo Bzdok, a researcher at The Neuro and the Quebec Artificial Intelligence Institute, and the study's senior author.

Read more at Science Daily