Nov 20, 2021

Archaeologists discover salt workers’ residences at underwater Maya site

The ancient Maya had stone temples and palaces in the rainforest of Central America, along with dynastic records of royal leaders carved in stone, but they lacked a basic commodity essential to daily life: salt. Salt sources lie mainly along the coast, including salt flats on the Yucatan coast and brine-boiling operations along the rainy coast of Belize. But how did the inland Maya maintain a supply of salt?

LSU Maya archaeologist Heather McKillop and her team have excavated salt kitchens where brine was boiled in clay pots over fires in pole and thatch buildings preserved in oxygen-free sediment below the sea floor in Belize. But where these salt workers lived has been elusive, leaving possible interpretations of daily or seasonal workers from the coast or even inland. This gap left nagging questions about the organization of production and distribution.

New findings on the organization of the salt industry that supplied this basic dietary commodity to inland cities during the Classic Maya civilization are reported in a recent article by McKillop and LSU alumna Cory Sills, an associate professor at the University of Texas at Tyler. The article "Briquetage and brine: Living and Working at the Ek Way Nal Salt Works, Belize" was published in the journal Ancient Mesoamerica.

McKillop and Sills began this new project, funded by the National Science Foundation, to search for the residences where the salt workers lived and to understand the energetics of salt production. Although field work at Ek Way Nal, where the Paynes Creek Salt Works is located, has been postponed since March 2020 due to the pandemic, the researchers turned to material previously exported for study in the LSU Archaeology lab, including hundreds of wood samples from pole and thatch buildings, as well as pottery sherds.

"The Archaeology lab looks like a Tupperware party, with hundreds of plastic containers of water, but they are keeping the wood samples wet so they don't dry out and deteriorate," said McKillop, who is the Thomas & Lillian Landrum Alumni Professor in the LSU Department of Geography and Anthropology.

She explained the strategy to continue research in the lab: "I decided to submit a wood post sample for radiocarbon dating from each building at Ek Way Nal to see if they all dated to the same time, which was suggested by the visibility of artifacts and buildings on the sea floor."

When the dates started coming in, two at a time, McKillop identified a building construction sequence that began in the Late Classic at the height of Maya civilization and continued through the Terminal Classic, when the dynastic leaders of inland city-states were losing control and the cities were eventually abandoned by A.D. 900.

According to McKillop, "Using the well-studied site, Sacapulas, Guatemala, as a model, worked well to develop archaeological expectations for different activities for brine boiling in a salt kitchen, a residence and other activities, including salting fish."

In the Ancient Mesoamerica article, they report a three-part building construction sequence with salt kitchens, at least one residence and an outdoor area where fish were salted and dried. The archaeologists' strategy of radiocarbon dating each building produced a finer-grained chronology for Ek Way Nal that they are now applying to more sites.

The new analysis verifies McKillop's estimate that 10 salt kitchens were in production at a time at the Paynes Creek Salt Works, which she reported in her book "Maya Salt Works" (2019, University Press of Florida).

Read more at Science Daily

Scientists key in on brain’s mechanism for singing, learning

New research reveals that specialized cells within the neural circuitry that triggers complex learning in songbirds bear a striking resemblance to a type of neural cell associated with the development of fine motor skills in the cortex of the human brain.

The study, by scientists at Oregon Health & Science University, was published today in the journal Nature Communications.

"These are the properties you need if you want to have a male song that's precise and distinct so the female can choose which bird she wants to mate with," said co-senior author Henrique von Gersdorff, Ph.D., senior scientist at the OHSU Vollum Institute. "You need a highly specialized brain to produce this."

Benjamin Zemel, Ph.D., a postdoctoral fellow at OHSU, is lead author and conducted most of the challenging electrophysiology work involved in using thin brain slices and single cell recording.

The study reveals that a particular group of neurons expresses a set of genes that modulate sodium ion channel proteins. These ion channels generate the electrical signals used for communication between cells in the nervous system. In this case, the assemblage enables neurons to fire repetitive spikes -- known as action potentials -- at extremely high speeds and frequencies as the bird sings.

The study describes "ultrafast spikes" that last only 0.2 milliseconds -- compared with most action potential spikes, which last a millisecond or more. A millisecond is itself mind-bendingly fast: a thousandth of a second.
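To get a rough sense of scale, spike duration sets a ceiling on how fast a neuron can fire: a cell whose spikes last 0.2 milliseconds could in principle fire up to five times faster than one with 1-millisecond spikes. A minimal back-of-the-envelope sketch (illustrative only, not a calculation from the paper):

```python
# Illustrative only: the upper bound on firing rate implied by spike
# duration alone, assuming one spike cannot begin before the last ends.
def max_firing_rate_hz(spike_duration_ms: float) -> float:
    """Ceiling on firing frequency for a given spike duration."""
    return 1000.0 / spike_duration_ms  # convert ms to spikes per second

print(max_firing_rate_hz(0.2))  # ultrafast spike: 5000.0 Hz
print(max_firing_rate_hz(1.0))  # typical spike:   1000.0 Hz
```

Real neurons face additional limits (refractory periods, channel recovery), so this is an upper bound, not a predicted firing rate.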

Further, the findings suggest new avenues for understanding the mechanisms underlying various aspects of human behavior and development that involve fine motor control.

Researchers say the assemblage of neurons and ion channels involved in the male zebra finch's singing closely resembles a similar assemblage of neurons known as Betz cells in the primary motor cortex of the human brain.

Among the largest known brain cells in humans, Betz cells have long and thick axons that can propagate spikes at very high velocities and frequencies. As such, they are thought to be important for fine motor skills involving hands, feet, fingers and wrists.

"Think of a piano player," said co-senior author Claudio Mello, M.D., Ph.D., professor of behavioral neuroscience in the OHSU School of Medicine. "They're thinking so fast, they have to rely on memories and actions that are learned and stored. Playing the guitar is the same thing."

The study published today is a result of an informal conversation that initially occurred over lunch in the Mackenzie Hall Café on OHSU's Marquam Hill campus.

Mello, a behavioral neuroscientist who has relied on the zebra finch as an animal model, has known von Gersdorff socially for 20 years. Over lunch one day in the cafeteria, Mello popped open his laptop and showed a brain image of a young male zebra finch at an age just before he could sing, followed by a second image revealing a telltale subunit of proteins that had materialized after the bird was old enough to begin singing.

"Something remarkable was happening in a period of just a few days," said von Gersdorff, an expert in electrophysiology and the biophysics of neurons. "I said, this is exactly the protein we've been studying in the rodent auditory system. It promotes high frequency spiking."

Mello said the new study deepens scientific understanding of the mechanism involved in learning fine motor skills.

"This is a very important model, and we think this new study has broad potential," he said.

The fact that these same motor circuit properties are shared by species that diverged more than 300 million years ago speaks to the strength of the discovery, von Gersdorff and Mello said. Researchers say the neuronal properties they discovered in the male zebra finch may have become optimized for speed and precision through convergent evolution.

Read more at Science Daily

Nov 19, 2021

Alien organisms – hitchhikers of the galaxy?

Scientists warn that without good biosecurity measures, 'alien organisms' on Earth may become a reality stranger than fiction.

In a paper published in the international journal BioScience, a team of scientists, including Dr Phill Cassey, Head of the Department of Ecology and Evolutionary Biology at the University of Adelaide, is calling for greater recognition of the biosecurity risks posed by the growing space industry.

"In addition to government-led space missions, the arrival of private companies such as SpaceX has meant there are now more players in space exploration than ever before," said Associate Professor Cassey.

"We need to take action now to mitigate those risks."

Space biosecurity concerns itself with both the transfer of organisms from Earth to space (forward contamination) and vice-versa (backward contamination). While the research points out that at present the risk of alien organisms surviving the journey is low, it's not impossible.

Dr Cassey said: "Risks that have low probability of occurrence, but have the potential for extreme consequences, are at the heart of biosecurity management. Because when things go wrong, they go really wrong."

The research provides clear evidence of how humans have spread organisms to the remotest regions of the earth and sea, and even into space.

To address the risks of invasive species from space travel, the authors suggest the emerging field of 'invasion science', which deals with the causes and consequences of introducing organisms into new environments, could offer valuable lessons. These include the fact that insular systems such as islands, lakes and remote habitats are most vulnerable to invasion threats.

Further insights that could be applied include protocols for early detection, hazard assessment, rapid response and containment procedures currently used in response to invasive species threats.

Dr Cassey said: "It is far cheaper to prevent biological contamination by implementing protocols on Earth than it is on Mars, for example."

Both Dr Cassey and co-author Dr Andrew Woolnough, from the University of Melbourne and the University of Adelaide, suggest that, with some of the best biosecurity in the world, Australia is well positioned to contribute expertise in this area.

"We have a fantastic opportunity to contribute to international policy and to develop biosecurity mitigation measures that can be used by the expanding private space industry. This is an untapped economic development opportunity," Dr Woolnough said.

Read more at Science Daily

Warmer soil stores less carbon

Global warming will cause the world's soil to release carbon, new research shows.

Scientists used data on more than 9,000 soil samples from around the world, and found that carbon storage "declines strongly" as average temperatures increase.

This is an example of a "positive feedback," where global warming causes more carbon to be released into the atmosphere, further accelerating climate change.

Importantly, the amount of carbon that could be released depends on the soil type, with coarse-textured (low-clay) soils losing three times as much carbon as fine-textured (clay-rich) soils.

The researchers, from the University of Exeter and Stockholm University, say their findings help to identify vulnerable carbon stocks and provide an opportunity to improve Earth System Models (ESMs) that simulate future climate change.

"Because there is more carbon stored in soils than there is in the atmosphere and all the trees on the planet combined, releasing even a small percentage could have a significant impact on our climate," said Professor Iain Hartley of Exeter's College of Life and Environmental Sciences.

"Our analysis identified the carbon stores in coarse-textured soils at high-latitudes (far from the Equator) as likely to be the most vulnerable to climate change.

"Such stores, therefore, may require particular attention given the high rates of warming taking place in cooler regions.

"In contrast, we found carbon stores in fine-textured soils in tropical areas to be less vulnerable to climate warming."

The data on the 9,300 soil profiles came from the World Soil Information database, with the study focusing on the top 50cm of soil.

By comparing carbon storage in places with different average temperatures, the researchers estimated the likely impact of global warming.

For every 10°C of increase in temperature, average carbon storage (across all soils) fell by more than 25%.
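Taken at face value, that scaling can be sketched as a simple projection. This is a hypothetical illustration rather than the study's model; in particular, the assumption that the decline compounds across successive increments of warming is made here for illustration only.

```python
# Hypothetical sketch of the reported pattern: average soil carbon
# storage falls by roughly 25% for each 10 C of warming.  Compounding
# across warming increments is an assumption for illustration.
def projected_storage(initial_stock: float, warming_c: float,
                      loss_per_10c: float = 0.25) -> float:
    """Carbon stock remaining after `warming_c` degrees of warming."""
    return initial_stock * (1.0 - loss_per_10c) ** (warming_c / 10.0)

print(projected_storage(100.0, 10.0))  # 75.0 -- a quarter lost per 10 C
```

As the article notes, the study compared sites at different temperatures rather than projecting timescales, so a sketch like this says nothing about how quickly any loss would occur.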

"Even bleak forecasts do not anticipate this level of warming, but we used this scale to give us confidence that the effects we observed were caused by temperature rather than other variables," Professor Hartley said.

"Our results make it clear that, as temperatures rise, more and more carbon is released from soil.

"It's important to note that our study did not examine the timescales involved, and further research is needed to investigate how much carbon could be released this century."

The researchers found that their results could not be represented by an established ESM.

"This suggests that there is an opportunity to use the patterns we have observed to improve how models represent soils, and further reduce uncertainty in their projections," Professor Hartley said.

Read more at Science Daily

'Volcanic winter' likely contributed to ecological catastrophe 250 million years ago

A team of scientists has identified an additional force that likely contributed to a mass extinction event 250 million years ago. Its analysis of minerals in southern China indicates that volcanic eruptions produced a "volcanic winter" that drastically lowered Earth's temperatures -- a change that added to the environmental effects resulting from other phenomena at the time.

The research, which appears in the journal Science Advances, examined the end-Permian mass extinction (EPME), which was the most severe extinction event in the past 500 million years, wiping out 80 to 90 percent of species on land and in the sea.

"As we look closer at the geologic record at the time of the great extinction, we are finding that the end-Permian global environmental disaster may have had multiple causes among marine and non-marine species," says Michael Rampino, a professor in New York University's Department of Biology and one of the authors of the paper.

For decades, scientists have investigated what could have caused this global ecological catastrophe, with many pointing to the spread of vast floods of lava across what is known as the Siberian Traps -- a large region of volcanic rock in the Russian province of Siberia. These eruptions caused environmental stresses, including severe global warming from volcanic releases of carbon dioxide and related reduction in oxygenation of ocean waters -- the latter causing the suffocation of marine life.

The team for the Science Advances work, composed of more than two dozen researchers, including scientists from China's Nanjing University and Guangzhou Institute of Geochemistry as well as the Smithsonian Institution's National Museum of Natural History and Montclair State University, considered other factors that may have contributed to the end of the Permian Period, which stretched from 300 million to 250 million years ago.

Specifically, they found mineral and related deposits on land in the south China region -- notably copper and mercury -- whose age coincided with the end-Permian mass extinction in non-marine localities. These deposits were marked by anomalies in their composition, likely due to sulfur-rich emissions from nearby volcanic eruptions, and they were covered by layers of volcanic ash.

"Sulfuric acid atmospheric aerosols produced by the eruptions may have been the cause of rapid global cooling of several degrees, prior to the severe warming seen across the end-Permian mass-extinction interval," explains Rampino.

Read more at Science Daily

New findings on the link between CRISPR gene-editing and mutated cancer cells

A protein that protects cells from DNA damage, p53, is activated during gene editing using the CRISPR technique. Consequently, cells with mutated p53 have a survival advantage, which can cause cancer. Researchers at Karolinska Institutet in Sweden have found new links between CRISPR, p53 and other cancer genes that could prevent the accumulation of mutated cells without compromising the gene scissors' effectiveness. The study, published in Cancer Research, can contribute to tomorrow's precision medicine.

Much hope is being pinned on the potential of gene editing using the CRISPR (gene scissors) method as a crucial part in the precision medicine of the future. However, before the method can become hospital routine, several hurdles need to be overcome.

One of these challenges is associated with how cells behave when subjected to DNA damage, which CRISPR gene editing causes in a controlled fashion. Damage to cells activates the protein p53, which acts as the cell's "first aid" response to DNA damage.

It is already known that the technique is less effective when p53 is active; at the same time, however, a lack of p53 can allow cells to start growing uncontrollably and become cancerous. In over half of all cancers the gene for p53 is mutated and thus unable to protect against uncontrolled cell division. It is therefore important to avoid the enrichment (accumulation) of such mutated cells.

Researchers at Karolinska Institutet have now shown that cells with inactivating mutations of the p53 gene gain a survival advantage when subjected to CRISPR and can thus accumulate in a mixed cell population.
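The enrichment dynamic can be sketched with a toy model: if p53-mutant cells survive each round of editing-induced DNA damage at a higher rate than wild-type cells, even a rare mutant clone comes to dominate the population. The survival rates below are invented for illustration and are not numbers from the study.

```python
# Toy model (illustrative numbers, not from the study): enrichment of
# p53-mutant cells in a mixed population under repeated CRISPR editing,
# assuming mutants tolerate the induced DNA damage better.
def mutant_fraction(rounds: int, start_fraction: float = 0.01,
                    wt_survival: float = 0.70,
                    mut_survival: float = 0.95) -> float:
    """Fraction of p53-mutant cells after `rounds` of selection."""
    wt = 1.0 - start_fraction   # wild-type cells
    mut = start_fraction        # p53-mutant cells
    for _ in range(rounds):
        wt *= wt_survival
        mut *= mut_survival
    return mut / (wt + mut)

print(round(mutant_fraction(10), 2))  # a 1% mutant clone grows to ~18%
```

The point of the sketch is only that a modest per-round survival advantage compounds quickly, which is why the authors focus on preventing enrichment rather than on the size of the initial mutant pool.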

The researchers have also identified a network of linked genes with mutations that have a similar effect to p53 mutations, and shown that the transient inhibition of p53 is a possible pharmaceutical strategy for preventing the enrichment of cells with such mutations.

"It can seem contradictory to inhibit p53 in a CRISPR context," says the study's first author Long Jiang, doctoral student at the Department of Medicine, Karolinska Institutet (Solna). "However, some of the literature supports the idea that p53 inhibition can make CRISPR more effective. In our study we show that this can also counteract the enrichment of cells with mutations in p53 and a group of associated genes."

The research could contribute to the future clinical implementation of CRISPR by identifying a network of candidate genes that should be carefully checked for mutations when cells are subjected to the CRISPR technique. Another possible conclusion is that the transient inhibition of p53 could prove a strategy for reducing the enrichment of mutated cells.

The researchers have also studied the DNA-damage response as a possible marker in the development of more precise guide RNA sequences, which are used to show CRISPR where a DNA sequence is to be altered.

"We believe that the up-regulation of genes involved in the DNA damage response can be a sensitive marker for how much unspecific ('off-target') activity a guide RNA has, and can thus help in the selection of 'safer' guide RNAs," says the study's last author Fredrik Wermeling, researcher at the Department of Medicine, Karolinska Institutet (Solna).

The study is largely based on CRISPR and CRISPR screening experiments on isolated cells, together with analyses of the DepMap database.

The next step of the research is to understand how relevant the described mechanisms are.

"In cell cultures, we see a rapid and pronounced enrichment of cells with p53 mutations when we subject the cells to CRISPR, provided, however, that cells with mutations are there from the start," says Dr. Wermeling. "So we can show that the mechanism exists and factors that affect it, but don't currently know at what level this is a genuine problem, and that's something we want to explore further in more clinic-centered tests."

Read more at Science Daily

Nov 18, 2021

Life on Mars search could be misled by false fossils, study says

Mars explorers searching for signs of ancient life could be fooled by fossil-like specimens created by chemical processes, research suggests.

Rocks on Mars may contain numerous types of non-biological deposits that look similar to the kinds of fossils likely to be found if the planet ever supported life, a study says.

Telling these false fossils apart from what could be evidence of ancient life on the surface of Mars -- which was temporarily habitable four billion years ago -- is key to the success of current and future missions, researchers say.

Astrobiologists from the Universities of Edinburgh and Oxford reviewed evidence of all known processes that could have created lifelike deposits in rocks on Mars.

They identified dozens of processes -- with many more likely still undiscovered -- that can produce structures that mimic those of microscopic, simple lifeforms that may once have existed on Mars.

Among the lifelike specimens these processes can create are deposits that look like bacterial cells and carbon-based molecules that closely resemble the building blocks of all known life.

Because signs of life can be so closely mimicked by non-living processes, the origins of any fossil-like specimens found on Mars are likely to be very ambiguous, the team says.

They call for greater interdisciplinary research to shed more light on how lifelike deposits could form on Mars, and thereby aid the search for evidence of ancient life there and elsewhere in the solar system.

The research is published in the Journal of the Geological Society.

Dr Sean McMahon, Chancellor's Fellow in Astrobiology at the University of Edinburgh's School of Physics and Astronomy, said: "At some stage a Mars rover will almost certainly find something that looks a lot like a fossil, so being able to confidently distinguish these from structures and substances made by chemical reactions is vital. For every type of fossil out there, there is at least one non-biological process that creates very similar things, so there is a real need to improve our understanding of how these form."

Read more at Science Daily

Bacteria as climate heroes

Acetogens are a group of bacteria that can metabolise formate, from which they form, for example, acetic acid -- an important basic chemical. If these bacteria were manipulated to produce ethanol or lactic acid, a comprehensive circular economy for the greenhouse gas CO2 could be realised. To ensure that the process is sustainable, the CO2 is extracted directly from the air and converted to formate using renewable energy.

Circular economy for CO2

"The economy of the future must be carbon neutral," demands Stefan Pflügl. However, since carbon is an important component of many products -- such as fuel or plastics -- the existing CO2 should be recycled and returned to the cycle. One climate-neutral way to do this is to capture CO2 directly from the air and convert it into formate with the help of renewable energy. This compound of carbon, oxygen and hydrogen can ultimately be a basic building block of the bioeconomy. The advantages of formate are that it is easy to transport and can be used flexibly for the production of chemicals and fuels. These substances can be produced with the help of acetogenic bacteria that feed on carbon compounds and produce acetic acid from them.

Formate recycling by A. woodii

In order to use acetogens for the production of raw materials, one needs to understand their metabolism and physiology. Although A. woodii is a model organism, meaning that the bacterium has already been extensively studied, the research team wanted to make a comparative observation. Thus, Stefan Pflügl and his team investigated how substrates such as formate, hydrogen, carbon monoxide, carbon dioxide or fructose affect the metabolism of A. woodii.

"The biggest difference, caused by the different substrates, is the amount of energy that A. woodii gains," observes Stefan Pflügl. He explains this as follows: "Acetogens are true survival artists that can also metabolise substrates such as CO, CO2 or formate. This is due to the fact that acetogens use what is probably the oldest metabolic pathway for CO2 fixation. Thus, they also manage to produce enough energy to survive under extreme conditions and from alternative food sources."

This means that acetogens are not only able to utilise CO2 but also do so very efficiently. Consequently, only little energy needs to be expended to convert CO2 into formate, which is then converted into the basic chemical acetic acid.

Replacement of oil-based products

To exploit the full potential of A. woodii, the researchers also investigated how the bacterium can be genetically modified to produce ethanol or lactic acid instead of acetic acid. While ethanol forms the basis for fuel, lactic acid can be used to produce biodegradable plastics. Oil-based substances could consequently be replaced by more sustainable alternatives. "Not only would this be in keeping with the bioeconomy, but CO2 and carbon monoxide, which are produced during the combustion of fuel or plastic, could also be recycled into the original product," Stefan Pflügl envisages.

Read more at Science Daily

Genetic changes in Bronze Age southern Iberia

The third millennium BCE is a highly dynamic period in the prehistory of Europe and western Asia, characterized by large-scale social and political changes. In the Iberian Peninsula, the Copper Age was in full swing around 2,500 BCE, with substantial demographic growth attested by a large diversity of settlements and fortifications, monumental funerary structures, as well as ditched mega-sites larger than 100 hectares. For reasons that are still unclear, the latter half of the millennium saw depopulation and the abandonment of the mega-sites, fortified settlements and necropolises.

In southeastern Iberia, one of the most outstanding archaeological entities of the European Bronze Age emerged around 2,200 BCE. This so-called 'El Argar' culture, one of the first state-level societies on the European continent, was characterised by large, central hilltop settlements, distinct pottery, specialized weapons and bronze, silver and gold artefacts, alongside an intra-murial burial rite. A new study explores the relation between dynamic shifts at population scale and the major social and political changes of the third and second millennia BCE by analysing the genomes of 136 ancient Iberians, ranging from 3,000 to 1,500 BCE.

Genetic turnover and melting pot

Including published genomes from Iberia, the new study encompasses data from nearly 300 ancient individuals and focuses specifically on the Copper to Bronze Age transition around 2,200 BCE.

"While we knew that the so-called 'steppe'-related ancestry, which had spread across Europe during the third millennium BCE, eventually reached the northern Iberian Peninsula around 2,400 BCE, we were surprised to see that all prehistoric individuals from the El Argar period carried a portion of this ancestry, while the Chalcolithic individuals did not," says Max Planck researcher Wolfgang Haak, senior author and principal investigator of the study.

The genomic data reveals some of the processes underlying this genetic shift. While the bulk of the genome shows that Bronze Age individuals are a mix of local Iberian Chalcolithic ancestry and a smaller part of incoming ancestry from the European mainland, the paternally inherited Y chromosome lineages show a complete turnover, linked to the movement of steppe-related ancestry that is also visible in other parts of Europe.

The rich new data from the El Argar sites also show that these two components do not fully account for the genetic make-up of the early Bronze Age societies.

"We also found signals of ancestry that we traced to the central and eastern Mediterranean and western Asia. We cannot say exactly whether these influences arrived at the same time as the steppe-related ancestry, but it shows that it formed an integrative part of the rising El Argar societies, attesting to continued contacts to these regions," adds Vanessa Villalba-Mouco, postdoctoral researcher at the Max Planck Institute and Instituto de Biología Evolutiva.

Social implications

"Whether the genetic shift was brought about by migrating groups from North and Central Iberia or by climatic deteriorations that affected the eastern Mediterranean around 2,200 BCE is the million-dollar question," says co-PI and senior author Prof Roberto Risch from the Universitat Autònoma de Barcelona. "It would be foolish to think that it can all be explained by a simple, one-factor model. While the temporal coincidence is striking, it is likely that many factors played a role."

One of these factors could be pandemics, such as an early form of the Plague, which has been attested to in other regions of Europe around the time. While not found directly among the tested individuals in southern Iberia, it could be a cause or driver for the movement or disappearance of other groups in the region.

"In any case, we can now conclude that the population movement starting in the eastern European steppe zones around 3,000 BCE was not a single migratory event, but required over four centuries to reach the Iberian Peninsula and another 200 years to appear in present-day Murcia and Alicante," adds Risch.

The archaeological record of the El Argar group shows a clear break with previous Chalcolithic traditions. Burial rites, for example, changed from communal to single and double burials within the building complexes. Elite burials also indicate the formation of strong social hierarchies. Testing for biological relatedness, the researchers found that males are on average more closely related to other people at the site, indicating that the group was likely patrilineally structured. Such a social organization could explain the stark reduction of the Y-lineage diversity.

Read more at Science Daily

Paleontologists debunk fossil thought to be missing link between lizards and first snakes

Filling in the links of the evolutionary chain with a fossil record of a "snake with four legs" connecting lizards and early snakes would be a dream come true for paleontologists. But a specimen formerly thought to fit the bill is not the missing piece of the puzzle, according to a new Journal of Systematic Palaeontology study led by University of Alberta paleontologist Michael Caldwell.

"It has long been understood that snakes are members of a lineage of four-legged vertebrates that, as a result of evolutionary specializations, lost their limbs," said Caldwell, lead author of the study and professor in the departments of biological sciences and earth and atmospheric sciences.

"Somewhere in the fossil record of ancient snakes is an ancestral form that still had four legs. It has thus long been predicted that a snake with four legs would be found as a fossil."

Missing link discovered?

In a paper published in the journal Science in 2015, a team of researchers reported the discovery of what was believed to be an example of the first known four-legged snake fossil, an animal they named Tetrapodophis amplectus.

"If correctly interpreted based on the preserved anatomy, this would be a very important discovery," said Caldwell.

Caldwell explained that the new study of Tetrapodophis revealed a number of mischaracterizations of the anatomy and morphology of the specimen -- traits that initially seemed to be shared most closely with snakes, suggesting this might be the long-sought-after snake with four legs.

"There are many evolutionary questions that could be answered by finding a four-legged snake fossil, but only if it is the real deal. The major conclusion of our team is that Tetrapodophis amplectus is not in fact a snake and was misclassified," said Caldwell. "Rather, all aspects of its anatomy are consistent with the anatomy observed in a group of extinct marine lizards from the Cretaceous period known as dolichosaurs."

The clues to this conclusion, Caldwell noted, were hiding in the rock the fossil was extracted from.

"When the rock containing the specimen was split and it was discovered, the skeleton and skull ended up on opposite sides of the slab, with a natural mould preserving the shape of each on the opposite side," said Caldwell. "The original study only described the skull and overlooked the natural mould, which preserved several features that make it clear that Tetrapodophis did not have the skull of a snake -- not even of a primitive one."

A controversial specimen

Although Tetrapodophis may not be the snake with four legs that paleontologists prize, it still has much to teach us, said study coauthor Tiago Simões, a former U of A PhD student, Harvard postdoctoral fellow and Brazilian paleontologist, who pointed out some of the features that make it unique.

"One of the greatest challenges of studying Tetrapodophis is that it is one of the smallest fossil squamates ever found," said Simões. "It is comparable to the smallest squamates alive today that also have reduced limbs."

An additional challenge to studying Tetrapodophis is access to the specimen itself.

"There were no appropriate permits for the specimen's original removal from Brazil and, since its original publication, it has been housed in a private collection with limited access to researchers. The situation was met with a large backlash from the scientific community," said Simões.

Read more at Science Daily

Brief 5:2 diet advice is as effective as traditional GP advice, but people like it better, according to new study

A clinical trial has found that people prefer receiving information on the 5:2 diet to standard GP weight management advice, despite both interventions achieving similarly modest weight loss results.

The trial, funded by the Medical Research Council (MRC) and led by Queen Mary University of London, is the first randomised evaluation of the 5:2 diet, a popular type of intermittent fasting regime. Researchers studied the long-term effects of providing 5:2 diet instructions compared to traditional weight loss advice in 300 UK adults with obesity over a one-year period.

The findings show that long-term weight loss was similar for those who received 5:2 diet or standard weight management advice with 18 per cent and 15 per cent of participants respectively losing at least five per cent of their body weight at one year. However, when asked to rate each intervention, participants in the 5:2 diet group were more likely to recommend the intervention to others or be willing to continue with their diet.
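A rough back-of-the-envelope check illustrates why 18 per cent versus 15 per cent counts as "similar" at this trial size. The sketch below runs a two-proportion z-test, assuming an even 150/150 split between arms, which the article does not state:

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical arm sizes (the article reports 300 adults in total; an even
# 150/150 split is an assumption made here purely for illustration).
n1 = n2 = 150
p1, p2 = 0.18, 0.15  # fraction losing >= 5% of body weight at one year

# Pooled two-proportion z-test.
pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"z = {z:.2f}, two-sided p = {p_value:.2f}")  # far from significance
```

At these sample sizes the difference is well within the range expected from chance alone, consistent with the trial's conclusion of comparable weight loss.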

Previous evidence suggests that peer support could be important for encouraging dieters to adhere to and realise the effects of the 5:2 diet. To test this, the researchers studied the impact of a weekly support group in addition to the simple 5:2 diet advice. They found that while face-to-face support initially produced better early results and improved adherence to the 5:2 diet, these effects weakened over time.

Together, the findings suggest that providing brief advice on the 5:2 diet could extend the options clinicians can offer to patients.

Dr Katie Myers Smith, Chartered Health Psychologist and Senior Research Fellow at Queen Mary, said: "Here we've been able to provide the first results on the effectiveness of simple 5:2 diet advice in a real-life setting. We found that although the 5:2 diet wasn't superior to traditional approaches in terms of weight loss, users preferred this approach as it was simpler and more attractive. Based on these findings, GPs may consider recommending the 5:2 diet as part of their standard weight management advice."

The 5:2 diet is a popular intermittent fasting weight loss intervention whereby dieters restrict their caloric intake on two non-consecutive days a week and eat sensibly on the remaining days. It first became popular in the UK through a BBC Horizon documentary and a follow-up bestselling book.

Read more at Science Daily

Nov 17, 2021

Where does gold come from? New insights into element synthesis in the universe

How are chemical elements produced in our Universe? Where do heavy elements like gold and uranium come from? Using computer simulations, a research team from the GSI Helmholtzzentrum für Schwerionenforschung in Darmstadt, together with colleagues from Belgium and Japan, shows that the synthesis of heavy elements is typical for certain black holes with orbiting matter accumulations, so-called accretion disks. The predicted abundance of the formed elements provides insight into which heavy elements need to be studied in future laboratories -- such as the Facility for Antiproton and Ion Research (FAIR), which is currently under construction -- to unravel the origin of heavy elements. The results are published in the journal Monthly Notices of the Royal Astronomical Society.

All heavy elements on Earth today were formed under extreme conditions in astrophysical environments: inside stars, in stellar explosions, and during the collision of neutron stars. Researchers are intrigued by the question of which of these astrophysical events provides the appropriate conditions for the formation of the heaviest elements, such as gold or uranium. The spectacular first observation of gravitational waves and electromagnetic radiation originating from a neutron star merger in 2017 suggested that many heavy elements can be produced and released in these cosmic collisions. However, the question remains open as to when and why the material is ejected and whether there may be other scenarios in which heavy elements can be produced.

Promising candidates for heavy element production are black holes orbited by an accretion disk of dense and hot matter. Such a system is formed both after the merger of two massive neutron stars and during a so-called collapsar, the collapse and subsequent explosion of a rotating star. The internal composition of such accretion disks has so far not been well understood, particularly with respect to the conditions under which an excess of neutrons forms. A high number of neutrons is a basic requirement for the synthesis of heavy elements, as it enables the rapid neutron-capture process or r-process. Nearly massless neutrinos play a key role in this process, as they enable conversion between protons and neutrons.

"In our study, we systematically investigated for the first time the conversion rates of neutrons and protons for a large number of disk configurations by means of elaborate computer simulations, and we found that the disks are very rich in neutrons as long as certain conditions are met," explains Dr. Oliver Just from the Relativistic Astrophysics group of GSI's research division Theory. "The decisive factor is the total mass of the disk. The more massive the disk, the more often neutrons are formed from protons through capture of electrons under emission of neutrinos, and are available for the synthesis of heavy elements by means of the r-process. However, if the mass of the disk is too high, the inverse reaction plays an increased role so that more neutrinos are recaptured by neutrons before they leave the disk. These neutrons are then converted back to protons, which hinders the r-process." As the study shows, the optimal disk mass for prolific production of heavy elements is about 0.01 to 0.1 solar masses. The result provides strong evidence that neutron star mergers producing accretion disks with these exact masses could be the point of origin for a large fraction of the heavy elements. However, whether and how frequently such accretion disks occur in collapsar systems is currently unclear.

Read more at Science Daily

Climate changed abruptly at tipping points in past

Abrupt changes in ice core samples and other records indicate dramatic changes in climate occurred at certain points in the past.

In Chaos, by AIP Publishing, climate scientists identify abrupt transitions in climate records that may have been caused by the climate system crossing a tipping point. This happens when self-reinforcing feedbacks in a system push it away from a stable state, leading to dramatic change.

Identifying these events in the Earth's past is critical to understanding the tipping points likely to be encountered this century as a warming climate destabilizes the Earth's physical systems and ecosystems.

The researchers from CNRS (France), UCLA, and Columbia University devised a statistical method to determine whether transitions seen in climate records such as ice cores are simply noise or evidence of a more significant change. This has typically been done by visual inspection, a process that is time-consuming and subjective.

Their method is less error-prone, since it doesn't rely on human determination of whether a jump is a significant transition. It allows comparing different records consistently and can identify important events that may have been overlooked in older studies.

An augmented Kolmogorov-Smirnov (KS) test, a statistical technique named after its original authors, provided an alternative approach to recurrence analysis. The KS test has been successfully applied to other inherently noisy systems, such as finance and signal processing.

The method compares two samples taken before and after the potential transition point to test whether they come from the same continuous distribution. If they don't, the transition point is identified as a significant abrupt change indicative of a true climatic shift.
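The before-versus-after comparison can be sketched on synthetic data. The snippet below hand-rolls the two-sample KS statistic (the maximum gap between the two empirical distribution functions) and slides it along a noisy record; the window length and detection threshold are illustrative choices, not the study's:

```python
import random

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap
    between the two samples' empirical cumulative distributions."""
    a, b = sorted(a), sorted(b)
    points = sorted(set(a + b))
    d, ia, ib = 0.0, 0, 0
    for x in points:
        while ia < len(a) and a[ia] <= x:
            ia += 1
        while ib < len(b) and b[ib] <= x:
            ib += 1
        d = max(d, abs(ia / len(a) - ib / len(b)))
    return d

random.seed(0)
# Synthetic "climate record": a stable state, then an abrupt jump.
signal = [random.gauss(0.0, 1.0) for _ in range(500)] + \
         [random.gauss(3.0, 1.0) for _ in range(500)]

window = 100      # illustrative window length
detections = []
for t in range(window, len(signal) - window + 1, 50):
    d = ks_statistic(signal[t - window:t], signal[t:t + window])
    if d > 0.5:   # illustrative cutoff; the study assesses significance formally
        detections.append(t)

print(detections)  # flagged indices cluster near the true transition at 500
```

Because the statistic depends only on the two empirical distributions, it needs no assumption about what the noise looks like, which is what makes this approach suited to inherently noisy records.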

"We applied our method to two paleoclimate records of the last climate cycle, a Greenland ice core and a speleothem composite record from China," said author Witold Bagniewski.

Analysis of ice cores reveals that the ratio of two oxygen isotopes varies over time. This ratio depends on the local temperature at the time the ice formed, providing a measurement of the climate at that particular time.

Speleothems are mineral deposits in caves showing a similar pattern of isotope ratios varying as the climate changes.
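For reference, the oxygen-isotope ratio in ice and speleothems is conventionally reported in delta notation relative to a reference material (for water ice, the VSMOW standard), expressed in per mil (‰):

```latex
\delta^{18}\mathrm{O} \;=\;
\left(
  \frac{\left(^{18}\mathrm{O}/^{16}\mathrm{O}\right)_{\mathrm{sample}}}
       {\left(^{18}\mathrm{O}/^{16}\mathrm{O}\right)_{\mathrm{standard}}}
  \;-\; 1
\right) \times 1000
```

For polar ice cores, more negative values of this quantity generally correspond to colder conditions at the time the ice formed.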

"Many of the abrupt transitions in the Greenland ice core record correspond to shifts between a warmer climate, known as Greenland Interstadials (GIs), and a colder climate, the Greenland Stadials (GSs)," said Bagniewski.

The existence of these two climate states, GI and GS, is an example of a bistable climate system, in which two distinct states are both stable. The climate may jump abruptly from one to the other when crossing a tipping point.

Read more at Science Daily

Neuroscientists explore mysterious 'events' in the brain that open new avenues for understanding brain injuries and disorders

Using a new model of brain activity, Indiana University computational neuroscientists Maria Pope, Richard Betzel and Olaf Sporns are exploring striking bursts of activity in the human brain that have not been examined before. These bursts may have potential to serve as biomarkers for brain disease and conditions such as depression, schizophrenia, dementia, and ADHD.

While analyzing human neuroimaging data, the IU research team discovered short bursts of activity that form ongoing "events" in the brain and are always taking place no matter the activity or state of the brain. In the course of a 10-minute brain scan, these events will occur roughly 10 to 20 times, each lasting for just a few seconds, the researchers found.
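In the wider edge-centric connectivity literature, such high-amplitude moments are often detected by tracking the root-sum-square of pairwise regional co-fluctuations over time. The sketch below illustrates that idea on toy data; it is not the study's actual pipeline, and the signal sizes and threshold are invented:

```python
import math
import random

random.seed(1)
T, N = 300, 5  # time points, brain regions
# Toy regional signals: independent noise, plus one shared burst.
signals = [[random.gauss(0.0, 1.0) for _ in range(T)] for _ in range(N)]
for t in range(150, 160):      # inject momentary coordinated activity
    for i in range(N):
        signals[i][t] += 3.0

def zscore(x):
    m = sum(x) / len(x)
    s = math.sqrt(sum((v - m) ** 2 for v in x) / len(x))
    return [(v - m) / s for v in x]

z = [zscore(s) for s in signals]

# Root-sum-square of all pairwise co-fluctuations at each time point.
rss = [math.sqrt(sum((z[i][t] * z[j][t]) ** 2
                     for i in range(N) for j in range(i + 1, N)))
       for t in range(T)]

threshold = sorted(rss)[int(0.95 * T)]   # top ~5% of amplitude
events = [t for t, v in enumerate(rss) if v > threshold]
print(events)  # flagged time points cluster around the injected burst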

"What people had not seen is that how brain regions talk to each other is punctuated by these brief moments that are just a few seconds long during which there's a lot happening," said Olaf Sporns, who is Distinguished Professor and Robert H. Shaffer Chair in the College of Arts and Sciences Department of Psychological and Brain Sciences at IU Bloomington.

"Now that we see them, we've focused on those moments to get a picture of how specific brain regions link up and talk to each other during these events."

To begin investigating the workings of these mysterious events, the team built a computational model. Led by Maria Pope, a graduate student in Sporns' lab and a dual Ph.D. candidate in neuroscience and informatics, the group used neuroimaging data of a human brain to build a model replicating its connections. The model was then simulated in a state similar to the resting brain to create synthetic MRI signals, using mathematical equations that reenact neuronal activity.

The model showed burst-like events just like those seen in human brain recordings.

The paper outlining the model and describing how it compares to the real brain was published in the November 16 issue of the Proceedings of the National Academy of Sciences.

"The model shows us that these events are guided by the brain's structural network," Pope said. "They are tied to the physical structure of brain."

More specifically, the events originate in clusters of neurons and brain regions that are densely interconnected and momentarily light up together. Sporns compared the pattern to an orchestra playing a piece of music.

"There are moments when the orchestra comes together and there's a theme. They are not just playing a single note for 10 minutes. There are brief moments in which coordinated activity dominates and at other times there might be much less," Sporns said. "This ebb and flow of coordination is something we also see in the brain, and our model can reproduce it. Clusters of brain regions combine in different ways. It's not just one pattern, but multiple variations on a theme."

The new model's outcome, Sporns suggested, is a potential game changer.

"Functional connectivity has been a strong focus in research as a potential biomarker for brain disorders and has been related to conditions such as depression, schizophrenia, dementia, and ADHD. And researchers have tried for years to use brain simulations in clinical applications for modeling lesions or diseases," Sporns said. "This new model gives us a better lens through which to look at the brain, to see more clearly what goes on under both normal and abnormal conditions."

The researchers are now delving further into why the human brain employs these brief bursts of activity.

"Perhaps the brain has developed this type of activity because it's beneficial. Something about the structure of events may be useful to the brain," Pope said. "For example, many kinds of networked systems have to do occasional system updates or resets, taking some kind of globally useful information and communicating it to the rest of the system."

Answers to these questions may have implications not only for understanding the brain, but also for the study of neural networks and artificial intelligence.

"A clearer mapping of structure and function at the individual level could have implications for how we diagnose neurological disease and lead to personalized treatment and intervention," said Betzel, professor in the College of Arts and Sciences Department of Psychological and Brain Sciences.

Read more at Science Daily

New holographic camera sees the unseen with high precision

Northwestern University researchers have invented a new high-resolution camera that can see the unseen -- including around corners and through scattering media, such as skin, fog or potentially even the human skull.

Called synthetic wavelength holography, the new method works by indirectly scattering coherent light onto hidden objects, which then scatters again and travels back to a camera. From there, an algorithm reconstructs the scattered light signal to reveal the hidden objects. Due to its high temporal resolution, the method also has potential to image fast-moving objects, such as the beating heart through the chest or speeding cars around a street corner.

The study will be published on Nov. 17 in the journal Nature Communications.

The relatively new research field of imaging objects behind occlusions or scattering media is called non-line-of-sight (NLoS) imaging. Compared to related NLoS imaging technologies, the Northwestern method can rapidly capture full-field images of large areas with submillimeter precision. With this level of resolution, the computational camera could potentially image through the skin to see even the tiniest capillaries at work.

While the method has obvious potential for noninvasive medical imaging, early-warning navigation systems for automobiles and industrial inspection in tightly confined spaces, the researchers believe potential applications are endless.

"Our technology will usher in a new wave of imaging capabilities," said Northwestern's Florian Willomitzer, first author of the study. "Our current sensor prototypes use visible or infrared light, but the principle is universal and could be extended to other wavelengths. For example, the same method could be applied to radio waves for space exploration or underwater acoustic imaging. It can be applied to many areas, and we have only scratched the surface."

Willomitzer is a research assistant professor of electrical and computer engineering at Northwestern's McCormick School of Engineering. Northwestern co-authors include Oliver Cossairt, associate professor of computer science and electrical and computer engineering, and former Ph.D. student Fengqiang Li. The Northwestern researchers collaborated closely with Prasanna Rangarajan, Muralidhar Balaji and Marc Christensen, all researchers at Southern Methodist University.

Intercepting scattered light

Seeing around a corner versus imaging an organ inside the human body might seem like very different challenges, but Willomitzer said they are actually closely related. Both deal with scattering media, in which light hits an object and scatters in such a way that a direct image of the object can no longer be seen.

"If you have ever tried to shine a flashlight through your hand, then you have experienced this phenomenon," Willomitzer said. "You see a bright spot on the other side of your hand, but, theoretically, there should be a shadow cast by your bones, revealing the bones' structure. Instead, the light that passes the bones gets scattered within the tissue in all directions, completely blurring out the shadow image."

The goal, then, is to intercept the scattered light in order to reconstruct the inherent information about its time of travel to reveal the hidden object. But that presents its own challenge.

"Nothing is faster than the speed of light, so if you want to measure light's time of travel with high precision, then you need extremely fast detectors," Willomitzer said. "Such detectors can be terribly expensive."

Tailored waves

To eliminate the need for fast detectors, Willomitzer and his colleagues merged light waves from two lasers in order to generate a synthetic light wave that can be specifically tailored to holographic imaging in different scattering scenarios.

"If you can capture the entire light field of an object in a hologram, then you can reconstruct the object's three-dimensional shape in its entirety," Willomitzer explained. "We do this holographic imaging around a corner or through scatterers -- with synthetic waves instead of normal light waves."

Over the years, there have been many NLoS imaging attempts to recover images of hidden objects. But these methods typically have one or more problems. They either have low resolution, an extremely small angular field of regard, require a time-consuming raster scan or need large probing areas to measure the scattered light signal.

The new technology, however, overcomes these issues and is the first method for imaging around corners and through scattering media that combines high spatial resolution, high temporal resolution, a small probing area and a large angular field of view. This means that the camera can image tiny features in tightly confined spaces as well as hidden objects in large areas with high resolution -- even when the objects are moving.

Turning 'walls into mirrors'

Because light travels only in straight lines, an opaque barrier (such as a wall, shrub or automobile) must be present in order for the new device to see around corners. The light is emitted from the sensor unit (which could be mounted on top of a car), bounces off the barrier and then hits the object around the corner. The light then bounces back to the barrier and ultimately back into the detector of the sensor unit.

"It's like we can plant a virtual computational camera on every remote surface to see the world from the surface's perspective," Willomitzer said.

For people driving roads curving through a mountain pass or snaking through a rural forest, this method could prevent accidents by revealing other cars or deer just out of sight around the bend. "This technique turns walls into mirrors," Willomitzer said. "Better still, the technique also works at night and in foggy weather conditions."

In this manner, the high-resolution technology also could replace (or supplement) endoscopes for medical and industrial imaging. Instead of needing a flexible camera, capable of turning corners and twisting through tight spaces -- for a colonoscopy, for example -- synthetic wavelength holography could use light to see around the many folds inside the intestines.

Similarly, synthetic wavelength holography could image inside industrial equipment while it is still running -- a feat that is impossible for current endoscopes.

"If you have a running turbine and want to inspect defects inside, you would typically use an endoscope," Willomitzer said. "But some defects only show up when the device is in motion. You cannot use an endoscope and look inside the turbine from the front while it is running. Our sensor can look inside a running turbine to detect structures that are smaller than one millimeter."

Read more at Science Daily

Nov 16, 2021

Carbon dioxide cold traps on the moon are confirmed for the first time

After decades of uncertainty, researchers have confirmed the existence of lunar carbon dioxide cold traps that could potentially contain solid carbon dioxide. The discovery will likely have a major influence in shaping future lunar missions and could impact the feasibility of a sustained robot or human presence on the moon.

In the permanently shadowed regions at the poles of our moon, temperatures dip below those in the coldest areas of Pluto, allowing for carbon dioxide cold traps. In these cold traps, carbon dioxide molecules could freeze and remain in solid form even during peak temperatures in the lunar summer.

Future human or robot explorers could use the solid carbon dioxide in these cold traps to produce fuel or materials for longer lunar stays. The carbon dioxide and other potential volatile organics could also help scientists better understand the origin of water and other elements on the moon.

Although cold traps have been predicted by planetary scientists for years, this new study is the first to firmly establish and map the presence of carbon dioxide cold traps. To find the coldest spots on the moon's surface, researchers analyzed 11 years of temperature data from the Diviner Lunar Radiometer Experiment, an instrument flying aboard NASA's Lunar Reconnaissance Orbiter.

The new research, published in the AGU journal Geophysical Research Letters, shows that these cold traps include several pockets concentrated around the lunar south pole. The carbon dioxide traps cover a total of 204 square kilometers, with the largest area, in Amundsen Crater, hosting 82 square kilometers of traps. In these areas, temperatures continually remain below 60 kelvins (about minus 352 degrees Fahrenheit).
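In outline, the mapping step amounts to flagging pixels whose peak temperature never exceeds the CO2 cold-trap limit and summing their area. The grid, temperatures, and pixel size below are invented for illustration; the study worked from 11 years of Diviner data:

```python
# Peak annual surface temperature (K) for a small, invented grid of
# polar pixels; each pixel covers an assumed 0.25 square kilometers.
pixel_area_km2 = 0.25
peak_temp = [
    [110.0, 80.0, 59.0, 58.5],
    [95.0,  59.5, 57.0, 61.0],
    [70.0,  62.0, 58.0, 90.0],
]

# A pixel qualifies as a CO2 cold trap only if even its *peak*
# temperature stays below roughly 60 K.
CO2_TRAP_MAX_K = 60.0
trap_pixels = sum(1 for row in peak_temp for t in row if t < CO2_TRAP_MAX_K)
print(f"cold-trap area: {trap_pixels * pixel_area_km2:.2f} km^2")
```

Using the peak rather than the average temperature is the crucial point: a trap only retains CO2 ice if it never warms enough for the ice to sublimate, even at the height of lunar summer.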

The existence of carbon dioxide cold traps does not guarantee the existence of solid carbon dioxide on the moon, but this verification does make it highly likely that future missions could find carbon dioxide ice there, according to the researchers.

"I think when I started this, the question was, 'Can we confidently say there are carbon dioxide cold traps on the moon or not?'" said Norbert Schörghofer, a planetary scientist at the Planetary Science Institute and lead author on the study. "My surprise was that they're actually, definitely there. It could have been that we can't establish their existence, [they might have been] one pixel on a map... so I think the surprise was that we really found contiguous regions which are cold enough, beyond doubt."

Managing the moon

The existence of carbon dioxide traps on the moon will likely have implications for the planning of future lunar exploration and international policy regarding the resource.

If there is indeed solid carbon dioxide in these cold traps, it could potentially be used in a variety of ways. Future space explorers could use the resource in the production of steel as well as rocket fuel and biomaterials, which would both be essential for sustained robot or human presence on the moon. This potential has already attracted interest from governments and private companies.

Scientists could also study lunar carbon to understand how organic compounds form and what kind of molecules can be naturally produced in these harsh environments.

The carbon dioxide cold traps could also help scientists answer long-standing questions about the origins of water and other volatiles in the Earth-moon system, according to Paul Hayne, a planetary scientist at the University of Colorado, Boulder who was not involved in the study.

Carbon dioxide could be a tracer for the sources of water and other volatiles on the lunar surface, helping scientists to understand how they arrived on the moon and on Earth.

Read more at Science Daily

Astronomers team up to create new method to understand galaxy evolution

A husband-and-wife team of astronomers at The University of Toledo joined forces for the first time in their scientific careers during the pandemic to develop a new method to look back in time and change the way we understand the history of galaxies.

Dr. Rupali Chandar, professor of astronomy, and Dr. J.D. Smith, director of the UToledo Ritter Astrophysical Research Center and professor of astronomy, had until now forged parallel but separate careers while juggling home life and carpooling to cross-country meets. For this project, they merged their areas of expertise.

Working with UToledo alumnus Dr. Adam Smercina, who graduated with a bachelor's degree in physics in 2015 and is currently a postdoctoral researcher at the University of Washington, they used NASA's Hubble Space Telescope to focus on a post-starburst galaxy nearly 500 million light years away called S12, which looks like a jellyfish with a host of stars streaming out of one side.

Smercina, the "glue" that brought Smith and Chandar together on this research, worked with Smith as an undergraduate student starting in 2012 on the dust and gas in post-starburst galaxies.

While spiral galaxies like our Milky Way have continued to form stars at a fairly steady rate, post-starburst galaxies experienced an intense burst of star formation sometime in the last half billion years, after which their star formation shut down.

The resulting breakthrough research published in the Astrophysical Journal outlines their new method to establish the star formation history of a post-starburst galaxy using its cluster population. The approach uses the age and mass estimates of stellar clusters to determine the strength and speed of the starburst that stopped more stars from forming in the galaxy.

Using this method, the astronomers discovered that S12 experienced two periods of starburst before it stopped forming stars, not one.

"Post-starbursts represent a phase of galaxy evolution that is pretty rare today," Smith said. "We think that nearly half of all galaxies went through this phase at some point in their lives. So far, their star-forming histories have been determined almost exclusively from detailed modeling of their composite starlight."

Smith has studied post-starburst galaxies for more than a decade, and Chandar works on the stellar clusters in galaxies that are typically about three or four times closer than those in Smith's data.

"Clusters are like fossils -- they can be age-dated and give us clues to the past history of galaxies," Chandar said. "Clusters can only be detected in these galaxies with the clear eyed-view of the Hubble Space Telescope. No clusters can be detected in even the highest quality images taken with telescopes on the ground."

Smith has led several large multi-wavelength projects to better understand the evolutionary history of post-starburst galaxies. He discovered, for example, that the raw fuel for star formation -- gas and dust -- is still present in surprising quantities in some of these systems including S12, even though no stars are currently being formed.

"While studying the light from these galaxies at multiple wavelengths has helped establish the time that the burst happened, we hadn't been able to determine how strong and how long the burst that shutoff star formation actually was," Smith said. "And that's important to know to better understand how these galaxies evolve."

The astronomers used well-studied cluster masses and star formation rates in eight nearby galaxies to develop the new method, which could be applied to determine the recent star formation histories for a number of post-starburst systems.

The researchers applied their new approach to S12, which is short for SDSS 623-52051-207, the galaxy's designation in the Sloan Digital Sky Survey (SDSS), where it was discovered and catalogued.

"It must have had one of the highest rates of star formation of any galaxy we have ever studied," Chandar said. "S12 is the most distant galaxy I've ever worked on."

The study indicates star formation in S12 shut off 70 million years ago after a short but intense burst formed some of the most massive clusters known, with masses several times higher than similar-age counterparts forming in actively merging galaxies. The method also revealed an earlier burst of star formation that the previous method of composite starlight modeling could not detect.

"These results suggest that S12's unusual history may be even more complicated than expected, with multiple major events compounding to fully shut off star formation," Smith said.

Read more at Science Daily

A new approach to identify mammals good at learning sounds

Why are some animals good at learning sounds? Did this skill appear when animals started 'faking' their body size by lowering their calls? In a new study of a wide range of mammals, researchers revisit this question. Surprisingly, many animals that are skilled vocalists (such as dolphins and seals) actually sound higher than would be expected for their body size.

Some animals -- such as red deer -- sound 'bigger' than they really are. This means that their calls are lower than you would expect based on their body size. Biologists think that 'faking' body size in such a way might be a strategy to impress the other sex. In a recent study, Garcia and Ravignani noticed that animals who can fake their body size by changing their calls also tend to be good at learning sounds -- an ability known as vocal learning. Could natural selection be the driving force behind both: faking body size and learning sounds? If true, this idea would have important implications for the evolution of speech.

To further explore the relationship between faking body size and vocal learning, Ravignani and Garcia expanded their earlier analyses of a wide range of mammals. Would the relation between faking body size and learning sounds turn out to be a systematic evolutionary pattern?

What they found was surprising. Contrary to expectations, most vocal learners -- such as dolphins, whales and seals -- sounded higher than you would expect based on their body size, not lower. Ravignani explains: "There might be an alternative evolutionary scenario in vocal learners, where selective pressures favour individuals that can change their tone of voice from low to high." In other words, good vocal learners are those animals that can hit the high notes. Vocal learners who sounded lower than expected often had anatomical adaptations that could explain the lowered voice (such as a longer nose). Garcia adds: "Of course there are exceptions, and we do not claim that all vocal learner species sound higher than expected for their body size. But there is a general trend, and this may help us to better characterise vocal communication in mammals."
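Sounding "higher or lower than expected for body size" is typically operationalized as the residual from a log-log regression of call frequency on body mass. The sketch below illustrates that idea; the species names, masses, and frequencies are all invented:

```python
import math

# Invented species, masses, and call frequencies, purely for illustration.
data = {                        # species: (body mass in kg, call frequency in Hz)
    "species_a": (5.0, 2000.0),
    "species_b": (50.0, 700.0),
    "species_c": (200.0, 300.0),
    "species_d": (150.0, 1500.0),   # calls far higher than its size suggests
    "species_e": (80.0, 150.0),     # calls far lower than its size suggests
}

# Ordinary least-squares fit of log(frequency) against log(mass).
xs = [math.log(m) for m, _ in data.values()]
ys = [math.log(f) for _, f in data.values()]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

# A species' residual says whether it sounds higher (+) or lower (-)
# than its body size alone would predict.
residuals = {name: math.log(f) - (intercept + slope * math.log(m))
             for name, (m, f) in data.items()}
for name, r in residuals.items():
    label = ("higher than expected" if r > 0.5
             else "lower than expected" if r < -0.5 else "about as expected")
    print(f"{name}: {label}")
```

On this framing, "faking" body size corresponds to a strongly negative residual, while the study's surprising finding is that many vocal learners sit well above the regression line instead.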

Read more at Science Daily

In spreading politics, videos may not be much more persuasive than their text-based counterparts

It might seem that video would be a singularly influential medium for spreading information online. But a new experiment conducted by MIT researchers finds that video clips have only a modestly larger impact on political persuasion than the written word does.

"Our conclusion is that watching video is not much more persuasive than reading text," says David Rand, an MIT professor and co-author of a new paper detailing the study's results.

The study comes amid widespread concern about online political misinformation, including the possibility that technology-enabled "deepfake" videos could easily convince many people watching them to believe false claims.

"Technological advances have created new opportunities for people to falsify video footage, but we still know surprisingly little about how individuals process political video versus text," says MIT researcher Chloe Wittenberg, the lead author on the paper. "Before we can identify strategies for combating the spread of deepfakes, we first need to answer these more basic questions about the role of video in political persuasion."

The paper, "The (Minimal) Persuasive Advantage of Political Video over Text," is published today in Proceedings of the National Academy of Sciences. The co-authors are Adam J. Berinsky, the Mitsui Professor of Political Science; Rand, the Erwin H. Schell Professor and Professor of Management Science and Brain and Cognitive Sciences; Ben Tappin, a postdoc in the Human Cooperation Lab; and Chloe Wittenberg, a doctoral student in the Department of Political Science.

Believability and persuasion

The study operates on a distinction between the credibility of videos and their persuasiveness. That is, an audience might find a video believable, but their attitudes might not change in response. Alternatively, a video might not seem credible to a large portion of the audience but still alter viewers' attitudes or behavior.

For example, Rand says, "When you watch a stain remover ad, they all have this same format, where some stain gets on a shirt, you pour the remover on it, and it goes in the washer and hey, look, the stain is gone. So, one question is: Do you believe that happened, or was it just trickery? And the second question is: How much do you want to buy the stain remover? The answers to those questions don't have to be tightly related."

To conduct the study, the MIT researchers performed a pair of survey experiments involving 7,609 Americans, using the Lucid and Dynata platforms. The first study involved 48 ads obtained through the Peoria Project, an archive of political materials. Survey participants either watched an ad, read a transcript of the ad, or received no information at all. (Each participant did this multiple times.) For each ad, participants were asked whether the message seemed believable and whether they agreed with its main message. They were then shown a series of questions measuring whether they found the subject personally important and whether they wanted more information.

The second study followed the same format but involved 24 popular video clips about Covid-19, taken from YouTube.

Overall, the results showed that video performed somewhat better than written text on the believability front but had a smaller relative advantage when it came to persuasion. Participants were modestly more likely to believe that events actually occurred when they were shown in a video as opposed to being described in a written transcript. However, the advantage of video over text was only one-third as big when it came to changing participants' attitudes and behavior.

As a further indication of this limited persuasive advantage of video versus text, the difference between the "control condition" (with participants who received no information) and reading text was as great as that between reading the transcript and watching the video.

These differences were surprisingly stable across groups. For instance, in the second study, there were only small differences in the effects seen for political versus nonpolitical messages about Covid-19, suggesting the findings hold across varying types of content. The researchers also did not find significant differences among the respondents based on factors such as age, political partisanship, and political knowledge.

"Seeing may be believing," Berinsky says, "but our study shows that just because video is more believable doesn't mean that it can change people's minds."

Questions about online behavior

The scholars acknowledge that the study did not exactly replicate the conditions in which people consume information online, but they suggest that the main findings yield valuable insight about the relative power of video versus text.

"It's possible that in real life things are a bit different," Rand says. "It's possible that as you're scrolling through your newsfeed, video captures your attention more than text would. You might be more likely to look at it. This doesn't mean that video is inherently more persuasive than text -- just that it has the potential to reach a wider audience."

That said, the MIT team notes there are some clear directions for future research in this field -- including the question of whether or not people are more willing to watch videos than to read materials.

Read more at Science Daily

Toward 'off-the-shelf' immune cell therapy for cancer

Immunotherapies, which harness the body's natural defenses to combat disease, have revolutionized the treatment of aggressive and deadly cancers. But often, these therapies -- especially those based on immune cells -- must be tailored to the individual patient, costing valuable time and pushing their price into the hundreds of thousands of dollars.

Now, in a study published in the journal Cell Reports Medicine, UCLA researchers report a critical step forward in the development of an "off-the-shelf" cancer immunotherapy using rare but powerful immune cells that could potentially be produced in large quantities, stored for extended periods and safely used to treat a wide range of patients with various cancers.

"In order to reach the most patients, we want cell therapies that can be mass-produced, frozen and shipped to hospitals around the world," said Lili Yang, a member of the Eli and Edythe Broad Center of Regenerative Medicine and Stem Cell Research at UCLA and the study's senior author. "That way, doses of these therapies can be ready and waiting for patients as soon as they are needed."

For the study, Yang and her colleagues focused on invariant natural killer T cells, or iNKT cells, which are unique not only for their power and efficacy but also because they don't carry the risk of graft-versus-host disease, which occurs when transplanted cells attack a recipient's body and which is the reason most cell-based immunotherapies must be created on a patient-specific basis, Yang said.

The researchers developed a new method for producing large numbers of these iNKT cells using blood-forming stem cells, which can self-replicate and produce all kinds of blood and immune cells. The team used stem cells obtained from four donor cord-blood samples and eight donor peripheral blood samples.

"Our findings suggest that one cord blood donation could produce up to 5,000 doses of the therapy and one peripheral blood donation could produce up to 300,000 doses," said Yang, who is also an associate professor of microbiology, immunology and molecular genetics and a member of the UCLA Jonsson Comprehensive Cancer Center. "At this yield, the cost of producing immune cell products could be dramatically reduced."

The researchers first used genetic engineering to program the blood-forming stem cells to make them more likely to develop into iNKT cells. Next, these genetically engineered stem cells were placed into artificial thymic organoids, which mimic the environment of the thymus, a specialized organ in which T cells naturally mature in the body. After eight weeks in the organoids, each stem cell produced, on average, 100,000 iNKT cells.

Yang and her collaborators then tested the resulting cells, called hematopoietic stem cell-engineered iNKT cells, or HSC-iNKT cells, by comparing their cancer-fighting abilities with those of immune cells called natural killer cells, or NK cells. In a lab dish, the HSC-iNKT cells were significantly better at killing multiple types of human tumor cells -- including leukemia, melanoma, lung cancer, prostate cancer and multiple myeloma cells -- than the NK cells, the researchers found.

Even more importantly, the HSC-iNKT cells sustained their tumor-killing efficacy after being frozen and thawed, an essential requirement for widespread distribution of an off-the-shelf cell therapy.

The researchers next equipped the HSC-iNKT cells with a chimeric antigen receptor, or CAR, a specialized molecule used in some immunotherapies to enable immune cells to recognize and kill a specific type of cancer. In this case, they added to the HSC-iNKT cells a CAR that targets a protein found on multiple myeloma cells and then tested the cells' ability to fight human multiple myeloma tumors that had been transplanted into mice.

These CAR-equipped HSC-iNKT cells eliminated the multiple myeloma tumors, and the mice that underwent this treatment remained tumor-free and showed no signs of complications such as graft-versus-host disease throughout their lives.

The researchers are now working to improve their manufacturing methods by moving to a feeder-free system that eliminates the need for supportive cells -- such as those used in the thymic organoids -- to assist blood stem cells in producing iNKT cells. Yang says she hopes this advance will better enable mass-production of the therapy and, ultimately, its clinical and commercial development.

The paper's co-first authors are UCLA doctoral students Yan-Ruide (Charlie) Li and Yang (Alice) Zhao. Additional authors include UCLA professors Dr. Sarah Larson, Dr. Joshua Sasine, Dr. Xiaoyan Wang, Matteo Pellegrini, Dr. Owen Witte and Dr. Antoni Ribas.

The researchers' genetic engineering of blood-forming stem cells utilized methods developed by Dr. Donald Kohn, and the artificial thymic organoids were developed by Dr. Gay Crooks, Dr. Chris Seet and Amélie Montel-Hagen, all of the UCLA Broad Stem Cell Research Center.

The methods and products described in this study are covered by patent applications filed by the UCLA Technology Development Group on behalf of the Regents of the University of California, with Yang, Li, Yu Jeong Kim, Jiaji Yu, Pin Wang, Yanni Zhu, Crooks, Montel-Hagen and Seet listed as co-inventors. The treatment strategy was used in preclinical tests only; it has not been tested in humans or approved by the U.S. Food and Drug Administration as safe and effective for use in humans.

Read more at Science Daily

Nov 15, 2021

Fate of sinking tectonic plates is revealed

Our world's surface is a jumble of jostling tectonic plates, with new ones emerging as others are pulled under. The ongoing cycle keeps our continents in motion and drives life on Earth. But what happens when a plate disappears into the planet's interior?

The question has long puzzled scientists because conventional wisdom said that sinking tectonic plates must remain intact to keep pulling on the portion behind them, but according to geophysical evidence, they are destroyed.

Now, in a study published Nov. 11 in Nature, scientists say they've found an answer that reconciles the two stories: Plates are significantly weakened as they sink but not so much that they break apart entirely.

The finding came after scientists put tectonic plates through a computer-generated gauntlet of destructive geologic forces. The model showed that as the plate enters the mantle, it bends abruptly downward, cracking its cold, brittle back. At the same time, the bending changes the fine grain structure of the rock along its underbelly, leaving it weakened. Combined, the stresses pinch the plate along its weak points, leaving it mostly intact but segmented like a slinky snake.

This means the plate continues to be pulled under despite becoming folded and distorted.

According to the researchers, the model predicted a scenario that matches observations from Japan. Studies of the region where the Pacific tectonic plate dives -- or subducts -- under Japan have turned up large cracks where the plate bends downward, and they have shown signs of weaker material underneath. Deep seismic imaging conducted by The University of Texas at Austin's Steve Grand has also revealed tectonic shapes in the Earth's mantle under Japan that appear a close match for the slinky snake in the model.

Co-author Thorsten Becker, a professor in UT's Jackson School of Geosciences, said that the study does not necessarily close the book on what happens to subducting plates, but it certainly gives a compelling case to explain several important geologic processes.

"It's an example of the power of computational geosciences," said Becker, who assisted in developing the model and is a faculty associate at UT's Oden Institute for Computational Engineering & Sciences. "We combined these two processes that geology and rock mechanics are telling us are happening, and we learned something about the general physics of how the Earth works that we wouldn't have expected. As a physicist, I find that exciting."

The study's lead author, Taras Gerya, a professor of geophysics at ETH Zurich, added that until now, geophysicists had lacked a comprehensive explanation for how tectonic plates bend without breaking.

Things got interesting when the researchers ran their simulations with a hotter interior, similar to the early Earth. In these simulations, the tectonic snake segments made it only a few miles into the mantle before breaking off. That means that subduction would have occurred intermittently, raising the possibility that modern plate tectonics began only within the past billion years.

"Personally, I think there are a lot of good arguments for plate tectonics being much older," Becker said, "but the mechanism revealed by our model suggests things might be more sensitive to the temperature of the mantle than we thought, and that, I think, could lead to interesting new avenues of discussion."

Becker and Gerya were joined by David Bercovici, a geophysicist at Yale University whose investigation into how rock grains are altered in the deep mantle helped motivate the research. The study is based on a two-dimensional computer model of plate tectonics incorporating Bercovici's rock deformation research and other plate-weakening mechanics. The researchers are now studying the phenomena using 3D models and plan to investigate what those models can tell them about the occurrence of earthquakes.

Read more at Science Daily

Simulations provide clue to missing planets mystery

Forming planets are one possible explanation for the rings and gaps observed in disks of gas and dust around young stars. But this theory has trouble explaining why it is rare to find planets associated with rings. New supercomputer simulations show that after creating a ring, a planet can move away and leave the ring behind. Not only does this bolster the planet theory for ring formation, the simulations show that a migrating planet can produce a variety of patterns matching those actually observed in disks.

Young stars are encircled by protoplanetary disks of gas and dust. One of the world's most powerful radio telescope arrays, ALMA (Atacama Large Millimeter/submillimeter Array), has observed a variety of patterns of denser and less dense rings and gaps in these protoplanetary disks. Gravitational effects from planets forming in the disk are one theory to explain these structures, but follow-up observations looking for planets near the rings have largely been unsuccessful.

In this research a team from Ibaraki University, Kogakuin University, and Tohoku University in Japan used the world's most powerful supercomputer dedicated to astronomy, ATERUI II at the National Astronomical Observatory of Japan, to simulate the case of a planet moving away from its initial formation site. Their results showed that in a low viscosity disk, a ring formed at the initial location of a planet doesn't move as the planet migrates inwards. The team identified three distinct phases. In Phase I, the initial ring remains intact as the planet moves inwards. In Phase II, the initial ring begins to deform and a second ring starts forming at the new location of the planet. In Phase III, the initial ring disappears and only the latter ring remains.

These results help explain why planets are rarely observed near the outer rings, and the three phases identified in the simulations match well with the patterns observed in actual rings. Higher resolution observations from next-generation telescopes, which will be better able to search for planets close to the central star, will help determine how well these simulations match reality.

From Science Daily

DNA analysis confirms 2,000-year-old sustainable fishing practices of Tsleil-Waututh Nation

Ancient Indigenous fishing practices can be used to inform sustainable management and conservation today, according to a new study from Simon Fraser University.

A new collaborative study featured in Scientific Reports, conducted with the Tsleil-Waututh Nation using new palaeogenetic analytical techniques developed in SFU Archaeology's ancient DNA lab, directed by professor Dongya Yang, provides strong evidence that, prior to European colonization, Coast Salish people were managing chum salmon by selectively harvesting males.

Selectively harvesting male salmon increases the overall size of the harvest, as male salmon are bigger than female salmon. It also helps ensure successful spawning as one male can mate with several females. This allows fisheries to maximize the size of their harvest without negatively impacting future returns.

"This management practice is also described in Coast Salish knowledge and, through archaeology, we were able to extend the time depth of this practice by 2,000 years," says Thomas Royle, a postdoctoral fellow working in the lab.

The research team applied the new palaeogenetic methods to archaeological salmon vertebrae to identify the sex of each sample, finding evidence to corroborate Coast Salish traditional knowledge that has been shared for centuries.

The Tsleil-Waututh ancestors worked to keep salmon populations plentiful for millennia, passing their knowledge on from one generation to the next. With current declines and collapses in many commercial fisheries, these traditional Tsleil-Waututh practices can potentially inform current management and conservation.

This research collaboration included the Tsleil-Waututh Nation (Michael George, Michelle George), SFU (Thomas C.A. Royle, Hua Zhang, Miguel Alcaide, Ryan Morin, Dongya Yang), University of British Columbia (Jesse Morin, Camilla Speller, Morgan Ritchie), and McMaster University (Aubrey Cannon) as part of a Tsleil-Waututh Nation project to establish the state of pre-contact ecosystems in Burrard Inlet.

The leadership of the Tsleil-Waututh Nation was an integral piece of the success of the collaboration and allowed cutting-edge science methods to be used to understand the traditional ecological knowledge of Tsleil-Waututh ancestors.

Read more at Science Daily

Capturing a true picture of wolves in Yellowstone: Reevaluating aspen recovery

It's an environmental success story that feels like a parable -- the reintroduction of wolves in Yellowstone National Park in the mid-1990s triggered a cascade of effects that ultimately restored the ecosystem, including the recovery of aspen trees. But like many stories based on ecological realities, it's more complex than at first glance -- aspen recovery in the park is not as robust as generally believed, according to new research.

The Yellowstone story is a textbook example of a trophic cascade, in which predators help plants grow by eating or scaring away herbivores that eat the plants. When wolves were reintroduced into the Yellowstone food chain, they helped to reduce numbers of elk, which had been consuming young aspen trees. Previous research showed strong positive growth in young aspen as the elk populations decreased -- a welcome result, as aspen forests have been vanishing from the northern Yellowstone landscape for the last century.

But new research from Elaine Brice and Dan MacNulty, from Utah State University's Department of Wildland Resources and Ecology Center, and Eric Larsen, from the University of Wisconsin Stevens Point's Department of Geography and Geology, shows that the effect of wolves on the recovery of aspen has been exaggerated by how it was measured.

Previous studies evaluated aspen recovery in Yellowstone by measuring the five tallest young aspen within a stand. The reasoning was that the tallest young aspen trees represent a 'leading edge' indicator of the future recovery of the entire aspen population. But this is not the case -- sampling only the tallest young aspen estimated a rate of recovery that was significantly faster than was estimated by random sampling of all young aspen within the stand, according to the research.

"These are extremely complex systems, and understanding them is a major challenge because they are difficult to properly sample," said Brice. "The traditional method of sampling by only using the tallest young aspen plants to measure growth -- which most research currently relies on -- doesn't capture the entire picture."

For one, elk are picky about the aspen they consume. They tend to eat plants at shoulder height, so they don't have to crane their necks. As the leader stem (main trunk) of a young aspen grows past the shoulder height of adult elk, it becomes increasingly unlikely to be eaten, said MacNulty. "This means that the tallest young aspen grow faster because they are taller, not because wolves reduce elk browsing," he said. This finding highlights the complicating fact that the height of young aspen is both a cause and an effect of reduced elk browsing.

Taller aspen also thrive because they tend to have the best growing conditions (sunlight, moisture, soil quality). Measuring just the tallest young trees downplays the role of these other factors that have nothing to do with elk or wolf populations. And measuring just the tallest aspen also overlooks the failure of some young aspen to regenerate in the first place.

"That's like calculating a team's batting average without the player who always strikes out," said Brice. Random sampling from the research showed an absence of aspen regeneration in some places, a vital piece missing from the initial measurements.

Understanding how ecosystems respond to changes in large predator populations is vital to resolving broader debates about the structure of food webs, determining species abundance and delivering ecosystem services, said the authors. This study demonstrates how deviations from basic sampling principles can distort this understanding. Non-random sampling overestimated the strength of a trophic cascade in this case, but it may underestimate cascading effects in other situations. Randomization is one of the few protections against unreliable inferences and the misguided management decisions they may inspire, they said.
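The sampling pitfall the authors describe can be illustrated with a toy simulation (all growth figures below are invented for illustration; they are not the study's data). Measuring only the five tallest stems systematically overstates stand-wide growth compared with a random sample that also captures non-regenerating stems:

```python
import random

random.seed(42)

# Hypothetical stand: 100 young aspen with varying annual height
# growth (cm/yr). These values are invented for illustration only.
stand_growth = [random.gauss(10, 4) for _ in range(100)]

# Some stems failed to regenerate at all (zero growth), the "player
# who always strikes out" that top-only sampling never sees.
stand_growth += [0.0] * 20

# Two sampling strategies: the five tallest vs. a random sample.
top_five = sorted(stand_growth, reverse=True)[:5]
random_sample = random.sample(stand_growth, 20)

top_mean = sum(top_five) / len(top_five)
random_mean = sum(random_sample) / len(random_sample)

print(f"Top-5 mean growth:  {top_mean:.1f} cm/yr")
print(f"Random-sample mean: {random_mean:.1f} cm/yr")
```

With the fixed seed, the top-five mean comes out markedly higher than the random-sample mean, mirroring the overestimate the researchers report for non-random sampling.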

Read more at Science Daily

Anxiety cues found in the brain despite safe environment

Imagine you are in a meadow picking flowers. You know that some flowers are safe, while others have a bee inside that will sting you. How would you react to this environment and, more importantly, how would your brain react? This is the scene in a virtual-reality environment used by researchers to understand the impact anxiety has on the brain and how brain regions interact with one another to shape behavior.

"These findings tell us that anxiety disorders might be more than a lack of awareness of the environment or ignorance of safety, but rather that individuals suffering from an anxiety disorder cannot control their feelings and behavior even if they wanted to," said Benjamin Suarez-Jimenez, Ph.D., assistant professor in the Del Monte Institute for Neuroscience at the University of Rochester and first author of the study published in Communications Biology. "The patients with an anxiety disorder could rationally say -- I'm in a safe space -- but we found their brain was behaving as if it was not."

Watching anxiety in the brain

Using fMRI, the researchers observed the brain activity of volunteers with general and social anxiety as they navigated a virtual reality game of picking flowers. Half of the meadow had flowers without bees; the other half had flowers with bees that would sting them -- as simulated by a mild electrical stimulation to the hand. Researchers found all study participants could distinguish between the safe and dangerous areas; however, brain scans revealed that volunteers with anxiety had increased insula and dorsomedial prefrontal cortex activation -- indicating their brains were associating a known safe area with danger or threat.

"This is the first time we've looked at discrimination learning in this way. We know what brain areas to look at, but this is the first time we show this concert of activity in such a complex 'real-world-like' environment," said Suarez-Jimenez. "These findings point towards the need for treatments that focus on helping patients take back control of their body."

The brain differences were the only differences seen in these patients. For example, sweat responses -- a proxy for anxiety that was also measured -- failed to reveal any clear differences.

Suarez-Jimenez's research

Understanding the neural mechanisms by which the brain learns about the environment is the focus of Suarez-Jimenez's research, particularly how the brain predicts what is threatening and what is safe. He uses virtual reality environments to investigate neural signatures of anxiety disorders and post-traumatic stress disorder (PTSD). His goal is to understand how people build maps in the brain that are based on experience, and the role of those maps in psychopathologies of stress and anxiety.

Read more at Science Daily

Nov 14, 2021

Amazon Rainforest birds’ bodies transform due to climate change

The most pristine parts of the Amazon rainforest devoid of direct human contact are being impacted by human-induced climate change, according to new research by LSU scientists. New analyses of data collected over the past four decades show that not only has the number of sensitive resident birds throughout the Amazon rainforest declined, but the body size and wing length have changed for most studied species. These physical changes in the birds track increasingly hot and dry conditions in the dry season, from June to November.

"Even in the middle of this pristine Amazon rainforest, we are seeing the global effects of climate change caused by people, including us," said Vitek Jirinec, LSU alumnus (Ph.D. '21), associate ecologist at the Integral Ecology Research Center and lead author of this study published in the journal Science Advances.

Birds in the Amazon rainforest have become smaller and their wings have become longer over several generations, indicating a response to the shifting environmental conditions that may include new physiological or nutritional challenges.

This is the first study to document these changes in the body size and shape of non-migratory birds, which eliminates factors related to migration that might otherwise have influenced these physical changes. Jirinec and colleagues studied data collected on more than 15,000 individual birds that were captured, measured, weighed, marked with a leg band and released over 40 years of field work in the world's largest rainforest. The data reveal that nearly all of the birds' bodies have reduced in mass, or become lighter, since the 1980s. Most of the bird species lost on average about 2 percent of their body weight every decade. For an average bird species that weighed about 30 grams in the 1980s, the population now averages about 27.6 grams. How significant is this?
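As a quick arithmetic check on the figures quoted above (30 grams, roughly 2 percent lost per decade over four decades of field work), compounding the loss gives:

```python
# Back-of-the-envelope check of the reported decline: a loss of
# about 2% of body mass per decade, compounded over four decades.
initial_mass_g = 30.0   # average mass in the 1980s, per the article
decadal_loss = 0.02     # ~2% per decade, per the article

mass = initial_mass_g
for _ in range(4):      # four decades of data
    mass *= 1 - decadal_loss

print(f"{mass:.1f} g")  # ~27.7 g
```

Compounding yields about 27.7 g; applying the 2 percent per decade linearly (30 × (1 − 0.08)) gives exactly 27.6 g, consistent with the article's rounded figure either way.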

"These birds don't vary that much in size. They are fairly fine-tuned, so when everyone in the population is a couple of grams smaller, it's significant," said co-author Philip Stouffer, who is the Lee F. Mason Professor in the LSU School of Renewable Natural Resources.

The data set covers a large range of the rainforest so the changes in the birds' bodies and wings across communities are not tied to one specific site, which means that the phenomenon is pervasive.

"This is undoubtedly happening all over and probably not just with birds," Stouffer said. "If you look out your window, and consider what you're seeing out there, the conditions are not what they were 40 years ago and it's very likely plants and animals are responding to those changes as well. We have this idea that the things we see are fixed in time, but if these birds aren't fixed in time, that may not be true."

The scientists investigated 77 species of rainforest birds that live from the cool, dark forest floor to the warmer, sunlit midstory. They discovered that the birds residing in the highest section of the midstory, which are the most exposed to heat and drier conditions, had the most dramatic changes in body weight and wing size. These birds also tend to fly more than the birds that live on the forest floor. The idea is that these birds have adapted to a hotter, drier climate by reducing their wing loading, thereby becoming more energy efficient in flight. Think of a fighter jet with a heavy body and short wings that requires a lot of energy to fly fast compared to a glider plane with a slim body and long wings that can soar with less energy. If a bird has a higher wing loading, it needs to flap its wings faster to stay aloft, which requires more energy and produces more metabolic heat. Reducing body weight and increasing wing length leads to more efficient resource use while also keeping cooler in a warming climate.
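The wing-loading argument can be made concrete with a small sketch (the mass and wing-area numbers below are illustrative assumptions, not measurements from the study): wing loading is body mass divided by wing area, so a slightly lighter body on slightly larger wings lowers the loading.

```python
# Wing loading = body mass / wing area. All numbers are illustrative.
mass_g = 30.0
wing_area_cm2 = 100.0

loading_before = mass_g / wing_area_cm2              # 0.300 g/cm^2

# After ~2% mass loss and longer wings (assume ~4% more area),
# matching the study's pattern of smaller bodies and longer wings:
loading_after = (mass_g * 0.98) / (wing_area_cm2 * 1.04)

print(f"{loading_before:.3f} -> {loading_after:.3f} g/cm^2")
```

A lower wing loading means fewer wingbeats to stay aloft, hence less energy spent and less metabolic heat produced -- the efficiency gain described above.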

LSU alumnus Ryan Burner (Ph.D. '19) conducted much of the analysis that revealed the variation among the groups of birds over the years. Burner, who is now a research wildlife biologist at the U.S. Geological Survey Upper Midwest Environmental Sciences Center, is the second author of this study.

The question of the future capacity of Amazonian birds to deal with increasingly hotter and drier surroundings, especially in the dry season, remains unanswered. The same question can be asked for a lot of places and species that live at the edges of even more environmental extremes.

Read more at Science Daily

Climate change will destroy familiar environments, create new ones and undermine efforts to protect sea life

Climate change is altering familiar conditions of the world's oceans and creating new environments that could undermine efforts to protect sea life in the world's largest marine protected areas, new research from Oregon State University shows.

The changing conditions also have cultural and economic implications for the people whose traditions and livelihoods are dependent on ocean resources, said James Watson, an assistant professor in OSU's College of Earth, Ocean, and Atmospheric Sciences and the paper's co-author.

"What we're looking at here is the potential extinction of a whole environment," said Watson, who specializes in marine social-ecological systems and understanding complex adaptive systems. "In some places, the environments we have today are not going to exist in the future. We won't be able to go visit them or experience them. It is an environmental, cultural and economic loss we can't replace."

The researchers' analysis of multiple climate scenarios showed:
 

  • 60% to 87% of the ocean is expected to experience multiple biological and chemical changes, such as increases in water temperature, higher levels of acidity and changes in oxygen levels, by the year 2060.
  • The rate of change is expected to be even higher, 76% to 97%, in very large marine protected areas such as Australia's Great Barrier Reef Marine Park and the Galapagos Marine Reserve in Ecuador.
  • Decreases in pH, the measure of ocean acidity, are expected as soon as 2030. Ocean acidification reduces the amount of carbonate in seawater, which is necessary for marine organisms, such as corals and mollusks like oysters, to develop their shells and skeletons.


The findings were published this week in the journal One Earth. The paper's lead author is Steven Mana'oakamai Johnson, who conducted the research as part of his doctoral dissertation at Oregon State. Johnson, who earned his Ph.D. earlier this year, is now a postdoctoral researcher at Arizona State University.

The concept for the paper was born of conversations between Johnson, a native of Saipan in the Northern Mariana Islands, a U.S. commonwealth in the Western Pacific Ocean, and Watson, a native of England, about what is likely to be lost due to climate change. Among those losses is the disappearance of ocean conditions they each experienced as children.

"All of us have experiences we define as normal under a given set of environmental conditions," said Johnson, who has already witnessed climate change impacts such as a devastating coral bleaching event in Saipan.

"Properties such as temperature, acidity and oxygen levels define what a given part of the ocean looks like. For both James and me, the ocean experience we grew up with and have memories of will likely not exist for our grandchildren."

Using the last 50 years of ocean conditions as a measure of stability, the researchers used several climate models to see how six variables affecting ocean conditions might change as the planet warms. They used three warming scenarios with increasing degrees of severity.

"Our scenarios included likely, unlikely and highly unlikely degrees of warming, all of which are warmer than today," Johnson said. "In all three scenarios, conditions in more than half of the ocean are going to be novel, meaning new and significantly different from what they have been in the last 50 years."
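The "novelty" test described above can be sketched in a few lines. This is a hedged illustration of the general idea, not the paper's actual method or data: synthetic numbers stand in for one of the six variables (here, sea-surface temperature), and a grid cell's future condition counts as novel if it falls outside the range that cell experienced during the 50-year baseline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic baseline: 50 years of annual values for 100 ocean grid cells.
baseline = rng.normal(loc=20.0, scale=1.0, size=(50, 100))

# Synthetic future snapshot: the same cells under a warming scenario
# (a hypothetical ~1.5 degree mean shift with cell-to-cell noise).
future = baseline.mean(axis=0) + rng.normal(loc=1.5, scale=0.5, size=100)

# A cell is "novel" if its future value lies outside its historical range.
lo, hi = baseline.min(axis=0), baseline.max(axis=0)
novel = (future < lo) | (future > hi)

frac_novel = novel.mean()  # fraction of cells with novel conditions
```

In the study itself this comparison is run over multiple variables (temperature, pH, oxygen, and so on) and multiple climate scenarios, which is how figures like "60% to 87% of the ocean" arise.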

Much of the change occurs at the ocean's two extremes: the tropics and the Arctic. The warmest places are seeing conditions never recorded before, and the coldest places, like the Arctic, are no longer as cold as they once were. The researchers also found that most of those changes will occur by 2060, though most of the change in pH, or acidity, is expected much sooner, by the end of the decade.

The change is more pronounced for the very large marine protected areas that are designed to preserve threatened species and rare habitats such as coral reefs around the world. As ocean conditions change, animals in those protected areas are likely to seek other locations that are more favorable for their survival.

"These marine protected areas are an important tool for achieving conservation goals and can take a lot of political and social will to establish and work as intended," Johnson said. "In our analysis, 28 out of 29 of these areas will experience changes in conditions that could undermine conservation goals."

The researchers' findings present a picture of what the future might hold as the planet continues to warm, Johnson said. The research also offers important information to communities, policymakers and managers of protected habitats about how changing ocean conditions might impact them and how they might address those changes.

"For example, tuna thrive in certain ocean conditions. If the ocean gets too warm, the tuna may move to another area," Johnson said. "If your country depends on tuna for food or livelihood, what impact will that have?

"Or if you're a manager of a protected area, and you're protecting a species that is no longer in the area, what do you do?"

This type of forecasting advances how climate change is quantified, Watson said. It also gives people an opportunity to come to terms with the trauma of what is being lost as well as begin to make plans for a future without those resources.

Read more at Science Daily