Nov 8, 2019

Neanderthal extinction linked to human diseases

Growing up in Israel, Gili Greenbaum would give tours of local caves once inhabited by Neanderthals and wonder along with others why our distant cousins abruptly disappeared about 40,000 years ago. Now a scientist at Stanford, Greenbaum thinks he has an answer.

In a new study published in the journal Nature Communications, Greenbaum and his colleagues propose that complex disease transmission patterns can explain not only how modern humans were able to wipe out Neanderthals in Europe and Asia in just a few thousand years but also, perhaps more puzzling, why the end didn't come sooner.

"Our research suggests that diseases may have played a more important role in the extinction of the Neanderthals than previously thought. They may even be the main reason why modern humans are now the only human group left on the planet," said Greenbaum, who is the first author of the study and a postdoctoral researcher in Stanford's Department of Biology.

The slow kill

Archeological evidence suggests that the initial encounter between Eurasian Neanderthals and an upstart new human species that recently strayed out of Africa -- our ancestors -- occurred more than 130,000 years ago in the Eastern Mediterranean in a region known as the Levant.

Yet tens of thousands of years would pass before Neanderthals began disappearing and modern humans expanded beyond the Levant. Why did it take so long?

Employing mathematical models of disease transmission and gene flow, Greenbaum and an international team of collaborators demonstrated how the unique diseases harbored by Neanderthals and modern humans could have created an invisible disease barrier that discouraged forays into enemy territory. Within this narrow contact zone, which was centered in the Levant where first contact took place, Neanderthals and modern humans coexisted in an uneasy equilibrium that lasted tens of millennia.

Ironically, what may have broken the stalemate and ultimately allowed our ancestors to supplant Neanderthals was the coming together of our two species through interbreeding. The hybrid humans born of these unions may have carried immune-related genes from both species, which would have slowly spread through modern human and Neanderthal populations.

As these protective genes spread, the disease burden or consequences of infection within the two groups gradually lifted. Eventually, a tipping point was reached when modern humans acquired enough immunity that they could venture beyond the Levant and deeper into Neanderthal territory with few health consequences.

At this point, other advantages that modern humans may have had over Neanderthals -- such as deadlier weapons or more sophisticated social structures -- could have taken on greater importance. "Once a certain threshold is crossed, disease burden no longer plays a role, and other factors can kick in," Greenbaum said.

Why us?

To understand why modern humans replaced Neanderthals and not the other way around, the researchers modeled what would happen if the suite of tropical diseases our ancestors harbored were deadlier or more numerous than those carried by Neanderthals.

"The hypothesis is that the disease burden of the tropics was larger than the disease burden in temperate regions. An asymmetry of disease burden in the contact zone might have favored modern humans, who arrived there from the tropics," said study co-author Noah Rosenberg, the Stanford Professor of Population Genetics and Society in the School of Humanities and Sciences.

According to the models, even small differences in disease burden between the two groups at the outset would grow over time, eventually giving our ancestors the edge. "It could be that by the time modern humans were almost entirely released from the added burden of Neanderthal diseases, Neanderthals were still very much vulnerable to modern human diseases," Greenbaum said. "Moreover, as modern humans expanded deeper into Eurasia, they would have encountered Neanderthal populations that did not receive any protective immune genes via hybridization."
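
The dynamic the models capture can be sketched in a few lines of code. The toy simulation below uses invented rates and thresholds throughout (it is an illustration of the idea, not the model published in Nature Communications): both groups' disease burdens decay as protective genes spread through interbreeding, and the group that starts with the lighter burden crosses the expansion threshold first.

```python
# Toy sketch of the stalemate-and-tipping-point dynamic. All numbers are
# illustrative assumptions, not values from the published model.

gene_flow = 0.002             # assumed per-generation spread of immune genes
burden_on_humans = 0.6        # assumed burden of Neanderthal diseases on humans
burden_on_neanderthals = 0.8  # assumed heavier burden of tropical human diseases
threshold = 0.05              # burden below which expansion becomes viable

generation = 0
while burden_on_humans >= threshold:
    # Interbreeding spreads protective immune genes through both groups,
    # so each group's burden from the other's pathogens slowly decays.
    burden_on_humans *= 1 - gene_flow
    burden_on_neanderthals *= 1 - gene_flow
    generation += 1

print(f"Generation {generation}: human burden {burden_on_humans:.3f} is below "
      f"threshold, while Neanderthal burden is still {burden_on_neanderthals:.3f}")
```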

The researchers note that the scenario they are proposing is similar to what happened when Europeans arrived in the Americas in the 15th and 16th centuries and decimated indigenous populations with their more potent diseases.

Read more at Science Daily

Unless warming is slowed, emperor penguins will be marching towards extinction

Emperor penguins are some of the most striking and charismatic animals on Earth, but a new study from the Woods Hole Oceanographic Institution (WHOI) has found that a warming climate may render them extinct by the end of this century. The study, part of an international collaboration, was published Nov. 7, 2019, in the journal Global Change Biology.

"If global climate keeps warming at the current rate, we expect emperor penguins in Antarctica to experience an 86 percent decline by the year 2100," says Stephanie Jenouvrier, a seabird ecologist at WHOI and lead author on the paper. "At that point, it is very unlikely for them to bounce back."

The fate of the penguins is largely tied to the fate of sea ice, which the animals use as a home base for breeding and molting, she notes. Emperor penguins tend to build their colonies on ice with extremely specific conditions -- it must be locked in to the shoreline of the Antarctic continent, but close enough to open seawater to give the birds access to food for themselves and their young. As climate warms, however, that sea ice will gradually disappear, robbing the birds of their habitat, food sources, and ability to hatch chicks.

Jenouvrier and her team conducted the study by combining two existing computer models. The first, a global climate model created by the National Center for Atmospheric Research (NCAR), offered projections of where and when sea ice would form under different climate scenarios. The second, a model of the penguin population itself, calculated how colonies might react to changes in that ice habitat.

"We've been developing that penguin model for 10 years," says Jenouvrier. "It can give a very detailed account of how sea ice affects the life cycle of emperor penguins, their reproduction, and their mortality. When we feed the results of the NCAR climate model into it, we can start to see how different global temperature targets may affect the emperor penguin population as a whole."

The researchers ran the model on three different scenarios: a future where global temperature increases by only 1.5 degrees Celsius (the goal set out by the Paris climate accord), one where temperatures increase by 2 degrees Celsius, and one where no action is taken to reduce climate change, leading to a temperature increase of 5 to 6 degrees Celsius.

Under the 1.5 degree scenario, the study found that only 5 percent of sea ice would be lost by 2100, causing a 19 percent drop in the number of penguin colonies. If the planet warms by 2 degrees, however, those numbers increase dramatically: the loss of sea ice nearly triples, and more than a third of existing colonies disappear. The 'business as usual' scenario is even more dire, Jenouvrier adds, with an almost complete loss of the colonies all but assured.

Read more at Science Daily

Study: Actually, potted plants don't improve indoor air quality

Plants can help spruce up a home or office space, but claims about their ability to improve the air quality are vastly overstated, according to research out of Drexel University. A closer look at decades of research suggesting that potted plants can improve the air in homes and offices reveals that natural ventilation far outpaces plants when it comes to cleaning the air.

"This has been a common misconception for some time. Plants are great, but they don't actually clean indoor air quickly enough to have an effect on the air quality of your home or office environment," said Michael Waring, PhD, an associate professor of architectural and environmental engineering in Drexel's College of Engineering.

Waring and one of his doctoral students, Bryan Cummings, reviewed a dozen studies, spanning 30 years of research, to draw their conclusions and recently published findings in the Journal of Exposure Science and Environmental Epidemiology. The central finding is that the natural or ventilation air exchange rates in indoor environments, like homes and offices, dilute concentrations of volatile organic compounds -- the air pollution that plants are allegedly cleaning -- much faster than plants can extract them from the air.

The high-profile experiment that seemed to create the myth of houseplants as air purifiers happened in 1989 when NASA, in search of ways to clean the air on space stations, declared that plants could be used to remove cancer-causing chemicals from the air.

But the problem with this experiment, and others like it, is that they were conducted in a sealed chamber in a lab -- a contained environment that has little in common with a house or office -- and the data from these studies was not interpreted further to reflect what the findings would be if the plant were in a real indoor environment with natural or ventilation air exchange.

"Typical for these studies," the researchers write, "a potted plant was placed in a sealed chamber (often with a volume of a cubic meter or smaller), into which a single VOC was injected, and its decay was tracked over the course of many hours or days."

Waring and Cummings's review takes the data from volumes of potted plant research one step further by using it to calculate a measure called the "clean air delivery rate," or "CADR." They were able to make this calculation for nearly all of the studies, and what they found in every case was that the rate at which plants dissipated VOCs in a chamber was orders of magnitude slower than the standard rate of air exchange in a building -- rendering the plants' overall effect on indoor air quality negligible.

"The CADR is the standard metric used for scientific study of the impacts of air purifiers on indoor environments, but many of the researchers conducting these studies were not looking at them from an environmental engineering perspective and did not understand how building air exchange rates interplay with the plants to affect indoor air quality," Waring said.

Many of these studies did show a reduction in the concentration of volatile organic compounds over time, which is likely why people have seized on them to extol the air purifying virtues of plants. But according to Waring and Cummings's calculations, it would take between 10 and 1,000 plants per square meter of floor space to compete with the air cleaning capacity of a building's air handling system or even just a couple open windows in a house.
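
The arithmetic behind that comparison is easy to reproduce. The sketch below assumes a typical chamber result and a modest office (illustrative values, not figures from the paper) and shows why the required plant counts come out so high:

```python
import math

# Chamber side: suppose a sealed 1 m^3 chamber with one potted plant shows a
# VOC decaying to 80% of its starting concentration over 24 hours (assumed).
chamber_volume = 1.0                        # m^3
decay_rate = -math.log(0.80) / 24           # first-order loss rate, per hour
plant_cadr = chamber_volume * decay_rate    # clean air delivered per plant, m^3/h

# Building side: a small 50 m^3 office with one air change per hour.
ventilation_cadr = 50.0 * 1.0               # m^3/h
floor_area = 20.0                           # m^2, assuming a 2.5 m ceiling

plants_needed = ventilation_cadr / plant_cadr
print(f"Per plant: {plant_cadr:.4f} m^3/h vs ventilation: {ventilation_cadr:.0f} m^3/h")
print(f"Plants needed to match ventilation: {plants_needed:.0f} "
      f"(~{plants_needed / floor_area:.0f} per m^2 of floor)")
```

With these assumed numbers, a single plant delivers well under 0.01 cubic meters of clean air per hour, so thousands of plants (hundreds per square meter of floor) would be needed to rival ordinary ventilation.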

Read more at Science Daily

Mammals' complex spines are linked to high metabolisms; we're learning how they evolved

Mammals' backbones are weird. Compared to other four-legged animals like reptiles, mammal spines are a complex mix of sections of differently-shaped bones. Our Frankenstein's monster backbones are a key component of mammals evolving the ability to move in a bunch of different ways -- compare a cheetah running, a person walking, a bat flying, and a whale swimming. A new study in Nature Communications delves into the nitty-gritty of how mammals' backbones became so complex. The scientists discovered that the process was marked by big, dramatic evolutionary changes, and that it's linked to mammals being active animals with high metabolisms.

"Looking around, the animals and plants that surround us are remarkably complex, but putting a number to that phenomenon is very tricky. With this study, we wanted to take a complex system-the mammal vertebral column-and measure how its complexity changed through time. We show that increases in complexity were discrete steps like rungs on a ladder instead of a smooth increase like a ramp. Adaptations for high activity levels in mammals seem to trigger these jumps in complexity, and they continue to influence its evolution today," says Katrina Jones, the paper's first author and a paleontologist from Harvard's Museum of Comparative Zoology.

"It's basically the story of how weird mammals' backbones are and how they evolved to be like that, starting starting from ancient relatives whose spines were much simpler," says Ken Angielczyk, a paleontologist at the Field Museum and one of the study's authors. "It looks like it's not just a gradual accumulation of little changes over time -- it's more discrete changes. And one of these big changes may be related to changes in how mammals are able to move and breathe that let us be so active."

Angielczyk and his co-authors, Jones and Stephanie Pierce of Harvard's Museum of Comparative Zoology, wanted to figure out how and when mammals and their ancestors first evolved these specialized backbones. They examined fossil backbones from mammal relatives called synapsids that lived between 300 and 200 million years ago and took precise measurements of the bones to determine how the spines were changing over time. They then fed all the data into a computer program that modeled the different ways that the spines might have evolved.

Based on the information from all the fossils, the model showed that the changes in synapsid backbones probably developed in comparatively quick bursts, rather than a super-slow, gradual pathway. Of course, explains Angielczyk, evolution is such a slow process that even quick bursts of evolutionary change can take millions of years. "It looks fast from our mountain-top view of evolution, but if you were one of these animals, it's not like your grandchildren would look totally different from you," he says. Rather, these big leaps really just mean that the evolutionary changes happened more quickly than what you'd expect to see in a totally random system where mutations and changes weren't good or bad, just neutral. Basically, big step-wise jumps in evolution mean that the changes that were happening made a big difference in the organisms' lives, making them better able to survive and pass on their genes.
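
The contrast between those two modes of change can be illustrated with a toy simulation (a conceptual sketch only, not the authors' actual model-fitting): gradual evolution looks like a random walk of small steps, while pulsed evolution is mostly quiet drift punctuated by rare large shifts.

```python
import random

def gradual(steps=1000, sigma=0.02):
    """Brownian-motion-like drift: many small, undirected changes."""
    x, path = 0.0, [0.0]
    for _ in range(steps):
        x += random.gauss(0.0, sigma)
        path.append(x)
    return path

def pulsed(steps=1000, sigma=0.005, jump_prob=0.004, jump_size=0.8):
    """Mostly quiet drift punctuated by rare large shifts -- rungs on a ladder."""
    x, path = 0.0, [0.0]
    for _ in range(steps):
        x += random.gauss(0.0, sigma)
        if random.random() < jump_prob:
            x += jump_size
        path.append(x)
    return path

print(f"gradual endpoint: {gradual()[-1]:+.2f}, pulsed endpoint: {pulsed()[-1]:+.2f}")
```

Fitting both kinds of model to fossil measurements and asking which pattern the data support is, in spirit, the comparison the team ran.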

Increasingly complex spines were such a good thing for mammal ancestors, the researchers argue, because they were part of a suite of changes related to higher activity levels.

Compared to reptiles, modern mammals have very high metabolisms -- we have more chemical reactions happening to keep our bodies going -- and we're more active. In general, mammals can move more efficiently and have more stamina, but those benefits come with a cost: mammals have to breathe more than reptiles do, we have to eat more, and we need fur to keep our bodies warm enough to keep our systems going. "As part of our study, we found that modern mammals with the most complex backbones also usually have the highest activity levels," says Pierce, "and some changes in backbone complexity evolved at about the same time that other features associated with a more active lifestyle evolved, like fur or specialized muscles for breathing."

"The uniqueness of mammalian backbones is something that's been recognized for a long time, and our results show that there's a strong connection between the evolution of our backbones and the evolution of the soft tissues in our muscular and respiratory systems," says Angielczyk.

"We're interested in the big picture of how backbones evolve, and there are these long-standing ideas about it being related to the evolution of mammals' respiration, locomotion, and high acitvity levels," Angielczyk adds. "We're trying to test and refine those hypotheses, and to use them to better understand the broader question of how complexity increases through evolution."

And this big picture of how mammals' spines became complex could help to explain a lot about mammals alive today, including us. "Mammals kind of do their own thing," says Angielczyk. "If you look at mammals today, we have lots of weird features in our metabolism and our bodies and reproductive strategies. It would be really confusing to figure out how they evolved if you were only looking at modern mammals. But we have a really good fossil record of early mammal relatives, and that can help us understand the history of many of these very unusual traits."

Read more at Science Daily

Nov 7, 2019

Strained family relations and worsening of chronic health conditions

Strained relationships with parents, siblings or extended family members may be more harmful to people's health than a troubled relationship with a significant other, according to a study published by the American Psychological Association.

"We found that family emotional climate had a big effect on overall health, including the development or worsening of chronic conditions such as stroke and headaches over the 20-year span of midlife," said Sarah B. Woods, PhD, assistant professor of family and community medicine at UT Southwestern Medical Center and lead author of the study. "Contrary to previous research, which found that intimate relationships had a large effect on physical health, we did not get the same results."

The study was published in the Journal of Family Psychology.

"Most often, researchers focus on romantic relationships, especially marriage, presuming they likely have more of a powerful effect on heath," Woods said. "Given changes in how Americans are partnering, waiting longer to marry, if at all, and the lengthier, and possibly more emotion-laden trajectories of family-of-origin relationships, we wanted to compare the strength of associations between family and intimate partners and health over time."

The researchers used data from 2,802 participants in the Midlife Development in the U.S. survey that included a nationally representative sample of adults from 1995 to 2014. Three rounds of data were collected -- in 1995 to 1996, 2004 to 2006 and 2013 to 2014. The average participant was 45 years old during the first round.

The survey asked questions about family strain (e.g., "Not including your spouse or partner, how often do members of your family criticize you?") and family support (e.g., "How much can you rely on [your family] for help if you have a serious problem?") as well as intimate partner strain (e.g., "How often does your spouse or partner argue with you?") and support (e.g., "How much does your spouse or partner appreciate you?").

Health was measured using participants' total number of chronic conditions, such as stroke, headaches and stomach trouble, experienced in the 12 months prior to each of the three data collection times.

Participants also rated their overall health from excellent to poor at each round.

The researchers found that greater family relationship strain was associated with a greater number of chronic conditions and worse health appraisal 10 years later, during the second and third rounds of data collection.

"Comparatively, we found that greater family support during the second round of data collection in 2004 to 2006 was associated with better health appraisal 10 years later," said Jacob B. Priest, PhD, assistant professor of education at the University of Iowa and co-author of the study.

There were no significant effects of intimate partner relationships on health outcomes.

"We were honestly stunned that there were zero associations between intimate partner emotional climate and later health," Woods said.

She and her co-authors theorize that the lack of significant associations between intimate partner relationships and later health could be because those relationships can break up, whereas people are more likely to have longer associations with family members who aren't a spouse.

"The vast majority of the people in the study had living parents or siblings and thus, their relationship with a spouse or intimate partner was less likely to be as long as that of their family members," said Patricia N.E. Roberson, PhD, assistant professor of nursing of the University of Tennessee, Knoxville and co-author of the study. "Therefore, the emotional intensity of these relationships may be greater, so much so that people experience more of an effect on their health and well-being."

Woods and her colleagues said their findings show why physical and mental health care providers should consider family relationships when assessing and treating patients.

Read more at Science Daily

Physical activity linked to lower risk of fracture

Regular physical activity, including lighter intensity activities such as walking, is associated with reduced risk of hip and total fracture in postmenopausal women, according to new research from the University at Buffalo.

Published Oct. 25 in JAMA Network Open, the study is the most comprehensive evaluation of physical activity and fracture incidence in older women.

The study included more than 77,000 participants in the Women's Health Initiative, who were followed up over 14 years. During follow-up, 33% of participants reported experiencing at least one fracture.

The women who did the highest amount of physical activity -- which was approximately 35 minutes or more of daily recreational and household activities -- had an 18% lower risk of hip fracture and 6% lower risk of total fracture.

The study is one more among several papers -- all using data from the Women's Health Initiative -- published by UB researchers within the past few years that highlight the health benefits of being active, even at levels that are lower than the current physical activity guidelines.

"These findings provide evidence that fracture reduction is among the many positive attributes of regular physical activity in older women," said Jean Wactawski-Wende, PhD, study co-author and dean of the University at Buffalo School of Public Health and Health Professions.

"Fracture is very common in postmenopausal women, and is associated with loss of independence, physical limitations and increased mortality," Wactawski-Wende said.

In fact, the researchers note, approximately 1.5 million fractures occur in U.S. women each year, creating $12.7 billion in health care costs. About 14% of these fractures are in the hip. Mortality after a hip fracture is as high as 20%.

"Modest activities, including walking, can significantly reduce the risk of fracture, which can, in turn, lower the risk of death," Wactawski-Wende said.

Non-recreation physical activity -- examples include yardwork and household chores such as sweeping the floors or folding laundry -- also was inversely associated with several types of fracture.

The research has important implications for public health, considering that these lighter intensity activities are common among older adults.

The main message, says study first author Michael LaMonte, PhD, research associate professor of epidemiology and environmental health at UB, is "sit less, move more, and every movement counts."

Read more at Science Daily

Carbon dioxide capture and use could become big business

Capturing carbon dioxide and turning it into commercial products, such as fuels or construction materials, could become a new global industry, according to a study by researchers from UCLA, the University of Oxford and five other institutions.

Should that happen, the phenomenon would help the environment by reducing greenhouse gas emissions.

The research, published in Nature, is the most comprehensive study to date investigating the potential future scale and cost of 10 different ways to use carbon dioxide, including in fuels and chemicals, plastics, building materials, soil management and forestry. The study considered processes using carbon dioxide captured from waste gases that are produced by burning fossil fuels or from the atmosphere by an industrial process.

And in a step beyond most previous research on the subject, the authors also considered processes that use carbon dioxide captured biologically by photosynthesis.

The research found that on average each utilization pathway could use around 0.5 gigatonnes of carbon dioxide per year that would otherwise escape into the atmosphere. (A tonne, or metric ton, is equivalent to 1,000 kilograms, and a gigatonne is 1 billion tonnes, or about 1.1 billion U.S. tons.)

A top-end scenario could see more than 10 gigatonnes of carbon dioxide a year used, at a theoretical cost of under $100 per tonne of carbon dioxide. The researchers noted, however, that the potential scales and costs of using carbon dioxide varied substantially across sectors.

"The analysis we presented makes clear that carbon dioxide utilization can be part of the solution to combat climate change, but only if those with the power to make decisions at every level of government and finance commit to changing policies and providing market incentives across multiple sectors," said Emily Carter, a distinguished professor of chemical and biomolecular engineering at the UCLA Samueli School of Engineering and a co-author of the paper. "The urgency is huge and we have little time left to effect change."

According to the Intergovernmental Panel on Climate Change, keeping global warming to 1.5 degrees Celsius over the rest of the 21st century will require the removal of carbon dioxide from the atmosphere on the order of 100 to 1,000 gigatonnes of carbon dioxide. Currently, fossil carbon dioxide emissions are increasing by over 1% annually, reaching a record high of 37 gigatonnes of carbon dioxide in 2018.

"Greenhouse gas removal is essential to achieve net zero carbon emissions and stabilise the climate," said Cameron Hepburn, one of the study's lead authors, director of Oxford's Smith School of Enterprise and Environment. "We haven't reduced our emissions fast enough, so now we also need to start pulling carbon dioxide out of the atmosphere. Governments and corporations are moving on this, but not quickly enough.

"The promise of carbon dioxide utilization is that it could act as an incentive for carbon dioxide removal and could reduce emissions by displacing fossil fuels."

Critical to the success of these new technologies as mitigation strategies will be a careful analysis of their overall impact on the climate. Some are likely to be adopted quickly simply because of their attractive business models. For example, in certain kinds of plastic production, using carbon dioxide as a feedstock is a more profitable and environmentally cleaner production process than using conventional hydrocarbons, and it can displace up to three times as much carbon dioxide as it uses.

Biological uses might also present opportunities to reap co-benefits. In other areas, utilization could provide a "better choice" alternative during the global decarbonization process. One example might be the use of fuels derived from carbon dioxide, which could find a role in sectors that are harder to decarbonize, such as aviation.

The authors stressed that there is no "magic bullet" approach.

Read more at Science Daily

Go with the flow: Scientists design new grid batteries for renewable energy

How do you store renewable energy so it's there when you need it, even when the sun isn't shining or the wind isn't blowing? Giant batteries designed for the electrical grid -- called flow batteries, which store electricity in tanks of liquid electrolyte -- could be the answer, but so far utilities have yet to find a cost-effective battery that can reliably power thousands of homes throughout a lifecycle of 10 to 20 years.

Now, a battery membrane technology developed by researchers at the U.S. Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) may point to a solution.

As reported in the journal Joule, the researchers developed a versatile yet affordable battery membrane -- from a class of polymers known as AquaPIMs. This class of polymers makes long-lasting and low-cost grid batteries possible based solely on readily available materials such as zinc, iron, and water. The team also developed a simple model showing how different battery membranes impact the lifetime of the battery, which is expected to accelerate early stage R&D for flow-battery technologies, particularly in the search for a suitable membrane for different battery chemistries.

"Our AquaPIM membrane technology is well-positioned to accelerate the path to market for flow batteries that use scalable, low-cost, water-based chemistries," said Brett Helms, a principal investigator in the Joint Center for Energy Storage Research (JCESR) and staff scientist at Berkeley Lab's Molecular Foundry who led the study. "By using our technology and accompanying empirical models for battery performance and lifetime, other researchers will be able to quickly evaluate the readiness of each component that goes into the battery, from the membrane to the charge-storing materials. This should save time and resources for researchers and product developers alike."

Most grid battery chemistries have highly alkaline (or basic) electrodes -- a positively charged cathode on one side, and a negatively charged anode on the other side. Current state-of-the-art membranes, however, are designed for acidic chemistries, such as the fluorinated membranes found in fuel cells, not for alkaline flow batteries. (In chemistry, pH is a measure of the hydrogen ion concentration of a solution. Pure water has a pH of 7 and is considered neutral. Acidic solutions have a high concentration of hydrogen ions, and are described as having a low pH, or a pH below 7. On the other hand, alkaline solutions have low concentrations of hydrogen ions and therefore have a high pH, or a pH above 7. In alkaline batteries, the pH can be as high as 14 or 15.)

Fluorinated polymer membranes are also expensive. According to Helms, they can make up 15% to 20% of the battery's cost, which can run in the range of $300/kWh.

One way to drive down the cost of flow batteries is to eliminate the fluorinated polymer membranes altogether and come up with a high-performing yet cheaper alternative such as AquaPIMs, said Miranda Baran, a graduate student researcher in Helms' research group and the study's lead author. Baran is also a Ph.D. student in the Department of Chemistry at UC Berkeley.

Getting back to basics

Helms and co-authors discovered the AquaPIM technology -- which stands for "aqueous-compatible polymers of intrinsic microporosity" -- while developing polymer membranes for aqueous alkaline (or basic) systems as part of a collaboration with co-author Yet-Ming Chiang, a principal investigator in JCESR and Kyocera Professor of Materials Science and Engineering at the Massachusetts Institute of Technology (MIT).

Through these early experiments, the researchers learned that membranes modified with an exotic chemical called an "amidoxime" allowed ions to quickly travel between the anode and cathode.

Later, while evaluating AquaPIM membrane performance and compatibility with different grid battery chemistries -- for example, one experimental setup used zinc as the anode and an iron-based compound as the cathode -- the researchers discovered that AquaPIM membranes lead to remarkably stable alkaline cells.

In addition, they found that the AquaPIM prototypes retained the integrity of the charge-storing materials in the cathode as well as in the anode. When the researchers characterized the membranes at Berkeley Lab's Advanced Light Source (ALS), they found that these characteristics were universal across AquaPIM variants.

Baran and her collaborators then tested how an AquaPIM membrane would perform with an aqueous alkaline electrolyte. In this experiment, they discovered that under alkaline conditions, polymer-bound amidoximes are stable -- a surprising result considering that organic materials are not typically stable at high pH.

Such stability prevented the AquaPIM membrane pores from collapsing, thus allowing them to stay conductive without any loss in performance over time, whereas the pores of a commercial fluoro-polymer membrane collapsed as expected, to the detriment of its ion transport properties, Helms explained.

This behavior was further corroborated with theoretical studies by Artem Baskin, a postdoctoral researcher working with David Prendergast, who is the acting director of Berkeley Lab's Molecular Foundry and a principal investigator in JCESR along with Chiang and Helms.

Baskin simulated structures of AquaPIM membranes using computational resources at Berkeley Lab's National Energy Research Scientific Computing Center (NERSC) and found that the structure of the polymers making up the membrane was significantly resistant to pore collapse under highly basic conditions in alkaline electrolytes.

A screen test for better batteries

While evaluating AquaPIM membrane performance and compatibility with different grid battery chemistries, the researchers developed a model that tied the performance of the battery to the performance of various membranes. This model could predict the lifetime and efficiency of a flow battery without having to build an entire device. They also showed that similar models could be applied to other battery chemistries and their membranes.

"Typically, you'd have to wait weeks if not months to figure out how long a battery will last after assembling the entire cell. By using a simple and quick membrane screen, you could cut that down to a few hours or days," Helms said.

Read more at Science Daily

New model for the way humans localize sounds

One of the enduring puzzles of hearing loss is the decline in a person's ability to determine where a sound originates, a key survival faculty that allows animals -- from lizards to humans -- to pinpoint the location of danger, prey and group members. In modern times, finding a lost cell phone with the application "Find My Device," only to discover it had slipped under a sofa pillow, relies on minute differences in the ringing sound that reaches each ear.

Unlike other sensory perceptions, such as feeling where raindrops hit the skin or being able to distinguish high notes from low on the piano, the direction of sounds must be computed; the brain estimates them by processing the difference in arrival time across the two ears, the so-called interaural time difference (ITD). A longstanding consensus among biomedical engineers is that humans localize sounds with a scheme akin to a spatial map or compass, with neurons aligned from left to right that fire individually when activated by a sound coming from a given angle -- say, at 30 degrees leftward from the center of the head.
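
The ITD itself is simple geometry: sound from an off-center source reaches the far ear slightly later than the near one. A common textbook approximation (used here for illustration; the paper's analyses are more involved) is ITD ≈ (d / c) · sin θ, for ear separation d, speed of sound c and source angle θ:

```python
import math

def itd_seconds(angle_deg, ear_separation_m=0.18, speed_of_sound_m_s=343.0):
    """Approximate interaural time difference for a distant source at
    angle_deg from straight ahead (0 = center, 90 = directly to one side)."""
    return (ear_separation_m / speed_of_sound_m_s) * math.sin(math.radians(angle_deg))

# A source 30 degrees off-center arrives at the far ear ~260 microseconds late.
print(f"{itd_seconds(30) * 1e6:.0f} microseconds")
```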

But in research published this month in the journal eLife, Antje Ihlefeld, director of NJIT's Neural Engineering for Speech and Hearing Laboratory, is proposing a different model based on a more dynamic neural code. The discovery offers new hope, she says, that engineers may one day devise hearing aids, now notoriously poor in restoring sound direction, to correct this deficit.

"If there is a static map in the brain that degrades and can't be fixed, that presents a daunting hurdle. It means people likely can't "relearn" to localize sounds well. But if this perceptual capability is based on a dynamic neural code, it gives us more hope of retraining peoples' brains," Ihlefeld notes. "We would program hearing aids and cochlear implants not just to compensate for an individual's hearing loss, but also based upon how well that person could adapt to using cues from their devices. This is particularly important for situations with background sound, where no hearing device can currently restore the ability to single out the target sound. We know that providing cues to restore sound direction would really help."

What led her to this conclusion is a journey of scholarly detective work that began with a conversation with Robert Shapley, an eminent neurophysiologist at NYU who remarked on a peculiarity of human binocular depth perception -- the ability to determine how far away a visual object is -- that also depends on a computation comparing input received by both eyes. Shapley noted that these distance estimates are systematically less accurate for low-contrast stimuli (images that are more difficult to distinguish from their surroundings) than for high-contrast ones.

Ihlefeld and Shapley wondered if the same neural principle applied to sound localization: whether it is less accurate for softer sounds than for louder ones. But this would depart from the prevailing spatial map theory, known as the Jeffress model, which holds that sounds of all volumes are processed -- and therefore perceived -- the same way. Physiologists, who propose that mammals rely on a more dynamic neural model, have long disagreed with it. They hold that mammalian neurons tend to fire at different rates depending on directional signals and that the brain then compares these rates across sets of neurons to dynamically build up a map of the sound environment.

"The challenge in proving or disproving these theories is that we can't look directly at the neural code for these perceptions because the relevant neurons are located in the human brainstem, so we cannot obtain high-resolution images of them," she says. "But we had a hunch that the two models would give different sound location predictions at a very low volume."

They searched the literature for evidence and found only two papers that had recorded from neural tissue at these low sounds. One study was in barn owls -- a species thought to rely on the Jeffress model, based on high-resolution recordings in the birds' brain tissue -- and the other study was in a mammal, the rhesus macaque, an animal thought to use dynamic rate coding. They then carefully reconstructed the firing properties of the neurons recorded in these old studies and used their reconstructions to estimate sound direction both as a function of ITD and volume.

"We expected that for the barn owl data, it really should not matter how loud a source is -- the predicted sound direction should be really accurate no matter the sound volume -- and we were able to confirm that. However, what we found for the monkey data is that predicted sound direction depended on both ITD and volume," she said. "We then searched the human literature for studies on perceived sound direction as a function of ITD, which was also thought not to depend on volume, but surprisingly found no evidence to back up this long-held belief."

She and her graduate student, Nima Alamatsaz, then enlisted volunteers on the NJIT campus to test their hypothesis, using sounds to test how volume affects where people think a sound emerges.

"We built an extremely quiet, sound-shielded room with specialized calibrated equipment that allowed us to present sounds with high precision to our volunteers and record where they perceived the sound to originate. And sure enough, people misidentified the softer sounds," notes Alamatsaz.

"To date, we are unable to describe sound localization computations in the brain precisely," adds Ihlefeld. "However, the current results are inconsistent with the notion that the human brain relies on a Jeffress-like computation. Instead, we seem to rely on a slightly less accurate mechanism.

More broadly, the researchers say, their studies point to direct parallels in hearing and visual perception that have been overlooked before now and that suggest that rate-based coding is a basic underlying operation when computing spatial dimensions from two sensory inputs.

Read more at Science Daily

Nov 6, 2019

Exceptional fossils may need a breath of air to form

Some of the world's most exquisite fossil beds were formed millions of years ago during time periods when the Earth's oceans were largely without oxygen.

That association has led paleontologists to believe that the world's best-preserved fossil collections come from choked oceans. But research led by The University of Texas at Austin has found that while low oxygen environments set the stage, it takes a breath of air to catalyze the fossilization process.

"The traditional thinking about these exceptionally preserved fossil sites is wrong," said lead author Drew Muscente. "It is not the absence of oxygen that allows them to be preserved and fossilized. It is the presence of oxygen under the right circumstances."

The research was published in the journal PALAIOS on November 5.

Muscente conducted the research during a postdoctoral research fellowship at the UT Jackson School of Geosciences. He is currently an assistant professor at Cornell College in Mount Vernon, Iowa. The research co-authors are Jackson School Assistant Professor Rowan Martindale, Jackson School undergraduate students Brooke Bogan and Abby Creighton and University of Missouri Associate Professor James Schiffbauer.

The best-preserved fossil deposits are called "Konservat-lagerstätten." They are rare and scientifically valuable because they preserve soft tissues along with hard ones -- which in turn, preserves a greater variety of life from ancient ecosystems.

"When you look at lagerstätten, what's so interesting about them is everybody is there," said Bogan. "You get a more complete picture of the animal and the environment, and those living in it."

The research examined the fossilization history of an exceptional fossil site located at Ya Ha Tinda Ranch in Canada's Banff National Park. The site, which Martindale described in a 2017 paper, is known for its cache of delicate marine specimens from the Early Jurassic -- such as lobsters and vampire squids with their ink sacs still intact -- preserved in slabs of black shale.

During the time of fossilization, about 183 million years ago, high global temperatures sapped oxygen from the oceans. To determine if the fossils did indeed form in an oxygen-deprived environment, the team analyzed minerals in the fossils. Since different minerals form under different chemical conditions, the research could determine if oxygen was present or not.

"The cool thing about this work is that we can now understand the modes of formation of these different minerals as this organism fossilizes," Martindale said. "A particular pathway can tell you about the oxygen conditions."

The analysis involved using a scanning electron microscope to detect the mineral makeup.

"You pick points of interest that you think might tell you something about the composition," said Creighton, who analyzed a number of specimens. "From there you can correlate to the specific minerals."

The workup revealed that the vast majority of the fossils are made of apatite -- a phosphate-based mineral that needs oxygen to form. However, the research also found that the climatic conditions of a low-oxygen environment helped set the stage for fossilization once oxygen became available.

That's because periods of low ocean oxygen are linked to high global temperatures that raise sea levels and erode rock, which is a rich source of phosphate to help form fossils. If the low oxygen environment persisted, this sediment would simply release its phosphate into the ocean. But with oxygen around, the phosphate stays in the sediment where it could start the fossilization process.

Muscente said that the apatite fossils of Ya Ha Tinda point to this mechanism.

The research team does not know the source of the oxygen. But Muscente wasn't surprised to find evidence for it because the organisms that were fossilized would have needed to breathe oxygen when they were alive.

Read more at Science Daily

Satellite tracking shows how ships affect clouds and climate

By matching the movement of ships to the changes in clouds caused by their emissions, researchers have shown how strongly the two are connected.

When ships burn fossil fuels, they release airborne particles containing various naturally occurring chemicals, including sulphur. These particles are known to modify certain types of clouds, which can affect climate.

Better knowledge of how these particles, and particularly the sulphur components, affect clouds could help scientists create more accurate climate models.

In the latest study, satellite tracking was also used to show the impact of restrictions on sulphur in fuels, revealing the impact of ships on clouds largely disappears in restricted zones.

This information can be used to build a relationship between cloud properties and the sulphur content of shipping fuels. Importantly, this could help shipping companies monitor compliance with sulphur regulations that come into force on 1 January 2020.

The study, published today in Geophysical Research Letters, was led by researchers from Imperial College London, together with University College London and the University of Oxford.

Emissions from ships contain several chemicals, including sulphate aerosols -- small particles of sulphur and oxygen. The aerosols can act as 'seeds' around which water droplets accumulate, causing changes in cloud properties that are visible to satellites.

This means that ships can change clouds, leaving lines -- known as ship tracks -- in the clouds behind them as they sail.

However, exactly how these aerosols impact the properties of the clouds is not precisely known. This knowledge is important because the kinds of clouds that the emissions affect can influence climate warming, making them important to capture in climate models.

Aerosols are emitted from many sources, such as factories and cars, but it has been difficult to match these outputs with their influence on clouds, as there are many other factors at play.

However, with ship tracks, the relationship is more straightforward, enabling researchers to tease out the links between aerosols and clouds more easily.

Lead researcher Dr Edward Gryspeerdt, from the Department of Physics at Imperial, said: "Ship tracks act like an experiment that would be impossible for us to do otherwise -- we cannot inject sulphate aerosols into the atmosphere at such scale to see what happens.

"Instead, restrictions on the amount of ship sulphate emissions can contain provide us with a perfect experiment for determining just how important the aerosols are in cloud formation. By analysing a huge dataset of ship tracks observed from satellites, we can see that they largely disappear when restrictions are introduced, demonstrating the strong impact of aerosols."

The team studied more than 17,000 ship tracks from satellite observations and matched them to the movements of individual ships using their onboard GPS.
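
Conceptually, the matching works because a track forms downwind of where the ship was some time earlier. A minimal sketch of that idea (hypothetical data structures and numbers; the study's actual pipeline is more sophisticated) advects each ship's past GPS position by the wind and picks the closest candidate:

```python
# Toy ship-to-track matcher: advect each ship's earlier position by the wind,
# then choose the ship that lands nearest the observed track head.

def advect(lon, lat, wind_u, wind_v, hours):
    """Crude flat-earth advection of a position by wind components (m/s)."""
    deg_per_m = 1.0 / 111_000   # rough degrees of lat/lon per meter
    return (lon + wind_u * 3600 * hours * deg_per_m,
            lat + wind_v * 3600 * hours * deg_per_m)

def best_match(track_head, ships, wind_u, wind_v, lag_hours=2):
    """Return the ship whose wind-advected position is closest to the track head."""
    tx, ty = track_head
    def distance_sq(ship):
        sx, sy = advect(ship["lon"], ship["lat"], wind_u, wind_v, lag_hours)
        return (sx - tx) ** 2 + (sy - ty) ** 2
    return min(ships, key=distance_sq)

ships = [{"name": "A", "lon": -40.0, "lat": 30.0},
         {"name": "B", "lon": -41.5, "lat": 30.2}]
print(best_match((-39.7, 30.1), ships, wind_u=5.0, wind_v=1.0)["name"])  # -> A
```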

The study period covered the introduction of emission control areas around the coast of North America, the North Sea, the Baltic Sea and the English Channel, which restricted sulphur in ship fuel to 0.5 percent, leading to fewer sulphate aerosol emissions.

The researchers found that in these areas, ship tracks nearly completely disappeared compared to before the restrictions, under similar weather conditions.

This shows that sulphate aerosols have the most significant impact on cloud formation, as opposed to other components of the ship exhaust, such as black carbon.

The result also means that a ship not in compliance with the regulations, by burning the current high-sulphur fuels without exhaust treatment, could be detected because it would create a measurable difference in the satellite-observed cloud properties.

Co-author Dr Tristan Smith, from UCL's Energy Institute, said: "Currently, it is hard for regulators to know what ships are doing in the middle of the ocean. The potential for undetected non-compliance with the 2020 sulphur regulations is a real risk for shipping companies because it can create commercial advantage to those companies who do not comply.

"This study shows that science and technology are producing significant advancements in the transparency of shipping, and helping to reduce risks and unfairness for responsible operators."

Read more at Science Daily

Childhood chores not related to self-control development

Although assigning household chores is considered an essential component of child-rearing, it turns out they might not help improve children's self-control, a coveted personality trait that allows people to suppress inappropriate impulses, focus their attention and perform an action when there is a strong tendency to avoid it. That's the finding of a new study published in the Journal of Research in Personality by Rodica Damian, assistant professor of psychology at the University of Houston, in collaboration with Olivia Atherton, Katherine Lawson and Richard Robins of the University of California, Davis.

Damian examined data from the UC Davis California Families Project, a 10-year longitudinal study of Mexican-origin youth assessed at ages 10, 12, 14, 16 and 19, in which self-control was reported by the children and parents separately. Damian's team examined whether household chores and self-control co-developed from ages 10 to 16.

"We found no evidence of co-developmental associations between chores and effortful or self-control, with four out of four of our hypotheses receiving no empirical support," said Damian, who admits it was not the finding they expected. "These null effects were surprising given the strong lay conceptions and theoretical basis for our predictions." Still, she said, she would not use the results to discourage childhood chores.

"Maybe chores don't matter for personality development, but they still predict future chore behavior," said Damian. "It is a stable habit and having a tidy home is not something to ignore."

Previous research indicated that doing more homework was related to an increase in conscientiousness, a personality trait similar to self-control, prompting Damian to question whether household chores would have a similar effect on personality development. Despite recent advances in understanding the origin of self-control, no known research existed investigating the co-development of chores and self-control.

Damian and colleagues also explored a matter unrelated to household chores -- whether initial levels of self-control at age 10, along with improved levels from age 10 to 16, predicted better work outcomes in young adulthood.

In this case the answer was yes. Both initial levels of self-control and increases in self-control predicted positive future job outcomes.

"We found that children who had higher self-control at age 10 had less job stress and better job fit nine years later. Additionally, children whose self-control showed positive changes from age 10 to 16 (regardless of their initial self-control level at age 10) had higher job satisfaction and job autonomy nine years later," said Damian. This is only the third study ever to examine whether changes in self-control predict better job outcomes.

Read more at Science Daily

'I knew that was going to happen:' Déjà vu and the 'postdictive' bias

For many, déjà vu is just a fleeting, eerie sensation that "I've been here before." For others, it gets even eerier: In that moment of unsettling familiarity, they also feel certain they know what's going to happen next -- like, a girl in a white shirt is going to pass me on the left.

And when the girl in the white shirt really does pass by, well, what can explain it? Cue theories of past lives, clairvoyance, and the supernatural.

Not so fast, says Anne Cleary, a memory researcher at Colorado State University who is one of the world's experts on déjà vu. A dogged scientist who uses laboratory experiments to induce déjà vu in human subjects, Cleary has a new theory on why déjà vu is accompanied not only by feelings of prediction, but also an "I knew that was going to happen" feeling a minute later.

Cleary's most recent déjà vu experiments, published in Psychonomic Bulletin & Review, document evidence of such a "postdictive" bias in déjà vu experiencers in the lab, and offer a plausible explanation for why it happens.

Prior experiments had uncovered a strong predictive bias in people having déjà vu -- that they feel like they know what's going to happen next. But in the lab, people who were having déjà vu were not able to actually predict what was going to happen next. That predictive feeling, however intense, was just that -- a feeling.

"If this is an illusion -- just a feeling -- why do people so strongly believe they actually predicted what unfolded next?" said Cleary, a professor in the CSU Department of Psychology. "I wondered if there was an explanation in some sort of cognitive illusion."

To test that theory in the lab, Cleary and co-authors immersed a group of test subjects in a video game-like scene created in the Sims virtual world. Subjects were asked if they were experiencing déjà vu. Next, the virtual scene would turn left or right. Then participants were asked, did the scene unfold the way you expected? In a later experiment, participants were further asked to rate the familiarity of the scene, both before and after the turn.

After crunching their results, the researchers found that when intense feelings of prediction accompanied déjà vu, they were strongly correlated with feelings of "postdiction" -- that the person reported, after the fact, that they knew what particular turn was going to happen. But the experiment was set up so it would be impossible for them to know, because the turns were made at random.

The "I knew that was going to happen" bias was very strong when déjà vu occurred, and especially strong when the scene happened to be rated as very familiar. But, like the feelings of prediction, the feelings of having gotten the prediction right were not rooted in reality. In other words, déjà vu gave the subjects not only predictive feelings, but a strong hindsight bias after the fact.

Cleary's team concluded that the high degree of familiarity that accompanies déjà vu also carries through to the postdictive bias. "If the entire scene feels intensely familiar as it unfolds, that might trick our brains into thinking we got it right after all," Cleary said. "Because it felt so familiar as you were going through it, it felt like you knew all along how it was going to go, even if that could not have been the case."

So the "I knew that was going to happen" bias is probably all part of the illusion of prediction that often accompanies déjà vu, Cleary says. According to her prior experiments, déjà vu is a memory phenomenon in which we're trying to retrieve a memory, but we can't place it -- sort of like the feeling of a word on the tip of your tongue. She has previously demonstrated in the lab that when scenes in the Sims mapped spatially to different scenes that were viewed earlier but forgotten, more instances of déjà vu occur.

Cleary was driven to do experiments probing the postdictive bias because it felt like a missing puzzle piece to her existing theories on why déjà vu tends to be associated with clairvoyance. Since she started studying déjà vu over a decade ago, she's had countless people describe to her their déjà vu experiences, including when they were very sure they'd predicted something without explanation. And it's not just people who believe in the supernatural; many of them are what she calls "trained skeptics" -- even fellow memory researchers -- who report extremely unsettling déjà vu experiences in which they feel like they predicted what was going to happen next.

Read more at Science Daily

Nov 5, 2019

Ancient bone protein reveals which turtles were on the menu in Florida, Caribbean

Thousands of years ago, the inhabitants of modern-day Florida and the Caribbean feasted on sea turtles, leaving behind bones that tell tales of ancient diets and the ocean's past.

An international team of scientists used cutting-edge technology to analyze proteins from these bones to help identify which turtle species people fished from the ocean millennia ago. This can aid modern conservation efforts by helping construct historical baselines for turtle populations, many of which are now endangered, and illuminate long-term trends of human impacts.

The technique, known as collagen fingerprinting, allows scientists to visualize distinct chemical signatures in collagen, the main structural protein in bone, that are often species-specific. This provides a complementary alternative to comparing specimens' physical characteristics and analyzing ancient DNA, two methods that can be unsuccessful for species identification in fragmented archaeological bones found in the tropics.
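
The matching logic at the heart of the technique can be sketched simply: compare the peptide-mass peaks observed in a sample against reference peak lists for candidate species. All masses below are invented for illustration (real collagen fingerprints involve many more markers and careful calibration):

```python
# Toy collagen-fingerprint matcher with made-up peptide masses (Daltons).

reference_peaks = {
    "Chelonia mydas (green turtle)":      {1105.6, 1477.7, 2115.1, 2869.5},
    "Eretmochelys imbricata (hawksbill)": {1105.6, 1463.7, 2131.0, 2869.5},
}

def score(sample, reference, tolerance=0.2):
    """Count reference peaks matched by any sample peak within the tolerance."""
    return sum(any(abs(s - r) <= tolerance for s in sample) for r in reference)

sample_peaks = {1105.7, 1477.6, 2115.2, 2869.4}   # hypothetical bone spectrum
best = max(reference_peaks, key=lambda sp: score(sample_peaks, reference_peaks[sp]))
print(f"Best match: {best}")   # -> Chelonia mydas (green turtle)
```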

Applying collagen fingerprinting to more than 100 turtle samples from archaeological sites up to 2,500 years old, the researchers found that 63% of the collagen-containing bones belonged to green turtles, Chelonia mydas, with smaller numbers of hawksbill turtles, Eretmochelys imbricata, and ridley turtles, Lepidochelys species. Some specimens previously identified as sea turtles from their skeletal features were in fact bones from snapping turtles, terrapins and tortoises.

"This is the first time anyone has obtained species-level information using proteins preserved in archaeological sea turtle bone," said Virginia Harvey, the study's lead author and a doctoral researcher in marine biology and zooarchaeology at the University of Manchester. "Our method has allowed us to unlock ancient data otherwise lost in time to see which species of turtle humans were targeting thousands of years ago in the Caribbean and Florida regions."

Globally, sea turtles have been exploited for millennia for their meat, eggs, shells and other products. Today, they face threats from habitat loss and disturbance, poaching, pollution, climate change and fisheries. Only seven species of sea turtle remain, six of which are classified as vulnerable, endangered or critically endangered. Gaining a historical perspective on how turtle populations have changed through time is a crucial component of conserving them, Harvey said.

One of the research team's initial goals was to discern whether any collagen still survived in ancient turtle bone remains. In an analysis of 130 archaeological turtle samples, the team was able to detect collagen in 88%.

"We were very impressed with the levels of protein preservation in the turtle bones, some of which are thought to be up to 2,500 years old," said study co-author Michelle LeFebvre, assistant curator of South Florida archaeology and ethnography at the Florida Museum of Natural History. "The fact we were then able to use the protein signatures for species identification to better understand these archaeological sites was very exciting."

The team uncovered an unusual chemical signature in a small number of bone samples that could suggest they belong to a different species than those present in oceans today. But when the researchers attempted ancient DNA analysis on them, they found the material was too degraded.

"Where DNA sequencing can often give more accurate information about species identity, this molecule is very fragile and does not always survive too well in ancient samples from hot, humid climates," said study co-author Konstantina Drosou, ancient DNA specialist at the University of Manchester.

In contrast, proteins are present in much higher concentrations and therefore more likely to survive in the archaeological record, Drosou said.

"Proteins are very sturdy molecules," Harvey added. "The absence of preserved DNA in these samples means we have not been able to verify whether they represent a new species of sea turtle or not, but it does show us that our work here is far from complete. There is so much that we can still learn from the turtle remains at these sites and beyond."

Using collagen fingerprinting to correct misidentifications based on physical characteristics was "a nice additional outcome of the study," said Michael Buckley, senior author of the study and senior research fellow at the University of Manchester.

Susan deFrance, study co-author and professor in the University of Florida department of anthropology, said juvenile sea turtles are often misidentified because they are small and may lack the characteristics used to distinguish adult sea turtle bones.

"This is the first time we have been able to look so specifically into the preferred food choices of the site occupants," she said. "At the Florida Gulf Coast site, they captured a lot of juvenile turtles. The positive species-level identifications of these samples could not have been accomplished without this collagen fingerprinting technology."

At the same site, researchers found green turtle remains in both refuse heaps and mounds, but ridley turtle specimens were only found in mounds, suggesting they may have been reserved for feasting rituals, LeFebvre said.

"We knew these ancient people were eating sea turtles, but now we can begin to hone in on which turtles they were eating at particular times," she said. "It's no different than today -- we associate certain foods with certain events. It's how humans roll."

The researchers are also eager to continue to apply collagen fingerprinting to other archaeological museum specimens, many of which have yet to be positively identified to the species level.

Harvey said she hopes the study inspires further research on sea turtles and other vulnerable and endangered animals.

Read more at Science Daily

Jaw-some wombats may be great survivors

Flexible jaws may help wombats better survive in a changing world, letting them adapt to the tougher vegetation expected under climate change and to the new diets they encounter in conservation sanctuaries.

An international study, co-led by The University of Queensland's Dr Vera Weisbecker, has revealed that wombat jaws appear to change in relation to their diets.

"The survival of wombats depends on their ability to chew large amounts of tough plants such as grasses, roots and even bark," Dr Weisbecker said.

"Climate change and drought are thought to make these plants even tougher, which might require further short-term adaptations of the skull.

"Scientists had long suspected that native Australian marsupial mammals were limited in being able to adapt their skull in this way.

"But in good news, our research has contradicted this idea."

The team used a technique known as geometric morphometrics -- the study of how shapes vary -- to characterise skull shape variation within three different species of wombat, with each species having a slightly different diet.

The data were collected with computed tomography -- known to most as CT scanning -- and analysed with new computational techniques developed by UQ's Dr Thomas Guillerme.
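
As a rough sketch of that workflow, the snippet below aligns landmark configurations with Procrustes superimposition and summarizes shape variation with PCA. The landmark coordinates are random stand-ins, and pairwise alignment to one reference is a simplification of the generalized Procrustes analysis such studies typically use.

```python
# Minimal geometric-morphometrics sketch: Procrustes-align 3D landmark
# sets from many skulls, then find the main axes of shape variation.
import numpy as np
from scipy.spatial import procrustes
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_specimens, n_landmarks = 20, 30
skulls = rng.normal(size=(n_specimens, n_landmarks, 3))  # placeholder landmarks

# Align every specimen to the first one (a stand-in for iterative
# alignment against a mean shape).
reference = skulls[0]
aligned = []
for s in skulls:
    _, s_aligned, _ = procrustes(reference, s)
    aligned.append(s_aligned.ravel())  # one flattened row per specimen

# Principal components of the aligned coordinates capture the dominant
# shape differences, e.g. around muscle-attachment regions.
pca = PCA(n_components=2)
shape_scores = pca.fit_transform(np.array(aligned))
print(pca.explained_variance_ratio_)
```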

Dr Olga Panagiotopoulou, who co-led the research project from the Monash Biomedicine Discovery Institute, said the study suggested that short-term jaw and skull adaptation was occurring.

"It seems that individuals within each wombat species differ most where their chewing muscles attach, or where biting is hardest," Dr Panagiotopoulou said.

"This means that individual shapes are related to an individual's diet and feeding preferences.

"Their skulls seem to be changing to match their diets.

"There are a number of factors that can influence skull shape, but it seems that wombats are able to remodel their jaws as the animals grow to become stronger and protect themselves from harm."

Dr Weisbecker said the team was particularly excited that the critically endangered northern hairy-nosed wombat, with around 250 individuals left, seemed to be able to adapt to new diets.

"In order to protect endangered animals, it's sometimes necessary to translocate them to new sanctuary locations where threats are less, but diets may be quite different," she said.

Read more at Science Daily

Stressed to the max? Deep sleep can rewire the anxious brain

When it comes to managing anxiety disorders, William Shakespeare's Macbeth had it right when he referred to sleep as the "balm of hurt minds." While a full night of slumber stabilizes emotions, a sleepless night can trigger up to a 30% rise in anxiety levels, according to new research from the University of California, Berkeley.

UC Berkeley researchers have found that the type of sleep most apt to calm and reset the anxious brain is deep sleep, also known as non-rapid eye movement (NREM) slow-wave sleep, a state in which neural oscillations become highly synchronized, and heart rates and blood pressure drop.

"We have identified a new function of deep sleep, one that decreases anxiety overnight by reorganizing connections in the brain," said study senior author Matthew Walker, a UC Berkeley professor of neuroscience and psychology. "Deep sleep seems to be a natural anxiolytic (anxiety inhibitor), so long as we get it each and every night."

The findings, published today, Nov. 4, in the journal Nature Human Behaviour, provide one of the strongest neural links between sleep and anxiety to date. They also point to sleep as a natural, non-pharmaceutical remedy for anxiety disorders, which have been diagnosed in some 40 million American adults and are rising among children and teens.

"Our study strongly suggests that insufficient sleep amplifies levels of anxiety and, conversely, that deep sleep helps reduce such stress," said study lead author Eti Ben Simon, a postdoctoral fellow in the Center for Human Sleep Science at UC Berkeley.

In a series of experiments using functional MRI and polysomnography, among other measures, Simon and fellow researchers scanned the brains of 18 young adults as they viewed emotionally stirring video clips after a full night of sleep, and again after a sleepless night. Anxiety levels were measured following each session via a questionnaire known as the state-trait anxiety inventory.

After a night of no sleep, brain scans showed a shutdown of the medial prefrontal cortex, which normally helps keep our anxiety in check, while the brain's deeper emotional centers were overactive.

"Without sleep, it's almost as if the brain is too heavy on the emotional accelerator pedal, without enough brake," Walker said.

After a full night of sleep, during which participants' brain waves were measured via electrodes placed on their heads, the results showed their anxiety levels declined significantly, especially for those who experienced more slow-wave NREM sleep.

"Deep sleep had restored the brain's prefrontal mechanism that regulates our emotions, lowering emotional and physiological reactivity and preventing the escalation of anxiety," Simon said.

Beyond gauging the sleep-anxiety connection in the 18 original study participants, the researchers replicated the results in a study of another 30 participants. Across all the participants, the results again showed that those who got more nighttime deep sleep experienced the lowest levels of anxiety the next day.

Moreover, in addition to the lab experiments, the researchers conducted an online study that tracked how both the sleep and anxiety levels of 280 people of all ages changed over four consecutive days.

The results showed that the amount and quality of sleep the participants got from one night to the next predicted how anxious they would feel the next day. Even subtle nightly changes in sleep affected their anxiety levels.
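
A toy version of that night-to-day analysis, with simulated numbers standing in for the study's data, might look like this:

```python
# Simulated sketch: does sleep quality on night t predict anxiety on the
# following day? Numbers and variable names are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_people, n_nights = 280, 4

sleep_quality = rng.normal(7, 1, size=(n_people, n_nights))
# Anxiety measured the day after each night; built here to worsen
# when the preceding night's sleep was poorer.
anxiety = 50 - 3 * sleep_quality + rng.normal(0, 2, size=(n_people, n_nights))

# Correlate each night's sleep with next-day anxiety across all person-days
r = np.corrcoef(sleep_quality.ravel(), anxiety.ravel())[0, 1]
print(f"sleep vs. next-day anxiety: r = {r:.2f}")  # negative: better sleep, less anxiety
```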

"People with anxiety disorders routinely report having disturbed sleep, but rarely is sleep improvement considered as a clinical recommendation for lowering anxiety," Simon said. "Our study not only establishes a causal connection between sleep and anxiety, but it identifies the kind of deep NREM sleep we need to calm the overanxious brain."

Read more at Science Daily

Single discrimination events alter college students' daily behavior

Discrimination -- differential treatment based on an aspect of someone's identity, such as nationality, race, sexual orientation or gender -- is linked to lower success in careers and poorer health. But there is little information about how individual discrimination events affect people in the short term and then lead to these longer-term disparities.

University of Washington researchers aimed to understand both the prevalence of discrimination events and how these events affect college students in their daily lives.

Over the course of two academic quarters, the team compared students' self-reports of unfair treatment to passively tracked changes in daily activities, such as hours slept, steps taken or time spent on the phone. On average, students who encountered unfair treatment were more physically active, interacted with their phones more and spent less time in bed on the day of the event. The team will present these findings Nov. 12 at the ACM Conference on Computer-Supported Cooperative Work in Austin, Texas.

"We looked at objective measures of behavior to try to really understand how this experience changed students' daily life," said lead author Yasaman Sefidgar, a doctoral student in the UW Paul G. Allen School of Computer Science & Engineering. "The ultimate goal is to use this information to develop changes that we can make both in terms of the educational structure and individual support systems for students to help them succeed both during and after their time in college."

The project started out as a way to monitor students' mental health during college.

"I was struck by how many students suffered from mental health issues and depression, due in part to the increased stress of college and being away from home," said co-author Anind Dey, professor and dean of the UW Information School. "Our approach in this paper, using passive sensing and data modeling, really lends itself to studying frequent events. Unfair treatment, or discrimination, might happen repeatedly in a quarter."

The team recruited 209 first-year UW students from across campus for a study over the 2018 winter and spring academic quarters. Of the 176 students who completed the study, 41% were in the College of Engineering, with the rest spread across other academic colleges; 65% identified as women and 29% as first-generation college students.

Participants wore Fitbit Flex 2 devices to track daily activities like time asleep and physical activity. The students also had an app installed on their phones to track location, activity, screen unlocking events and phone call length.

The team sent the students a series of surveys throughout the six-month study, including short "check-in" surveys at least twice a week. During the weeks before midterm and final exams, the students got a variation of this survey four times every day. Among the survey questions: Had the student, in the past 24 hours, been unfairly treated because of "ancestry or national origin, gender, sexual orientation, intelligence, major, learning disability, education or income level, age, religion, physical disability, height, weight or other aspect of one's physical appearance?"

"We had a very large table comparing everything, such as the number of steps that you've had for each day," Sefidgar said. "We also marked the days for the reports when they exist. Then it's a matter of determining for each individual whether there are changes for days with discrimination events compared to days with no events."

Overall, the researchers collected around 450 discrimination events and about one terabyte of data. The team analyzed people's actions on days when they were and weren't experiencing discrimination. On average, when students reported an unfair event they walked 500 more steps, had one more phone call in the evening, interacted five more times with their phones in the morning and spent about 15 fewer minutes in bed compared to days when they didn't experience discrimination.
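
The core of that comparison can be sketched in a few lines of pandas, assuming a hypothetical table with one row per student per day; the column names here are invented, not the study's:

```python
# Within-person comparison of daily measures on days with a reported
# discrimination event versus days without. All data are simulated.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "student":    np.repeat(np.arange(5), 40),           # 5 students x 40 days
    "event_day":  rng.random(200) < 0.1,                 # ~10% of days have a report
    "steps":      rng.normal(8000, 1500, 200).round(),
    "min_in_bed": rng.normal(460, 40, 200).round(),
})

# Mean of each measure per student, split by event vs. non-event days,
# then the within-person difference averaged across students.
per_student = df.groupby(["student", "event_day"])[["steps", "min_in_bed"]].mean()
diff = per_student.xs(True, level="event_day") - per_student.xs(False, level="event_day")
print(diff.mean())  # average change on discrimination days
```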

"It's so hard to summarize the impact of something like this in a few statistics," said senior author Jennifer Mankoff, a professor in the Allen School. "Some people move more, sleep more or talk on the phone more, while some people do less. Maybe one student is reacting by playing games all day and another student put down their phone and went to hang out with a friend. It's giving us a lot of questions to follow up on."

Students listed ancestry or national origin, intelligence and gender as the top three reasons for experiencing unfair treatment.

The study likely didn't capture all discrimination events, according to the researchers. For example, the survey didn't include race as a reason for unfair treatment, and the students weren't surveyed every day.

"This was just a snapshot of some of the things the students experienced on the 40 days we surveyed them," Mankoff said. "But more than half of them reported experiencing at least one discrimination event, often four or five events."

The team repeated this study in the 2019 spring quarter, and it plans to continue to gather data on students over the next few years. The researchers have also started interviewing students to get a better understanding of how unfair treatment happens in the context of their other experiences.

"This project is helping us better understand challenges that our students face in real time," said co-author Eve Riskin, the associate dean of diversity and access for the UW College of Engineering and the principal investigator for the Washington State Academic RedShirt program. "With this understanding we should be able to design better interventions to improve the climate for all students."

The researchers also found that discrimination is associated with increased depression and loneliness, but less so for people with better social support.

"These results help underscore the deep impacts of discrimination on mental health, and the importance of resources like social support in helping to reduce the impact of discrimination in the long term," said Paula Nurius, a professor in the UW School of Social Work.

Students who completed the study received up to $245 and were allowed to keep their Fitbits.

Read more at Science Daily

Nov 4, 2019

Scientists create 'artificial leaf' that turns carbon into fuel

Scientists have created an "artificial leaf" to fight climate change by inexpensively converting harmful carbon dioxide (CO2) into a useful alternative fuel.

The new technology, outlined in a paper published today in the journal Nature Energy, was inspired by the way plants use energy from sunlight to turn carbon dioxide into food.

"We call it an artificial leaf because it mimics real leaves and the process of photosynthesis," said Yimin Wu, an engineering professor at the University of Waterloo who led the research. "A leaf produces glucose and oxygen. We produce methanol and oxygen."

Making methanol from carbon dioxide, the primary contributor to global warming, would both reduce greenhouse gas emissions and provide a substitute for the fossil fuels that create them.

The key to the process is a cheap, optimized red powder called cuprous oxide.

Engineered to have as many eight-sided particles as possible, the powder is created by a chemical reaction when four substances -- glucose, copper acetate, sodium hydroxide and sodium dodecyl sulfate -- are added to water that has been heated to a particular temperature.

The powder then serves as the catalyst, or trigger, for another chemical reaction when it is mixed with water into which carbon dioxide is blown and a beam of white light is directed with a solar simulator.

"This is the chemical reaction that we discovered," said Wu, who has worked on the project since 2015. "Nobody has done this before."

The reaction produces oxygen, as in photosynthesis, while also converting carbon dioxide in the water-powder solution into methanol. The methanol is collected as it evaporates when the solution is heated.
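
For reference, the overall stoichiometry implied by that description -- carbon dioxide and water in, methanol and oxygen out -- balances as follows. This is only the net mass balance suggested by the summary above, not a claim about the catalytic mechanism reported in the paper:

$$\mathrm{CO_2} + 2\,\mathrm{H_2O} \longrightarrow \mathrm{CH_3OH} + \tfrac{3}{2}\,\mathrm{O_2}$$

Carbon, hydrogen and oxygen all balance: 1 C, 4 H and 4 O atoms on each side.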

Next steps in the research include increasing the methanol yield and commercializing the patented process to convert carbon dioxide collected from major greenhouse gas sources such as power plants, vehicles and oil drilling.

"I'm extremely excited about the potential of this discovery to change the game," said Wu, a professor of mechanical and mechatronics engineering, and a member of the Waterloo Institute for Nanotechnology. "Climate change is an urgent problem and we can help reduce CO2 emissions while also creating an alternative fuel."

Read more at Science Daily

How to alter memories to protect consumers

In today's fast-paced digital age, information can become outdated rapidly and people must constantly update their memories. But changing our previous understanding of the news we hear or the products we use isn't always easy, even when holding onto falsities can have serious consequences.

A pharmaceutical company, for example, may present a testimonial about a consumer's positive experience with a new medication, along with details about the potential side effects and interactions with other drugs. Later, if the company announces that the drug is less effective than previously reported, many people will continue clinging to the belief that the drug is effective, according to results from a new study. The findings are available online in the Journal of Consumer Psychology.

When people read or hear stories, they build mental models of events that are linked together in a cause-effect chain, and this is embedded into their memories, says study author Anne Hamby, an assistant professor in the department of marketing at Boise State University in Idaho. Even if they later discover that one aspect of the chain of events is incorrect, it's difficult for people to change their memory of a story because this would create a gap in the chain. This is known as the continued influence effect.

Hamby and her colleagues were interested in testing whether the continued influence effect was more common when stories included an explanation for the outcome of the story -- rather than leaving out this detail. In one experiment, participants read about a man who is diagnosed with a disease and takes a prescribed medication at night with a glass of lemonade. The drug does not work, and he returns to the doctor. Half of the participants read that the doctor explains why the drug failed: the man needed to take the medication in the morning, because hormones released at night block the drug's effectiveness. The other participants did not get an explanation for why the drug was ineffective.

At the end of the story, all the participants are presented with another fact: citrus-based foods and drinks interfere with the absorption of the drug. Later, they learn that this information was false. The results showed that participants who did not get an explanation for the drug's ineffectiveness had difficulty rejecting the falsity of the drug-citrus interaction. "This group had used the citrus interaction to explain why the drug didn't work in the story, while the other group already had an explanation in mind," Hamby says. "Once the first group inserted causal information into a mental model of the story, it was harder to remove it."

Though it's difficult to change an existing version of events, the researchers discovered that people are more willing to update their memories if something bad has happened to a character, such as a death or serious illness. "People are more motivated to do the mental work of updating the story if the change leads to a better outcome because the character's well-being could be related to their own well-being," she says.

Hamby hopes the findings will inform how companies and news organizations retract misinformation. "It may not work to simply send out a press release or make a public service announcement saying that information is incorrect," she says. "In order to effectively change beliefs, we need to give consumers an alternative cause and effect explanation." For example, rather than saying previous studies linking autism to vaccines are false, it may be wiser to explain other causes of autism.

Read more at Science Daily

The world is getting wetter, yet water may become less available for North America and Eurasia

With climate change, plants of the future will consume more water than in the present day, leading to less water available for people living in North America and Eurasia, according to a Dartmouth-led study in Nature Geoscience. The research suggests a drier future despite anticipated precipitation increases for places like the United States and Europe, populous regions already facing water stresses.

The study challenges an expectation in climate science that plants will make the world wetter in the future. Scientists have long thought that as carbon dioxide concentrations increase in the atmosphere, plants will reduce their water consumption, leaving more freshwater available in our soils and streams. This is because as more carbon dioxide accumulates in the atmosphere, plants can photosynthesize the same amount while partly closing the pores (stomata) on their leaves. Partly closed stomata mean less water lost from plants to the atmosphere, leaving more water on the land. The new findings reveal that this story of plants making the land wetter is limited to the tropics and the extremely high latitudes, where freshwater availability is already high and competing demands on it are low. For much of the mid-latitudes, the study finds, projected plant responses to climate change will not make the land wetter but drier, which has massive implications for millions of people.

"Approximately 60 percent of the global water flux from the land to the atmosphere goes through plants, called transpiration. Plants are like the atmosphere's straw, dominating how water flows from the land to the atmosphere. So vegetation is a massive determinant of what water is left on land for people," explained lead author Justin S. Mankin, an assistant professor of geography at Dartmouth and adjunct research scientist at Lamont-Doherty Earth Observatory at Columbia University. "The question we're asking here is, how do the combined effects of carbon dioxide and warming change the size of that straw?"

Using climate models, the study examines how freshwater availability may be affected by projected changes in the way precipitation is divided among plants, rivers and soils. For the study, the research team used a novel accounting of this precipitation partitioning, developed earlier by Mankin and colleagues, to calculate how much runoff will be lost to vegetation in a warmer, carbon dioxide-enriched climate.

The new study's findings reveal how the interaction of three key effects of climate change on plants will reduce regional freshwater availability. First, as carbon dioxide increases in the atmosphere, plants require less water to photosynthesize, wetting the land. Yet, second, as the planet warms, growing seasons become longer and warmer: plants have more time to grow and consume water, drying the land. Finally, as carbon dioxide concentrations increase, photosynthesis is amplified and plants are likely to grow more. For some regions, the latter two effects, extended growing seasons and amplified photosynthesis, will outpace the water savings from partly closed stomata, meaning more vegetation will consume more water for a longer amount of time, drying the land. As a result, for much of the mid-latitudes, plants will leave less water in soils and streams, even if there is additional rainfall and vegetation is more efficient with its water use. The result also underscores the importance of improving how climate models represent ecosystems and their responses to climate change.
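
A toy calculation makes the tug-of-war explicit. The function and numbers below are invented for illustration and are not the study's model; they simply show how the same stomatal saving can be overwhelmed by a longer, more productive growing season:

```python
# Toy illustration (not the study's model) of how three effects net out:
# water saved by partly closed stomata versus extra water used by a longer
# growing season and by more total plant growth. All numbers are invented.

def net_water_change(stomatal_saving, season_extension_use, extra_growth_use):
    """Positive = wetter land, negative = drier land (arbitrary units)."""
    return stomatal_saving - season_extension_use - extra_growth_use

# Tropics-like case: the stomatal saving dominates -> wetter
print(net_water_change(stomatal_saving=10, season_extension_use=2, extra_growth_use=3))   # 5

# Mid-latitude-like case: longer, more productive growing season dominates -> drier
print(net_water_change(stomatal_saving=10, season_extension_use=8, extra_growth_use=6))   # -4
```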

The world relies on freshwater for human consumption, agriculture, hydropower, and industry. Yet, for many places, there's a fundamental disconnect between when precipitation falls and when people use this water, as is the case with California, which gets more than half of its precipitation in the winter, but peak demands are in the summer. "Throughout the world, we engineer solutions to move water from point A to point B to overcome this spatiotemporal disconnect between water supply and its demand. Allocating water is politically contentious, capital-intensive and requires really long-term planning, all of which affects some of the most vulnerable populations. Our research shows that we can't expect plants to be a universal panacea for future water availability. So, being able to assess clearly where and why we should anticipate water availability changes to occur in the future is crucial to ensuring that we can be prepared," added Mankin.

Read more at Science Daily

Voyager 2 reaches interstellar space

This artist's concept shows the locations of NASA's Voyager 1 and Voyager 2 spacecraft relative to the heliosphere, or the protective bubble of particles and magnetic fields created by our Sun. Both Voyagers are now outside the heliosphere, in a region known as interstellar space, or the space between stars.
Voyager 1 has a companion in the realm of the stars.

Researchers at the University of Iowa report that the spacecraft Voyager 2 has entered the interstellar medium (ISM), the region of space outside the bubble-shaped boundary produced by wind streaming outward from the sun. Voyager 2 thus becomes the second human-made object to journey out of our sun's influence, following Voyager 1's solar exit in 2012.

In a new study, the researchers confirm Voyager 2's passage on Nov. 5, 2018, into the ISM by noting a definitive jump in plasma density detected by an Iowa-led plasma wave instrument on the spacecraft. The marked increase in plasma density is evidence of Voyager 2 journeying from the hot, lower-density plasma characteristic of the solar wind to the cool, higher-density plasma of interstellar space. It's also similar to the plasma density jump experienced by Voyager 1 when it crossed into interstellar space.

"In a historical sense, the old idea that the solar wind will just be gradually whittled away as you go further into interstellar space is simply not true," says Iowa's Don Gurnett, corresponding author on the study, published in the journal Nature Astronomy. "We show with Voyager 2 -- and previously with Voyager 1 -- that there's a distinct boundary out there. It's just astonishing how fluids, including plasmas, form boundaries."

Gurnett, professor emeritus in the UI Department of Physics and Astronomy, is the principal investigator on the plasma wave instrument aboard Voyager 2. He is also the principal investigator on the plasma wave instrument aboard Voyager 1 and authored the 2013 study published in Science that confirmed Voyager 1 had entered the ISM.

Voyager 2's entry into the ISM occurred at 119.7 astronomical units (AU), or more than 11 billion miles from the sun. Voyager 1 passed into the ISM at 122.6 AU. The spacecraft were launched within weeks of each other in 1977, with different mission goals and trajectories through space. Yet they crossed into the ISM at basically the same distances from the sun.

That gives valuable clues to the structure of the heliosphere -- the bubble, shaped much like a wind sock, created by the sun's wind as it extends to the boundary of the solar system.

"It implies that the heliosphere is symmetric, at least at the two points where the Voyager spacecraft crossed," says Bill Kurth, University of Iowa research scientist and a co-author on the study. "That says that these two points on the surface are almost at the same distance."

"There's almost a spherical front to this," adds Gurnett. "It's like a blunt bullet."

Data from the Iowa instrument on Voyager 2 also give additional clues to the thickness of the heliosheath, the outer region of the heliosphere and the point where the solar wind piles up against the approaching wind in interstellar space, which Gurnett likens to the effect of a snowplow on a city street.

The Iowa researchers say the heliosheath has varied thickness, based on data showing Voyager 1 sailed 10 AU farther than its twin to reach the heliopause, a boundary where the solar wind and the interstellar wind are in balance and considered the crossing point to interstellar space. Some had thought Voyager 2 would make that crossing first, based on models of the heliosphere.

"It's kind of like looking at an elephant with a microscope," Kurth says. "Two people go up to an elephant with a microscope, and they come up with two different measurements. You have no idea what's going on in between. What the models do is try to take information that we have from those two points and what we've learned through the flight and put together a global model of the heliosphere that matches those observations."

The last measurement obtained from Voyager 1 was when the spacecraft was at 146 AU, or more than 13.5 billion miles from the sun. The plasma wave instrument is recording that the plasma density is rising, in data feeds from a spacecraft now so far away that it takes more than 19 hours for information to travel from the spacecraft to Earth.
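
Those figures are easy to sanity-check from unit conversions alone; a quick back-of-the-envelope in Python, using only physical constants:

```python
# Sanity check of the quoted distance and signal delay for Voyager 1.
AU_KM = 1.496e8          # kilometers per astronomical unit
KM_PER_MILE = 1.609344
C_KM_S = 299_792.458     # speed of light in km/s

distance_km = 146 * AU_KM
print(distance_km / KM_PER_MILE / 1e9)  # ~13.6 (billion miles)
print(distance_km / C_KM_S / 3600)      # ~20.2 hours one-way light time
```

At 146 AU the one-way light time works out to about 20 hours, consistent with the "more than 19 hours" quoted above for a spacecraft still receding from the sun.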

"The two Voyagers will outlast Earth," Kurth says. "They're in their own orbits around the galaxy for five billion years or longer. And the probability of them running into anything is almost zero."

"They might look a little worn by then," Gurnett adds with a smile.

Read more at Science Daily

Nov 3, 2019

Conditions that trigger supernovae explosions

Understanding the thermonuclear explosion of Type Ia supernovae -- powerful and luminous stellar explosions -- is only possible through theoretical models, which previously could not account for the mechanism that triggers the detonation.

One key piece of this explosion, present in virtually all models, is the formation of a supersonic reaction wave called a detonation, which travels faster than the speed of sound and is capable of burning up all of a star's material before it is dispersed into the vacuum of space.

But, the physics of the mechanisms that create a detonation in a star has been elusive.

Now, a team of researchers from the University of Connecticut, Texas A&M University, University of Central Florida, Naval Research Laboratory, and Air Force Research Laboratory has developed a theory that sheds light on the enigmatic process of detonation formation at the heart of these remarkable astronomical events.

The research, published Nov. 1 in Science, offers a critical understanding of this physical process both in stars and in chemical systems on Earth. It was led by Alexei Poludnenko of the UConn School of Engineering and Texas A&M University, in collaboration with Jessica Chambers and Kareem Ahmed of the University of Central Florida, Vadim Gamezo of the Naval Research Laboratory and Brian Taylor of the Air Force Research Laboratory.

For the first time, researchers were able to demonstrate the process of detonation formation from a slow subsonic flame using both experiments and numerical simulations carried out on some of the largest supercomputers in the nation. They also successfully applied the results to predict the conditions of detonation formation in one of the classical theoretical scenarios of Type Ia supernova explosion.

Type Ia supernova explosions happen when carbon and oxygen, packed to a density of around 1,000 tons per cubic centimeter in the stellar core, burn in quick thermonuclear reactions. The resulting explosion disrupts the star in a matter of seconds and ejects most of its mass while emitting an amount of energy equal to the energy emitted by the star over its entire lifetime.

Typically, in order to form a detonation, burning must occur in a confined setting with walls, obstacles, or boundaries, which can confine pressure waves being released by burning.

As pressure rises, shock waves form and can grow in strength to the point where they compress the reacting mixture, igniting it and producing a self-sustaining supersonic front. Stars do not have walls or obstacles, which makes the formation of a detonation enigmatic.

In this study, the team developed a unified theory of turbulence-induced deflagration-to-detonation transition that describes the mechanism and conditions for initiating detonation in both unconfined chemical and thermonuclear explosions.

According to the theory, if one takes a reactive mixture, which burns and releases energy, and stirs it up to create intense turbulence, a catastrophic instability can result, rapidly increasing pressure in the system, producing strong shocks and igniting a detonation. Remarkably, this theory predicts the conditions for detonation formation in Type Ia supernovae.

Researchers were able to gain insight into the fundamental aspects of the physical processes that control supernovae explosions because thermonuclear combustion waves are similar to chemical combustion waves on Earth in that they are controlled by the same physical mechanisms.

Read more at Science Daily

How measles wipes out the body's immune memory

Over the last decade, evidence has mounted that the measles vaccine protects in not one but two ways: Not only does it prevent the well-known acute illness with spots and fever that frequently sends children to the hospital, but it also appears to protect from other infections over the long term.

How does this work?

Some researchers have suggested that the vaccine gives a general boost to the immune system.

Others have hypothesized that the vaccine's extended protective effects stem from preventing measles infection itself. According to this theory, the virus can impair the body's immune memory, causing so-called immune amnesia. By protecting against measles infection, the vaccine prevents the body from losing or "forgetting" its immune memory and preserves its resistance to other infections.

Past research hinted at the effects of immune amnesia, showing that immune suppression following measles infection could last as long as two to three years.

However, many scientists still debate which hypothesis is correct. Among the critical questions are: If immune amnesia is real, how exactly does it happen, and how severe is it?

Now, a study from an international team of researchers led by investigators at Harvard Medical School, Brigham and Women's Hospital and the Harvard T.H. Chan School of Public Health provides much-needed answers.

Reporting Oct. 31 in Science, the researchers show that the measles virus wipes out 11 percent to 73 percent of the different antibodies that protect against viral and bacterial strains a person was previously immune to -- anything from influenza to herpesvirus to bacteria that cause pneumonia and skin infections.

So, if a person had 100 different antibodies against chicken pox before contracting measles, they might emerge from having measles with only 50, cutting their chicken pox protection in half. That protection could dip even lower if some of the antibodies lost are potent defenses known as neutralizing antibodies.
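
In code, the before-and-after comparison in that example reduces to a set difference; the antibody identifiers below are placeholders, not actual VirScan output:

```python
# Schematic sketch of measuring repertoire loss: what fraction of the
# antibody specificities detected before measles are missing afterward?
before = {f"ab{i}" for i in range(100)}        # 100 specificities pre-infection
after = {f"ab{i}" for i in range(0, 100, 2)}   # half survive (every other one)

lost = before - after
print(f"lost {len(lost) / len(before):.0%} of pre-infection antibodies")  # 50%
```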

"Imagine that your immunity against pathogens is like carrying around a book of photographs of criminals, and someone punched a bunch of holes in it," said the study's first author, Michael Mina, a postdoctoral researcher in the laboratory of Stephen Elledge at Harvard Medical School and Brigham and Women's Hospital at the time of the study, now an assistant professor of epidemiology at the Harvard T.H. Chan School of Public Health.

"It would then be much harder to recognize that criminal if you saw them, especially if the holes are punched over important features for recognition, like the eyes or mouth," said Mina.

The study is the first to measure the immune damage caused by the virus and underscores the value of preventing measles infection through vaccination, the authors said.

"The threat measles poses to people is much greater than we previously imagined," said senior author Stephen Elledge, the Gregor Mendel Professor of Genetics and of Medicine in the Blavatnik Institute at Harvard Medical School and Brigham and Women's Hospital. "We now understand the mechanism is a prolonged danger due to erasure of the immune memory, demonstrating that the measles vaccine is of even greater benefit than we knew."

The discovery that measles depletes people's antibody repertoires, partially obliterating immune memory to most previously encountered pathogens, supports the immune amnesia hypothesis.

"This is the best evidence yet that immune amnesia exists and impacts our bona fide long-term immune memory," added Mina, who first discovered the epidemiological effects of measles on long-term childhood mortality in a 2015 paper.

The team's current work was published simultaneously with a paper by a separate team in Science Immunology that reached complementary conclusions by measuring changes in B cells caused by the measles virus. An accompanying editorial in Science Immunology, written by Duane Wesemann, Harvard Medical School assistant professor of medicine at Brigham and Women's Hospital, contextualizes that study.

Elledge, Mina and colleagues found that those who survive measles gradually regain their previous immunity to other viruses and bacteria as they get re-exposed to them. But because this process may take months to years, people remain vulnerable in the meantime to serious complications of those infections.

In light of this finding, the researchers say clinicians may want to consider strengthening the immunity of patients recovering from measles infection with a round of booster shots of all previous routine vaccines, such as hepatitis and polio.

"Revaccination following measles could help to mitigate long-term suffering that might stem from immune amnesia and the increased susceptibility to other infections," the authors said.

Two steps forward, one step back

One of the most contagious diseases known to humankind, measles killed an average of 2.6 million people each year before a vaccine was developed, according to the World Health Organization. Widespread vaccination has slashed the death toll.

However, lack of access to vaccination and refusal to get vaccinated means measles still infects more than 7 million people and kills more than 100,000 each year worldwide, reports the WHO -- and cases are on the rise, tripling in early 2019. About 20 percent of people in the U.S. who get infected with measles require hospitalization, according to the CDC, and some experience well-known long-term consequences, including brain damage and vision and hearing loss.

Previous epidemiological research into immune amnesia suggests that death rates attributed to measles could be even higher -- accounting for as much as 50 percent of all childhood mortality -- if researchers factored in deaths caused by infections resulting from measles' ravaging effects on immunity.

Answers in the blood

This new discovery was made possible thanks to VirScan, a tool Elledge and Tomasz Kula, a PhD student in the Elledge Lab, developed in 2015.

VirScan detects antiviral and antibacterial antibodies in the blood that result from current or past encounters with viruses and bacteria, giving an overall snapshot of the immune system.

Study co-author Rik de Swart had gathered blood samples from unvaccinated children during a 2013 measles outbreak in the Netherlands. For the new study, Elledge's group used VirScan to measure antibodies before and two months after infection in 77 children from de Swart's samples who'd contracted the disease. The researchers also compared the measurements to those of 115 uninfected children and adults.

When Kula examined an initial set of these samples, he found a striking drop in antibodies from other pathogens in the measles-infected children that "clearly suggested a direct effect on the immune system," the authors said.

The effect resembled what Mina had hypothesized could drive measles-induced immune amnesia.

"This proved to be the first definitive evidence that measles affects the levels of protective antibodies themselves, providing a mechanism supporting immune amnesia," said Elledge.

Then, in collaboration with Diane Griffin at Johns Hopkins Bloomberg School of Public Health, the team measured antibodies in four rhesus macaques -- monkeys closely related to humans -- before and five months after measles infection. This covered a much longer period post-infection than what was available in the Netherlands samples.

Similar to the findings in people, the macaques lost an average of 40 to 60 percent of their preexisting antibodies to the viruses and bacteria they had been previously exposed to.

Further tests revealed that severe measles infection reduced people's overall immunity more than mild infection. This could be particularly problematic for certain categories of children and adults, the researchers said.

The authors stress that the effects observed in the current study occurred in previously healthy children. Because measles is known to hit malnourished children much harder, the degree of immune amnesia and its effects could be even more severe in less healthy populations.

"The average kid might emerge from measles with a dent in their immune system and their body will be able to handle that," said Elledge. "But kids on the edge -- such as those with severe measles infection or immune deficiencies or those who are malnourished -- will be in serious trouble."

Vital vaccination

Inoculation with the MMR (measles, mumps, rubella) vaccine did not impair children's overall immunity, the researchers found. The results align with decades of research.

Ensuring widespread vaccination against measles would not only help prevent the 120,000 deaths that will be directly attributed to measles this year alone but could also avert potentially hundreds of thousands of additional deaths attributable to the lasting damage to the immune system, the authors said.

"This drives home the importance of understanding and preventing the long-term effects of measles, including stealth effects that have flown under the radar of doctors and parents," said Mina. "If your child gets the measles and then gets pneumonia two years later, you wouldn't necessarily tie the two together. The symptoms of measles itself may be only the tip of the iceberg."

Read more at Science Daily