Oct 31, 2020

Water on ancient Mars

There's a long-standing question in planetary science about the origin of water on Earth, Mars and other large bodies such as the Moon. One hypothesis says that it came from asteroids and comets post-formation. But some planetary researchers think that water might just be one of many substances that occur naturally during the formation of planets. A new analysis of an ancient Martian meteorite adds support to this second hypothesis.

Several years ago, a pair of dark meteorites were discovered in the Sahara Desert. They were dubbed NWA 7034 and NWA 7533, where NWA stands for North West Africa and the number reflects the order in which meteorites are officially approved by the Meteoritical Society, an international planetary science organization. Analysis showed that these meteorites are a new type of Martian meteorite, made up of mixtures of different rock fragments.

The earliest fragments formed on Mars 4.4 billion years ago, making them the oldest known Martian meteorites. Rocks like this are rare and can fetch up to $10,000 per gram. But recently, 50 grams of NWA 7533 were acquired for analysis by an international team that included Professor Takashi Mikouchi of the University of Tokyo.

"I study minerals in Martian meteorites to understand how Mars formed and its crust and mantle evolved. This is the first time I have investigated this particular meteorite, nicknamed Black Beauty for its dark color," said Mikouchi. "Our samples of NWA 7533 were subjected to four different kinds of spectroscopic analysis, ways of detecting chemical fingerprints. The results led our team to draw some exciting conclusions."

It's well known to planetary scientists that there has been water on Mars for at least 3.7 billion years. But from the mineral composition of the meteorite, Mikouchi and his team deduced it's likely there was water present much earlier, at around 4.4 billion years ago.

"Igneous clasts, or fragmented rock, in the meteorite are formed from magma and are commonly caused by impacts and oxidation," said Mikouchi. "This oxidation could have occurred if there was water present on or in the Martian crust 4.4 billion years ago during an impact that melted part of the crust. Our analysis also suggests such an impact would have released a lot of hydrogen, which would have contributed to planetary warming at a time when Mars already had a thick insulating atmosphere of carbon dioxide."

Read more at Science Daily

Haunted house researchers investigate the mystery of playing with fear

 Chainsaw-wielding maniacs and brain-munching zombies are common tropes in horror films and haunted houses, which, in normal years, are popular Halloween-season destinations for thrill seekers. But what makes such fearsome experiences so compelling, and why do we actively seek them out in frightful recreational settings?

New research accepted for publication in the journal Psychological Science reveals that horror entertains us most effectively when it triggers a distinct physical response -- measured by changes in heart rate -- but is not so scary that we become overwhelmed. That fine line between fun and an unpleasant experience can vary from person to person.

"By investigating how humans derive pleasure from fear, we find that there seems to be a 'sweet spot' where enjoyment is maximized," said Marc Malmdorf Andersen, a researcher at the Interacting Minds Center at Aarhus University and lead author of the paper. "Our study provides some of the first empirical evidence on the relationship between fear, enjoyment, and physical arousal in recreational forms of fear."

For years, researchers have suspected that physiological arousal, such as a quickening pulse and a release of hormones in the brain, may play a key role in explaining why so many people find horror movies and haunted houses so attractive.

Until now, however, a direct relationship between arousal and enjoyment from these types of activities has not been established. "No prior studies have analyzed this relationship on subjective, behavioral, as well as physiological levels," said Andersen.

To explore this connection, Andersen and his colleagues studied how a group of 110 participants responded to a commercial haunted house attraction in Vejle, Denmark. The researchers fitted each participant with a heart rate monitor, which recorded real-time data as they walked through the attraction. The nearly 50-room haunted house produced an immersive and intimate live-action horror experience. The attraction used a variety of scare tactics to frighten guests, including frequent jump scares, in which zombies or other monstrous abominations suddenly appeared or charged toward the guest.

The researchers also studied the participants in real time through closed-circuit monitors inside the attraction. This enabled the team to make first-hand observations of participants' reactions to the most frightening elements, and, subsequently, to have independent coders analyze participants' behavior and responses. After the experience, participants evaluated their level of fright and enjoyment for each encounter. By comparing these self-reported experiences with the data from the heart rate monitors and surveillance cameras, the researchers were able to compare the fear-related and enjoyment-related elements of the attraction on subjective, behavioral, and physiological levels.

What Is Recreational Fear?

Recreational fear refers to the mixed emotional experience of feeling fear and enjoyment at the same time. Fear is generally considered to be an unpleasant emotion that evolved to protect people from harm. Paradoxically, humans sometimes seek out frightening experiences for purely recreational purposes. "Past studies on recreational fear, however, have not been able to establish a direct relationship between enjoyment and fear," said Andersen.

Studies on fearful responses to media, for example, have mostly been conducted in laboratory settings with relatively weak stimuli, such as short video clips from frightening films. Such experimental setups can sometimes make it difficult to measure physiological arousal because responses may be modest in a laboratory context.

"Conducting our study at a haunted attraction, where participants are screaming with both fear and delight, made this task easier," said Andersen. "It also presented unique challenges, such as the immensely complex logistics associated with conducting empirical studies in a 'messy' real-world context like a haunted house."

Discovering the "Goldilocks Zone"

Plotting the relationship between self-reported fear and enjoyment, the researchers discovered an inverted U-shape trend, revealing an apparent sweet spot for fear where enjoyment is maximized.

"If people are not very scared, they do not enjoy the attraction as much, and the same happens if they are too scared," said Andersen. "Instead, it seems to be the case that a 'just-right' amount of fear is central for maximizing enjoyment."

The data also showed a similar inverted U-shape for the participants' heart rate signatures, suggesting that enjoyment is related to just-right deviations from a person's normal physiological state. However, when fearful encounters trigger large and long-lasting deviations from this normal state, as measured by pulse rates going up and down frequently over a longer period of time, unpleasant sensations often follow.
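The inverted U-shape the researchers describe can be illustrated with a small sketch: fit a quadratic curve to fear and enjoyment ratings, then read off the vertex. The numbers below are invented for illustration, not the study's data.

```python
# Illustrative sketch (not the study's actual analysis): fitting an
# inverted-U (quadratic) curve to hypothetical fear/enjoyment ratings
# and locating the "sweet spot" where predicted enjoyment peaks.

def fit_quadratic(xs, ys):
    """Least-squares fit of y = a*x^2 + b*x + c via the normal equations."""
    n = len(xs)
    s = [sum(x**k for x in xs) for k in range(5)]                  # s[k] = sum of x^k
    t = [sum(y * x**k for x, y in zip(xs, ys)) for k in range(3)]  # right-hand sides
    # Augmented 3x4 system, solved by Gauss-Jordan elimination.
    A = [[s[4], s[3], s[2], t[2]],
         [s[3], s[2], s[1], t[1]],
         [s[2], s[1], n,    t[0]]]
    for i in range(3):
        pivot = A[i][i]
        A[i] = [v / pivot for v in A[i]]
        for j in range(3):
            if j != i:
                factor = A[j][i]
                A[j] = [vj - factor * vi for vj, vi in zip(A[j], A[i])]
    return A[0][3], A[1][3], A[2][3]  # a, b, c

# Hypothetical self-reported fear (0-10) and enjoyment ratings.
fear      = [1, 2, 3, 4, 5, 6, 7, 8, 9]
enjoyment = [2.1, 3.8, 5.2, 6.1, 6.4, 6.0, 5.1, 3.9, 2.2]

a, b, c = fit_quadratic(fear, enjoyment)
sweet_spot = -b / (2 * a)  # vertex of the inverted U
print(f"peak enjoyment at fear level ~{sweet_spot:.2f}")
```

A negative leading coefficient confirms the inverted-U shape, and the vertex marks the fear level at which the fitted enjoyment is highest.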

"This is strikingly similar to what scientists have found to characterize human play," said Andersen. "We know, for instance, that curiosity is often aroused when individuals have their expectations violated to a just-right degree, and several accounts of play stress the importance of just-right doses of uncertainty and surprise for explaining why play feels enjoyable."

Read more at Science Daily

Oct 30, 2020

Mothers pass on allergies to offspring

 Mothers can pass allergies to offspring while they are developing in the womb, researchers from the Agency for Science, Technology and Research (A*STAR), KK Women's and Children's Hospital (KKH) and Duke-NUS Medical School in Singapore reported this week in the journal Science.

The study, which employed an animal model conducted according to the National Advisory Committee for Laboratory Animal Research (NACLAR) guidelines, shows that the key antibody responsible for triggering allergic reactions, immunoglobulin E (IgE), can cross the placenta and enter the fetus. When inside the fetus, the antibody binds to fetal mast cells, a type of immune cell that releases chemicals that trigger allergic reactions, from runny noses to asthma. After birth, newborn mice develop allergic reactions to the same type of allergen as their mothers at the time of first exposure -- unlike adult mice, which require two exposures. Studies in the laboratory also showed that maternal IgE can bind to human fetal mast cells, indicating they might cross the placenta in humans in a similar way.

Dr Florent Ginhoux, Senior Principal Investigator at A*STAR's Singapore Immunology Network (SIgN), a senior co-author of the study, said, "There is currently a significant lack of knowledge on mast cells that are present early on in the developing fetus. Here, we discovered that fetal mast cells phenotypically mature through the course of pregnancy, and can be sensitised by IgE of maternal origin that crosses the placental barrier. The study suggests that a highly allergic pregnant mother may potentially transfer her IgE to her baby, which consequently develops allergic reactions when exposed for the first time to the allergen."

"Allergies begin very early in life," said Associate Professor Ashley St. John, an immunologist at Duke-NUS' Emerging Infectious Diseases Programme and a senior co-author of the study. "Infants experience allergic responses closely linked with the mother's allergic response in ways that cannot be explained by genetics alone. This work emphasises one way that allergic responses can pass from the mother to the developing fetus, and shows how allergies can then persist after birth."

As part of the study, following NACLAR guidelines, researchers exposed mice to ragweed pollen, a common allergen, prior to pregnancy. Mice that developed a sensitivity to the pollen had offspring that also showed an allergic reaction to ragweed. The sensitivity is allergen-specific; the offspring did not react to dust mites, another common allergen.

Notably, the transfer of sensitivity appears to fade with time. The newborn mice had allergic reactions when tested at four weeks, but less or none at six weeks.

The experimental studies were backed up with cellular tests and imaging, which showed maternal IgE bound to fetal mast cells, triggering the mast cells to release chemicals in reaction to an allergen, a process called degranulation.

This study further showed that the IgE transfer across the placenta requires the help of another protein, FcRN. Mice with FcRN knocked out lacked maternal IgE attached to their mast cells, and did not develop allergies after birth.

The study findings potentially open new intervention strategies to limit such transfer and minimise the occurrence of neonatal allergies. Currently, between 10 and 30 per cent of the world's population is affected by allergies. This number is set to continue rising, and a solution that prevents allergies being passed from mother to child could potentially bring those numbers down over time.

"Our research has really exciting findings that may explain the high incidence of early onset atopic dermatitis (eczema) in children of mothers with clinically proven eczema, which parallels findings in our local birth cohort," said Professor Jerry Chan, Senior Consultant, Department of Reproductive Medicine at KKH, Senior National Medical Research Council Clinician Scientist, and Vice Chair of Research with the Obstetrics and Gynaecology Academic Clinical Programme at the SingHealth Duke-NUS Academic Medical Centre. "From a clinical point of view, developing a further understanding in placental transfer of IgE, and the mechanism of fetal mast cell activation would be key to developing strategies to reduce the chance of eczema or other allergies from being transferred from mother to baby."

Read more at Science Daily

Positive outlook predicts less memory decline

We may wish some memories could last a lifetime, but many physical and emotional factors can negatively impact our ability to retain information throughout life.

A new study published in the journal Psychological Science found that people who feel enthusiastic and cheerful -- what psychologists call "positive affect" -- are less likely to experience memory decline as they age. This result adds to a growing body of research on positive affect's role in healthy aging.

A team of researchers analyzed data from 991 middle-aged and older U.S. adults who participated in a national study conducted in three waves: 1995-1996, 2004-2006, and 2013-2014.

In each assessment, participants reported on a range of positive emotions they had experienced during the past 30 days. In the final two assessments, participants also completed tests of memory performance. These tests consisted of recalling words immediately after their presentation and again 15 minutes later.

The researchers examined the association between positive affect and memory decline, accounting for age, gender, education, depression, negative affect, and extraversion.

"Our findings showed that memory declined with age," said Claudia Haase, an associate professor at Northwestern University and senior author on the paper. "However, individuals with higher levels of positive affect had a less steep memory decline over the course of almost a decade," added Emily Hittner, a PhD graduate of Northwestern University and the paper's lead author.

Read more at Science Daily

Touch and taste? It's all in the tentacles

Octopuses have captured the human imagination for centuries, inspiring sagas of sea monsters from Scandinavian kraken legends to TV's "Voyage to the Bottom of the Sea" and, most recently, Netflix's less-threatening "My Octopus Teacher." With their eight suction-cup covered tentacles, their very appearance is unique, and their ability to use those appendages to touch and taste while foraging further sets them apart.

In fact, scientists have wondered for decades how those arms, or more specifically the suction cups on them, do their work, prompting a number of experiments into the biomechanics. But very few have studied what is happening on a molecular level. In a new report, Harvard researchers got a glimpse into how the nervous system in the octopus's arms (which operates largely independently from its centralized brain) manages this feat.

The work was published Thursday in Cell.

The scientists identified a novel family of sensors in the first layer of cells inside the suction cups that have adapted to detect molecules that don't dissolve well in water. The research suggests these sensors, called chemotactile receptors, use these molecules to help the animal figure out what it's touching and whether that object is prey.

"We think because the molecules do not solubilize well, they could, for instance, be found on the surface of octopuses' prey and [whatever the animals touch]," said Nicholas Bellono, an assistant professor of molecular and cellular biology and the study's senior author. "So, when the octopus touches a rock versus a crab, now its arm knows, 'OK, I'm touching a crab [because] I know there's not only touch but there's also this sort of taste.'"

In addition, scientists found diversity in what the receptors responded to and the signals they then transmitted to the cell and nervous systems.

"We think that this is important because it could facilitate complexity in what the octopus senses and also how it can process a range of signals using its semi-autonomous arm nervous system to produce complex behaviors," Bellono said.

The scientists believe this research can help uncover similar receptor systems in other cephalopods, the invertebrate family that also includes squids and cuttlefish. The hope is to determine how these systems work on a molecular level and answer some relatively unexplored questions about how these creatures' capabilities evolved to suit their environment.

"Not much is known about marine chemotactile behavior and with this receptor family as a model system, we can now study which signals are important for the animal and how they can be encoded," said Lena van Giesen, a postdoctoral fellow in the Bellono Lab and lead author of the paper. "These insights into protein evolution and signal coding go far beyond just cephalopods."

Along with Giesen, other co-authors from the lab include Peter B. Kilian, an animal technician, and Corey A.H. Allard, a postdoctoral fellow.

"The strategies they have evolved in order to solve problems in their environment are unique to them and that inspires a great deal of interest from both scientists and non-scientists alike," Kilian said. "People are drawn to octopuses and other cephalopods because they are wildly different from most other animals."

The team set out to uncover how the receptors are able to sense chemicals and detect signals in what they touch, like a tentacle around a snail, to help them make choices.

Octopus arms are distinct and complex. About two-thirds of an octopus's neurons are located in its arms. Because the arms operate partially independently from the brain, a severed arm can still reach for, identify, and grasp items.

The team started by identifying which cells in the suckers actually do the detecting. After isolating and cloning the touch and chemical receptors, they inserted them in frog eggs and in human cell lines to study their function in isolation. Nothing like these receptors exists in frog or human cells, so the cells act essentially like closed vessels for the study of these receptors.

The researchers then exposed those cells to molecules such as extracts from octopus prey and other items to which these receptors are known to react. Some test molecules were water-soluble, like salts, sugars, and amino acids; others do not dissolve well and are not typically considered of interest by aquatic animals. Surprisingly, only the poorly soluble molecules activated the receptors.

Researchers then went back to the octopuses in their lab to see whether they too responded to those molecules by putting those same extracts on the floors of their tanks. They found the only odorants the octopuses' receptors responded to were a non-dissolving class of naturally occurring chemicals known as terpenoid molecules.

"[The octopus] was highly responsive to only the part of the floor that had the molecule infused," Bellono said. This led the researchers to believe that the receptors they identified pick up on these types of molecules and help the octopus distinguish what it's touching. "With the semi-autonomous nervous system, it can quickly make this decision: 'Do I contract and grab this crab or keep searching?'"

While the study provides a molecular explanation for this aquatic touch-taste sensation in octopuses through their chemotactile receptors, the researchers suggest further study is needed, given that a great number of unknown natural compounds could also stimulate these receptors to mediate complex behaviors.

"We're now trying to look at other natural molecules that these animals might detect," Bellono said.

Read more at Science Daily

Where were Jupiter and Saturn born?

New work led by Carnegie's Matt Clement reveals the likely original locations of Saturn and Jupiter. These findings refine our understanding of the forces that determined our Solar System's unusual architecture, including the ejection of an additional planet between Saturn and Uranus, ensuring that only small, rocky planets, like Earth, formed inward of Jupiter.

In its youth, our Sun was surrounded by a rotating disk of gas and dust from which the planets were born. The orbits of early formed planets were thought to be initially close-packed and circular, but gravitational interactions between the larger objects perturbed the arrangement and caused the baby giant planets to rapidly reshuffle, creating the configuration we see today.

"We now know that there are thousands of planetary systems in our Milky Way galaxy alone," Clement said. "But it turns out that the arrangement of planets in our own Solar System is highly unusual, so we are using models to reverse engineer and replicate its formative processes. This is a bit like trying to figure out what happened in a car crash after the fact -- how fast were the cars going, in what directions, and so on."

Clement and his co-authors -- Carnegie's John Chambers, Sean Raymond of the University of Bordeaux, Nathan Kaib of University of Oklahoma, Rogerio Deienno of the Southwest Research Institute, and André Izidoro of Rice University -- conducted 6,000 simulations of our Solar System's evolution, revealing an unexpected detail about Jupiter and Saturn's original relationship.

Jupiter in its infancy was thought to orbit the Sun three times for every two orbits that Saturn completed. But this arrangement is not able to satisfactorily explain the configuration of the giant planets that we see today. The team's models showed that a ratio of two Jupiter orbits to one Saturnian orbit more consistently produced results that look like our familiar planetary architecture.
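The two candidate resonances can be turned into orbital spacings with Kepler's third law, P^2 proportional to a^3 -- a back-of-the-envelope sketch, not the team's simulations:

```python
# Kepler's third law relates an orbital-period resonance to the spacing
# of two planets' orbits: if P^2 is proportional to a^3, then the ratio of
# semi-major axes is the period ratio raised to the power 2/3.

def semimajor_axis_ratio(period_ratio):
    """Ratio of semi-major axes for a given ratio of orbital periods."""
    return period_ratio ** (2 / 3)

# "Jupiter orbits three times for every two Saturn orbits" means Saturn's
# period is 3/2 of Jupiter's; the 2:1 case means twice Jupiter's period.
for name, p_ratio in [("3:2 resonance", 3 / 2), ("2:1 resonance", 2 / 1)]:
    print(f"{name}: Saturn orbits ~{semimajor_axis_ratio(p_ratio):.3f}x "
          f"as far from the Sun as Jupiter")
```

So the 2:1 configuration favored by the models places the young Saturn noticeably farther out, relative to Jupiter, than the 3:2 configuration would.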

"This indicates that while our Solar System is a bit of an oddball, it wasn't always the case," explained Clement, who is presenting the team's work at the American Astronomical Society's Division for Planetary Sciences virtual meeting today. "What's more, now that we've established the effectiveness of this model, we can use it to help us look at the formation of the terrestrial planets, including our own, and to perhaps inform our ability to look for similar systems elsewhere that could have the potential to host life."

Read more at Science Daily

Oct 29, 2020

Measuring the expansion of the universe: Researchers focus on velocity

Ever since the astronomer Edwin Hubble demonstrated that the further apart two galaxies are, the faster they move away from each other, researchers have measured the expansion rate of the Universe (the Hubble constant) and the history of this expansion. Recently, a new puzzle has emerged, as there seems to be a discrepancy between measurements of this expansion using radiation in the early Universe and using nearby objects. Researchers from the Cosmic Dawn Center at the Niels Bohr Institute, University of Copenhagen, have now contributed to this debate by focusing on velocity measurements. The result has been published in The Astrophysical Journal.

The researchers at the Cosmic Dawn Center found that the measurements of velocity used for determining the expansion rate of the Universe may not be reliable. As stated in the publication, this doesn't resolve the discrepancies, but rather hints at an additional inconsistency in the composition of the Universe.

Measuring the expansion rate of the Universe

Currently, astronomers measure the expansion of the Universe using two very different techniques. One is based on measuring the relationship between distance and velocity of nearby galaxies, while the other stems from studying the background radiation from the very early universe. Surprisingly, these two approaches currently find different expansion rates. If this discrepancy is real, a new and rather dramatic reinterpretation of the development of the Universe will be the consequence. However, it is also possible that the difference in the Hubble constant could be from incorrect measurements. It is difficult to measure distances in the Universe, so many studies have focused on improving and recalibrating distance measurements. But in spite of this, over the last 4 years the disagreement has not been resolved.
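The size of the disagreement is easy to illustrate with Hubble's law, v = H0 * d. The sketch below uses round-number values of roughly 67 and 73 km/s per megaparsec for the early-Universe and nearby-object routes; these are approximate published figures, used here only for illustration.

```python
# Minimal sketch of Hubble's law, v = H0 * d: the two measurement routes
# yield different values of H0, so they predict different recession
# velocities for a galaxy at the same distance.

def recession_velocity(distance_mpc, h0):
    """Hubble's law: recession velocity in km/s for a distance in megaparsecs."""
    return h0 * distance_mpc

H0_EARLY = 67.0  # approx. value from early-Universe (background radiation) route
H0_LOCAL = 73.0  # approx. value from nearby-object (distance ladder) route

d = 100  # megaparsecs
print(f"At {d} Mpc: early-Universe H0 -> {recession_velocity(d, H0_EARLY):,.0f} km/s, "
      f"local H0 -> {recession_velocity(d, H0_LOCAL):,.0f} km/s")
```

At 100 megaparsecs the two routes disagree by several hundred km/s, which is why small systematic errors in either distance or velocity measurements matter so much.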

The velocity of the remote galaxies is easy to measure -- or so we thought

In the recent scientific article, the researchers from the Cosmic Dawn Center now attempt to shine light on a related problem: the measurement of velocity. Depending on the velocity with which a remote object moves away from us, its light shifts to redder colors. With this so-called redshift it is possible to measure the velocity from a spectrum of a remote galaxy. Unlike measurements of distance, until now it was assumed that velocities were relatively easy to measure.
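How a redshift translates into a velocity can be sketched as follows (an illustration, not the paper's method): for small redshifts the familiar approximation v = c*z holds, while larger redshifts call for the relativistic Doppler relation.

```python
# Illustrative sketch: converting a measured redshift z into a recession
# velocity. For small z the approximation v = c*z is adequate; the
# special-relativistic Doppler relation is needed as z grows.

C_KM_S = 299_792.458  # speed of light in km/s

def velocity_low_z(z):
    """Low-redshift approximation v = c * z."""
    return C_KM_S * z

def velocity_relativistic(z):
    """Special-relativistic Doppler relation between redshift and velocity."""
    return C_KM_S * ((1 + z) ** 2 - 1) / ((1 + z) ** 2 + 1)

for z in (0.01, 0.1, 0.5):
    print(f"z={z}: v = c*z gives {velocity_low_z(z):,.0f} km/s, "
          f"relativistic gives {velocity_relativistic(z):,.0f} km/s")
```

The two formulas agree closely at z = 0.01 but diverge substantially by z = 0.5, which is one reason careful velocity bookkeeping matters for supernova surveys.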

However, when the researchers recently examined distance and velocity measurements from more than 1,000 supernovae (exploding stars) collected during the last 25 years, they found a surprising discrepancy in their results. Albert Sneppen, a master's student at the Niels Bohr Institute, explains: "We've always believed that measuring velocities was fairly straightforward and precise, but it turns out that we are actually dealing with two types of redshifts."

The first type, measuring the velocity with which the host galaxy moves away from us, is considered the most reliable. The other type of redshift instead measures the velocity of matter ejected from the exploding star inside the galaxy -- more precisely, matter from the supernova moving towards us at a few percent of the speed of light (illustration 1). After compensating for this extra movement, the redshift -- and velocity -- of the host galaxy can be determined. But this compensation requires a precise model of the explosion. The researchers were able to determine that these two different techniques result in two different expansion histories for the Universe, and therefore two different compositions as well.

Are things "broken in an interesting way?"

So, does this mean that the measurements of the early Universe and newer measurements are ultimately a question of imprecise measurements of velocity? Probably not, says Bidisha Sen, one of the authors of the article. "Even if we only use the more reliable redshifts, the supernova measurements not only continue to disagree with the Hubble constant measured from the early Universe -- they also hint at a more general discrepancy regarding the composition of the Universe," she says.

Charles Steinhardt, an associate professor at the Niels Bohr Institute, is intrigued by these new results. "If we are actually dealing with two disagreements, it means that our current model would be 'broken in an interesting way,'" he says. "In order to solve two problems, one regarding the composition of the Universe and one regarding the expansion rate of the Universe, rather different physical explanations are required than if we only want to explain a single discrepancy in the expansion rate."

Read more at Science Daily

Study helps explain why motivation to learn declines with age

 As people age, they often lose their motivation to learn new things or engage in everyday activities. In a study of mice, MIT neuroscientists have now identified a brain circuit that is critical for maintaining this kind of motivation.

This circuit is particularly important for learning to make decisions that require evaluating the cost and reward that come with a particular action. The researchers showed that they could boost older mice's motivation to engage in this type of learning by reactivating this circuit, and they could also decrease motivation by suppressing the circuit.

"As we age, it's harder to have a get-up-and-go attitude toward things," says Ann Graybiel, an Institute Professor at MIT and member of the McGovern Institute for Brain Research. "This get-up-and-go, or engagement, is important for our social well-being and for learning -- it's tough to learn if you aren't attending and engaged."

Graybiel is the senior author of the study, which appears today in Cell. The paper's lead authors are Alexander Friedman, a former MIT research scientist who is now an assistant professor at the University of Texas at El Paso, and Emily Hueske, an MIT research scientist.

Evaluating cost and benefit

The striatum is part of the basal ganglia -- a collection of brain centers linked to habit formation, control of voluntary movement, emotion, and addiction. For several decades, Graybiel's lab has been studying clusters of cells called striosomes, which are distributed throughout the striatum. Graybiel discovered striosomes many years ago, but their function had remained mysterious, in part because they are so small and deep within the brain that it is difficult to image them with functional magnetic resonance imaging (fMRI).

In recent years, Friedman, Graybiel, and colleagues including MIT research fellow Ken-ichi Amemori have discovered that striosomes play an important role in a type of decision-making known as approach-avoidance conflict. These decisions involve choosing whether to take the good with the bad -- or to avoid both -- when given options that have both positive and negative elements. An example of this kind of decision is having to choose whether to take a job that pays more but forces a move away from family and friends. Such decisions often provoke great anxiety.

In a related study, Graybiel's lab found that striosomes connect to cells of the substantia nigra, one of the brain's major dopamine-producing centers. These studies led the researchers to hypothesize that striosomes may be acting as a gatekeeper that absorbs sensory and emotional information coming from the cortex and integrates it to produce a decision on how to act. These actions can then be invigorated by the dopamine-producing cells.

The researchers later discovered that chronic stress has a major impact on this circuit and on this kind of emotional decision-making. In a 2017 study performed in rats and mice, they showed that stressed animals were far more likely to choose high-risk, high-payoff options, but that they could block this effect by manipulating the circuit.

In the new Cell study, the researchers set out to investigate what happens in striosomes as mice learn how to make these kinds of decisions. To do that, they measured and analyzed the activity of striosomes as mice learned to choose between positive and negative outcomes.

During the experiments, the mice heard two different tones, one of which was accompanied by a reward (sugar water), and another that was paired with a mildly aversive stimulus (bright light). The mice gradually learned that if they licked a spout more when they heard the first tone, they would get more of the sugar water, and if they licked less during the second, the light would not be as bright.

Learning to perform this kind of task requires assigning value to each cost and each reward. The researchers found that as the mice learned the task, striosomes showed higher activity than other parts of the striatum, and that this activity correlated with the mice's behavioral responses to both of the tones. This suggests that striosomes could be critical for assigning subjective value to a particular outcome.

"In order to survive, in order to do whatever you are doing, you constantly need to be able to learn. You need to learn what is good for you, and what is bad for you," Friedman says.

"A person, or in this case a mouse, may value a reward so highly that the risk of experiencing a possible cost is overwhelmed, while another may wish to avoid the cost to the exclusion of all rewards. And these may result in reward-driven learning in some and cost-driven learning in others," Hueske says.

The researchers found that inhibitory neurons that relay signals from the prefrontal cortex help striosomes to enhance their signal-to-noise ratio, which helps to generate the strong signals that are seen when the mice evaluate a high-cost or high-reward option.

Loss of motivation

Next, the researchers found that in older mice (between 13 and 21 months, roughly equivalent to people in their 60s and older), engagement in learning this type of cost-benefit analysis went down. At the same time, their striosomal activity declined compared to that of younger mice. The researchers found a similar loss of motivation in a mouse model of Huntington's disease, a neurodegenerative disorder that affects the striatum and its striosomes.

When the researchers used genetically targeted drugs to boost activity in the striosomes, they found that the mice became more engaged in performance of the task. Conversely, suppressing striosomal activity led to disengagement.

In addition to normal age-related decline, many mental health disorders can skew the ability to evaluate the costs and rewards of an action, from anxiety and depression to conditions such as PTSD. For example, a depressed person may undervalue potentially rewarding experiences, while someone suffering from addiction may overvalue drugs but undervalue things like their job or their family.

The researchers are now working on possible drug treatments that could stimulate this circuit, and they suggest that training patients to enhance activity in this circuit through biofeedback could offer another potential way to improve their cost-benefit evaluations.

"If you could pinpoint a mechanism which is underlying the subjective evaluation of reward and cost, and use a modern technique that could manipulate it, either psychiatrically or with biofeedback, patients may be able to activate their circuits correctly," Friedman says.

Read more at Science Daily

Average body temperature among healthy adults declined over the past two decades

 In the nearly two centuries since German physician Carl Wunderlich established 98.6°F as the standard "normal" body temperature, it has been used by parents and doctors alike as the measure by which fevers -- and often the severity of illness -- have been assessed.

Over time, however, and in more recent years, lower body temperatures have been widely reported in healthy adults. A 2017 study among 35,000 adults in the United Kingdom found average body temperature to be lower (97.9°F), and a 2019 study showed that the normal body temperature in Americans (those in Palo Alto, California, anyway) is about 97.5°F.

A multinational team of physicians, anthropologists and local researchers led by Michael Gurven, UC Santa Barbara professor of anthropology and chair of the campus's Integrative Anthropological Sciences Unit, and Thomas Kraft, a postdoctoral researcher in the same department, has found a similar decrease among the Tsimane, an indigenous population of forager-horticulturists in the Bolivian Amazon. In the 16 years that Gurven, co-director of the Tsimane Health and Life History Project, and fellow researchers have been studying the population, they have observed a rapid decline in average body temperature of 0.09°F per year, such that today Tsimane body temperatures are roughly 97.7°F.

"In less than two decades we're seeing about the same level of decline as that observed in the U.S. over approximately two centuries," said Gurven. Their analysis is based on a large sample of 18,000 observations of almost 5,500 adults and adjusts for multiple other factors that might affect body temperature, such as ambient temperature and body mass.
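Gurven's comparison of the two decline rates can be checked with back-of-the-envelope arithmetic (the 98.6°F baseline, the ~1860 start date and the 97.5°F modern U.S. figure are rounded assumptions drawn from the surrounding text, not values from the study itself):

```python
# Rough comparison of body-temperature decline rates (figures rounded).

# Tsimane: observed decline of 0.09 F per year over the 16-year study
tsimane_decline = 0.09 * 16                  # total decline, in F
tsimane_rate = 0.09 * 10                     # per decade

# U.S.: from Wunderlich's 98.6 F (~1860) to ~97.5 F (2019 Palo Alto study)
us_decline = 98.6 - 97.5                     # total decline, in F
us_years = 2019 - 1860                       # approximate span, in years
us_rate = us_decline / us_years * 10         # per decade

print(f"Tsimane: {tsimane_decline:.2f} F over 16 years ({tsimane_rate:.2f} F/decade)")
print(f"U.S.:    {us_decline:.1f} F over ~{us_years} years ({us_rate:.3f} F/decade)")
# The total declines are comparable, but the Tsimane rate is roughly 13x faster.
```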

The anthropologists' research appears in the journal Science Advances.

"The provocative study showing declines in normal body temperature in the U.S. since the time of the Civil War was conducted in a single population and couldn't explain why the decline happened," said Gurven. "But it was clear that something about human physiology could have changed. One leading hypothesis is that we've experienced fewer infections over time due to improved hygiene, clean water, vaccinations and medical treatment. In our study, we were able to test that idea directly. We have information on clinical diagnoses and biomarkers of infection and inflammation at the time each patient was seen."

While some infections were associated with higher body temperature, adjusting for these did not account for the steep decline in body temperature over time, Gurven noted. "And we used the same type of thermometer for most of the study, so it's not due to changes in instrumentation," he said.

Added Kraft, "No matter how we did the analysis, the decline was still there. Even when we restricted analysis to the <10% of adults who were diagnosed by physicians as completely healthy, we still observed the same decline in body temperature over time."

A key question, then, is why body temperatures have declined over time both for Americans and Tsimane. Extensive data available from the team's long-term research in Bolivia addresses some possibilities. "Declines might be due to the rise of modern health care and lower rates of lingering mild infections now as compared to the past," Gurven explained. "But while health has generally improved over the past two decades, infections are still widespread in rural Bolivia. Our results suggest that reduced infection alone can't explain the observed body temperature declines."

It could be that people are in better condition, so their bodies might be working less to fight infection, he continued. Or greater access to antibiotics and other treatments means the duration of infection is shorter now than in the past. Consistent with that argument, Gurven said, "We found that having a respiratory infection in the early period of the study led to having a higher body temperature than having the same respiratory infection more recently."

It's also possible that greater use of anti-inflammatory drugs like ibuprofen may reduce inflammation, though the researchers found that the temporal decline in body temperature remained even after their analyses accounted for biomarkers of inflammation.

"Another possibility is that our bodies don't have to work as hard to regulate internal temperature because of air conditioning in the summer and heating in the winter," Kraft said. "While Tsimane body temperatures do change with time of year and weather patterns, the Tsimane still do not use any advanced technology for helping to regulate their body temperature. They do, however, have more access to clothes and blankets."

The researchers were initially surprised to find no single "magic bullet" that could explain the decline in body temperature. "It's likely a combination of factors -- all pointing to improved conditions," Gurven said.

According to Gurven, the finding of lower-than-expected body temperatures in the U.S., and the decline over time, had a lot of people scratching their heads. Was it a fluke? In this study, Gurven and his team confirm that body temperatures below 98.6°F are found in places outside the U.S. and the U.K. "The area of Bolivia where the Tsimane live is rural and tropical with minimal public health infrastructure," he noted. "Our study also gives the first indication that body temperatures have declined even in this tropical environment, where infections still account for much morbidity and mortality."

As a vital sign, temperature is an indicator of what's occurring physiologically in the body, much like a metabolic thermostat. "One thing we've known for a while is that there is no universal 'normal' body temperature for everyone at all times, so I doubt our findings will affect how clinicians use body temperature readings in practice," said Gurven. Despite the fixation on 98.6°F, most clinicians recognize that 'normal' temperatures have a range. Throughout the day, body temperature can vary by as much as 1°F, from its lowest in the early morning to its highest in the late afternoon. It also varies across the menstrual cycle and following physical activity, and tends to decrease as we age.

Read more at Science Daily

Denisovan DNA in the genome of early East Asians

 Researchers analyzed the genome of the oldest human fossil found in Mongolia to date and showed that the 34,000-year-old woman inherited around 25 percent of her DNA from western Eurasians, demonstrating that people moved across the Eurasian continent shortly after it had first been settled by the ancestors of present-day populations. This individual and a 40,000-year-old individual from China also carried DNA from Denisovans, an extinct form of hominins that inhabited Asia before modern humans arrived.

In 2006, miners discovered a hominin skullcap with peculiar morphological features in the Salkhit Valley of the Norovlin county in eastern Mongolia. It was initially referred to as Mongolanthropus and thought to be a Neandertal or even a Homo erectus. The remains of the "Salkhit" individual represent the only Pleistocene hominin fossil found in the country.

Ancient DNA extracted from the skullcap shows that it belonged to a female modern human who lived 34,000 years ago and was more closely related to Asians than to Europeans. Comparisons to the only other early East Asian individual genetically studied to date, a 40,000-year-old male from Tianyuan Cave outside Beijing (China), show that the two individuals are related to each other. However, they differ in that a quarter of the ancestry of the Salkhit individual derived from western Eurasians, probably via admixture with ancient Siberians.

Migration and interaction

"This is direct evidence that modern human communities in East Asia were already quite cosmopolitan earlier than 34,000 years ago," says Diyendo Massilani, lead author of the study and researcher at the Max-Planck Institute for Evolutionary Anthropology. "This rare specimen shows that migration and interactions among populations across Eurasia happened frequently already some 35,000 years ago."

The researchers used a new method developed at the Max-Planck Institute for Evolutionary Anthropology to find segments of DNA from extinct hominins in the Salkhit and Tianyuan genomes. They found that the two genomes contain not only Neandertal DNA but also DNA from Denisovans, an elusive Asian relative of Neandertals. "It is fascinating to see that the ancestors of the oldest humans in East Asia from whom we have been able to obtain genetic data had already mixed with Denisovans, an extinct form of hominins that has contributed ancestry to present-day populations in Asia and Oceania," says Byambaa Gunchinsuren, a researcher at the Institute of Archaeology of the Mongolian Academy of Sciences. "This is direct evidence that Denisovans and modern humans had met and mixed more than 40,000 years ago."

Read more at Science Daily

Oct 28, 2020

Researchers map genomes of agricultural 'monsters'

 The University of Cincinnati is decoding the genetics of agricultural pests in projects that could help boost crop and livestock production to feed millions more people around the world.

Joshua Benoit, an associate professor in UC's College of Arts and Sciences, contributed to genetic studies of New World screwworms that feed on livestock and thrips, tiny insects that can transmit viruses to tomatoes and other plants.

It's the latest international collaboration for Benoit, who previously sequenced the DNA for genomes of dreaded creatures such as bedbugs.

Just in time for Halloween, Benoit's new study subject is no less creepy. The New World screwworm's Latin name means "man-eater." These shiny blue flies with pumpkin-orange eyes lay up to 400 eggs in open cuts or sores of cattle, goats, deer and other mammals. Emerging larvae begin gnawing away on their hosts, feeding on living and dead tissue and creating ghastly wounds.

"Sometimes you'll see a deer missing a chunk of its head. The flies can cause small wounds to become massive injuries," he said.

Benoit and his co-authors sequenced the genome of screwworms and identified ways of slashing populations by targeting particular genes that determine sex and control growth and development or even particular behaviors that help the flies find a suitable animal host.

The study led by entomologist Maxwell Scott at North Carolina State University was published in the journal Communications Biology.

"Our main goal was to use the genomic information to build strains that produce only males for an enhanced sterile-insect program," Scott said.

The New World screwworm is an agricultural menace that causes billions of dollars in livestock losses each year in South America, where it is common. The fly was a scourge in North America as well but was eradicated from the United States in 1982 with intense and ongoing population controls.

Today, a lab operated by Panama and the U.S. Department of Agriculture has established a biological barrier outside Panama City, a geographic choke point between the two continents.

"They rear flies in a lab, sterilize them with chemicals or radiation and dump these sterile male flies into the environment from a plane so they mate with the females and produce no offspring," Benoit said.

Year by year, agriculture experts gradually pushed the screwworm out of Texas, Mexico and most of Central America.

"They just used straight brute force and good science," Benoit said. "They just drove them down all the way to Panama."

Today, Panama and the United States continue to air-drop sterile screwworms by the millions each week over the choke point to prevent the species from moving north.

A 2016 outbreak in the Florida Keys threatened to wipe out endangered Key deer before the USDA intervened, treating infected animals with a parasite medicine and releasing millions of sterile screwworms on the island chain until they disappeared.

"The U.S. still helps pay for control programs in Panama mainly because we don't want screwworms coming back here. It's the cheapest way to prevent potentially billions of dollars in damage," Benoit said.

One possible way to cut costs would be to raise only male screwworms that are intended for release so the lab wouldn't incur the huge costs of feeding female screwworms. UC's genetic study could help scientists cull females before they hatch.

"So you're left with surviving males. Then you sterilize the males and that would save a lot of money because you'd only have to raise the males for release," he said.

Next, Scott said he wants to understand how the livestock-devouring screwworm Cochliomyia hominivorax evolved as a parasitic meat eater while similar species prefer carrion.

Benoit also contributed to a genomic study in the journal BMC Biology of an insect not much bigger than the dot over the letter i. Thrips, tiny winged insects, are legion around the world and feed on a wide variety of crops, including soybeans, tomatoes -- even cannabis. They can destroy crops both by eating them and by transmitting harmful viruses.

In a study led by entomologist Dorith Rotenberg at North Carolina State University, researchers mapped 16,859 genes, helping them understand the thrips' sensory and immune systems and the salivary glands that transmit the viruses.

"The genome provides the essential tools and knowledge for developing genetic pest management strategies for suppressing thrips pest populations," Rotenberg said.

One thrips-borne virus is a particular agricultural concern: the spotted wilt virus, which studies have found can reduce a crop's yield by as much as 96%.

"We're talking hundreds of millions of dollars in losses," Benoit said.

The study found that thrips can be finicky eaters and have unique genetic adaptations that likely allow them to feed on many different plants. They pierce the plant and suck the juices.

And thrips have surprisingly sophisticated immune systems, the study found. Researchers identified 96 immune genes, more than many other insects studied to date.

"We mapped the genome, but we also characterized immune aspects and how they feed," Benoit said. "It was the first study of its kind to explain what underlies their reproductive mechanisms. It was far more detailed than previous genomic studies we've done."

The study was funded in part by the National Science Foundation and a UC faculty development research grant.

Benoit said the solution to a thrips infestation predictably has been pesticides. But the UC study could help find better environmental solutions, he said.

Read more at Science Daily

Scientists map structure of potent antibody against coronavirus

 Scientists at Fred Hutchinson Cancer Research Center in Seattle have shown that a potent antibody from a COVID-19 survivor interferes with a key feature on the surface of the coronavirus's distinctive spikes and induces critical pieces of those spikes to break off in the process.

The antibody -- a tiny, Y-shaped protein that is one of the body's premier weapons against pathogens including viruses -- was isolated by the Fred Hutch team from a blood sample received from a Washington state patient in the early days of the pandemic.

The team led by Drs. Leo Stamatatos, Andrew McGuire and Marie Pancera previously reported that, among dozens of different antibodies generated naturally by the patient, this one -- dubbed CV30 -- was 530 times more potent than any of its competitors.

Using tools derived from high-energy physics, Hutch structural biologist Pancera and her postdoctoral fellow Dr. Nicholas Hurlburt have now mapped the molecular structure of CV30. They and their colleagues published their results online today in the journal Nature Communications.

The product of their research is a set of computer-generated 3D images that look to the untrained eye like an unruly mass of noodles. But to scientists they show the precise shapes of proteins comprising critical surface structures of antibodies, the coronavirus spike and the spike's binding site on human cells. The models depict how these structures can fit together like pieces of a 3D puzzle.

"Our study shows that this antibody neutralizes the virus with two mechanisms. One is that it overlaps the virus's target site on human cells, the other is that it induces shedding or dissociation of part of the spike from the rest," Pancera said.

At the tip of each of the antibody's floppy, Y-shaped arms is an infinitesimally small patch of molecules. This patch can neatly stretch across a spot on the coronavirus spike, a site that otherwise works like a grappling hook to grab onto a docking site on human cells.

The target for those hooks is the ACE2 receptor, a protein found on the surfaces of cells that line human lung tissues and blood vessels. But if CV30 antibodies cover those hooks, the coronavirus cannot dock easily with the ACE2 receptor. Its ability to infect cells is blunted.

This very effective antibody not only jams the business end of the coronavirus spike, it apparently causes a section of that spike, known as S1, to shear off. Hutch researcher McGuire and his laboratory team performed an experiment showing that, in the presence of this antibody, there is a reduction of antibody binding over time, suggesting that the S1 section was shed from the spike surface.

The S1 protein plays a crucial role in helping the coronavirus to enter cells. Research indicates that after the spike makes initial contact with the ACE2 receptor, the S1 protein swings like a gate to help the virus fuse with the captured cell surface and slip inside. Once within a cell, the virus hijacks components of the cell's gene- and protein-making machinery to make multiple copies of itself that are ultimately released to infect other target cells.

The incredibly small size of antibodies is difficult to comprehend. These proteins are so small they would appear to swarm like mosquitos around a virus whose structure can only be seen using the most powerful of microscopes. The tiny molecular features Pancera's team focused on, at the tips of the antibody protein, are measured in nanometers -- billionths of a meter.

Yet structural biologists equipped with the right tools can now build accurate 3D images of these proteins, deduce how parts of these structures fit like puzzle pieces, and even animate their interactions.

Fred Hutch structural biologists developed 3D images of an antibody fished from the blood of an early COVID-19 survivor that efficiently neutralized the coronavirus.

Dr. Nicholas Hurlburt, who helped develop the images, narrates this short video showing how that antibody interacts with the notorious spikes of the coronavirus, blocking their ability to bind to a receptor on human cells that otherwise presents a doorway to infection.

Key to building models of these nanoscale proteins is the use of X-ray crystallography. Structural biologists determine the shapes of proteins by illuminating frozen, crystalized samples of these molecules with extremely powerful X-rays. The most powerful X-rays come from a gigantic instrument known as a synchrotron light source. Born from atom-smashing experiments dating back to the 1930s, a synchrotron is a ring of massively powerful magnets that are used to accelerate a stream of electrons around a circular track at close to the speed of light. Synchrotrons are so costly that only governments can build and operate them. There are only 40 of them in the world.

Pancera's work used the Advanced Photon Source, a synchrotron at Argonne National Laboratory near Chicago, which is run by the University of Chicago and the U.S. Department of Energy. Argonne's ring is 1,200 feet in diameter and sits on an 80-acre site.

As the electrons whiz around the synchrotron ring, they give off enormously powerful X-rays -- far brighter than the sun but delivered in flashes of beams smaller than a pinpoint.

Structural biologists from around the world rely on these brilliant X-ray beamlines to illuminate frozen crystals of proteins, which reveal their structure in the way the bright beams are bent as they pass through the molecules. It takes powerful computers to translate the data readout from these synchrotron experiments into the images of proteins that are eventually completed by structural biologists.
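The bending described here is diffraction, governed by Bragg's law, nλ = 2d sin θ. A minimal sketch of the relation (the ~0.1 nm X-ray wavelength and ~0.3 nm lattice spacing are illustrative values, not figures from the study):

```python
import math

def bragg_angle_deg(wavelength_nm: float, d_spacing_nm: float, n: int = 1) -> float:
    """Diffraction angle from Bragg's law: n * lambda = 2 * d * sin(theta)."""
    sin_theta = n * wavelength_nm / (2 * d_spacing_nm)
    if not 0 < sin_theta <= 1:
        raise ValueError("no diffraction: n * lambda exceeds 2 * d")
    return math.degrees(math.asin(sin_theta))

# Illustrative values: ~0.1 nm synchrotron X-rays, ~0.3 nm crystal plane spacing
theta = bragg_angle_deg(wavelength_nm=0.1, d_spacing_nm=0.3)
print(f"First-order Bragg angle: {theta:.1f} degrees")
```

Because the diffraction angle depends on the spacing between planes of atoms in the crystal, measuring many such angles lets the computers work backward to the atomic arrangement.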

The Fred Hutch team's work on CV30 builds on that of other structural biologists who are studying a growing family of potent neutralizing antibodies against the coronavirus. The goal of most coronavirus vaccine candidates is to stimulate and train the immune system to make similar neutralizing antibodies, which can recognize the virus as an invader and stop COVID-19 infections before they can take hold.

Neutralizing antibodies from the blood of recovered COVID-19 patients may also be infused into infected patients -- an experimental approach known as convalescent plasma therapy. The donated plasma contains a wide variety of different antibodies of varying potency. Although once thought promising, recent studies have cast doubt on its effectiveness.

However, pharmaceutical companies are experimenting with combinations of potent neutralizing antibodies that can be grown in a laboratory. These "monoclonal antibody cocktails" can be produced at industrial scale for delivery by infusion to infected patients or given as prophylactic drugs to prevent infection. After coming down with COVID-19, President Trump received an experimental monoclonal antibody drug being tested in clinical trials by the biotech company Regeneron, and he attributes his apparently quick recovery to the advanced medical treatment he received.

The Fred Hutch research team holds out hope that the protein they discovered, CV30, may prove to be useful in the prevention or treatment of COVID-19. To find out, this antibody, along with other candidate proteins their team is studying, needs to be tested preclinically and then in human trials.

Read more at Science Daily

Astronomers discover activity on distant planetary object

 Centaurs are minor planets believed to have originated in the Kuiper Belt in the outer solar system. They sometimes have comet-like features such as tails and comae -- clouds of dust particles and gas -- even though they orbit in a region between Jupiter and Neptune where it is too cold for water to readily sublimate, or transition, directly from a solid to a gas.

Only 18 active Centaurs have been discovered since 1927, and much about them is still poorly understood. Detecting activity on Centaurs is also observationally challenging because they are faint and rare, and because observing them demands substantial telescope time.

A team of astronomers, led by doctoral student and Presidential Fellow Colin Chandler in Northern Arizona University's Astronomy and Planetary Science PhD program, earlier this year announced their discovery of activity emanating from Centaur 2014 OG392, a planetary object first found in 2014. They published their findings in a paper in The Astrophysical Journal Letters, "Cometary Activity Discovered on a Distant Centaur: A Nonaqueous Sublimation Mechanism." Chandler is the lead author, working with four NAU co-authors, graduate student Jay Kueny, associate professor Chad Trujillo, professor David Trilling and PhD student William Oldroyd.

The team's research involved developing a database search algorithm to locate archival images of the Centaur as well as a follow-up observational campaign.

"Our paper reports the discovery of activity emanating from Centaur 2014 OG392, based on archival images we uncovered," Chandler said, "plus our own new observational evidence acquired with the Dark Energy Camera at the Inter-American Observatory in Cerro Tololo, Chile, the Walter Baade Telescope at the Las Campanas Observatory in Chile and the Large Monolithic Imager at Lowell Observatory's Discovery Channel Telescope in Happy Jack, Ariz."

"We detected a coma as far as 400,000 km from 2014 OG392," he said, "and our analysis of sublimation processes and dynamical lifetime suggest carbon dioxide and/or ammonia are the most likely candidates for causing activity on this and other active Centaurs."

"We developed a novel technique," Chandler said, "that combines observational measurements, for example, color and dust mass, with modeling efforts to estimate such characteristics as the object's volatile sublimation and orbital dynamics."

As a result of the team's discovery, the Centaur has recently been reclassified as a comet, and will be known as "C/2014 OG392 (PANSTARRS)."

"I'm very excited that the Minor Planet Center awarded a new comet designation befitting the activity we discovered on this unusual object," he said.

Read more at Science Daily

Juno data indicates 'sprites' or 'elves' frolic in Jupiter's atmosphere

 New results from NASA's Juno mission at Jupiter suggest that either "sprites" or "elves" could be dancing in the upper atmosphere of the solar system's largest planet. It is the first time these bright, unpredictable and extremely brief flashes of light -- formally known as transient luminous events, or TLEs -- have been observed on another world. The findings were published on Oct. 27, 2020, in the Journal of Geophysical Research: Planets.

Scientists predicted these bright, superfast flashes of light should also be present in Jupiter's immense roiling atmosphere, but their existence remained theoretical. Then, in the summer of 2019, researchers working with data from Juno's ultraviolet spectrograph instrument (UVS) discovered something unexpected: a bright, narrow streak of ultraviolet emission that disappeared in a flash.

"UVS was designed to characterize Jupiter's beautiful northern and southern lights," said Giles, a Juno scientist and the lead author of the paper. "But we discovered UVS images that not only showed Jovian aurora, but also a bright flash of UV light over in the corner where it wasn't supposed to be. The more our team looked into it, the more we realized Juno may have detected a TLE on Jupiter."

Brief and Brilliant

Named after a mischievous, quick-witted character in English folklore, sprites are transient luminous events triggered by lightning discharges from thunderstorms far below. On Earth, they occur up to 60 miles (97 kilometers) above intense, towering thunderstorms and brighten a region of the sky tens of miles across, yet last only a few milliseconds (a fraction of the time it takes you to blink an eye).

Almost resembling a jellyfish, sprites feature a central blob of light (on Earth, it's 15 to 30 miles, or 24 to 48 kilometers, across), with long tendrils extending both down toward the ground and upward. Elves (short for Emission of Light and Very Low Frequency perturbations due to Electromagnetic Pulse Sources) appear as a flattened disk glowing in Earth's upper atmosphere. They, too, brighten the sky for mere milliseconds but can grow larger than sprites -- up to 200 miles (320 kilometers) across on Earth.

Their colors are distinctive as well. "On Earth, sprites and elves appear reddish in color due to their interaction with nitrogen in the upper atmosphere," said Giles. "But on Jupiter, the upper atmosphere mostly consists of hydrogen, so they would likely appear either blue or pink."

Location, Location, Location

The occurrence of sprites and elves at Jupiter was predicted by several previously published studies. Consistent with these predictions, the 11 large-scale bright events Juno's UVS instrument has detected occurred in a region where lightning-producing thunderstorms are known to form. Juno scientists could also rule out that these were simply mega-bolts of lightning because they were found about 186 miles (300 kilometers) above the altitude where the majority of Jupiter's lightning forms -- its water-cloud layer. And UVS recorded that the spectra of the bright flashes were dominated by hydrogen emissions.

A rotating, solar-powered spacecraft, Juno arrived at Jupiter in 2016 after making a five-year journey. Since then, it has made 29 science flybys of the gas giant, each orbit taking 53 days.
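These mission figures are internally consistent, as a quick sketch with Python's standard datetime module shows (the July 4, 2016 arrival date is an assumption from public mission records, not stated in the article):

```python
from datetime import date, timedelta

arrival = date(2016, 7, 4)          # Juno's Jupiter orbit insertion (assumed date)
orbit = timedelta(days=53)          # one science orbit, per the article

elapsed = 29 * orbit                # 29 flybys' worth of orbits
print(f"29 orbits x 53 days = {elapsed.days} days (~{elapsed.days / 365.25:.1f} years)")
print(f"Placing the 29th flyby near {arrival + elapsed}")
# ~4.2 years after arrival, consistent with this late-2020 report
```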

"We're continuing to look for more telltale signs of elves and sprites every time Juno does a science pass," said Giles. "Now that we know what we are looking for, it will be easier to find them at Jupiter and on other planets. And comparing sprites and elves from Jupiter with those here on Earth will help us better understand electrical activity in planetary atmospheres."

Read more at Science Daily

Oct 27, 2020

Gran Telescopio Canarias finds the farthest black hole that belongs to a rare family of galaxies

 An international team of astronomers has identified one of the rarest known classes of gamma-ray emitting galaxies, called BL Lacertae objects, within the first 2 billion years of the age of the Universe. The team, which used one of the largest optical telescopes in the world, the Gran Telescopio Canarias (GTC), located at the Observatorio del Roque de los Muchachos (Garafía, La Palma), consists of researchers from the Universidad Complutense de Madrid (UCM, Spain), DESY (Germany), the University of California Riverside and Clemson University (USA). The finding is published in The Astrophysical Journal Letters.

Only a small fraction of galaxies emit gamma rays, the most extreme form of light. Astronomers believe that these highly energetic photons originate from the vicinity of a supermassive black hole residing at the center of each such galaxy; when this happens, the galaxies are known as active galaxies. The black hole swallows matter from its surroundings and emits jets, or collimated streams of matter and radiation. A few of these active galaxies (less than 1%) have jets that happen to point toward Earth. Scientists call them blazars, and they are among the most powerful sources of radiation in the universe.

Blazars come in two flavors: BL Lacertae objects (BL Lacs) and flat-spectrum radio quasars (FSRQs). Our current understanding of these mysterious astronomical objects is that FSRQs are relatively young active galaxies, rich in the dust and gas that surround the central black hole. As time passes, the amount of matter available to feed the black hole is consumed and the FSRQ evolves to become a BL Lac object. "In other words, BL Lacs may represent the elderly and evolved phase of a blazar's life, while FSRQs resemble an adult," explains Vaidehi Paliya, a DESY researcher who participated in this program.

"Since the speed of light is limited, the farther we look, the earlier in the age of the Universe we investigate," says Alberto Domínguez of the Institute of Physics of Particles and the Cosmos (IPARCOS) at UCM and co-author of the study. Astronomers believe that the current age of the Universe is around 13.8 billion years. The most distant known FSRQ was identified when the age of the Universe was merely 1 billion years. For comparison, the farthest known BL Lac was found when the age of the Universe was around 2.5 billion years. Therefore, the hypothesis that FSRQs evolve into BL Lacs appears to be valid.

Now, the team of international scientists has discovered a new BL Lac object, named 4FGL J1219.0+3653, much farther away than the previous record holder. "We have discovered a BL Lac that existed even 800 million years earlier, when the Universe was less than 2 billion years old," states Cristina Cabello, a graduate student at IPARCOS-UCM. "This finding challenges the current scenario that BL Lacs are actually an evolved phase of FSRQs," adds Nicolás Cardiel, a professor at IPARCOS-UCM. Jesús Gallego, also a professor at the same institution and a co-author of the study, concludes: "This discovery has challenged our knowledge of the cosmic evolution of blazars and active galaxies in general."

The researchers have used the OSIRIS and EMIR instruments, designed and built by the Instituto de Astrofísica de Canarias (IAC) and mounted on GTC, also known as Grantecan. "These results are a clear example of how the combination of the large collecting area of GTC, the world's largest optical-infrared telescope, together with the unique capabilities of complementary instruments installed in the telescope are providing breakthrough results to improve our understanding of the Universe," underlines Romano Corradi, director of Grantecan.

Read more at Science Daily

Over 80 percent of COVID-19 patients have vitamin D deficiency, study finds

 Over 80 percent of 216 COVID-19 patients in a hospital in Spain had vitamin D deficiency, according to a new study published in the Endocrine Society's Journal of Clinical Endocrinology & Metabolism.

Vitamin D is a hormone the kidneys produce that controls blood calcium concentration and impacts the immune system. Vitamin D deficiency has been linked to a variety of health concerns, although research is still underway into why the hormone impacts other systems of the body. Many studies point to the beneficial effect of vitamin D on the immune system, especially regarding protection against infections.

"One approach is to identify and treat vitamin D deficiency, especially in high-risk individuals such as the elderly, patients with comorbidities, and nursing home residents, who are the main target population for COVID-19," said study co-author José L. Hernández, Ph.D., of the University of Cantabria in Santander, Spain. "Vitamin D treatment should be recommended in COVID-19 patients with low levels of vitamin D circulating in the blood since this approach might have beneficial effects in both the musculoskeletal and the immune system."

The researchers found 80 percent of 216 COVID-19 patients at the Hospital Universitario Marqués de Valdecilla had vitamin D deficiency, and men had lower vitamin D levels than women. COVID-19 patients with lower vitamin D levels also had raised serum levels of inflammatory markers such as ferritin and D-dimer.

Read more at Science Daily

Scientists discover how a common mutation leads to 'night owl' sleep disorder

 A new study by researchers at UC Santa Cruz shows how a genetic mutation throws off the timing of the biological clock, causing a common sleep syndrome called delayed sleep phase disorder.

People with this condition are unable to fall asleep until late at night (often after 2 a.m.) and have difficulty getting up in the morning. In 2017, scientists discovered a surprisingly common mutation that causes this sleep disorder by altering a key component of the biological clock that maintains the body's daily rhythms. The new findings, published October 26 in Proceedings of the National Academy of Sciences, reveal the molecular mechanisms involved and point the way toward potential treatments.

"This mutation has dramatic effects on people's sleep patterns, so it's exciting to identify a concrete mechanism in the biological clock that links the biochemistry of this protein to the control of human sleep behavior," said corresponding author Carrie Partch, professor of chemistry and biochemistry at UC Santa Cruz.

Daily cycles in virtually every aspect of our physiology are driven by cyclical interactions of clock proteins in our cells. Genetic variations that change the clock proteins can alter the timing of the clock and cause sleep phase disorders. A shortened clock cycle causes people to go to sleep and wake up earlier than normal (the "morning lark" effect), while a longer clock cycle makes people stay up late and sleep in (the "night owl" effect).

Most of the mutations known to alter the clock are very rare, Partch said. They are important to scientists as clues to understanding the mechanisms of the clock, but a given mutation may only affect one in a million people. The genetic variant identified in the 2017 study, however, was found in around one in 75 people of European descent.

How often this particular mutation is involved in delayed sleep phase disorder remains unclear, Partch said. Sleep behavior is complex -- people stay up late for many different reasons -- and disorders can be hard to diagnose. So the discovery of a relatively common genetic variation associated with a sleep phase disorder was a striking development.

"This genetic marker is really widespread," Partch said. "We still have a lot to understand about the role of lengthened clock timing in delayed sleep onset, but this one mutation is clearly an important cause of late night behavior in humans."

The mutation affects a protein called cryptochrome, one of four main clock proteins. Two of the clock proteins (CLOCK and BMAL1) form a complex that turns on the genes for the other two (period and cryptochrome), which then combine to repress the activity of the first pair, thus turning themselves off and starting the cycle again. This feedback loop is the central mechanism of the biological clock, driving daily fluctuations in gene activity and protein levels throughout the body.

The cryptochrome mutation causes a small segment on the "tail" of the protein to get left out, and Partch's lab found that this changes how tightly cryptochrome binds to the CLOCK:BMAL1 complex.

"The region that gets snipped out actually controls the activity of cryptochrome in a way that leads to a 24-hour clock," Partch explained. "Without it, cryptochrome binds more tightly and stretches out the length of the clock each day."

The binding of these protein complexes involves a pocket on CLOCK:BMAL1; the tail segment missing in the mutant normally competes for this pocket, interfering with the binding of the rest of the complex.

"How tightly the complex partners bind to this pocket determines how quickly the clock runs," Partch explained. "This tells us we should be looking for drugs that bind to that pocket and can serve the same purpose as the cryptochrome tail."
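As a rough illustration of that last point, the clock can be caricatured as a relaxation oscillator: a repressor accumulates for a fixed rise time, then keeps its own genes off until it decays below a threshold. In this toy sketch (all rates and values are hypothetical, not taken from the study), tighter cryptochrome binding is modeled as a slower release rate, which stretches the period just as the researchers describe:

```python
import math

def clock_period(rise_time_h, peak, threshold, k_off):
    """Period of a toy relaxation oscillator: the repressor accumulates for
    rise_time_h hours up to `peak`, then keeps its genes off until it decays
    exponentially (rate k_off per hour) below `threshold`."""
    off_time_h = math.log(peak / threshold) / k_off
    return rise_time_h + off_time_h

# Hypothetical rates chosen so the "normal" clock runs at 24 hours.
normal = clock_period(rise_time_h=12, peak=10, threshold=1, k_off=math.log(10) / 12)
# Model tighter binding of mutant cryptochrome as a ~10% slower release rate.
mutant = clock_period(rise_time_h=12, peak=10, threshold=1, k_off=0.9 * math.log(10) / 12)

print(f"normal period = {normal:.1f} h, mutant period = {mutant:.1f} h")
# → normal period = 24.0 h, mutant period = 25.3 h
```

The real clock involves coupled protein complexes rather than a single decaying repressor, but the qualitative conclusion is the same: slowing the repressor's release lengthens every cycle.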

Partch's lab is currently doing just that, conducting screening assays to identify molecules that bind to the pocket in the clock's molecular complex. "We know now that we need to target that pocket to develop therapeutics that could shorten the clock for people with delayed sleep phase disorder," she said.

Partch has been studying the molecular structures and interactions of the clock proteins for years. In a study published earlier this year, her lab showed how certain mutations can shorten clock timing by affecting a molecular switch mechanism, making some people extreme morning larks.

Read more at Science Daily

For vampire bats, social distancing while sick comes naturally

Common vampire bats
New research shows that when vampire bats feel sick, they socially distance themselves from groupmates in their roost -- no public health guidance required.

The researchers gave wild vampire bats a substance that activated their immune system and made them feel sick for several hours, and then returned the bats to their roost. A control group of bats received a placebo.

Data on the behavior of these bats was transmitted to scientists by custom-made "backpack" computers that were glued to the animals' backs, recording the vampire bats' social encounters.

Compared to control bats in their hollow-tree home, sick bats interacted with fewer bats, spent less time near others and were overall less interactive with individuals that were well-connected with others in the roost.

Healthy bats were also less likely to associate with a sick bat, the data showed.

"Social distancing during the COVID-19 pandemic, when we feel fine, doesn't feel particularly normal. But when we're sick, it's common to withdraw a bit and stay in bed longer because we're exhausted. And that means we're likely to have fewer social encounters," said Simon Ripperger, co-lead author of the study and a postdoctoral researcher in evolution, ecology and organismal biology at The Ohio State University.

"That's the same thing we were observing in this study: In the wild, vampire bats -- which are highly social animals -- keep their distance when they're sick or living with sick groupmates. And it can be expected that they reduce the spread of disease as a result."

The study was published today (Oct. 27, 2020) in the journal Behavioral Ecology.

Ripperger works in the lab of co-lead author Gerald Carter, assistant professor of evolution, ecology and organismal biology at Ohio State. The two scientists and their co-author on this paper, University of Texas at Austin graduate student Sebastian Stockmaier, are also affiliated with the Smithsonian Tropical Research Institute in Panama.

Carter and Ripperger have partnered on numerous studies of social behavior in vampire bats. Among their previous findings: Vampire bats make friends through a gradual buildup of trust, and vampire bat moms maintain social connections to their offspring even when both feel sick.

For this work, the researchers captured 31 female common vampire bats living inside a hollow tree in Lamanai, Belize. They injected 16 bats with the molecule that induced the immune challenge -- but did not cause disease -- and 15 with saline, a placebo.

After returning the bats to their roost, the scientists analyzed social behaviors in the colony over three days, including a "treatment period" from three to nine hours after the injections, during which behavior changes were attributed to the treated bats feeling sick.

"We focused on three measures of the sick bats' behaviors: how many other bats they encountered, how much total time they spent with others, and how well-connected they were to the whole social network," Carter said.

On average, compared to control bats, the sick bats associated with four fewer groupmates over the six-hour treatment period and spent 25 fewer minutes interacting per partner. The time any two bats spent near each other was also shortest if the encounter involved at least one sick bat.

"One reason that the sick vampire bats encountered fewer groupmates is simply because they were lethargic and moved around less," Carter said. "In captivity, we saw that sick bats also groom others less and make fewer contact calls. These simple changes in behavior can create social distance even without any cooperation or avoidance by healthy bats. We had previously studied this in the lab. Our goal here was to measure the outcomes of these sickness behaviors in a natural setting.

"The effects we showed here are probably common in many other animals. But it is important to remember that changes in behavior also depend on the pathogen. We did not use a real virus or bacteria, because we wanted to isolate the effect of sickness behavior. Some real diseases might make interactions more likely, not less, or they might lead to sick bats being avoided."

Although the study did not document the spread of an actual disease, combining the social encounter data with known links between exposure time and pathogen transmission allows researchers to predict how sickness behavior can influence the spread of a pathogen in a social network.

Clearly identifying each bat's behavior in the colony's social network was possible only because the proximity sensors -- miniaturized computers that weigh less than a penny and fall off within a week or two -- took measurements every few seconds of associations involving sick or healthy bats or a combination of the two. Visualizations of the proximity sensors' recordings showed growth in the number of connections made in the colony's social network from the treatment period to 48 hours later.
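The core of that analysis, turning a stream of pairwise proximity readings into per-dyad association totals, can be sketched in a few lines. The bat IDs, sample interval, and readings below are hypothetical stand-ins, not the study's actual data format:

```python
from collections import defaultdict

def association_time(readings, sample_interval_s=2.0):
    """Sum proximity readings into total association time per bat dyad.
    Each reading (bat_a, bat_b) counts as one sample interval together."""
    totals = defaultdict(float)
    for a, b in readings:
        # Sort so (a, b) and (b, a) count toward the same dyad.
        totals[tuple(sorted((a, b)))] += sample_interval_s
    return dict(totals)

# Hypothetical sensor readings: pairs detected within range, roughly every 2 s.
readings = [("b01", "b02"), ("b01", "b02"), ("b02", "b03"), ("b01", "b03")]
print(association_time(readings))
# → {('b01', 'b02'): 4.0, ('b02', 'b03'): 2.0, ('b01', 'b03'): 2.0}
```

From totals like these, per-bat encounter counts and network connectedness measures follow directly.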

Read more at Science Daily

Oct 26, 2020

Tiny moon shadows may harbor hidden stores of ice

 Hidden pockets of water could be much more common on the surface of the moon than scientists once suspected, according to new research led by the University of Colorado Boulder. In some cases, these tiny patches of ice might exist in permanent shadows no bigger than a penny.

"If you can imagine standing on the surface of the moon near one of its poles, you would see shadows all over the place," said Paul Hayne, assistant professor in the Laboratory of Atmospheric and Space Physics at CU Boulder. "Many of those tiny shadows could be full of ice."

In a study published today in the journal Nature Astronomy, Hayne and his colleagues explored phenomena on the moon called "cold traps" -- shadowy regions of the surface that exist in a state of eternal darkness.

Many have gone without a single ray of sunlight for potentially billions of years. And these nooks and crannies may be a lot more numerous than previous data suggest. Drawing on detailed data from NASA's Lunar Reconnaissance Orbiter, the researchers estimate that the moon could harbor roughly 15,000 square miles of permanent shadows in various shapes and sizes -- reservoirs that, according to theory, might also be capable of preserving water via ice.

Future lunar residents, in other words, may be in luck.

"If we're right, water is going to be more accessible for drinking water, for rocket fuel, everything that NASA needs water for," said Hayne, also of the Department of Astrophysical and Planetary Sciences.

Visiting a crater

To understand cold traps, first take a trip to Shackleton Crater near the moon's south pole. This humongous impact crater reaches several miles deep and stretches about 13 miles across. Because of the moon's position in relation to the sun, much of the crater's interior is permanently in shadow -- a complete lack of direct sunlight that causes temperatures inside to hover at around minus 300 degrees Fahrenheit.

"You look down into Shackleton Crater or Shoemaker Crater, you're looking into this vast, dark inaccessible region," Hayne said. "It's very forbidding."

That forbidding nature, however, may also be key to these craters' importance for planned lunar bases. Scientists have long believed that such cold traps could be ideal environments for hosting ice -- a valuable resource that is scarce on the moon but is occasionally delivered in large quantities when water-rich comets or asteroids crash down.

"The temperatures are so low in cold traps that ice would behave like a rock," Hayne said. "If water gets in there, it's not going anywhere for a billion years."

In their latest research, however, Hayne and his colleagues wanted to know how common such traps might be. Do they only exist in big craters, or do they spread over the face of the moon?

To find out, the team pulled data from real-life observations of the moon, then used mathematical tools to recreate what its surface might look like at a very small scale. The answer: a bit like a golf ball.

Based on the team's calculations, the moon's north and south poles could contain a tremendous number of bumps and nicks capable of hosting permanent shadows -- many of them just a centimeter wide. Previous estimates pegged the area of cold traps on the moon at around 7,000 square miles, about half of what Hayne and his colleagues have predicted.

Mining for water

Hayne notes that his team can't prove that these shadows actually hold pockets of ice -- the only way to do that would be to go there in person or with rovers and dig.

But the results are promising, and future missions could shed even more light, literally, on the moon's water resources. Hayne, for example, is leading a NASA effort called the Lunar Compact Infrared Imaging System (L-CIRiS) that will take heat-sensing panoramic images of the moon's surface near its south pole in 2022.

If his team's findings bear out, locating the ingredients for a hot shower on the moon may have just gotten a lot easier.

"Astronauts may not need to go into these deep, dark shadows," Hayne said. "They could walk around and find one that's a meter wide and that might be just as likely to harbor ice."

Read more at Science Daily

Mythbusting: Five common misperceptions surrounding the environmental impacts of single-use plastics

 Stand in the soda pop aisle at the supermarket, surrounded by rows of brightly colored plastic bottles and metal cans, and it's easy to conclude that the main environmental problem here is an overabundance of single-use containers: If we simply recycled more of them, we'd go a long way toward minimizing impacts.

In reality, most of the environmental impacts of many consumer products, including soft drinks, are tied to the products inside, not the packaging, according to University of Michigan environmental engineer Shelie Miller.

And when it comes to single-use plastics in particular, the production and disposal of packaging often represents only a few percent of a product's lifetime environmental impacts, according to Miller, author of an article scheduled for publication Oct. 26 in the journal Environmental Science & Technology.

"Consumers tend to focus on the impact of the packaging, rather than the impact of the product itself," said Miller, an associate professor at the School for Environment and Sustainability and director of the U-M Program in the Environment. "But mindful consumption that reduces the need for products and eliminates wastefulness is far more effective at reducing overall environmental impact than recycling.

"Nevertheless, it is fundamentally easier for consumers to recycle the packaging of a product than to voluntarily reduce their demand for that product, which is likely one reason why recycling efforts are so popular."

The mistaken belief about the central role of plastic packaging is one of five myths that Miller attempts to debunk in her conventional wisdom-shattering paper, "Five misperceptions surrounding the environmental impacts of single-use plastic."

The five common misperceptions, along with Miller's insights about them, are:

    Plastic packaging is the largest contributor to a product's environmental impact. In reality, the product inside the package usually has a much greater environmental impact.

    The environmental impacts of plastics are greater than any other packaging material. Actually, plastic generally has lower overall environmental impacts than single-use glass or metal in most impact categories.

    Reusable products are always better than single-use plastics. Actually, reusable products have lower environmental impacts only when they are reused enough times to offset the materials and energy used to make them.

    Recycling and composting should be the highest priority. Truth be told, the environmental benefits associated with recycling and composting tend to be small when compared with efforts to reduce overall consumption.

    "Zero waste" efforts that eliminate single-use plastics minimize the environmental impacts of an event. In reality, the benefits of diverting waste from the landfill are small. Waste reduction and mindful consumption, including a careful consideration of the types and quantities of products consumed, are far larger factors dictating the environmental impact of an event.

In her review article, Miller challenges beliefs unsupported by current scientific knowledge while urging other environmental scientists and engineers to broaden the conversation -- in their own research and in discussions that shape public policy.

"Efforts to reduce the use of single-use plastics and to increase recycling may distract from less visible and often more damaging environmental impacts associated with energy use, manufacturing and resource extraction," she said. "We need to take a much more holistic view that considers larger environmental issues."

Miller stresses that she is not trying to downplay environmental concerns associated with plastics and plastic waste. But to place the plastic-waste problem in proper context, it's critical to examine the environmental impacts that occur at every stage of a product's lifetime -- from the extraction of natural resources and the energy needed to make the item to its ultimate disposal or reuse.

Life-cycle assessment, or LCA, is a tool that researchers like Miller use to quantify lifetime environmental impacts in multiple categories, including climate change and energy use, water and resource depletion, biodiversity loss, solid waste generation, and human and ecological toxicity.
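A back-of-the-envelope LCA sketch makes Miller's point concrete. The stage values below are purely illustrative placeholders, not measured data, but they show how a packaging share of "a few percent" falls out of a whole-life-cycle sum:

```python
# Hypothetical cradle-to-grave inventory for one packaged soft drink, in
# grams of CO2-equivalent per life-cycle stage (illustrative numbers only).
stages_g_co2e = {
    "agriculture_and_ingredients": 180.0,
    "processing_and_refrigeration": 120.0,
    "transport_and_distribution": 60.0,
    "packaging_production": 25.0,
    "packaging_end_of_life": 5.0,
}

total = sum(stages_g_co2e.values())
packaging = (stages_g_co2e["packaging_production"]
             + stages_g_co2e["packaging_end_of_life"])
print(f"packaging: {100 * packaging / total:.0f}% of {total:.0f} g CO2e total")
# → packaging: 8% of 390 g CO2e total
```

Even doubling the recycling benefit on the packaging lines would barely move the total, which is why Miller argues that reducing consumption dominates recycling.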

It's easy for consumers to focus on packaging waste because they see boxes, bottles and cans every day, while a wide range of other environmental impacts are largely invisible to them. But LCA analyses systematically evaluate the entire supply chain, measuring impacts that might otherwise be overlooked, Miller said.

Packaged food products, for example, embody largely invisible impacts that can include intensive agricultural production, energy generation, and refrigeration and transportation throughout the supply chain, along with the processing and manufacturing associated with the food and its packaging, she said.

Miller points out that the well-worn adage "reduce, reuse, recycle," commonly known as the 3Rs, was created to provide an easy-to-remember hierarchy of the preferable ways to lessen environmental impact.

Yet most environmental messaging does not emphasize the inherent hierarchy of the 3Rs -- the fact that reducing and reusing are listed ahead of recycling. As a result, consumers often over-emphasize the importance of recycling packaging instead of reducing product consumption to the extent possible and reusing items to extend their lifetime.

"Although the use of single-use plastics has created a number of environmental problems that need to be addressed, there are also numerous upstream consequences of a consumer-oriented society that will not be eliminated, even if plastic waste is drastically reduced," she said.

Read more at Science Daily

How exercise stalls cancer growth through the immune system

 People with cancer who exercise generally have a better prognosis than inactive patients. Now, researchers at Karolinska Institutet in Sweden have found a likely explanation of why exercise helps slow down cancer growth in mice: Physical activity changes the metabolism of the immune system's cytotoxic T cells and thereby improves their ability to attack cancer cells. The study is published in the journal eLife.

"The biology behind the positive effects of exercise can provide new insights into how the body maintains health as well as help us design and improve treatments against cancer," says Randall Johnson, professor at the Department of Cell and Molecular Biology, Karolinska Institutet, and the study's corresponding author.

Prior research has shown that physical activity can help prevent ill health as well as improve the prognosis of several diseases, including various forms of cancer. Exactly how exercise exerts its protective effects against cancer is, however, still unknown, especially when it comes to the biological mechanisms. One plausible explanation is that physical activity activates the immune system and thereby bolsters the body's ability to prevent and inhibit cancer growth.

In this study, researchers at Karolinska Institutet expanded on this hypothesis by examining how the immune system's cytotoxic T cells, that is, white blood cells specialized in killing cancer cells, respond to exercise.

They divided mice with cancer into two groups and let one group exercise regularly in a spinning wheel while the other remained inactive. The result showed that cancer growth slowed and mortality decreased in the trained animals compared with the untrained.

Next, the researchers examined the importance of cytotoxic T cells by injecting antibodies that remove these T cells in both trained and untrained mice. The antibodies knocked out the positive effect of exercise on both cancer growth and survival, which according to the researchers demonstrates the significance of these T cells for exercise-induced suppression of cancer.

The researchers also transferred cytotoxic T cells from trained mice to untrained mice with tumors, which improved the recipients' prospects compared with mice that received cells from untrained animals.

To examine how exercise influenced cancer growth, the researchers isolated T cells, blood and tissue samples after a training session and measured levels of common metabolites that are produced in muscle and excreted into plasma at high levels during exertion. Some of these metabolites, such as lactate, altered the metabolism of the T cells and increased their activity. The researchers also found that T cells isolated from an exercised animal showed an altered metabolism compared to T cells from resting animals.

In addition, the researchers examined how these metabolites change in response to exercise in humans. They took blood samples from eight healthy men after 30 minutes of intense cycling and noticed that the same training-induced metabolites were released in humans.

"Our research shows that exercise affects the production of several molecules and metabolites that activate cancer-fighting immune cells and thereby inhibit cancer growth," says Helene Rundqvist, senior researcher at the Department of Laboratory Medicine, Karolinska Institutet, and the study's first author. "We hope these results may contribute to a deeper understanding of how our lifestyle impacts our immune system and inform the development of new immunotherapies against cancer."

Read more at Science Daily

NASA's SOFIA discovers water on sunlit surface of Moon

Moon
NASA's Stratospheric Observatory for Infrared Astronomy (SOFIA) has confirmed, for the first time, water on the sunlit surface of the Moon. This discovery indicates that water may be distributed across the lunar surface, and not limited to cold, shadowed places.

SOFIA has detected water molecules (H2O) in Clavius Crater, one of the largest craters visible from Earth, located in the Moon's southern hemisphere. Previous observations of the Moon's surface detected some form of hydrogen, but were unable to distinguish between water and its close chemical relative, hydroxyl (OH). Data from this location reveal water in concentrations of 100 to 412 parts per million -- roughly equivalent to a 12-ounce bottle of water -- trapped in a cubic meter of soil spread across the lunar surface. The results are published in the latest issue of Nature Astronomy.
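That "12-ounce bottle per cubic meter" figure can be sanity-checked with a quick unit conversion. The regolith bulk density used below (~1,500 kg/m^3) is an assumed typical value, not a number from the paper:

```python
# Rough mass-fraction check of SOFIA's reported 100-412 ppm: a 12-ounce
# bottle of water (~355 g) spread through one cubic meter of lunar soil.
water_g = 355.0                 # ~12 fluid ounces of water
regolith_g = 1500.0 * 1000.0    # one cubic meter at ~1,500 kg/m^3, in grams

ppm = water_g / regolith_g * 1e6
print(f"{ppm:.0f} ppm by mass")  # → 237 ppm by mass
```

The result lands comfortably inside the reported 100 to 412 ppm range, so the two statements of the measurement are consistent.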

"We had indications that H2O -- the familiar water we know -- might be present on the sunlit side of the Moon," said Paul Hertz, director of the Astrophysics Division in the Science Mission Directorate at NASA Headquarters in Washington. "Now we know it is there. This discovery challenges our understanding of the lunar surface and raises intriguing questions about resources relevant for deep space exploration."

As a comparison, the Sahara Desert has 100 times the amount of water that SOFIA detected in the lunar soil. Despite the small amounts, the discovery raises new questions about how water is created and how it persists on the harsh, airless lunar surface.

Water is a precious resource in deep space and a key ingredient of life as we know it. Whether the water SOFIA found is easily accessible for use as a resource remains to be determined. Under NASA's Artemis program, the agency is eager to learn all it can about the presence of water on the Moon in advance of sending the first woman and next man to the lunar surface in 2024 and establishing a sustainable human presence there by the end of the decade.

SOFIA's results build on years of previous research examining the presence of water on the Moon. When the Apollo astronauts first returned from the Moon in 1969, it was thought to be completely dry. Orbital and impactor missions over the past 20 years, such as NASA's Lunar Crater Observation and Sensing Satellite, confirmed ice in permanently shadowed craters around the Moon's poles. Meanwhile, several spacecraft -- including the Cassini mission and Deep Impact comet mission, as well as the Indian Space Research Organization's Chandrayaan-1 mission -- and NASA's ground-based Infrared Telescope Facility, looked broadly across the lunar surface and found evidence of hydration in sunnier regions. Yet those missions were unable to definitively distinguish the form in which it was present -- either H2O or OH.

"Prior to the SOFIA observations, we knew there was some kind of hydration," said Casey Honniball, the lead author who published the results from her graduate thesis work at the University of Hawaii at Mānoa in Honolulu. "But we didn't know how much, if any, was actually water molecules -- like we drink every day -- or something more like drain cleaner."

SOFIA offered a new means of looking at the Moon. Flying at altitudes of up to 45,000 feet, this modified Boeing 747SP jetliner with a 106-inch diameter telescope reaches above 99% of the water vapor in Earth's atmosphere to get a clearer view of the infrared universe. Using its Faint Object infraRed CAmera for the SOFIA Telescope (FORCAST), SOFIA was able to pick up the specific wavelength unique to water molecules, at 6.1 microns, and discovered a relatively surprising concentration in sunny Clavius Crater.

"Without a thick atmosphere, water on the sunlit lunar surface should just be lost to space," said Honniball, who is now a postdoctoral fellow at NASA's Goddard Space Flight Center in Greenbelt, Maryland. "Yet somehow we're seeing it. Something is generating the water, and something must be trapping it there."

Several forces could be at play in the delivery or creation of this water. Micrometeorites raining down on the lunar surface, carrying small amounts of water, could deposit the water on the lunar surface upon impact. Another possibility is there could be a two-step process whereby the Sun's solar wind delivers hydrogen to the lunar surface and causes a chemical reaction with oxygen-bearing minerals in the soil to create hydroxyl. Meanwhile, radiation from the bombardment of micrometeorites could be transforming that hydroxyl into water.

How the water then gets stored -- making it possible to accumulate -- also raises some intriguing questions. The water could be trapped into tiny beadlike structures in the soil that form out of the high heat created by micrometeorite impacts. Another possibility is that the water could be hidden between grains of lunar soil and sheltered from the sunlight -- potentially making it a bit more accessible than water trapped in beadlike structures.

For a mission designed to look at distant, dim objects such as black holes, star clusters, and galaxies, SOFIA's spotlight on Earth's nearest and brightest neighbor was a departure from business as usual. The telescope operators typically use a guide camera to track stars, keeping the telescope locked steadily on its observing target. But the Moon is so close and bright that it fills the guide camera's entire field of view. With no stars visible, it was unclear if the telescope could reliably track the Moon. To determine this, in August 2018, the operators decided to try a test observation.

"It was, in fact, the first time SOFIA has looked at the Moon, and we weren't even completely sure if we would get reliable data, but questions about the Moon's water compelled us to try," said Naseem Rangwala, SOFIA's project scientist at NASA's Ames Research Center in California's Silicon Valley. "It's incredible that this discovery came out of what was essentially a test, and now that we know we can do this, we're planning more flights to do more observations."

SOFIA's follow-up flights will look for water in additional sunlit locations and during different lunar phases to learn more about how the water is produced, stored, and moved across the Moon. The data will add to the work of future Moon missions, such as NASA's Volatiles Investigating Polar Exploration Rover (VIPER), to create the first water resource maps of the Moon for future human space exploration.

In the same issue of Nature Astronomy, scientists have published a paper using theoretical models and NASA's Lunar Reconnaissance Orbiter data, pointing out that water could be trapped in small shadows, where temperatures stay below freezing, across more of the Moon than currently expected.

"Water is a valuable resource, for both scientific purposes and for use by our explorers," said Jacob Bleacher, chief exploration scientist for NASA's Human Exploration and Operations Mission Directorate. "If we can use the resources at the Moon, then we can carry less water and more equipment to help enable new scientific discoveries."

Read more at Science Daily