Aug 24, 2019

Tech time not to blame for teens' mental health problems

[Image: Teenagers using smartphones]
A new study, published in the journal Clinical Psychological Science, suggests that the time adolescents spend on their phones and online is not, by itself, harmful to their mental health.

The study tracked young adolescents on their smartphones to test whether more time spent using digital technology was linked to worse mental health outcomes. The researchers -- Candice Odgers, professor of psychological science at the University of California, Irvine; Michaeline Jensen, assistant professor of psychology at the University of North Carolina at Greensboro; Madeleine George, postdoctoral researcher at Purdue University; and Michael Russell, assistant professor of behavioral health at Pennsylvania State University -- found little evidence of longitudinal or daily linkages between digital technology use and adolescent mental health.

"It may be time for adults to stop arguing over whether smartphones and social media are good or bad for teens' mental health and start figuring out ways to best support them in both their offline and online lives," Odgers said.

"Contrary to the common belief that smartphones and social media are damaging adolescents' mental health, we don't see much support for the idea that time spent on phones and online is associated with increased risk for mental health problems," Jensen said.

The study surveyed more than 2,000 youth and then intensively tracked a subsample of nearly 400 teens on their smartphones multiple times a day for two weeks. Adolescents in the study were between 10 and 15 years old and represented the economically and racially diverse population of youth attending North Carolina public schools.

The adolescents reported their mental health symptoms to the researchers three times a day and logged their daily technology use each night. The researchers then asked whether youth who engaged more with digital technologies were more likely to experience later mental health symptoms, and whether days when adolescents spent more time using digital technology for a wide range of purposes were also days when mental health problems were more common. In both cases, increased digital technology use was not related to worse mental health.

Read more at Science Daily

How memories form and fade

[Image: Memories in the brain concept]
Why is it that you can remember the name of your childhood best friend, whom you haven't seen in years, yet easily forget the name of a person you met just a moment ago? In other words, why are some memories stable over decades, while others fade within minutes?

Using mouse models, Caltech researchers have now determined that strong, stable memories are encoded by "teams" of neurons all firing in synchrony, providing redundancy that enables these memories to persist over time. The research has implications for understanding how memory might be affected after brain damage, such as by strokes or Alzheimer's disease.

The work was done in the laboratory of Carlos Lois, research professor of biology, and is described in a paper that appears in the August 23 issue of the journal Science. Lois is also an affiliated faculty member of the Tianqiao and Chrissy Chen Institute for Neuroscience at Caltech.

Led by postdoctoral scholar Walter Gonzalez, the team developed a test to examine mice's neural activity as they learn about and remember a new place. In the test, a mouse was placed in a straight enclosure, about 5 feet long with white walls. Unique symbols marked different locations along the walls -- for example, a bold plus sign near the right-most end and an angled slash near the center. Sugar water (a treat for mice) was placed at either end of the track. While the mouse explored, the researchers measured the activity of specific neurons in the mouse hippocampus (the region of the brain where new memories are formed) that are known to encode places.

When an animal was initially placed in the track, it was unsure of what to do and wandered left and right until it came across the sugar water. In these cases, single neurons were activated when the mouse took notice of a symbol on the wall. But over multiple experiences with the track, the mouse became familiar with it and remembered the locations of the sugar. As the mouse became more familiar, more and more neurons were activated in synchrony by seeing each symbol on the wall. Essentially, the mouse was recognizing where it was with respect to each unique symbol.

To study how memories fade over time, the researchers then kept the mice off the track for up to 20 days. Upon returning to the track after this break, mice that had formed strong memories encoded by larger numbers of neurons remembered the task quickly. Even though some neurons showed different activity, the mouse's memory of the track was clearly identifiable when analyzing the activity of large groups of neurons. In other words, encoding with groups of neurons gives the brain redundancy, letting it recall memories even if some of the original neurons fall silent or are damaged.

Gonzalez explains: "Imagine you have a long and complicated story to tell. In order to preserve the story, you could tell it to five of your friends and then occasionally get together with all of them to re-tell the story and help each other fill in any gaps that an individual had forgotten. Additionally, each time you re-tell the story, you could bring new friends to learn and therefore help preserve it and strengthen the memory. In an analogous way, your own neurons help each other out to encode memories that will persist over time."
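To see why a team is more robust than a single neuron, consider a toy calculation (ours, not the paper's): if each neuron encoding a memory independently falls silent with some probability, the memory survives as long as at least one team member still fires.

```python
# Toy redundancy model -- an illustration of the argument above, not the
# study's analysis. The silencing probability f is a made-up parameter.
def p_memory_survives(n_neurons, f_silent=0.5):
    """Probability that at least one encoding neuron is still active."""
    return 1 - f_silent ** n_neurons

for n in (1, 5, 20):
    print(f"{n:2d} neurons -> P(memory survives) = {p_memory_survives(n):.6f}")
```

With a 50 percent chance of any single neuron falling silent, a five-neuron team keeps the memory recoverable about 97 percent of the time, and a 20-neuron team essentially always.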

Memory is so fundamental to human behavior that any impairment to memory can severely impact our daily life. Memory loss that occurs as part of normal aging can be a significant handicap for senior citizens. Moreover, memory loss caused by several diseases, most notably Alzheimer's, has devastating consequences that can interfere with the most basic routines, including recognizing relatives or remembering the way back home. This work suggests that memories might fade more rapidly as we age because a memory is encoded by fewer neurons, and if any of these neurons fail, the memory is lost. It also suggests that treatments designed to recruit more neurons to encode a memory could one day help prevent memory loss.

"For years, people have known that the more you practice an action, the better chance that you will remember it later," says Lois. "We now think that this is likely, because the more you practice an action, the higher the number of neurons that are encoding the action. The conventional theories about memory storage postulate that making a memory more stable requires the strengthening of the connections to an individual neuron. Our results suggest that increasing the number of neurons that encode the same memory enables the memory to persist for longer."

Read more at Science Daily

Aug 23, 2019

Genetic diversity couldn't save Darwin's finches

Researchers at the University of Cincinnati found that Charles Darwin's famous finches defy what has long been considered a key to evolutionary success: genetic diversity.

The study of the finches of the Galapagos Islands could change the way conservation biologists think about species with naturally fragmented populations and about how to assess their potential for extinction.

UC graduate Heather Farrington and UC biologists Kenneth Petren and Lucinda Lawson found that genetic diversity was not a good predictor of whether populations of finches would survive. A UC lab analysis of century-old museum specimens found that six of eight extinct populations had more genetic diversity than comparable populations whose descendants survive today. In most other species, low genetic diversity is a signal of a population in decline.

Researchers examined 212 tissue samples from museum specimens and living birds. Some of the museum specimens in the study were collected by Darwin himself in 1835. Only one of the extinct populations, a species called the vegetarian finch, had lower genetic diversity compared to modern survivors.

Lawson said the findings are explained by the fact that these birds can migrate between populations.

Specifically, researchers believe a biological phenomenon called sink-source dynamics is at play in which larger populations of birds from other islands act as a "source" of immigrants to the island population that is naturally shrinking, the "sink." Without these immigrant individuals, the natural population on the island likely would continue to dwindle to local extinction. The immigrants have diverse genetics because they are coming from a variety of healthier islands, giving this struggling "sink" population inflated genetic diversity.

Petren said the findings serve as a warning that the genetics of individuals in fragmented populations might not tell the whole story about a species. And that is important for scientists who increasingly use genetics to account for the flow of genes between populations when determining a threatened species' likelihood of extinction.

"The promise of genetics is to sample a few individuals to understand the whole population. But it's a cautionary note that you might be sampling a fragment. You could be misled," he said.

Petren has been studying the birds for 25 years at UC's McMicken College of Arts and Sciences. He said the islands' 18 recognized species of finches are unusual for other reasons. Some finches that look most different are actually closely related, he said. And similar-looking finches that birders might have trouble telling apart are actually far apart on the evolutionary family tree.

"It's a paradox. If Darwin fully understood what was going on, it might have blown his mind," Petren said. "These finches are not the first case you would pick to formulate the notion that species can change over time because the patterns of change are so complicated."

The UC study was published in August in the journal Conservation Genetics after first appearing online in April. It suggests that genetic diversity may not be the best predictor of extinction risk for mobile species like the island-hopping finches. That's because healthier populations may contribute individuals to declining ones.

Lawson said factors such as historical diversity or the possibility of gene flow between populations should be considered in addition to the snapshot view provided by a genetic analysis for a fuller understanding of a species' potential for extinction.

"Typically, we would expect populations with high genetic diversity to have a greater potential for long-term survival," she said. "Meanwhile, the low-diversity populations would be more likely to go extinct because that's a common pattern as populations decline to few individuals. Surprisingly, we found that most of the extinct populations had higher genetic diversity."

The study was sponsored in part by the National Science Foundation, Sigma Xi, the American Ornithologists' Union and UC's Office of Research.

Darwin's "On the Origin of Species" was groundbreaking in our understanding of evolution through natural selection. "Survival of the fittest" is a household phrase and a shorthand description of any competition.

While scientists today know more about how new species are formed, the principles Darwin developed remain the foundation of evolutionary biology, Petren said.

Read more at Science Daily

Here's how early humans evaded immunodeficiency viruses

[Image: Mangabey monkeys]
For hundreds of thousands of years, monkeys and apes have been plagued by simian immunodeficiency virus (SIV), which still devastates primate groups in Africa.

Luckily, as humans evolved from these early primates, we picked up a mutation that made us immune to SIV -- at least until the early 20th century, when the virus evolved to get around our defenses, giving rise to human immunodeficiency virus (HIV) and an AIDS pandemic that today affects an estimated 38 million people worldwide.

University of California, Berkeley, researchers have now discovered how that long-ago human mutation interfered with SIV infection, a finding that could provide clues for the development of new therapies to thwart HIV and similar viral infections.

"The main importance for this paper is that it tells us what was one of the last major barriers before the crossover to humans happened," said James Hurley, a UC Berkeley professor of molecular and cell biology. "The current paper is an archeological look at how this happened."

The barrier was a mutation in human cells that blocked SIV from forcing these cells to shed thousands more copies of the virus. As a result, humans could not re-infect one another.

This genetic mutation interfered with the ability of an SIV protein to tightly bind two human proteins and send them for destruction within the cell, instead of fighting the virus. The researchers used cryo-electron microscopy, or cryoEM, to determine the structure of this protein complex and discovered that the mutation so effectively disrupted the protein binding sites that it took SIV a long time to find a work-around.

"The binding site involved is structurally very complex, so essentially it is not possible to adapt to it once the tight binding is lost. The virus had to invent a completely different way to do the same thing, which took a long time in evolution," Hurley said. "This conferred an advantage on our prehistoric ancestors: From chimps on down, every primate was susceptible to SIV, but humans were immune. That gave humans probably a grace period of tens to hundreds of thousands of years to develop without having to deal with this disease. I tend to think that really gave a leg up to humans in early evolution."

Though the SIV in question -- from a monkey called the sooty mangabey, the source of the less virulent HIV-2 strain in humans -- differs in several ways from the HIV strains that afflict humans, the findings could pinpoint targets for drugs as researchers look for "functional" cures for AIDS. These would be one-time treatments that prevent flare-ups of the disease, even if the virus remains in the body.

"The overall strategy in our lab is to try to find regions in the structures of human proteins that are attacked by viruses, but are not needed for normal purposes by the host, so that a drug can be designed to attack that region," Hurley said. "The virus will typically respond by mutating, which means it evolves drug resistance, but this new finding suggests that with the right point of attack, it could take SIV or HIV, in some cases, tens of thousands of years of evolution to catch up."

The work will be published in the Sept. 11 issue of the journal Cell Host & Microbe and was posted online Aug. 22.

Sooty mangabeys

SIV and HIV, which are lentiviruses, are hard to root out of the body because they insert their DNA into the genomes of host cells, where it sits like a ticking time bomb, ready at any moment to revive, take over the host cell's machinery to make copies of itself and send out thousands of these copies -- called virions -- to infect other cells.

These virions are formed when the newly copied viral DNA wraps itself in a piece of the host cell's membrane and buds off, safely ensconced in a bubble until it can reinfect.

Because budding is an important step in the spread of many viruses, primates long ago evolved natural defenses, including proteins on the surface of cells that staple the budding virions to the cell and prevent them from leaving. As they accumulate, the immune system recognizes these unbudded virions as abnormal and destroys the whole cell, virus and all.

In monkey, ape and human cells, the staple is called tetherin, because it tethers the budding virion to the cell membrane.

In the constant arms race between host and pathogen, SIV evolved a countermeasure that exploits another normal cell function: its recycling system. Cells have ways to remove proteins sitting on their surface, and they constantly take up and recycle tetherin whenever there's no indication it is needed to fight an invading virus. A cell does this by dimpling its membrane inward to form a little bubble inside the cell, capturing tetherin and other surface proteins in this vesicle and then digesting all the contents, including tetherin.

SIV's countermeasure was to produce a protein, called Nef, that revs up the recycling of tetherin, even during an infection. This enables virions to bud off and search for new victims.

Hurley and project scientist Xuefeng "Snow" Ren found that Nef forms a tight wedge between tetherin and a protein in the vesicle called AP-2, preventing tetherin from escaping the vesicle and dooming it to recycling.

"Nef is a bridge between AP2 and tetherin to recruit them into endocytosis, dragging the tetherin into the vesicle," Ren said. "So it tricks our own cells' machinery for getting rid of stuff we don't want into getting rid of stuff the virus doesn't want."

The five amino acids that humans lost in the tetherin protein -- the mutation that gave humans immunity against SIV -- loosened the binding between tetherin, Nef and AP-2, which allowed tetherin to escape recycling. This blocked the crossover of zoonotic virus transmission, Ren said, because the structural rearrangement was so extensive that SIV couldn't fix it by simple mutations in Nef.

SIV developed a new trick

Some variants of SIV did eventually find a way around this hurdle, however. At some point, a few SIVs acquired a second protein, Vpu, to do what Nef also did -- wedge itself between proteins to cement connections helpful to the virus. Then, perhaps a hundred years ago, this strain of SIV moved into humans from chimpanzees, and a slight mutation in Vpu reignited the recycling of tetherin in humans, unleashing what we know today as group M HIV-1, the most virulent form of HIV worldwide.

"There were probably many crossovers into humans that failed, but eventually, some hunter in Africa, perhaps in the course of butchering a chimp, was exposed to the blood, and the virus then acquired an additional mutation, a small step that turned SIV into HIV," Hurley said.

Next up, Hurley, Ren and their colleagues plan to use cryoEM to determine the structure of the three-protein complex in gorilla variants of SIV, which evolved into the O strain of HIV-1, a less virulent strain that originated in the African country of Cameroon.

Read more at Science Daily

Do single people suffer more?

Researchers at the University of Health Sciences, Medical Informatics and Technology (UMIT, Hall, Austria) and the University of the Balearic Islands (Palma de Mallorca, Spain) have confirmed the analgesic effects of social support -- even without verbal or physical contact.

The short communication, entitled "Dispositional empathy is associated with experimental pain reduction during provision of social support by romantic partners" by Stefan Duschek, Lena Nassauer, Casandra I. Montoro, Angela Bair and Pedro Montoya has recently been published in the Scandinavian Journal of Pain.

The authors assessed sensitivity to pressure pain in 48 heterosexual couples with each participant tested alone and in the passive presence of their partner. Dispositional empathy was quantified by a questionnaire.

In the presence, as compared to the absence, of their partners, both men and women exhibited higher pain thresholds and tolerance, as well as lower sensory and affective pain ratings, in response to constant pressure stimuli. Partner empathy was positively associated with pain tolerance and inversely associated with sensory pain experience.

"Repeatedly, talking and touching have been shown to reduce pain, but our research shows that even the passive presence of a romantic partner can reduce it and that partner empathy may buffer affective distress during pain exposure," said Professor Stefan Duschek of UMIT, speaking on behalf of the authors.

From Science Daily

Scorpion toxin that targets 'wasabi receptor' may help solve mystery of chronic pain

[Image: Black rock scorpion]
Researchers at UC San Francisco and the University of Queensland have discovered a scorpion toxin that targets the "wasabi receptor," a chemical-sensing protein found in nerve cells that's responsible for the sinus-jolting sting of wasabi and the flood of tears associated with chopping onions. Because the toxin triggers a pain response through a previously unknown mechanism, scientists think it can be used as a tool for studying chronic pain and inflammation, and may eventually lead to the development of new kinds of non-opioid pain relievers.

The scientists isolated the toxin, a short protein (or peptide) that they dubbed the "wasabi receptor toxin" (WaTx), from the venom of the Australian Black Rock scorpion. The discovery came as the researchers were conducting a systematic search for compounds in animal venom that could activate, and therefore be used to probe and study, the wasabi receptor -- a sensory protein officially named TRPA1 (pronounced "trip A1") that's embedded in sensory nerve endings throughout the body. When activated, TRPA1 opens to reveal a channel that allows sodium and calcium ions to flow into the cell, which can induce pain and inflammation.

"Think of TRPA1 as the body's 'fire alarm' for chemical irritants in the environment," said John Lin King, a doctoral student in UCSF's Neuroscience Graduate Program and lead author of a study published August 22, 2019 in Cell, which describes the toxin and its surprising mode of action. "When this receptor encounters a potentially harmful compound -- specifically, a class of chemicals known as 'reactive electrophiles,' which can cause significant damage to cells -- it is activated to let you know you're being exposed to something dangerous that you need to remove yourself from."

Cigarette smoke and environmental pollutants, for example, are rich in reactive electrophiles that can trigger TRPA1 in the cells lining the body's airway, inducing coughing fits and sustained airway inflammation. The receptor can also be activated by chemicals in pungent foods like wasabi, onions, mustard, ginger and garlic -- compounds that, according to Lin King, may have evolved to discourage animals from eating these plants. WaTx appears to have evolved for the same reason.

Though many animals use venom to paralyze or kill their prey, WaTx seems to serve a purely defensive purpose. Virtually all animals, from worms to humans, have some form of TRPA1. But the researchers found that WaTx can only activate the version found in mammals, which aren't on the menu for Black Rock scorpions, suggesting that the toxin is mainly used to ward off mammalian predators.

"Our results provide a beautiful and striking example of convergent evolution, whereby distantly related life forms -- plants and animals -- have developed defensive strategies that target the same mammalian receptor through completely distinct strategies," said David Julius, PhD, professor and chair of UCSF's Department of Physiology, and senior author of the new study.

But what the researchers found most interesting about WaTx was its mode of action. Though it triggers TRPA1, just as the compounds found in pungent plants do -- and even targets the very same site on that receptor -- the way it activates the receptor was novel and unexpected.

First, WaTx forces its way into the cell, circumventing the standard routes that place strict limits on what's allowed in and out. Most compounds, from tiny ions to large molecules, are either ingested by the cell through a complex process known as "endocytosis," or they gain entry by passing through one of the many protein channels that stud the cell's surface and act as gatekeepers.

But WaTx contains an unusual sequence of amino acids that allows it to simply penetrate the cell's membrane and pass right through to the cell's interior. Few other proteins are capable of the same feat. The most famous example is an HIV protein called Tat, but surprisingly, WaTx contains no sequences similar to those found in Tat or in any other protein that can pass through the cell's membrane.

"It was surprising to find a toxin that can pass directly through membranes. This is unusual for peptide toxins," Lin King said. "But it's also exciting because if you understand how these peptides get across the membrane, you might be able to use them to carry things -- drugs, for example -- into the cell that can't normally get across membranes."

Once inside the cell, WaTx attaches itself to a site on TRPA1 known as the "allosteric nexus," the very same site targeted by pungent plant compounds and environmental irritants like smoke. But that's where the similarities end.

Plant and environmental irritants alter the chemistry of the allosteric nexus, which causes the TRPA1 channel to rapidly flutter open and closed. This allows positively charged sodium and calcium ions to flow into the cell, triggering pain. Though both ions are able to enter when TRPA1 is activated by these irritants, the channel exhibits a strong preference for calcium and lets much more of it into the cell, which leads to inflammation. By contrast, WaTx wedges itself into the allosteric nexus and props the channel open. This abolishes its preference for calcium. As a result, overall ion levels are high enough to trigger a pain response, but calcium levels remain too low to initiate inflammation.

To demonstrate this, the researchers injected either mustard oil, a plant irritant known to activate the wasabi receptor, or WaTx into the paws of mice. With mustard oil, they observed acute pain, hypersensitivity to temperature and touch -- key hallmarks of chronic pain -- and inflammation, as evidenced by significant swelling. But with WaTx, they observed acute pain and pain hypersensitivities, but no swelling.

"When triggered by calcium, nerve cells can release pro-inflammatory signals that tell the immune system that something's wrong and needs to be repaired," Lin King said. "This 'neurogenic inflammation' is one of the key processes that becomes dysregulated in chronic pain. Our results suggest that you can decouple the protective acute pain response from the inflammation that establishes chronic pain. Achieving this goal, if only in principle, has been a longstanding aim in the field."

The researchers believe their findings will lead to a better understanding of acute pain, as well as the link between chronic pain and inflammation, which were previously thought to be experimentally indistinguishable. The findings may even lay the groundwork for the development of new pain drugs.

"The discovery of this toxin provides scientists with a new tool that can be used to probe the molecular mechanisms of pain, in particular, to selectively probe the processes that lead to pain hypersensitivity," Lin King said. "And for those interested in drug discovery, our findings underscore the promise of TRPA1 as a target for new classes of non-opioid analgesics to treat chronic pain."

Read more at Science Daily

Aug 22, 2019

Mission to Jupiter's icy moon confirmed

[Image: A 2016 artist's concept of the Europa Clipper spacecraft. The design is changing as the spacecraft is developed.]
An icy ocean world in our solar system that could tell us more about the potential for life on other worlds is coming into focus with confirmation of the Europa Clipper mission's next phase. The decision allows the mission to progress to completion of final design, followed by the construction and testing of the entire spacecraft and science payload.

"We are all excited about the decision that moves the Europa Clipper mission one key step closer to unlocking the mysteries of this ocean world," said Thomas Zurbuchen, associate administrator for the Science Mission Directorate at NASA Headquarters in Washington. "We are building upon the scientific insights received from the flagship Galileo and Cassini spacecraft and working to advance our understanding of our cosmic origin, and even life elsewhere."

The mission will conduct an in-depth exploration of Jupiter's moon Europa and investigate whether the icy moon could harbor conditions suitable for life, honing our insights into astrobiology. To develop this mission in the most cost-effective fashion, NASA aims to have the Europa Clipper spacecraft complete and ready for launch as early as 2023. The agency's baseline commitment, however, supports a launch readiness date by 2025.

NASA's Jet Propulsion Laboratory in Pasadena, California, leads the development of the Europa Clipper mission in partnership with the Johns Hopkins University Applied Physics Laboratory for the Science Mission Directorate. Europa Clipper is managed by the Planetary Missions Program Office at NASA's Marshall Space Flight Center in Huntsville, Alabama.

From Science Daily

Temperatures of 800 billion degrees in the cosmic kitchen

When two neutron stars collide, the matter at their core enters extreme states. An international research team has now studied the properties of matter compressed in such collisions. The HADES long-term experiment, involving more than 110 scientists, has been investigating forms of cosmic matter since 1994. With the investigation of electromagnetic radiation arising when stars collide, the team has now focused attention on the hot, dense interaction zone between two merging neutron stars.

[Image: Simulation of electromagnetic radiation]
Collisions between stars cannot be directly observed -- not least because of their extreme rarity; none has ever been observed in our galaxy, the Milky Way. The densities and temperatures that occur when neutron stars merge are similar to those occurring in heavy ion collisions, however. This enabled the HADES team to simulate the conditions inside merging stars at the microscopic level with the heavy ion accelerator at the Helmholtzzentrum für Schwerionenforschung (GSI) in Darmstadt.

As in a neutron star collision, when two heavy ions are slammed together at close to the speed of light, electromagnetic radiation is produced. It takes the form of virtual photons that turn back into real particles after a very short time. However, virtual photons occur only very rarely in heavy ion experiments. "We had to record and analyze about 3 billion collisions to finally reconstruct 20,000 measurable virtual photons," says Dr. Jürgen Friese, the former spokesman of the HADES collaboration and researcher at Laura Fabbietti's Professorship on Dense and Strange Hadronic Matter at the Technical University of Munich (TUM).

Photon camera shows collision zone

To detect the rare and transient virtual photons, researchers at TUM developed a special 1.5 square meter digital camera. This instrument records the Cherenkov effect: the name given to certain light patterns generated by decay products of the virtual photons. "Unfortunately the light emitted by the virtual photons is extremely weak. So the trick in our experiment was to find the light patterns," says Friese. "They could never be seen with the naked eye. We therefore developed a pattern recognition technique in which a 30,000 pixel photo is rastered in a few microseconds using electronic masks. That method is complemented with neural networks and artificial intelligence."
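The release doesn't include the actual reconstruction code, but the mask idea can be sketched in a few lines (a minimal sketch with hypothetical sizes, shapes and thresholds; the real analysis runs over far noisier data at hardware speeds):

```python
# Illustrative sketch only -- not the HADES reconstruction software. It mimics
# the idea described above: sliding precomputed ring-shaped binary masks over
# a pixel image to flag Cherenkov-like ring patterns.
import numpy as np

def ring_mask(size, center, radius, width=1.5):
    """Boolean mask that is True on an annulus of the given radius."""
    y, x = np.ogrid[:size, :size]
    r = np.sqrt((x - center[0]) ** 2 + (y - center[1]) ** 2)
    return np.abs(r - radius) < width

def score_rings(image, radii, threshold=0.5):
    """Return (center, radius, score) for every ring mask that matches well."""
    size = image.shape[0]
    hits = []
    for cy in range(size):
        for cx in range(size):
            for radius in radii:
                mask = ring_mask(size, (cx, cy), radius)
                score = image[mask].mean()  # fraction of lit pixels on the ring
                if score > threshold:
                    hits.append(((cx, cy), radius, score))
    return hits

rng = np.random.default_rng(0)
img = rng.random((32, 32)) < 0.02            # sparse background noise
img |= ring_mask(32, (16, 16), 5.0)          # one "Cherenkov ring"
print(score_rings(img.astype(float), radii=[4.0, 5.0, 6.0])[:3])
```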

Read more at Science Daily

Brain finds order amidst chaos

How does the brain find order amidst a sea of noise and chaos? Researchers have found the answer by using advanced simulation techniques to investigate the way neurons talk to each other. They found that by working as a team, cortical neurons can respond even to weak input against the backdrop of noise and chaos, allowing the brain to find order.

Neurons communicate with each other by sending out rapid pulses of electrical signals called spikes. At first glance, the generation of these spikes can be very reliable: when an isolated neuron is repeatedly given exactly the same electrical input, we find the same pattern of spikes. Why, then, does the activity of cortical neurons in a live animal fluctuate and actually seem so variable?

There are two reasons for this. Firstly, when transmitting a signal to another neuron, the process can sometimes fail and these failures are unpredictable -- like rolling a die to decide on an outcome. "We estimate the chance of a synapse between two cortical pyramidal neurons passing a chemical neurotransmitter signal can be as low as 10%," explains lead researcher Max Nolte. This uncertainty means that a neuron will hear the same message sent by connected neurons differently every time.

Secondly, when the two fundamental types of cortical neurons (excitatory and inhibitory) are interconnected in a network, small uncertainties in activity patterns become amplified. This leads to unpredictable patterns, a behavior that is called chaos.

This backdrop of noise and chaos suggests that individual cortical neurons cannot find order and fire reliable spikes, and so the brain has to 'average' the activity of many neurons for certainty -- listen to the whole choir instead of individual singers.
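A minimal simulation makes the first point concrete. This is our own Bernoulli-release sketch built around the 10% figure quoted above, not the Blue Brain model itself:

```python
# Unreliable synaptic transmission as a coin flip per spike (illustrative).
import numpy as np

def transmit(spike_train, p_release=0.10, rng=None):
    """Return only the spikes that actually release neurotransmitter."""
    rng = rng or np.random.default_rng()
    return spike_train & (rng.random(spike_train.shape) < p_release)

rng = np.random.default_rng(42)
spikes = rng.random(20) < 0.5            # one fixed presynaptic spike train
for trial in range(3):                   # three "identical" trials
    print(transmit(spikes, rng=rng).astype(int))
```

The same input train is delivered differently on every trial, which is exactly the trial-to-trial variability that the recurrent network then amplifies into chaos.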

Simulation neuroscience finds the answer

The experimental manipulations required to untangle the noise sources in the brain and evaluate their impact on neuronal activity are currently impossible to perform in a live animal in vivo, or even in separated brain tissue in vitro. "For the moment, it is simply not possible to monitor all of the thousands of brain-wide inputs to a neuron in vivo, nor to turn on and off different noise sources," says Nolte. The closest approximation of cortical tissue to date in a model is the Blue Brain Project's biologically detailed digital reconstruction of rat neocortical microcircuitry (Cell 2015). This computer model provided the ideal platform for the researchers to study to what degree the voices of individual neurons can be understood, as it contains data-constrained models of the unreliable signal transmission between neurons.

Using this model, they found that activity that is spontaneously generated from the interconnected neurons is highly noisy and chaotic, depicting very different spike times in each repetition. "We studied the origin and nature of cortical internal variability with a biophysical neocortical microcircuit model with biologically realistic noise sources," reveals Nolte. "We observed that the unreliable neurotransmitter signals are amplified by recurrent network dynamics, causing a rapidly decaying memory of the past -- a sea of noise and chaos."

Reliable responses amidst noise and chaos


But, of course, the mammalian brain does not have a rapidly decaying memory. In fact, perhaps the most fascinating insight from the findings is that spike times that were highly unreliable during spontaneous activity became highly reliable when the circuit received external input. This phenomenon was not simply a result of strong external input directly driving the neurons to reliable responses. Even weak thalamocortical input could switch the network briefly into a regime of highly reliable spiking. At that point, the interactions between neurons that otherwise amplify uncertainty and chaos instead amplify reliability, allowing the brain to find order.

Read more at Science Daily

Memory research: Fruit flies learn their body size once for an entire lifetime

In order to orient themselves and survive in their environment, animals must develop a concept of their own body size. Researchers at Johannes Gutenberg University Mainz (JGU) have shown that the fruit fly Drosophila melanogaster develops a very stable long-term memory for its own body size and the reach of its extremities after it has hatched from the pupal case. The fruit fly acquires this memory through visual feedback obtained when walking, but in the first two hours after training the memory is still susceptible to the effects of stress and not yet firmly anchored. "Once the memory has consolidated, it appears from our observations that it remains intact for life," said Professor Roland Strauss of the Institute of Developmental Biology and Neurobiology at JGU. "The insects seem to have calibrated themselves for the rest of their lives." However, it is still puzzling why they are only able to access the acquired knowledge 12 hours after training. The researchers still don't know what happens in the brain in the interim.

The research group of Professor Roland Strauss uses Drosophila melanogaster to investigate memory retention and consolidation processes that are known to occur, to some extent, in humans as well. Earlier studies have shown that the short-term memory of fruit flies declines with age and that a protein similar to one playing a role in humans is involved in this process.

In the new study, Tammo Krause and Laura Spindler analyzed the body-size memory of Drosophila. Fruit flies are insects that undergo complete metamorphosis: they pass through three larval stages, during which they grow, and at the end of pupation the mature fruit fly emerges. Because of Drosophila's hard exoskeleton, body size can no longer change after this point; it does vary between individuals, however, since a fly's final size is determined by the availability and quality of food during the larval stages.

Leg-over-head behavior indicates an intention to climb

"In our recent study, we wanted to find out how the insects acquire information about their body size and remember it later," explained Tammo Krause and Laura Spindler. They observed how, under various conditions, the insects try to overcome a small gap exceeding their step size. Drosophila displays stereotypical behavior in such situations: In order to initiate the attempt to climb, the insect begins by making search movements with its forelegs lifted over the head. If the gap is far too wide, this typical behavior is not observed. The fly just turns away.

Knowledge of body size is gained through visual feedback

The results show that Drosophila learns to estimate the distance across a gap and the reach of its legs by linking visual information from the environment, such as a striped pattern, with its body size. Newly hatched fruit flies raised in the dark overestimate their body size and attempt to cross gaps that are far too wide significantly more often than animals raised in the usual light-dark cycle. The learning process relies on the motion parallax that structures in the surroundings create on the retina as the fly walks. Another experiment, in which the motion parallax was manipulated and artificially reduced during the learning process, confirmed this: the flies underestimated their body size and undertook fewer climbing attempts.
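The underlying geometric cue is simple. In a hedged sketch (our simplification, not the authors' analysis), a landmark at distance d, at bearing theta from the walking direction, sweeps across the eye of a fly walking at speed v with angular velocity v*sin(theta)/d, so nearer stripes appear to move faster:

```python
# Motion parallax for a walking observer (toy geometry; units are arbitrary).
import math

def parallax_deg_per_s(v_mm_s, d_mm, theta_deg=90.0):
    """Apparent angular speed of a stationary landmark."""
    return math.degrees(v_mm_s * math.sin(math.radians(theta_deg)) / d_mm)

# A wall stripe 10 mm away appears to move twice as fast as one 20 mm away.
print(parallax_deg_per_s(v_mm_s=10, d_mm=10))   # ~57.3 deg/s
print(parallax_deg_per_s(v_mm_s=10, d_mm=20))   # ~28.6 deg/s
```

Artificially reducing the parallax mimics landmarks that are farther away, which is consistent with the flies underestimating their body size in that experiment.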

"When fruit flies hatch from the pupal case and move through space, the resulting motion is captured by the eye and the fly can calibrate itself," explained Professor Roland Strauss. "Once this calibration has occurred, the knowledge is retained for life." The research team proved this in an additional experiment in which the fruit flies had to spend 21 days in the dark after three days under normal light conditions. Even after this long period, they still undertook the same number of climbing attempts as three-day-old flies. "We therefore assume that memory of their own body size is the most permanent form of recall detected in Drosophila to date," the two lead authors Tammo Krause and Laura Spindler concluded in their article in Current Biology.

Read more at Science Daily

Storms on Jupiter are disturbing the planet's colorful belts

[Image: Jupiter]
Storm clouds rooted deep in Jupiter's atmosphere are affecting the planet's white zones and colorful belts, creating disturbances in their flow and even changing their color.

Thanks to coordinated observations of the planet in January 2017 by six ground-based optical and radio telescopes and NASA's Hubble Space Telescope, a University of California, Berkeley, astronomer and her colleagues have been able to track the effects of these storms -- visible as bright plumes above the planet's ammonia ice clouds -- on the belts in which they appear.

The observations will ultimately help planetary scientists understand the complex atmospheric dynamics on Jupiter, which, with its Great Red Spot and colorful, layer cake-like bands, make it one of the most beautiful and changeable of the giant gas planets in the solar system.

One such plume was noticed by amateur astronomer Phil Miles in Australia a few days before the first observations by the Atacama Large Millimeter/Submillimeter Array (ALMA) in Chile, and photos captured a week later by Hubble showed that the plume had spawned a second plume and left a downstream disturbance in the band of clouds, the South Equatorial Belt. The rising plumes then interacted with Jupiter's powerful winds, which stretched the clouds east and west from their point of origin.

Three months earlier, four bright spots were seen slightly north of the North Equatorial Belt. Though those plumes had disappeared by 2017, the belt had since widened northward, and its northern edge had changed color from white to orangish brown.

"If these plumes are vigorous and continue to have convective events, they may disturb one of these entire bands over time, though it may take a few months," said study leader Imke de Pater, a UC Berkeley professor emerita of astronomy. "With these observations, we see one plume in progress and the aftereffects of the others."

The analysis of the plumes supports the theory that they originate about 80 kilometers below the cloud tops at a place dominated by clouds of liquid water. A paper describing the results has been accepted for publication in the Astronomical Journal and is now online.

Into the stratosphere

Jupiter's atmosphere is mostly hydrogen and helium, with trace amounts of methane, ammonia, hydrogen sulfide and water. The top-most cloud layer is made up of ammonia ice and comprises the brown belts and white zones we see with the naked eye. Below this outer cloud layer sits a layer of solid ammonium hydrosulfide particles. Deeper still, at around 80 kilometers below the upper cloud deck, is a layer of liquid water droplets.

The storm clouds de Pater and her team studied appear in the belts and zones as bright plumes and behave much like the cumulonimbus clouds that precede thunderstorms on Earth. Jupiter's storm clouds, like those on Earth, are often accompanied by lightning.

Optical observations cannot see below the ammonia clouds, however, so de Pater and her team have been probing deeper with radio telescopes, including ALMA and also the Very Large Array (VLA) in New Mexico, which is operated by the National Science Foundation-funded National Radio Astronomy Observatory.

The ALMA array's first observations of Jupiter were made between Jan. 3 and 5, 2017, a few days after one of these bright plumes was seen by amateur astronomers in the planet's South Equatorial Belt. A week later, Hubble, the VLA, the Gemini, Keck and Subaru observatories in Hawaii and the Very Large Telescope (VLT) in Chile captured images in the visible, radio and mid-infrared ranges.

De Pater combined the ALMA radio observations with the other data, focusing specifically on the newly brewed storm as it punched through the upper deck of ammonia ice clouds.

The data showed that these storm clouds reached as high as the tropopause -- the coldest part of the atmosphere -- where they spread out much like the anvil-shaped cumulonimbus clouds that generate lightning and thunder on Earth.

"Our ALMA observations are the first to show that high concentrations of ammonia gas are brought up during an energetic eruption," de Pater said.

The observations are consistent with one theory, called moist convection, about how these plumes form. According to this theory, convection brings a mix of ammonia and water vapor high enough -- about 80 kilometers below the cloud tops -- for the water to condense into liquid droplets. The condensing water releases heat that expands the cloud and buoys it quickly upward through other cloud layers, ultimately breaking through the ammonia ice clouds at the top of the atmosphere.

The plume's momentum carries the supercooled ammonia cloud above the existing ammonia-ice clouds until the ammonia freezes, creating a bright, white plume that stands out against the colorful bands encircling Jupiter.

"We were really lucky with these data, because they were taken just a few days after amateur astronomers found a bright plume in the South Equatorial Belt," said de Pater. "With ALMA, we observed the whole planet and saw that plume, and since ALMA probes below the cloud layers, we could actually see what was going on below the ammonia clouds."

Hubble took images a week after ALMA and captured two separate bright spots, which suggests that the plumes originate from the same source and are carried eastward by the high altitude jet stream, leading to the large disturbances seen in the belt.

Coincidentally, three months before, bright plumes had been observed north of the North Equatorial Belt. The January 2017 observations showed that the belt had expanded in width, and the band where the plumes had first been seen turned from white to orange. De Pater suspects that the northward expansion of the North Equatorial Belt is a result of gas from the ammonia-depleted plumes falling back into the deeper atmosphere.

De Pater's colleague and co-author Robert Sault of the University of Melbourne in Australia used special computer software to analyze the ALMA data to obtain radio maps of the surface that are comparable to visible-light photos taken by Hubble.

"Jupiter's rotation once every 10 hours usually blurs radio maps, because these maps take many hours to observe," Sault said. "In addition, because of Jupiter's large size, we had to 'scan' the planet, so we could make a large mosaic in the end. We developed a technique to construct a full map of the planet."

Read more at Science Daily

Aug 21, 2019

Stone Age boat building site has been discovered underwater

The Maritime Archaeological Trust has discovered a new 8,000-year-old structure next to what is believed to be the oldest boat building site in the world, on the Isle of Wight.

Director of the Maritime Archaeological Trust, Garry Momber, said "This new discovery is particularly important as the wooden platform is part of a site that doubles the amount of worked wood found in the UK from a period that lasted 5,500 years."

The site lies east of Yarmouth, and the new platform is the most intact wooden Middle Stone Age structure ever found in the UK. The site is now 11 meters below sea level, but during the period of human activity there it was dry land with lush vegetation. Importantly, this was before the North Sea was fully formed, when the Isle of Wight was still connected to mainland Europe.

The site was first discovered in 2005 and contains an arrangement of trimmed timbers that could be platforms, walkways or collapsed structures. However, these were difficult to interpret until the Maritime Archaeological Trust used state-of-the-art photogrammetry techniques to record the remains. During the late spring, the new structure was spotted eroding from within the drowned forest. The first task was to create a 3D digital model of the landscape so it could be experienced by non-divers. The structure was then excavated by the Maritime Archaeological Trust during the summer, revealing a cohesive platform consisting of split timbers, several layers thick, resting on horizontally laid round-wood foundations.

Garry continued "The site contains a wealth of evidence for technological skills that were not thought to have been developed for a further couple of thousand years, such as advanced wood working. This site shows the value of marine archaeology for understanding the development of civilisation.

Yet, being underwater, there are no regulations that can protect it. Therefore, it is down to our charity, with the help of our donors, to save it before it is lost forever."

The Maritime Archaeological Trust is working with the National Oceanography Centre (NOC) to record, study, reconstruct and display the collection of timbers. Many of the wooden artefacts are being stored in the British Ocean Sediment Core Research facility (BOSCORF), operated by the National Oceanography Centre.

As with sediment cores, ancient wood degrades more quickly if it is not kept in a dark, wet and cold setting. While the timbers are kept cold, dark and wet, the aim is to remove salt from within their wood cells, allowing them to be analysed and recorded. This is important because archaeological information, such as cut marks or engravings, is most often found on the surface of the wood and is quickly lost as timber degrades. Once the timbers have been recorded and desalinated, the wood can be conserved for display.

Dr Suzanne Maclachlan, the curator at BOSCORF, said "It has been really exciting for us to assist the Trust's work with such unique and historically important artefacts. This is a great example of how the BOSCORF repository is able to support the delivery of a wide range of marine science."

When diving on the submerged landscape Dan Snow, the history broadcaster and host of History Hit, one of the world's biggest history podcasts, commented that he was both awestruck by the incredible remains and shocked by the rate of erosion.

This material, coupled with advanced woodworking skills and finely crafted tools, suggests a European, Neolithic (New Stone Age) influence. The problem is that it is all being lost. As the Solent evolves, sections of the ancient land surface are being eroded by up to half a metre per year and the archaeological evidence is disappearing.

Read more at Science Daily

Extreme wildfires threaten to turn boreal forests from carbon sinks to carbon sources

Carbon reservoirs in the soil of boreal forests are being released by more frequent and larger wildfires, according to a new study involving a University of Guelph researcher.

As wildfires continue to ravage northern areas across the globe, a research team investigated the impact of these extreme fires on previously intact carbon stores by studying the soil and vegetation of the boreal forest and how they changed after a record-setting fire season.

"Northern fires are happening more often, and their impacts are changing," said U of G Prof. Merritt Turetsky, who holds the Canada Research Chair in Integrative Biology. She worked on the study with lead authors Xanthe Walker and Michelle Mack from the Center for Ecosystem Science and Society at Northern Arizona University (NAU), as well as a team of Canadian scientists including Wilfrid Laurier University professor Jennifer Baltzer.

The work was supported by NASA, the Natural Sciences and Engineering Research Council of Canada and the Government of the Northwest Territories.

In 2014, the Northwest Territories suffered its largest fire season in recorded history. This series of mega-fires created the ideal environment to study whether carbon stores are being combusted by these types of fires.

"Between fires, boreal soils accumulate carbon, and in most cases only some of this carbon is released when the forests experience the next fire," said Baltzer. "Over time, this explains why the boreal forest is a globally significant carbon sink. We wanted to see whether the extreme 2014 fires tapped into these old-legacy carbon layers or whether they were still preserved in the ground."

For the study, published in the journal Nature, the research team collected soil samples from more than 200 forest and wetland plots across the territory. They applied a novel radiocarbon dating approach to estimate the age of the carbon in the samples.

"Carbon accumulates in these soils like tree rings, with the newest carbon at the surface and the oldest carbon at the bottom," said Mack. "We thought we could use this layering to see how far back in time, in the history of the forest, fires were burning."

The researchers found combustion of legacy carbon in nearly half of the samples taken from young forests (less than 60 years old). This carbon had escaped burning during the previous fire cycle but not during the record-setting fire season of 2014.

"In older stands that burn, this carbon is protected by thick organic soils," said NAU's Walker. "But in younger stands that burn, the soil does not have time to re-accumulate after the previous fire, making legacy carbon vulnerable to burning. This pattern could shift boreal forests into a new domain of carbon cycling, where they become a carbon source instead of a sink."

As wildfires are expected to occur more frequently and burn more intensely, old carbon may be released to the atmosphere more often.

"Understanding the fate of this stockpile of boreal carbon is really important in the context of atmosphere greenhouse gases and the Earth's climate," Turetsky said. "This is carbon the atmosphere lost hundreds or sometimes even thousands of years ago. Fire is one mechanism that can release that old carbon back to the atmosphere quickly where it can contribute to the greenhouse gas effect."

She said the potential switch of the boreal forest from carbon storage to carbon source directly impacts global climate and is not well represented in global models.

"In the context of territorial and pan-Canadian planning for climate change adaptation and mitigation, the Government of Northwest Territories recognizes the critical need to understand the role of our boreal forests in carbon storage, sequestration and release, and how our forest management practices can affect these processes," said Erin Kelly, the territory's assistant deputy minister of environment and natural resources.

Read more at Science Daily

Fake news can lead to false memories

Voters may form false memories after seeing fabricated news stories, especially if those stories align with their political beliefs, according to research in Psychological Science, a journal of the Association for Psychological Science.

The research was conducted in the week preceding the 2018 referendum on legalizing abortion in Ireland, but the researchers suggest that fake news is likely to have similar effects in other political contexts, including the U.S. presidential race in 2020.

"In highly emotional, partisan political contests, such as the 2020 US Presidential election, voters may 'remember' entirely fabricated news stories," says lead author Gillian Murphy of University College Cork. "In particular, they are likely to 'remember' scandals that reflect poorly on the opposing candidate.'

The study is novel because it examines misinformation and false memories in relation to a real-world referendum, Murphy explains.

She and her colleagues, including leading memory researcher Elizabeth Loftus of the University of California, Irvine, recruited 3,140 eligible voters online and asked them whether and how they planned to vote in the referendum.

Next, the experimenters presented each participant with six news reports, two of which were made-up stories that depicted campaigners on either side of the issue engaging in illegal or inflammatory behavior. After reading each story, participants were asked if they had heard about the event depicted in the story previously; if so, they reported whether they had specific memories about it.

The researchers then informed the eligible voters that some of the stories they read had been fabricated, and invited the participants to identify any of the reports they believed to be fake. Finally, the participants completed a cognitive test.

Nearly half of the respondents reported a memory for at least one of the made-up events; many of them recalled rich details about a fabricated news story. The individuals in favor of legalizing abortion were more likely to remember a falsehood about the referendum opponents; those against legalization were more likely to remember a falsehood about the proponents. Many participants failed to reconsider their memory even after learning that some of the information could be fictitious. And several participants recounted details that the false news reports did not include.

"This demonstrates the ease with which we can plant these entirely fabricated memories, despite this voter suspicion and even despite an explicit warning that they may have been shown fake news," Murphy says.

Participants who scored lower on the cognitive test were no more prone to forming false memories than were higher scorers, but low scorers were more likely to remember false stories that aligned with their opinions. This finding suggests that people with higher cognitive ability may be more likely to question their personal biases and their news sources, the researchers say.

Other collaborators on the project include Rebecca Hofstein Grady and Linda J. Levine at UC Irvine and Ciara Greene of University College Dublin. The researchers say they plan to expand on this study by investigating the influence of false memories related to the Brexit referendum and the #MeToo movement.

Loftus says understanding the psychological effects of fake news is critical given that sophisticated technology is making it easier to create not only phony news reports and images, but fake video, as well.

Read more at Science Daily

First of its kind mapping model tracks how hate spreads and adapts online

Online hate thrives globally through self-organized, scalable clusters that interconnect to form resilient networks spread across multiple social media platforms, countries and languages, according to new research published today in the journal Nature. Researchers at the George Washington University developed a mapping model, the first of its kind, to track how these online hate clusters thrive. They believe it could help social media platforms and law enforcement in the battle against hate online.

With the explosion of social media, individuals are able to connect with other likeminded people in a matter of a few clicks. Clusters of those with common interests form readily and easily. Recently, online hate ideologies and extremist narratives have been linked to a surge in crimes around the world. To thwart this, researchers led by Neil Johnson, a professor of physics at GW, set out to better understand how online hate evolves and if it can be stopped.

"Hate destroys lives, not only as we've seen in El Paso, Orlando and New Zealand, but psychologically through online bullying and rhetoric," Dr. Johnson said. "We set out to get to the bottom of online hate by looking at why it is so resilient and how it can be better tackled. Instead of love being in the air, we found hate is in the ether."

To understand how hate evolves online, the team began by mapping how clusters interconnect to spread their narratives and attract new recruits. Focusing on social media platforms Facebook and its central European counterpart, VKontakte, the researchers started with a given hate cluster and looked outward to find a second one that was strongly connected to the original. They discovered that hate crosses boundaries of specific internet platforms, including Instagram, Snapchat and WhatsApp; geographic location, including the United States, South Africa and parts of Europe; and languages, including English and Russian.

The researchers saw clusters creating new adaptation strategies in order to regroup on other platforms and/or reenter a platform after being banned. For example, clusters can migrate and reconstitute on other platforms or use different languages to avoid detection. This allows the cluster to quickly bring back thousands of supporters to a platform on which they have been banned and highlights the need for cross-platform cooperation to limit online hate groups.

"The analogy is no matter how much weed killer you place in a yard, the problem will come back, potentially more aggressively. In the online world, all yards in the neighborhood are interconnected in a highly complex way -- almost like wormholes. This is why individual social media platforms like Facebook need new analysis such as ours to figure out new approaches to push them ahead of the curve," Dr. Johnson said.

The team, which included researchers at the University of Miami, used insights from its online hate mapping to develop four intervention strategies that social media platforms could immediately implement based on situational circumstances:

  • Reduce the power and number of large clusters by banning the smaller clusters that feed into them.
  • Attack the Achilles' heel of online hate groups by randomly banning a small fraction of individual users in order to make the global cluster network fall apart (sketched in the code example below).
  • Pit large clusters against each other by helping anti-hate clusters find and engage directly with hate clusters.
  • Set up intermediary clusters that engage hate groups to help bring out the differences in ideologies between them and make them begin to question their stance.

The researchers noted each of their strategies can be adopted on a global scale and simultaneously across all platforms without having to share the sensitive information of individual users or commercial secrets, which has previously been a stumbling block.
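The second strategy, random banning, can be explored with a toy network model. The sketch below uses a hypothetical scale-free network, not the study's data or method: it removes a random fraction of nodes and tracks how the largest connected cluster responds.

```python
# Toy network (not the study's data): remove a random fraction of nodes and
# track how the largest connected cluster shrinks.
import random
import networkx as nx

random.seed(0)
G = nx.powerlaw_cluster_graph(n=2000, m=3, p=0.3)   # stand-in for an online hate network

for frac in (0.00, 0.05, 0.10, 0.20):
    H = G.copy()
    banned = random.sample(list(H.nodes), int(frac * H.number_of_nodes()))
    H.remove_nodes_from(banned)
    giant = max(nx.connected_components(H), key=len)
    print(f"ban {frac:.0%} of users -> largest cluster: {len(giant)} nodes")
```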

Read more at Science Daily

Aug 20, 2019

Black hole holograms

A research team from Osaka University, Nihon University and Chuo University has proposed a novel theoretical framework for a laboratory experiment that could improve our understanding of the physics of black holes. The project could shed light on the fundamental laws that govern the cosmos on both unimaginably small and vastly large scales.

Recently, the world was transfixed when the first ever images of a black hole were released by the Event Horizon Telescope. Or, to be more precise, the pictures showed the bright circle, called an Einstein ring, made by the light that just barely escaped the grasp of the black hole's immense gravity. This ring of light arises because, according to the theory of general relativity, the fabric of spacetime itself becomes so contorted by the mass of the black hole that it acts like a huge lens.

Unfortunately, our understanding of black holes remains incomplete, because the theory of general relativity -- which is used to describe the laws of nature at the scale of stars and galaxies -- is not currently compatible with quantum mechanics, our best theory of how the Universe operates on very small scales. Since black holes, by definition, have a huge mass compressed into a tiny space, reconciling these wildly successful but thus far conflicting theories is necessary to understand them.

One possible approach for solving this conundrum is called string theory, which holds that all matter is made of very tiny vibrating strings. One version of this theory predicts a correspondence between the laws of physics we perceive in our familiar four dimensions (three dimensions of space plus time) and strings in a space with an extra dimension. This is sometimes called a "holographic duality," because it is reminiscent of a two-dimensional holographic plate that holds all the information of a 3D-object.

In the newly published research, the authors, Koji Hashimoto (Osaka University), Keiju Murata (Nihon University) and Shunichiro Kinoshita (Chuo University), apply this concept to show how the surface of a sphere, which has two dimensions, can be used in a tabletop experiment to model a black hole in three dimensions. In this setup, light emanating from a source at one point on the sphere is measured at another; if the spherical material realizes the holography, the measurement should reveal an image of the black hole.

"The holographic image of a simulated black hole, if observed by this tabletop experiment, may serve as an entrance to the world of quantum gravity" says the author Hashimoto. The researchers also calculated the radius of the Einstein ring that would be observed if this theory is correct.

"Our hope is that this project shows the way forward towards a better understanding of how our Universe truly operates on a fundamental level," says the author Keiju Murata.

From Science Daily

Humans migrated to Mongolia much earlier than previously believed

Mongolia highlighted on globe
Stone tools uncovered in Mongolia by an international team of archaeologists indicate that modern humans traveled across the Eurasian steppe about 45,000 years ago, according to a new University of California, Davis, study. The date is about 10,000 years earlier than archaeologists previously believed.

The site also points to a new location for where modern humans may have first encountered their mysterious cousins, the now extinct Denisovans, said Nicolas Zwyns, an associate professor of anthropology and lead author of the study.

Zwyns led excavations from 2011 to 2016 at the Tolbor-16 site along the Tolbor River in the Northern Hangai Mountains between Siberia and northern Mongolia.

The excavations yielded thousands of stone artifacts, 826 of which are associated with the oldest human occupation at the site. With long and regular blades, the tools resemble those found at other sites in Siberia and Northwest China -- indicating a large-scale dispersal of humans across the region, Zwyns said.

"These objects existed before, in Siberia, but not to such a degree of standardization," Zwyns said. "The most intriguing (aspect) is that they are produced in a complicated yet systematic way -- and that seems to be the signature of a human group that shares a common technical and cultural background."

That technology, known in the region as the Initial Upper Palaeolithic, led the researchers to rule out Neanderthals or Denisovans as the site's occupants. "Although we found no human remains at the site, the dates we obtained match the age of the earliest Homo sapiens found in Siberia," Zwyns said. "After carefully considering other options, we suggest that this change in technology illustrates movements of Homo sapiens in the region."

Their findings were published online in an article in Scientific Reports.

The age of the site -- determined by luminescence dating on the sediment and radiocarbon dating of animal bones found near the tools -- is about 10,000 years earlier than the fossil of a human skullcap from Mongolia, and roughly 15,000 years after modern humans left Africa.
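As a point of reference, radiocarbon ages follow directly from exponential decay. A minimal sketch of the arithmetic (standard decay math with an illustrative surviving fraction, not the study's calibrated dates):

```python
# Standard radiocarbon decay arithmetic (illustrative fraction, not the study's
# calibrated measurements): age t satisfies f = 2**(-t / t_half).
import math

t_half = 5730.0                    # carbon-14 half-life, years
f = 0.0043                         # hypothetical surviving C-14 fraction in a sample
age = t_half * math.log2(1 / f)    # invert the decay law
print(f"age ~ {age:,.0f} years")   # ~45,000 years
```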

Evidence of soil development (grass and other organic matter) associated with the stone tools suggests that the climate for a period became warmer and wetter, making the normally cold and dry region more hospitable to grazing animals and humans.

Preliminary analysis identifies the bone fragments at the site as large bovids (wild cattle or bison), medium-sized bovids (wild sheep or goats) and horses, animals that frequented the open steppe, forests and tundra during the Pleistocene -- another sign of human occupation at the site.

The dates for the stone tools also match the age estimates obtained from genetic data for the earliest encounter between Homo sapiens and the Denisovans.

"Although we don't know yet where the meeting happened, it seems that the Denisovans passed along genes that will later help Homo sapiens settling down in high altitude and to survive hypoxia on the Tibetan Plateau," Zwyns said. "From this point of view, the site of Tolbor-16 is an important archaeological link connecting Siberia with Northwest China on a route where Homo sapiens had multiple possibilities to meet local populations such as the Denisovans."

Read more at Science Daily

Roadmap for detecting changes in ocean due to climate change

Ocean and sky
Sea temperature and ocean acidification have climbed during the last three decades to levels beyond what is expected due to natural variation alone, a new study led by Princeton researchers finds. Meanwhile other impacts from climate change, such as changes in the activity of ocean microbes that regulate the Earth's carbon and oxygen cycles, will take several more decades to a century to appear. The report was published Aug. 19 online in the journal Nature Climate Change.

The study looked at physical and chemical changes to the ocean that are associated with rising atmospheric carbon dioxide due to human activities. "We sought to address a key scientific question: When, why and how will important changes become detectable above the normal variations that we expect to see in the global ocean?" said Sarah Schlunegger, a postdoctoral research associate at Princeton University's Program in Atmospheric and Oceanic Sciences (AOS).

The study confirms that outcomes tied directly to the escalation of atmospheric carbon dioxide have already emerged in the existing 30-year observational record. These include sea surface warming, acidification and increases in the rate at which the ocean removes carbon dioxide from the atmosphere.

In contrast, processes tied indirectly to the ramp up of atmospheric carbon dioxide through the gradual modification of climate and ocean circulation will take longer, from three decades to more than a century. These include changes in upper-ocean mixing, nutrient supply, and the cycling of carbon through marine plants and animals.

"What is new about this study is that it gives a specific timeframe for when ocean changes will occur," said Jorge Sarmiento, the George J. Magee Professor of Geoscience and Geological Engineering, Emeritus. "Some changes will take a long time while others are already detectable."

The ocean provides a climate service to the planet by absorbing excess heat and carbon from the atmosphere, thereby slowing the pace of rising global temperatures, Schlunegger said. This service, however, comes with a penalty -- namely ocean acidification and ocean warming, which alter how carbon cycles through the ocean and impact marine ecosystems.

Acidification and ocean warming can harm the microbial marine organisms that form the base of the marine food web feeding fisheries and coral reefs, produce oxygen, and contribute to the drawdown of atmospheric carbon dioxide.

The study aimed to sift out ocean changes linked to human-made climate change from those due to natural variability. Natural fluctuations in the climate can disguise changes in the ocean, so researchers looked at when the changes would be so dramatic that they would stand out above the natural variability.

Climate research is often divided into two categories: observations, in which scientists analyze measurements of the real Earth, and modeling, in which scientists use models to predict the changes to come. This study leverages the predictions made by climate models to inform observational efforts of what changes are likely, and where and when to look for them, Schlunegger said.

The researchers conducted modeling that simulates potential future climate states that could result from a combination of human-made climate change and random chance. These experiments were performed with the Earth System Model, a climate model which has an interactive carbon cycle, so that changes in the climate and carbon cycle can be considered in tandem.
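The detection logic behind such ensembles can be illustrated in a few lines. In this schematic (synthetic trend and variability values, not the study's model output), a forced signal is considered to have emerged once the ensemble-mean change exceeds twice the spread produced by natural variability:

```python
# Schematic "time of emergence" calculation with synthetic values (not model
# output): the forced signal emerges once it exceeds twice the natural variability.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1990, 2101)
trend = 0.01 * (years - 1990)                # forced warming signal, degC
ensemble = trend + rng.normal(scale=0.15, size=(30, years.size))  # 30-member ensemble

signal = ensemble.mean(axis=0)               # ensemble mean isolates the forced change
noise = ensemble.std(axis=0)                 # spread reflects natural variability
emerged = years[signal > 2 * noise]
print(f"signal emerges around {emerged[0]}" if emerged.size else "no emergence by 2100")
```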

Use of the Earth System Model was facilitated by John Dunne, who leads ocean carbon modeling activities at the National Oceanic and Atmospheric Administration (NOAA)'s Geophysical Fluid Dynamics Laboratory in Princeton. The Princeton team included Richard Slater, senior earth system modeler in AOS; Keith Rodgers, an AOS research oceanographer now at Pusan National University in South Korea; and Jorge Sarmiento, the George J. Magee Professor of Geoscience and Geological Engineering, Emeritus. The team also included Thomas Frölicher, a professor at the University of Bern and a former postdoctoral fellow at Princeton, and Masao Ishii of the Japan Meteorological Agency.

The finding of a 30- to 100-year delay in the emergence of effects suggests that ocean observation programs should be maintained for many decades into the future to effectively monitor the changes occurring in the ocean. The study also indicates that the detectability of some changes in the ocean would benefit from improvements to the current observational sampling strategy. These include looking deeper into the ocean for changes in phytoplankton, and capturing changes in both summer and winter, rather than just the annual mean, for the ocean-atmosphere exchange of carbon dioxide.

Read more at Science Daily

Stardust in the Antarctic snow

Antarctica illustration
The rare isotope iron-60 is created in massive stellar explosions. Only a very small amount of this isotope reaches Earth from distant stars. Now, a research team with significant involvement from the Technical University of Munich (TUM) has discovered iron-60 in Antarctic snow for the first time. The scientists suggest that the iron isotope comes from the interstellar neighborhood.

The quantity of cosmic dust that trickles down to Earth each year ranges between several thousand and ten thousand tons. Most of the tiny particles come from asteroids or comets within our solar system. However, a small percentage comes from distant stars. There are no natural terrestrial sources for the iron-60 isotope contained therein; it originates exclusively as a result of supernova explosions or through the reactions of cosmic radiation with cosmic dust.

Antarctic Snow Travels around the World

The first evidence of the occurrence of iron-60 on Earth was discovered in deep-sea deposits by a TUM research team 20 years ago. Among the scientists on the team was Dr. Gunther Korschinek, who hypothesized that traces of stellar explosions could also be found in the pure, untouched Antarctic snow. In order to verify this assumption, Dr. Sepp Kipfstuhl from the Alfred Wegener Institute collected 500 kg of snow at the Kohnen Station, a container settlement in the Antarctic, and had it transported to Munich for analysis. There, a TUM team melted the snow and separated the meltwater from the solid components. The solids were processed at the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) using various chemical methods so that the iron needed for the subsequent analysis was present in the milligram range, and the samples were then returned to Munich.

Korschinek and Dominik Koll from the research area Nuclear, Particle and Astrophysics at TUM found five iron-60 atoms in the samples using the accelerator laboratory in Garching near Munich. "Our analyses allowed us to rule out cosmic radiation, nuclear weapons tests or reactor accidents as sources of the iron-60," states Koll. "As there are no natural sources for this radioactive isotope on Earth, we knew that the iron-60 must have come from a supernova."

Stardust Comes from the Interstellar Neighborhood

The research team was able to determine relatively precisely when the iron-60 was deposited on Earth: the snow layer that was analyzed was no more than 20 years old. Moreover, the iron isotope that was discovered did not seem to come from particularly distant stellar explosions, as the iron-60 dust would have dispersed too widely through space if that had been the case. Based on the half-life of iron-60, any atoms originating from the formation of Earth would have completely decayed by now. Koll therefore assumes that the iron-60 in the Antarctic snow originates from the interstellar neighborhood, for example from an accumulation of gas clouds in which our solar system is currently located.
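The half-life argument is easy to check. Iron-60's half-life is about 2.6 million years (a standard literature value, not a number from this study), so over Earth's roughly 4.5-billion-year history any primordial stock would have passed through about 1,700 half-lives:

```python
# Back-of-the-envelope check (assumed standard values: Fe-60 half-life ~2.6 Myr,
# Earth age ~4.5 Gyr -- neither number is taken from the study itself).
import math

half_life_yr = 2.6e6
earth_age_yr = 4.5e9
n_half_lives = earth_age_yr / half_life_yr          # ~1,730 half-lives
log10_fraction = -n_half_lives * math.log10(2)      # surviving fraction, in log10
print(f"~{n_half_lives:,.0f} half-lives; surviving fraction ~10^{log10_fraction:.0f}")
```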

Read more at Science Daily

Hurricanes drive the evolution of more aggressive spiders

Palm trees in storm
Researchers at McMaster University who rush in after storms to study the behaviour of spiders have found that extreme weather events such as tropical cyclones may have an evolutionary impact on populations living in storm-prone regions, where aggressive spiders have the best odds of survival.

Raging winds can demolish trees, defoliate entire canopies and scatter debris across forest floors, radically altering the habitats and reshaping the selective pressures on many organisms, suggests a new study published today in the journal Nature Ecology & Evolution.

"It is tremendously important to understand the environmental impacts of these 'black swan' weather events on evolution and natural selection," says lead author Jonathan Pruitt, an evolutionary biologist and Canada 150 Chair in McMaster's Department of Psychology, Neuroscience & Behaviour.

"As sea levels rise, the incidence of tropical storms will only increase. Now more than ever we need to contend with what the ecological and evolutionary impacts of these storms will be for non-human animals," he says.

Pruitt and his team examined female colonies of the spider known as Anelosimus studiosus, which lives along the Gulf and Atlantic coasts of the United States and Mexico, directly in the path of tropical cyclones that form in the Atlantic basin from May to November.

To conduct the research, scientists had to tackle many logistical and methodological challenges which included anticipating the trajectory of the tropical cyclones. Once a storm's path was determined, they sampled populations before landfall, then returned to the sites within 48 hours.

They sampled 240 colonies throughout the storm-prone coastal regions, and compared them to control sites, with particular interest in determining if extreme weather -- in this case areas disturbed in 2018 by subtropical storm Alberto, Hurricane Florence and Hurricane Michael -- caused particular spider traits to prevail over others.

As a species, A. studiosus is divided into two sets of inherited personality traits: docile and aggressive. The aggressiveness of a colony is determined by the speed and number of attackers that respond to prey, the tendency to cannibalize males and eggs, and the vulnerability to infiltration by predatory foreign spiders, among other characteristics.

Aggressive colonies, for example, are better at acquiring resources when scarce but are also more prone to infighting when deprived of food for long periods of time or when colonies become overheated.

"Tropical cyclones likely impact both of these stressors by altering the numbers of flying prey and increasing sun exposure from a more open canopy layer," explains Pruitt. "Aggressiveness is passed down through generations in these colonies, from parent to daughter, and is a major factor in their survival and ability to reproduce."

Read more at Science Daily

Aug 19, 2019

Lab-based dark energy experiment narrows search options for elusive force

An experiment to test a popular theory of dark energy has found no evidence of new forces, placing strong constraints on related theories.

Dark energy is the name given to an unknown force that is causing the universe to expand at an accelerating rate.

Some physicists propose dark energy is a 'fifth' force that acts on matter, beyond the four already known -- gravitational, electromagnetic, and the strong and weak nuclear forces. However, researchers think this fifth force may be 'screened' or 'hidden' for large objects like planets or weights on Earth, making it difficult to detect.

Now, researchers at Imperial College London and the University of Nottingham have tested the possibility that this fifth force is acting on single atoms, and found no evidence for it in their most recent experiment.

This could rule out popular theories of dark energy that modify the theory of gravity, and leaves fewer places to search for the elusive fifth force.

The experiment, performed at Imperial College London and analysed by theorists at the University of Nottingham, is reported today in Physical Review Letters.

Professor Ed Copeland, from the Centre for Astronomy & Particle Physics at the University of Nottingham, said: "This experiment, connecting atomic physics and cosmology, has allowed us to rule out a wide class of models that have been proposed to explain the nature of dark energy, and will enable us to constrain many more dark energy models."

The experiment tested theories of dark energy that propose the fifth force is comparatively weaker when there is more matter around -- the opposite of how gravity behaves.

This would mean the force is strong in a vacuum like space but weak when there is lots of matter around. In experiments using two large weights, therefore, the force would become too weak to measure.

The researchers instead paired a large weight with an incredibly small one -- a single atom -- for which the force should be observable if it exists.

The team used an atom interferometer to test whether there were any extra forces that could be the fifth force acting on an atom. A marble-sized sphere of metal was placed in a vacuum chamber and atoms were allowed to free-fall inside the chamber.

The theory is that if a fifth force acts between the sphere and the atom, the atom's path will deviate slightly as it passes the sphere. However, no such deviation was found.
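To get a feel for the sensitivity involved, the sketch below uses the textbook atom-interferometer phase relation, phi = k_eff * a * T**2, with illustrative parameter values (typical orders of magnitude, not the experiment's actual settings) to show how even a tiny anomalous acceleration would register as a measurable phase shift:

```python
# Illustration of the detection principle via phi = k_eff * a * T**2
# (all parameter values below are typical orders of magnitude, not the experiment's).
k_eff = 1.6e7       # effective wave number of the interferometer's laser pulses, rad/m
T = 0.1             # time between interferometer pulses, s
a_fifth = 1e-9      # hypothetical anomalous acceleration toward the sphere, m/s^2

phi = k_eff * a_fifth * T**2
print(f"interferometer phase shift: {phi:.1e} rad")   # ~1.6e-4 rad
```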

Read more at Science Daily

Uncertainty in greenhouse gas emissions estimates

National or other emissions inventories of greenhouse gases that are used to develop strategies and track progress in terms of emissions reductions for climate mitigation contain a certain amount of uncertainty, which inevitably has an impact on the decisions they inform. IIASA researchers contributed to several studies in a recently published volume that aims to enhance understanding of uncertainty in emissions inventories.

Estimates of greenhouse gas (GHG) emissions are important for many reasons, but it is crucial to acknowledge that these values carry a certain level of uncertainty that has to be taken into account. If, for example, two estimates of emissions from a country differ, it does not necessarily imply that one or both are wrong -- it simply means that there is an uncertainty that needs to be recognized and dealt with. A special issue of the Springer journal Mitigation and Adaptation Strategies for Global Change aims to enhance understanding of uncertainty in estimating GHG emissions and to provide guidance on dealing with the resulting challenges. IIASA researchers and colleagues from other international institutions, including the Lviv Polytechnic National University in Ukraine, the Systems Research Institute at the Polish Academy of Sciences, and Appalachian State University in the US, contributed to the 13 papers featured in the publication, addressing questions such as how large the uncertainty is, how to deal with it, and how it might be decreased.

According to the researchers, there are ways to decrease uncertainty but these are often difficult and ultimately expensive. In their respective papers, they point out that there are seven important issues that currently dominate our understanding of uncertainty. These include 1) verification; 2) avoidance of systemic surprises; 3) uncertainty informing policy; 4) minimizing the impact of uncertainty; 5) full GHG accounting; 6) compliance versus reporting; and 7) changes in emissions versus changes in the atmosphere.

In terms of how uncertainty in observations and modeling results can influence policy decisions on climate change mitigation, some of the papers also looked at how decision-making procedures can be improved to produce more fair rules for checking compliance and how information around emission inventories can be communicated to make it more transparent and easier to understand. The authors explain that understanding the uncertainties is very important both for those who do the calculations or modeling and for the consumers of this information, like policymakers or consultants, as it provides an indication of how much they can rely on the data, in other words, how "strong" the conclusions are and how sure the decisions derived from the data can be.
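One common way to make such uncertainty concrete is Monte Carlo propagation from sector-level estimates to a national total. The sketch below uses hypothetical sector values and uncertainties, not figures from any actual inventory:

```python
# Hypothetical sector totals (Mt CO2-eq) with relative 1-sigma uncertainties;
# none of these numbers come from an actual inventory.
import numpy as np

rng = np.random.default_rng(0)
sectors = {"energy": (400, 0.05), "agriculture": (80, 0.30), "waste": (25, 0.40)}

# Draw each sector independently and sum to get the distribution of the total.
draws = sum(rng.normal(mean, mean * rel, size=100_000) for mean, rel in sectors.values())
low, high = np.percentile(draws, [2.5, 97.5])
print(f"total: {draws.mean():.0f} Mt CO2-eq, 95% interval: [{low:.0f}, {high:.0f}] Mt")
```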

"Uncertainty is higher for some GHGs and some sectors of an inventory than for others. This raises the option that, when future policy agreements are being designed, some components of a GHG inventory could be treated differently from others. The approach of treating subsystems individually and differently would allow emissions and uncertainty to be looked at simultaneously and would thus allow for differentiated emission reduction policies," explains Matthias Jonas, an IIASA researcher in the Advanced Systems Analysis Program and one of the editors of the special issue. "The current policy approach of ignoring inventory uncertainty altogether (inventory uncertainty was monitored, but not regulated, under the Kyoto Protocol) is problematic. Being aware of the uncertainties involved, including those resulting from our systems views, will help to strengthen future political decision making."

The authors all agree that dealing with uncertainty is often not a quick exercise but rather involves a commitment that is painstaking and long-term. Proper treatment of uncertainty can be costly in terms of both time and effort because it necessitates taking the step from "simple" to "complex" in order to grasp a wider and more holistic systems view. Only after that step has been taken is it possible to consider simplifications that may be warranted.

"Decision makers want certainty, the public wants certainty, but certainty is not achievable. We can work with the best information available and we have to keep moving forward and learning. I think that we need to convince data users such as policymakers or the public that uncertainty in these kinds of numbers is normal and expected and does not mean that the numbers are not useful," says study author Gregg Marland from Appalachian State University in the US.

Read more at Science Daily

Facts and stories: Great stories undermine strong facts

Some research shows facts are better received when presented on their own. Other studies show facts are more accepted when interwoven with stories, which can help forge emotional connections. If someone is trying to persuade or influence others, should they use a story or stick to the facts? According to research from social psychologists at Northwestern University, stories can increase the persuasiveness of weak facts, but actually decrease the persuasiveness of strong facts.

"Stories persuade, at least in part, by disrupting the ability to evaluate facts, rather than just biasing a person to think positively," says Rebecca Krause, who coauthored the paper with Derek Rucker.

The research appears in Personality and Social Psychology Bulletin, a publication of the Society for Personality and Social Psychology.

Prior psychological research on storytelling and persuasion demonstrated that stories led to more persuasion.

However, why stories were more persuasive was less clear. It could be that stories focused people on good aspects of a message and away from the negative ones. Alternatively, stories might have disrupted people's ability to process information in an elaborated manner. This distinction is important because these two accounts imply different predictions for when stories will be more or less persuasive.

To test this interplay between facts, stories, and persuasion, Krause and Rucker had 397 U.S. adults evaluate a set of either all strong or all weak facts about a fictitious brand of cell phone called Moonstone. Half of the people read only facts about the phone, while the other half read a story about the phone that had the facts embedded within it. For a strong fact, they used "The phone can withstand a fall of up to 30 feet." For a weak fact, they used "The phone can withstand a fall of up to 3 feet."

Krause and Rucker found that when facts were weak, a story with the facts embedded within it led to greater persuasion than facts alone. But when facts were strong, the opposite effect occurred: facts alone led to more persuasion than a story with the facts embedded within it. This result suggests that stories don't just direct people away from weak information; they reduce people's general processing of information. As a consequence, stories help persuasion when facts are weak, but they hurt persuasion when facts are strong.

Krause replicated the first study, this time with 389 U.S. adults, and observed similar results.

A third study, conducted in a lab setting, changed the content. In this experiment, 293 people read about a fictitious flu medicine, either on its own or embedded within a story, and were asked whether they would provide their email address to receive more information. While people are generally protective of their email addresses, participants' willingness to share varied in a manner similar to the first two studies.

Specifically, stories once again undermined the persuasive appeal of strong facts. In the absence of a story, 34% of participants agreed to provide their email address in response to strong facts. However, when these same strong facts were included in a story, only 18% of participants agreed to provide their email address.
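Those proportions are far enough apart to matter on their own. As a quick illustration (a back-of-the-envelope normal-approximation test that assumes an even split of the 293 participants across conditions; this is not the authors' actual analysis):

```python
# Normal-approximation two-proportion z-test (assumes the 293 participants were
# split roughly evenly between conditions; illustrative only).
import math

n1, p1 = 146, 0.34   # facts alone
n2, p2 = 147, 0.18   # facts embedded in a story
p_pool = (n1 * p1 + n2 * p2) / (n1 + n2)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
print(f"z = {z:.2f}")   # ~3.1, comfortably past the ~1.96 cutoff for p < .05
```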

Krause notes that avoiding stories isn't the message they are trying to send.

Read more at Science Daily

Optic nerve stimulation to aid the blind

Scientists from EPFL in Switzerland and Scuola Superiore Sant'Anna in Italy are developing technology for the blind that bypasses the eyeball entirely and sends messages to the brain. They do this by stimulating the optic nerve with a new type of intraneural electrode called OpticSELINE. Successfully tested in rabbits, they report their results in Nature Biomedical Engineering.

"We believe that intraneural stimulation can be a valuable solution for several neuroprosthetic devices for sensory and motor function restoration. The translational potentials of this approach are indeed extremely promising," explains Silvestro Micera, EPFL's Bertarelli Foundation Chair in Translational Neuroengineering, and Professor of Bioelectronics at Scuola Superiore Sant'Anna, who continues to innovate in hand prosthetics for amputees using intraneural electrodes.

Blindness affects an estimated 39 million people in the world. Many factors can induce blindness, like genetics, retinal detachment, trauma, stroke in the visual cortex, glaucoma, cataract, inflammation or infection. Some blindness is temporary and can be treated medically. How do you help someone who is permanently blind?

The idea is to produce phosphenes, the sensation of seeing light in the form of white patterns, without seeing light directly. Retinal implants, a prosthetic device for helping the blind, suffer from exclusion criteria. For example, half a million people worldwide are blind due to retinitis pigmentosa, a genetic disorder, but only a few hundred patients qualify for retinal implants for clinical reasons. A brain implant that stimulates the visual cortex directly is another strategy, albeit a risky one. A priori, the new intraneural solution minimizes exclusion criteria, since the optic nerve and the pathway to the brain are often intact.

Previous attempts to stimulate the optic nerve in the 1990s provided inconclusive results. EPFL's Medtronic Chair in Neuroengineering Diego Ghezzi explains, "Back then, they used cuff nerve electrodes. The problem is that these electrodes are rigid and they move around, so the electrical stimulation of the nerve fibers becomes unstable. The patients had a difficult time interpreting the stimulation, because they kept on seeing something different. Moreover, they probably have limited selectivity because they recruited superficial fibers."

Intraneural electrodes may indeed be the answer for providing rich visual information to the subjects. They are also stable and less likely to move around once implanted in a subject, according to the scientists. Cuff electrodes are surgically placed around the nerve, whereas intraneural electrodes pierce through the nerve.

Together, Ghezzi, Micera and their teams engineered the OpticSELINE, an electrode array of 12 electrodes. In order to understand how effective these electrodes are at stimulating the various nerve fibers within the optic nerve, the scientists delivered electric current to the optic nerve via OpticSELINE and measured the brain's activity in the visual cortex. They developed an elaborate algorithm to decode the cortical signals. They showed that each stimulating electrode induces a specific and unique pattern of cortical activation, suggesting that intraneural stimulation of the optic nerve is selective and informative.
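The paper's decoding algorithm isn't detailed here, but the core idea can be mimicked with a generic classifier: if each of the 12 electrodes evokes a distinct cortical pattern, the stimulated electrode should be recoverable from the recorded activity. A minimal sketch on synthetic data (all numbers hypothetical):

```python
# Synthetic stand-in for the decoding problem: 12 electrodes, each evoking its
# own mean cortical pattern across 32 recording channels, plus trial noise.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_electrodes, n_trials, n_channels = 12, 40, 32
patterns = rng.normal(size=(n_electrodes, n_channels))
X = np.repeat(patterns, n_trials, axis=0)
X = X + rng.normal(scale=2.0, size=X.shape)
y = np.repeat(np.arange(n_electrodes), n_trials)

# If stimulation is selective, a simple classifier identifies the electrode
# far above the 1-in-12 chance level.
acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
print(f"decoding accuracy: {acc:.2f} (chance = {1 / n_electrodes:.2f})")
```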

As a preliminary study, the visual perception behind these cortical patterns remains unknown. Ghezzi continues, "For now, we know that intraneural stimulation has the potential to provide informative visual patterns. It will take feedback from patients in future clinical trials in order to fine-tune those patterns. From a purely technological perspective, we could do clinical trials tomorrow."

Read more at Science Daily