Sep 21, 2019

Plasma flow near sun's surface explains sunspots, other solar phenomena

For 400 years people have tracked sunspots, the dark patches that appear for weeks at a time on the sun's surface. Observers have long recorded, but been unable to explain, why the number of spots peaks roughly every 11 years.

A University of Washington study published this month in the journal Physics of Plasmas proposes a model of plasma motion that would explain the 11-year sunspot cycle and several other previously mysterious properties of the sun.

"Our model is completely different from a normal picture of the sun," said first author Thomas Jarboe, a UW professor of aeronautics and astronautics. "I really think we're the first people that are telling you the nature and source of solar magnetic phenomena -- how the sun works."

The authors created a model based on their previous work with fusion energy research. The model shows that a thin layer beneath the sun's surface is key to many of the features we see from Earth, like sunspots, magnetic reversals and solar flow, and is backed up by comparisons with observations of the sun.

"The observational data are key to confirming our picture of how the sun functions," Jarboe said.

In the new model, a thin layer of magnetic flux and plasma -- ionized gas carrying free-moving charged particles -- moves at different speeds on different parts of the sun. The difference in speed between the flows creates twists of magnetism, known as magnetic helicity, that are similar to what happens in some fusion reactor concepts.

"Every 11 years, the sun grows this layer until it's too big to be stable, and then it sloughs off," Jarboe said. Its departure exposes the lower layer of plasma moving in the opposite direction with a flipped magnetic field.

When the circuits in both hemispheres are moving at the same speed, more sunspots appear. When the circuits are different speeds, there is less sunspot activity. That mismatch, Jarboe says, may have happened during the decades of little sunspot activity known as the "Maunder Minimum."

"If the two hemispheres rotate at different speeds, then the sunspots near the equator won't match up, and the whole thing will die," Jarboe said.

"Scientists had thought that a sunspot was generated down at 30 percent of the depth of the sun, and then came up in a twisted rope of plasma that pops out," Jarboe said. Instead, his model shows that the sunspots are in the "supergranules" that form within the thin, subsurface layer of plasma that the study calculates to be roughly 100 to 300 miles (150 to 450 kilometers) thick, or a fraction of the sun's 430,000-mile radius.

"The sunspot is an amazing thing. There's nothing there, and then all of a sudden, you see it in a flash," Jarboe said.

The group's previous research has focused on fusion power reactors, which use very high temperatures similar to those inside the sun to separate hydrogen nuclei from their electrons. In both the sun and in fusion reactors the nuclei of two hydrogen atoms fuse together, releasing huge amounts of energy.

The type of reactor Jarboe has focused on, a spheromak, contains the electron plasma within a sphere that causes it to self-organize into certain patterns. When Jarboe began to consider the sun, he saw similarities, and created a model for what might be happening in the celestial body.

"For 100 years people have been researching this," Jarboe said. "Many of the features we're seeing are below the resolution of the models, so we can only find them in calculations."

Other properties explained by the theory, he said, include flow inside the sun, the twisting action that leads to sunspots and the total magnetic structure of the sun. The paper is likely to provoke intense discussion, Jarboe said.

"My hope is that scientists will look at their data in a new light, and the researchers who worked their whole lives to gather that data will have a new tool to understand what it all means," he said.

Read more at Science Daily

Antimicrobial resistance is drastically rising

The world is experiencing unprecedented economic growth in low- and middle-income countries. An increasing number of people in India, China, Latin America and Africa have become wealthier, and this is reflected in their consumption of meat and dairy products. In Africa, meat consumption has risen by more than half; in Asia and Latin America it is up by two-thirds.

To meet this growing demand, animal husbandry has been intensified, with, among other things, an increased reliance on antimicrobials. Farmers use antimicrobials to treat and prevent infections in animals raised in crowded conditions, but these drugs are also used to increase weight gain and thus improve profitability.

This excessive and indiscriminate use of antimicrobials has serious consequences: the proportion of bacteria resistant to antimicrobials is rapidly increasing around the world. Drugs are losing their efficacy, with important consequences for the health of animals but also potentially for humans.

Mapping resistance hotspots

Low- and middle-income countries have limited surveillance capacities to track antimicrobial use and resistance on farms. Antimicrobial use is typically less regulated and documented there than in wealthy industrialized countries with established surveillance systems.

The team of researchers led by Thomas Van Boeckel, SNF Assistant Professor of Health Geography and Policy at ETH Zurich, has recently published a map of antimicrobial resistance in animals in low- and middle-income countries in the journal Science.

The team assembled a large literature database and determined where, and in which animal species, resistance occurred for the common foodborne bacteria Salmonella, E. coli, Campylobacter and Staphylococcus.

According to this study, the regions associated with high rates of antimicrobial resistance in animals are northeast China, northeast India, southern Brazil, Iran and Turkey. In these regions, the bacteria listed above are now resistant to a large number of drugs that are used not only in animals but also in human medicine. An important finding of the study is that so far, few resistance hotspots have emerged in Africa, with the exception of Nigeria and the surroundings of Johannesburg.

The highest resistance rates were associated with the antimicrobials most frequently used in animals: tetracyclines, sulphonamides, penicillins and quinolones. In certain regions, these compounds have almost completely lost their efficacy to treat infections.

Alarming trend in multi-drug resistance

The researchers introduced a new index to track the evolution of resistance to multiple drugs: the proportion of drugs tested in each region with resistance rates higher than 50%. Globally, this index has almost tripled for chickens and pigs over the last 20 years. Currently, one third of drugs fail 50% of the time in chickens, and one quarter of drugs fail 50% of the time in pigs.
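
As a concrete illustration of how this index works (with invented drug names and resistance rates, not data from the study), it can be computed as a simple proportion:

```python
# Minimal sketch of the multi-drug resistance index described above:
# the share of drugs tested in a region whose resistance rate exceeds 50%.
# The drug names and rates below are hypothetical, for illustration only.

def p50_index(resistance_rates, threshold=0.5):
    """Proportion of tested drugs with resistance rates above the threshold."""
    rates = list(resistance_rates.values())
    return sum(rate > threshold for rate in rates) / len(rates)

# Hypothetical resistance rates (fraction of isolates resistant) for one region.
chicken_rates = {
    "tetracycline": 0.72,
    "sulphonamide": 0.61,
    "penicillin": 0.55,
    "quinolone": 0.38,
    "gentamicin": 0.20,
    "colistin": 0.08,
}

print(f"P50 index (chickens): {p50_index(chicken_rates):.2f}")  # -> 0.50
```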

"This alarming trend shows that the drugs used in animal farming are rapidly losing their efficacy," Van Boeckel says. This will affect the sustainability of the animal industry and potentially the health of consumers.

It is of particular concern that antimicrobial resistance is rising in developing and emerging countries because this is where meat consumption is growing the fastest, while access to veterinary antimicrobials remains largely unregulated. "Antimicrobial resistance is a global problem. There is little point in making considerable efforts to reduce it on one side of the world if it is increasing dramatically on the other side," the ETH researcher says.

Input from thousands of studies

For their current study, the team of researchers from ETH, Princeton University and the Free University of Brussels gathered thousands of publications as well as unpublished veterinary reports from around the world. The researchers used this database to produce the maps of antimicrobial resistance.

However, the maps do not cover the entire research area; there are large gaps, particularly in South America, which the researchers attribute to a lack of publicly available data. "There are hardly any official figures or data from large parts of South America," says co-author and ETH postdoctoral fellow Joao Pires. He said this surprised him, as much more data is available from some African countries, despite resources for conducting surveys there being more limited than in South America.

Open-access web platform

The team has created an open-access web platform, resistancebank.org, to share their findings and gather additional data on resistance in animals. For example, veterinarians and state authorities can upload data on resistance in their region to the platform and share it with others who are interested.

Van Boeckel hopes that scientists from countries with more limited resources, for whom publishing costs in academic journals can be a barrier, will be able to share their findings and get recognition for their work on the platform. "In this way, we can ensure that the data is not just stuffed away in a drawer," he says, "because there are many relevant findings lying dormant, especially in Africa or India, that would complete the global picture of resistance that we try to draw in this first assessment." The platform could also help donors identify the regions most affected by resistance so that they can finance specific interventions.

Read more at Science Daily

Sep 20, 2019

Researchers revolutionize 3D printed products with data-driven design method

Additive manufacturing (AM), also known as three-dimensional printing, is a process that fabricates parts layer by layer by adding and processing materials. Advancements in AM technology have enabled the processing of a wide range of materials to create products at varying scales, from medical implants to aircraft engine parts. These products, which can be rich in shape, material, hierarchical and functional complexity, offer high potential to revolutionize existing product development processes.

However, fully realizing the potential of AM's unique capabilities for product development can be difficult, as it requires product designers to change their design mindsets.

In conventional manufacturing processes, the main task for designers is tailoring their designs to eliminate manufacturing difficulties and minimize costs. By contrast, AM has far fewer manufacturing constraints and offers designers much more design freedom to explore. Designers must therefore search for optimal design solutions among millions of design alternatives that differ in geometry, topology, structure, and material. This can be a tedious task with current design methods and computer-aided design (CAD) tools, which lack the ability to rapidly explore and exploit such a high-dimensional design space.

To address this issue, researchers from the Digital Manufacturing and Design (DManD) Centre at the Singapore University of Technology and Design (SUTD) proposed a holistic approach that applies data-driven methods to design search and optimization at successive stages of the design process for AM products.

First, they used simple and computationally inexpensive surrogate models in the design exploration process to approximate and replace complex high-fidelity engineering analysis models for rapidly narrowing down the high-dimensional design space. Next, they conducted design optimization based on refined surrogate models to obtain a single optimal design. These surrogate models are trained based on an updated dataset using the Markov Chain Monte Carlo resampling method.
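
To make the general idea concrete, here is a minimal sketch of surrogate-assisted design exploration, assuming scikit-learn and using a made-up stand-in for the expensive analysis; it illustrates the technique in broad strokes and is not the authors' models or code.

```python
# Illustrative sketch of surrogate-assisted design exploration. The
# "expensive_simulation" below is a placeholder for a costly high-fidelity
# engineering analysis; the real study used problem-specific models.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def expensive_simulation(x):
    """Placeholder for a high-fidelity analysis (e.g., stiffness of design x)."""
    return np.sin(3 * x) + 0.5 * x**2

# 1. Evaluate the expensive model at a handful of design points.
X_train = np.linspace(0, 2, 8).reshape(-1, 1)
y_train = expensive_simulation(X_train).ravel()

# 2. Fit a cheap surrogate to those samples.
surrogate = GaussianProcessRegressor().fit(X_train, y_train)

# 3. Screen many candidate designs with the surrogate at negligible cost; the
#    predictive uncertainty (std) can indicate where to refine the surrogate.
candidates = np.linspace(0, 2, 500).reshape(-1, 1)
pred, std = surrogate.predict(candidates, return_std=True)
best = candidates[np.argmin(pred)]
print(f"Most promising design (surrogate estimate): x = {best[0]:.3f}")
```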

This design approach was demonstrated on an AM-fabricated ankle brace with tunable mechanical performance for facilitating the recovery of joints. In this design, the researchers selected a metamaterial with a horseshoe-like structure whose stiffness can be tailored. The proposed design approach was applied to optimize the orientation and dimensions of the horseshoe-like structure's geometry in different areas to achieve the desired stiffness distributions.

Such geometric complexity, enabled by AM, gives the ankle brace design unique and favorable behavior. The ankle brace is very soft within the allowable range of motion, which provides comfort to patients. However, once the movement goes beyond the permissible range, its geometrical design makes it stiff enough to protect the user's joints from extreme load conditions.

"Previously, it was hard for designers to imagine a design of such complex geometry due to the limitations in conventional manufacturing, but now this design is easily achievable with AM. Our new approach allows designers to embrace the design freedom in AM that comes with the shift in design paradigm and create more optimal products similar to the ankle brace," said first author Dr. Yi Xiong, Research Fellow from SUTD.

With the design space exploration and exploitation capability developed, the research team is working towards a more ambitious goal -- to develop a next-generation CAD system for AM.

Read more at Science Daily

Alcohol-producing gut bacteria could cause liver damage even in people who don't drink

Non-alcoholic fatty liver disease (NAFLD) is the build-up of fat in the liver due to factors other than alcohol. It affects about a quarter of the adult population globally, but its cause remains unknown. Now, researchers have linked NAFLD to gut bacteria that produce a large amount of alcohol in the body, finding these bacteria in over 60% of non-alcoholic fatty liver patients. Their findings, publishing September 19 in the journal Cell Metabolism, could help develop a screening method for early diagnosis and treatment of non-alcoholic fatty liver.

"We were surprised that bacteria can produce so much alcohol," says lead author Jing Yuan at Capital Institute of Pediatrics. "When the body is overloaded and can't break down the alcohol produced by these bacteria, you can develop fatty liver disease even if you don't drink."

Yuan and her team discovered the link between gut bacteria and NAFLD when they encountered a patient with severe liver damage and a rare condition called auto-brewery syndrome (ABS). Patients with ABS become drunk after eating alcohol-free, high-sugar food. The condition has been associated with yeast infections, which can produce alcohol in the gut and lead to intoxication.

"We initially thought it was because of the yeast, but the test result for this patient was negative," Yuan says. "Anti-yeast medicine also didn't work, so we suspected [his disease] might be caused by something else."

By analyzing the patient's feces, the team found he had several strains of the bacterium Klebsiella pneumoniae in his gut that produced high levels of alcohol. K. pneumoniae is a common type of commensal gut bacterium. Yet the strains isolated from the patient's gut generated about four to six times more alcohol than strains found in healthy people.

Moreover, the team sampled the gut microbiota of 43 NAFLD patients and 48 healthy people. They found that about 60% of NAFLD patients had high- and medium-alcohol-producing K. pneumoniae in their gut, while only 6% of healthy controls carried these strains.

To investigate whether K. pneumoniae causes fatty liver, the researchers fed germ-free mice high-alcohol-producing K. pneumoniae isolated from the ABS patient for three months. These mice began to develop fatty liver after the first month. By two months, their livers showed signs of scarring, indicating long-term liver damage. The progression of liver disease in these mice was comparable to that of mice fed alcohol. When the team gave the bacteria-fed mice an antibiotic that killed K. pneumoniae, their condition was reversed.

"NAFLD is a heterogenous disease and may have many causes," Yuan says. "Our study shows K. pneumonia is very likely to be one of them. These bacteria damage your liver just like alcohol, except you don't have a choice."

However, it remains unknown why some people have high-alcohol-producing strains of K. pneumoniae in their gut while others don't.

"It's likely that these particular bacteria enter people's body via some carriers from the environment, like food," says co-author Di Liu at the Chinese Academy of Sciences. "But I don't think the carriers are prevalent -- otherwise we would expect much higher rate of NAFLD. Also, some people may have a gut environment that's more suitable for the growth and colonization of K. pneumonia than others because of their genetics. We don't understand what factors would make someone more susceptible to these particular K. pneumonia, and that's what we want to find out next."

This finding could also help diagnose and treat bacteria-related NAFLD, Yuan says. Because K. pneumoniae produces alcohol from sugar, patients who carry these bacteria would have a detectable amount of alcohol in their blood after drinking a simple glucose solution. "In the early stages, fatty liver disease is reversible. If we can identify the cause sooner, we could treat and even prevent liver damage."

Read more at Science Daily

Perception of musical pitch varies across cultures

People who are accustomed to listening to Western music, which is based on a system of notes organized in octaves, can usually perceive the similarity between notes that are the same but played in different registers -- say, high C and middle C. However, a longstanding question is whether this is a universal phenomenon or one that has been ingrained by musical exposure.

This question has been hard to answer, in part because of the difficulty in finding people who have not been exposed to Western music. Now, a new study led by researchers from MIT and the Max Planck Institute for Empirical Aesthetics has found that unlike residents of the United States, people living in a remote area of the Bolivian rainforest usually do not perceive the similarities between two versions of the same note played at different registers (high or low).

The findings suggest that although there is a natural mathematical relationship between the frequencies of every "C," no matter what octave it's played in, the brain only becomes attuned to those similarities after hearing music based on octaves, says Josh McDermott, an associate professor in MIT's Department of Brain and Cognitive Sciences.

"It may well be that there is a biological predisposition to favor octave relationships, but it doesn't seem to be realized unless you are exposed to music in an octave-based system," says McDermott, who is also a member of MIT's McGovern Institute for Brain Research and Center for Brains, Minds and Machines.

The study also found that members of the Bolivian tribe, known as the Tsimane', and Westerners do have a very similar upper limit on the frequency of notes that they can accurately distinguish, suggesting that that aspect of pitch perception may be independent of musical experience and biologically determined.

McDermott is the senior author of the study, which appears in the journal Current Biology on Sept. 19. Nori Jacoby, a former MIT postdoc who is now a group leader at the Max Planck Institute for Empirical Aesthetics, is the paper's lead author. Other authors are Eduardo Undurraga, an assistant professor at the Pontifical Catholic University of Chile; Malinda McPherson, a graduate student in the Harvard/MIT Program in Speech and Hearing Bioscience and Technology; Joaquin Valdes, a graduate student at the Pontifical Catholic University of Chile; and Tomas Ossandon, an assistant professor at the Pontifical Catholic University of Chile.

Octaves apart

Cross-cultural studies of how music is perceived can shed light on the interplay between biological constraints and cultural influences that shape human perception. McDermott's lab has performed several such studies with the participation of Tsimane' tribe members, who live in relative isolation from Western culture and have had little exposure to Western music.

In a study published in 2016, McDermott and his colleagues found that Westerners and Tsimane' had different aesthetic reactions to chords, or combinations of notes. To Western ears, the combination of C and F# is very grating, but Tsimane' listeners rated this chord just as likeable as other chords that Westerners would interpret as more pleasant, such as C and G.

Later, Jacoby and McDermott found that both Westerners and Tsimane' are drawn to musical rhythms composed of simple integer ratios, but the ratios they favor are different, based on which rhythms are more common in the music they listen to.

In their new study, the researchers studied pitch perception using an experimental design in which they play a very simple tune, only two or three notes, and then ask the listener to sing it back. The notes that were played could come from any octave within the range of human hearing, but listeners sang their responses within their vocal range, usually restricted to a single octave.

Western listeners, especially those who were trained musicians, tended to reproduce the tune an exact number of octaves above or below what they heard, though they were not specifically instructed to do so. In Western music, the frequency of a note doubles with each ascending octave, so tones with frequencies of 27.5 hertz, 55 hertz, 110 hertz, 220 hertz, and so on, are all heard as the note A.
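
For readers who want to see the arithmetic behind octave equivalence, the short sketch below (illustrative only, not study code) checks that each of those frequencies sits exactly one power of two above the previous one:

```python
# Octave equivalence: two tones are "the same note" when the ratio of their
# frequencies is a whole-number power of two.
import math

def octaves_apart(f1_hz, f2_hz):
    """Return the (possibly non-integer) number of octaves between two tones."""
    return math.log2(f2_hz / f1_hz)

a_series = [27.5, 55.0, 110.0, 220.0, 440.0]  # successive A's on a piano
for low, high in zip(a_series, a_series[1:]):
    print(f"{low:>6.1f} Hz -> {high:>6.1f} Hz : {octaves_apart(low, high):.3f} octave(s)")
# Each pair prints 1.000, i.e. the frequency doubles with every octave.
```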

Western listeners in the study, all of whom lived in New York or Boston, accurately reproduced sequences such as A-C-A, but in a different register, as though they hear the similarity of notes separated by octaves. However, the Tsimane' did not.

"The relative pitch was preserved (between notes in the series), but the absolute pitch produced by the Tsimane' didn't have any relationship to the absolute pitch of the stimulus," Jacoby says. "That's consistent with the idea that perceptual similarity is something that we acquire from exposure to Western music, where the octave is structurally very important."

The ability to reproduce the same note in different octaves may be honed by singing along with others whose natural registers are different, or singing along with an instrument being played in a different pitch range, Jacoby says.

Limits of perception

The study findings also shed light on the upper limits of pitch perception for humans. It has long been known that Western listeners cannot accurately distinguish pitches above about 4,000 hertz, although they can still hear frequencies up to nearly 20,000 hertz. On a traditional 88-key piano, the highest note is about 4,100 hertz.

People have speculated that the piano was designed to go only that high because of a fundamental limit on pitch perception, but McDermott thought it could be possible that the opposite was true: That is, the limit was culturally influenced by the fact that few musical instruments produce frequencies higher than 4,000 hertz.

The researchers found that although Tsimane' musical instruments usually have upper limits much lower than 4,000 hertz, Tsimane' listeners could distinguish pitches very well up to about 4,000 hertz, as evidenced by accurate sung reproductions of those pitch intervals. Above that threshold, their perceptions broke down, very similarly to Western listeners.

"It looks almost exactly the same across groups, so we have some evidence for biological constraints on the limits of pitch," Jacoby says.

One possible explanation for this limit is that once frequencies reach about 4,000 hertz, the firing rates of the neurons of our inner ear can't keep up and we lose a critical cue with which to distinguish different frequencies.

Jacoby and McDermott now hope to expand their cross-cultural studies to other groups who have had little exposure to Western music, and to perform more detailed studies of pitch perception among the Tsimane'.

Such studies have already shown the value of including research participants other than the Western-educated, relatively wealthy college undergraduates who are the subjects of most academic studies on perception, McDermott says. These broader studies allow researchers to tease out different elements of perception that cannot be seen when examining only a single, homogenous group.

Read more at Science Daily

Why is the brain disturbed by harsh sounds?

Why do the harsh sounds emitted by alarms or human shrieks grab our attention? What is going on in the brain when it detects these frequencies? Neuroscientists from the University of Geneva (UNIGE) and Geneva University Hospitals (HUG), Switzerland, have been analysing how people react when they listen to a range of different sounds, the aim being to establish the extent to which repetitive sound frequencies are considered unpleasant. The scientists also studied the areas inside the brain that were stimulated when listening to these frequencies. Surprisingly, their results -- which are published in Nature Communications -- showed that such sounds activate not only the conventional sound-processing circuit but also cortical and sub-cortical areas involved in processing salience and aversion. This is a first, and it explains why the brain goes into a state of alert on hearing this type of sound.

Alarm sounds, whether artificial (such as a car horn) or natural (human screams), are characterised by repetitive sound fluctuations, usually at frequencies between 40 and 80 Hz. But why were these frequencies selected to signal danger? And what happens in the brain to hold our attention to such an extent? Researchers from UNIGE and HUG played 16 participants repetitive sounds with repetition rates between 0 and 250 Hz, spaced closer and closer together, in order to define the frequencies that the brain finds unbearable. "We then asked participants when they perceived the sounds as being rough (distinct from each other) and when they perceived them as smooth (forming one continuous and single sound)," explains Luc Arnal, a researcher in the Department of Basic Neurosciences in UNIGE's Faculty of Medicine.

Based on the responses of participants, the scientists were able to establish that the upper limit of sound roughness is around 130 Hz. "Above this limit," continues Arnal, "the frequencies are heard as forming only one continuous sound." But why does the brain judge rough sounds to be unpleasant? In an attempt to answer this question, the neuroscientists asked participants to listen to different frequencies, which they had to classify on a scale of 1 to 5, 1 being bearable and 5 unbearable. "The sounds considered intolerable were mainly between 40 and 80 Hz, i.e. in the range of frequencies used by alarms and human screams, including those of a baby," says Arnal. Since these sounds are perceptible from a distance, unlike a visual stimulus, it is crucial that attention can be captured from a survival perspective. "That's why alarms use these rapid repetitive frequencies to maximise the chances that they are detected and gain our attention," says the researcher. In fact, when the repetitions are spaced less than about 25 milliseconds apart, the brain cannot anticipate them and therefore suppress them. It is constantly on alert and attentive to the stimulus.
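
As a rough illustration of what such repetitive stimuli look like (not the study's stimulus code, and with an assumed sample rate), one can generate simple click trains at different repetition rates and compare the spacing between repetitions:

```python
# Illustrative sketch: click trains at different repetition rates, to show the
# rough-to-smooth transition described above (~40-80 Hz sounds rough, above
# ~130 Hz the repetitions fuse into one continuous sound).
import numpy as np

SAMPLE_RATE = 44100  # samples per second (assumed value)

def click_train(rate_hz, duration_s=1.0):
    """Return a waveform with one brief click every 1/rate_hz seconds."""
    signal = np.zeros(int(SAMPLE_RATE * duration_s))
    period = int(SAMPLE_RATE / rate_hz)  # samples between clicks
    signal[::period] = 1.0               # unit impulse at each repetition
    return signal

for rate in (40, 80, 130, 250):
    wave = click_train(rate)
    spacing_ms = 1000.0 / rate
    print(f"{rate:>3} Hz: clicks every {spacing_ms:5.1f} ms "
          f"({np.count_nonzero(wave)} clicks per second)")
# 40 Hz corresponds to the ~25 ms spacing mentioned above, the rate at which
# the brain can no longer anticipate and suppress the repetitions.
```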

Harsh sounds fall outside the conventional auditory system


The researchers then attempted to find out what actually happens in the brain: why are these harsh sounds so unbearable? "We used an intracranial EEG, which records brain activity inside the brain itself in response to sounds," explains Pierre Mégevand, a neurologist and researcher in the Department of Basic Neurosciences in the UNIGE Faculty of Medicine and at HUG.

When the sound is perceived as continuous (above 130 Hz), the auditory cortex in the upper temporal lobe is activated. "This is the conventional circuit for hearing," says Mégevand. But when sounds are perceived as harsh (especially between 40 and 80 Hz), they induce a persistent response that additionally recruits a large number of cortical and sub-cortical regions that are not part of the conventional auditory system. "These sounds solicit the amygdala, hippocampus and insula in particular, all areas related to salience, aversion and pain. This explains why participants experienced them as being unbearable," says Arnal, who was surprised to learn that these regions were involved in processing sounds.

Read more at Science Daily

Sep 19, 2019

Long lost human relative unveiled

If you could travel back in time to 100,000 years ago, you'd find yourself living among several different groups of humans, including Modern Humans (those anatomically similar to us), Neanderthals, and Denisovans. We know quite a bit about Neanderthals, thanks to numerous remains found across Europe and Asia. But exactly what our Denisovan relatives might have looked like had been anyone's guess for a simple reason: the entire collection of Denisovan remains includes three teeth, a pinky bone and a lower jaw. Now, as reported in the scientific journal Cell, a team led by Hebrew University of Jerusalem (HUJI) researchers Professor Liran Carmel and Dr. David Gokhman (currently a postdoc at Stanford) has produced reconstructions of these long-lost relatives based on patterns of methylation (chemical changes) in their ancient DNA.

"We provide the first reconstruction of the skeletal anatomy of Denisovans," says lead author Carmel of HUJI's Institute of Life Sciences. "In many ways, Denisovans resembled Neanderthals but in some traits they resembled us and in others they were unique."

Denisovan remains were first discovered in 2008 and have fascinated human evolution researchers ever since. They lived in Siberia and Eastern Asia, and went extinct approximately 50,000 years ago. We don't yet know why. That said, up to 6% of present-day Melanesians and Aboriginal Australians contain Denisovan DNA. Further, Denisovan DNA likely contributed to modern Tibetans' ability to live in high altitudes and to Inuits' ability to withstand freezing temperatures.

Overall, Carmel and his team identified 56 anatomical features in which Denisovans differ from modern humans and/or Neanderthals, 34 of them in the skull. For example, the Denisovan skull was probably wider than that of modern humans or Neanderthals. Denisovans likely also had a longer dental arch and no chin.

The researchers came to these conclusions after three years of intense work studying DNA methylation maps. DNA methylation refers to chemical modifications that affect a gene's activity but not its underlying DNA sequence. The researchers first compared DNA methylation patterns among the three human groups to find regions in the genome that were differentially methylated. Next, they looked for evidence about what those differences might mean for anatomical features -- based on what's known about human disorders in which those same genes lose their function.
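
As a purely illustrative simplification of that first step (with invented numbers, not the authors' data or pipeline), a differential-methylation scan amounts to comparing per-region methylation levels between groups and flagging large differences:

```python
# Greatly simplified illustration of flagging genomic regions whose methylation
# level differs between two groups. All values and the threshold are invented
# for demonstration; the real analysis is far more involved.
import numpy as np

# Hypothetical methylation fractions (0 = unmethylated, 1 = fully methylated)
# for five regions, measured in two groups.
regions = ["geneA", "geneB", "geneC", "geneD", "geneE"]
denisovan = np.array([0.85, 0.20, 0.55, 0.90, 0.10])
modern_human = np.array([0.30, 0.22, 0.52, 0.35, 0.12])

THRESHOLD = 0.3  # assumed cutoff for calling a region differentially methylated
diff = np.abs(denisovan - modern_human)
for name, d in zip(regions, diff):
    status = "differentially methylated" if d > THRESHOLD else "similar"
    print(f"{name}: |difference| = {d:.2f} -> {status}")
```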

"In doing so, we got a prediction as to what skeletal parts are affected by differential regulation of each gene and in what direction that skeletal part would change -- for example, a longer or shorter femur bone," Dr. Gokhman explained.

To test this ground-breaking method, the researchers applied it to two species whose anatomy is known: the Neanderthal and the chimpanzee. They found that roughly 85% of their trait reconstructions were accurate in predicting which traits diverged and in which direction they diverged. Then, they applied this method to the Denisovan and were able to produce the first reconstructed anatomical profile of the mysterious Denisovan.

As for the accuracy of their Denisovan profile, Carmel shared, "One of the most exciting moments happened a few weeks after we sent our paper to peer-review. Scientists had discovered a Denisovan jawbone! We quickly compared this bone to our predictions and found that it matched perfectly. Without even planning on it, we received independent confirmation of our ability to reconstruct whole anatomical profiles using DNA that we extracted from a single fingertip."

In their Cell paper, Carmel and his colleagues predict many Denisovan traits that resemble Neanderthals', such as a sloping forehead, long face and large pelvis, and others that are unique among humans, for example, a large dental arch and very wide skull. Do these traits shed light on the Denisovan lifestyle? Could they explain how Denisovans survived the extreme cold of Siberia?

Read more at Science Daily

Study of ancient climate suggests future warming could accelerate

The rate at which the planet warms in response to the ongoing buildup of heat-trapping carbon dioxide gas could increase in the future, according to new simulations of a comparable warm period more than 50 million years ago.

Researchers at the University of Michigan and the University of Arizona used a state-of-the-art climate model to successfully simulate -- for the first time -- the extreme warming of the Early Eocene Period, which is considered an analog for Earth's future climate.

They found that the rate of warming increased dramatically as carbon dioxide levels rose, a finding with far-reaching implications for Earth's future climate, the researchers report in a paper scheduled for publication Sept. 18 in the journal Science Advances.

Another way of stating this result is that the climate of the Early Eocene became increasingly sensitive to additional carbon dioxide as the planet warmed.

"We were surprised that the climate sensitivity increased as much as it did with increasing carbon dioxide levels," said first author Jiang Zhu, a postdoctoral researcher at the U-M Department of Earth and Environmental Sciences.

"It is a scary finding because it indicates that the temperature response to an increase in carbon dioxide in the future might be larger than the response to the same increase in CO2 now. This is not good news for us."

The researchers determined that the large increase in climate sensitivity they observed -- which had not been seen in previous attempts to simulate the Early Eocene using similar amounts of carbon dioxide -- is likely due to an improved representation of cloud processes in the climate model they used, the Community Earth System Model version 1.2, or CESM1.2.

Global warming is expected to change the distribution and types of clouds in the Earth's atmosphere, and clouds can have both warming and cooling effects on the climate. In their simulations of the Early Eocene, Zhu and his colleagues found a reduction in cloud coverage and opacity that amplified CO2-induced warming.

The same cloud processes responsible for increased climate sensitivity in the Eocene simulations are active today, according to the researchers.

"Our findings highlight the role of small-scale cloud processes in determining large-scale climate changes and suggest a potential increase in climate sensitivity with future warming," said U-M paleoclimate researcher Christopher Poulsen, a co-author of the Science Advances paper.

"The sensitivity we're inferring for the Eocene is indeed very high, though it's unlikely that climate sensitivity will reach Eocene levels in our lifetimes," said Jessica Tierney of the University of Arizona, the paper's third author.

The Early Eocene (roughly 48 million to 56 million years ago) was the warmest period of the past 66 million years. It began with the Paleocene-Eocene Thermal Maximum, which is known as the PETM, the most severe of several short, intensely warm events.

The Early Eocene was a time of elevated atmospheric carbon dioxide concentrations and surface temperatures at least 14 degrees Celsius (25 degrees Fahrenheit) warmer, on average, than today. Also, the difference between temperatures at the equator and the poles was much smaller.

Geological evidence suggests that atmospheric carbon dioxide levels reached 1,000 parts per million in the Early Eocene, more than twice the present-day level of 412 ppm. If nothing is done to limit carbon emissions from the burning of fossil fuels, CO2 levels could once again reach 1,000 ppm by the year 2100, according to climate scientists.
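
To see why sensitivity matters for numbers like these, a back-of-the-envelope sketch using the standard logarithmic relationship between CO2 and equilibrium warming can help; the sensitivity values below are illustrative assumptions, not results from the paper.

```python
# Standard textbook relationship: delta_T = ECS * log2(CO2_new / CO2_old),
# where ECS is the equilibrium warming per doubling of CO2. The ECS values
# below are illustrative assumptions, not figures from the study.
import math

def warming(co2_new_ppm, co2_old_ppm, ecs_per_doubling):
    """Equilibrium warming (deg C) for a change in CO2 concentration."""
    return ecs_per_doubling * math.log2(co2_new_ppm / co2_old_ppm)

co2_today, co2_eocene = 412.0, 1000.0
for ecs in (3.0, 4.5, 6.6):  # hypothetical sensitivities, deg C per doubling
    print(f"ECS = {ecs} C/doubling -> {warming(co2_eocene, co2_today, ecs):.1f} C warmer")
# The study's key point is that ECS itself appears to grow as the planet warms,
# so later increments of CO2 produce more warming than earlier ones.
```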

Until now, climate models have been unable to simulate the extreme surface warmth of the Early Eocene -- including the sudden and dramatic temperature spikes of the PETM -- by relying solely on atmospheric CO2 levels. Unsubstantiated changes to the models were required to make the numbers work, said Poulsen, a professor in the U-M Department of Earth and Environmental Sciences and associate dean for natural sciences.

"For decades, the models have underestimated these temperatures, and the community has long assumed that the problem was with the geological data, or that there was a warming mechanism that hadn't been recognized," he said.

But the CESM1.2 model was able to simulate both the warm conditions and the low equator-to-pole temperature gradient seen in the geological records.

"For the first time, a climate model matches the geological evidence out of the box -- that is, without deliberate tweaks made to the model. It's a breakthrough for our understanding of past warm climates," Tierney said.

CESM1.2 was one of the climate models used in the authoritative Fifth Assessment Report from the Intergovernmental Panel on Climate Change, finalized in 2014. The model's ability to satisfactorily simulate Early Eocene warming provides strong support for CESM1.2's prediction of future warming, which is expressed through a key climate parameter called equilibrium climate sensitivity.

Read more at Science Daily

Even short-lived solar panels can be economically viable

A new study shows that, contrary to widespread belief within the solar power industry, new kinds of solar cells and panels don't necessarily have to last for 25 to 30 years in order to be economically viable in today's market.

Rather, solar panels with initial lifetimes of as little as 10 years can sometimes make economic sense, even for grid-scale installations -- thus potentially opening the door to promising new solar photovoltaic technologies that have been considered insufficiently durable for widespread use.

The new findings are described in a paper in the journal Joule, by Joel Jean, a former MIT postdoc and CEO of startup company Swift Solar; Vladimir Bulović, professor of electrical engineering and computer science and director of MIT.nano; and Michael Woodhouse of the National Renewable Energy Laboratory (NREL) in Colorado.

"When you talk to people in the solar field, they say any new solar panel has to last 25 years," Jean says. "If someone comes up with a new technology with a 10-year lifetime, no one is going to look at it. That's considered common knowledge in the field, and it's kind of crippling."

Jean adds that "that's a huge barrier, because you can't prove a 25-year lifetime in a year or two, or even 10." That presumption, he says, has left many promising new technologies stuck on the sidelines, as conventional crystalline silicon technologies overwhelmingly dominate the commercial solar marketplace. But, the researchers found, that does not need to be the case.

"We have to remember that ultimately what people care about is not the cost of the panel; it's the levelized cost of electricity," he says. In other words, it's the actual cost per kilowatt-hour delivered over the system's useful lifetime, including the cost of the panels, inverters, racking, wiring, land, installation labor, permitting, grid interconnection, and other system components, along with ongoing maintenance costs.

Part of the reason that the economics of the solar industry look different today than in the past is that the cost of the panels (also known as modules) has plummeted so far that now, the "balance of system" costs -- that is, everything except the panels themselves -- exceeds that of the panels. That means that, as long as newer solar panels are electrically and physically compatible with the racking and electrical systems, it can make economic sense to replace the panels with newer, better ones as they become available, while reusing the rest of the system.

"Most of the technology is in the panel, but most of the cost is in the system," Jean says. "Instead of having a system where you install it and then replace everything after 30 years, what if you replace the panels earlier and leave everything else the same? One of the reasons that might work economically is if you're replacing them with more efficient panels," which is likely to be the case as a wide variety of more efficient and lower-cost technologies are being explored around the world.

He says that what the team found in their analysis is that "with some caveats about financing, you can, in theory, get to a competitive cost, because your new panels are getting better, with a lifetime as short as 15 or even 10 years."

Although the costs of solar cells have come down year by year, Bulović says, "the expectation that one had to demonstrate a 25-year lifetime for any new solar panel technology has stayed as a tautology. In this study we show that as the solar panels get less expensive and more efficient, the cost balance significantly changes."

He says that one aim of the new paper is to alert the researchers that their new solar inventions can be cost-effective even if relatively short lived, and hence may be adopted and deployed more rapidly than expected. At the same time, he says, investors should know that they stand to make bigger profits by opting for efficient solar technologies that may not have been proven to last as long, knowing that periodically the panels can be replaced by newer, more efficient ones.

"Historical trends show that solar panel technology keeps getting more efficient year after year, and these improvements are bound to continue for years to come," says Bulović. Perovskite-based solar cells, for example, when first developed less than a decade ago, had efficiencies of only a few percent. But recently their record performance exceeded 25 percent efficiency, compared to 27 percent for the record silicon cell and about 20 percent for today's standard silicon modules, according to Bulović. Importantly, in novel device designs, a perovskite solar cell can be stacked on top of another perovskite, silicon, or thin-film cell, to raise the maximum achievable efficiency limit to over 40 percent, which is well above the 30 percent fundamental limit of today's silicon solar technologies. But perovskites have issues with longevity of operation and have not yet been shown to be able to come close to meeting the 25-year standard.

Bulović hopes the study will "shift the paradigm of what has been accepted as a global truth." Up to now, he says, "many promising technologies never even got a start, because the bar is set too high" on the need for durability.

For their analysis, the team looked at three different kinds of solar installations: a typical 6-kilowatt residential system, a 200-kilowatt commercial system, and a large 100-megawatt utility-scale system with solar tracking. They used NREL benchmark parameters for U.S. solar systems and a variety of assumptions about future progress in solar technology development, financing, and the disposal of the initial panels after replacement, including recycling of the used modules. The models were validated using four independent tools for calculating the levelized cost of electricity (LCOE), a standard metric for comparing the economic viability of different sources of electricity.

In all three installation types, they found, depending on the particulars of local conditions, replacement with new modules after 10 to 15 years could in many cases provide economic advantages while maintaining the many environmental and emissions-reduction benefits of solar power. The basic requirement for cost-competitiveness is that any new solar technology to be installed in the U.S. should start with a module efficiency of at least 20 percent, a cost of no more than 30 cents per watt, and a lifetime of at least 10 years, with the potential to improve on all three.

Jean points out that the solar technologies that are considered standard today, mostly silicon-based but also thin-film variants such as cadmium telluride, "were not very stable in the early years. The reason they last 25 to 30 years today is that they have been developed for many decades." The new analysis may now open the door for some of the promising newer technologies to be deployed at sufficient scale to build up similar levels of experience and improvement over time and to make an impact on climate change earlier than they could without module replacement, he says.

Read more at Science Daily

How people with psychopathic traits control their 'dark impulses'

People with psychopathic traits are predisposed toward antisocial behavior that can result in "unsuccessful" outcomes such as incarceration. However, many individuals with psychopathic traits are able to control their antisocial tendencies and avoid committing the antagonistic acts that can result.

A team of researchers at Virginia Commonwealth University and the University of Kentucky set out to explore what mechanisms might explain why certain people with psychopathic traits are able to successfully control their antisocial tendencies while others are not. Using neuroimaging technology, they investigated the possibility that "successful" psychopathic individuals -- those who control their antisocial tendencies -- have more developed neural structures that promote self-regulation.

Over two structural MRI studies of "successful" psychopathic individuals, the researchers found that participants had greater levels of gray matter density in the ventrolateral prefrontal cortex, one of the brain regions involved in self-regulatory processes, including the down-regulation of more primitive and reactive emotions, such as fear or anger.

"Our findings indicating that this region is denser in people higher on certain psychopathic traits suggests that these individuals may have a greater capacity for self-control," said Emily Lasko, a doctoral student in theDepartment of Psychologyin VCU'sCollege of Humanities and Sciences, who led the study. "This is important because it is some of the first evidence pointing us to a biological mechanism that can potentially explain how some psychopathic people are able to be 'successful' whereas others aren't."

The team's findings will be described in an article, "An Investigation of the Relationship Between Psychopathy and Greater Gray Matter Density in Lateral Prefrontal Cortex," that will be published in a forthcoming edition of the journal Personality Neuroscience.

The first study involved 80 adults in long-term relationships who were placed in an MRI scanner at VCU's Collaborative Advanced Research Imaging center, where researchers took a high-resolution scan of their brain. Afterwards, participants completed a battery of questionnaires, including one that measured the "dark triad" of personality traits, individually assessing psychopathy (e.g., "it's true that I can be mean to others"), narcissism (e.g., "I like to get acquainted with important people"), and Machiavellianism (e.g., "it's not wise to tell your secrets").

The second study looked at another "successful" population: undergraduate students. The researchers recruited 64 undergraduate students who were assessed for psychopathic traits and tendencies using an assessment tool designed for community and student populations, measuring primary psychopathy (e.g., "I enjoy manipulating other people's feelings") and secondary psychopathy (e.g., "I quickly lose interest in the tasks I start"). The participants were then scanned at the University of Kentucky's Magnetic Resonance Imaging and Spectroscopy Center.

In both studies, the researchers observed that gray matter density in the ventrolateral prefrontal cortex -- which the researchers call "a hub for self-regulation" -- was positively associated with psychopathic traits.

The researchers say their findings support a compensatory model of psychopathy, in which "successful" psychopathic individuals develop inhibitory mechanisms to compensate for their antisocial tendencies.

"Most neuroscientific models of psychopathy emphasize deficits in brain structure and function. These new findings lend preliminary support to the growing notion that psychopathic individuals have some advantages compared to others, not just deficiencies," said study co-authorDavid Chester, Ph.D., an assistant professor in the Department of Psychology who runs theSocial Psychology and Neuroscience Lab, which conducts research on psychopathy, aggression and why people try to harm others.

Across the two samples of individuals who varied widely in their psychopathic tendencies, Chester said, the team found greater structural integrity in brain regions that facilitate impulse control.

"Such neural advantages may allow psychopathic individuals to counteract their selfish and hostile tendencies, allowing them to coexist with others in spite of their antisocial impulses," he said. "To fully understand and effectively treat psychopathic traits in the human population, we need to understand both the shortfalls and the surpluses inherent in psychopathy. These new results are an important, though preliminary, step in that direction."

The compensatory model of psychopathy offers a more optimistic alternative to the traditional view that focuses more on the deficits associated with psychopathy, Lasko said. The finding that the ventrolateral prefrontal cortex is denser in these individuals lends support for the compensatory model because that region is linked to self-regulatory and inhibitory behaviors, she said.

"Psychopathy is a highly nuanced construct and this framework helps to acknowledge those nuances," she said. "People high in psychopathy have 'dark' impulses, but some of these individuals are able to either inhibit them or find a socially acceptable outlet for them. The compensatory model posits that these individuals have enhanced self-regulation abilities, which are able to compensate for their antisocial impulses and facilitate their 'success.'"

Past research has indicated that approximately 1% of the general population, and 15% to 25% of incarcerated people, would meet the clinical criteria for psychopathy. By gaining a deeper understanding of the neurological advantages associated with "successful" psychopathic individuals, researchers may unlock new treatments and rehabilitation strategies for them, Lasko said.

"We believe that it is critical to understand these potential 'advantages' because if we are able to identify biomarkers of psychopathy, and importantly, factors that could be informative in determining an individual's potential for violent behavior and potential for rehabilitation, we will be better equipped to develop effective intervention and treatment strategies," she said.

Lasko emphasized that the researchers' findings are preliminary.

"Although the findings are novel and definitely provide a promising avenue for future research, they still need to be replicated," she said. "They are also correlational so we currently aren't able to make any causal inferences about the [ventrolateral prefrontal cortex]-psychopathy relationship."

Read more at Science Daily

Persistent headache or back pain 'twice as likely' in the presence of the other

People with persistent back pain or persistent headaches are twice as likely to suffer from the other disorder as well, a new study from the University of Warwick has revealed.

The results, published in the Journal of Headache and Pain, suggest an association between the two types of pain that could point to a shared treatment for both.

The researchers from Warwick Medical School, who are funded by the National Institute for Health Research (NIHR), led a systematic review of fourteen studies with a total of 460,195 participants that attempted to quantify the association between persistent headaches and persistent low back pain. They found an association between having persistent low back pain and having persistent (chronic) headaches, with patients experiencing one typically being twice as likely to experience the other compared to people without either headaches or back pain. The association is also stronger for people affected by migraine.

The researchers focused on people with chronic headache disorders, those who have had headaches on most days for at least three months, and people with persistent low back pain who experience that pain day after day. These are two very common disorders and leading causes of disability worldwide.

Around one in five people have persistent low back pain and one in 30 have chronic headaches. The researchers estimate that just over one in 100 people (or well over half a million people) in the UK have both.
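
A quick back-of-the-envelope check (illustrative arithmetic, not taken from the paper) shows that these prevalence figures are consistent with the reported association:

```python
# If persistent low back pain and chronic headaches were independent, how many
# people would have both, compared with the ~1 in 100 reported above?
p_back_pain = 1 / 5        # about one in five adults
p_headache = 1 / 30        # about one in thirty adults
p_both_reported = 1 / 100  # the study's estimate for having both

p_both_if_independent = p_back_pain * p_headache  # ~0.67%, or about 1 in 150
ratio = p_both_reported / p_both_if_independent

print(f"Expected overlap if independent: {p_both_if_independent:.2%}")
print(f"Reported overlap:                {p_both_reported:.2%}")
print(f"Reported / expected:             {ratio:.1f}x")
# The reported overlap is roughly 1.5x the independence estimate, in line with
# the finding that having one disorder raises the odds of having the other.
```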

Professor Martin Underwood, from Warwick Medical School, said: "In most of the studies we found that the odds were about double -- either way, you're about twice as likely to have headaches or chronic low back pain in the presence of the other. Which is very interesting, because typically these have been looked at as separate disorders and then managed by different people. But this makes you think that there might be, at least for some people, some commonality in what is causing the problem.

"There may be something in the relationship between how people react to the pain, making some people more sensitive to both the physical causes of the headache, particularly migraine, and the physical causes in the back, and how the body reacts to that and how you become disabled by it. There may also be more fundamental ways in how the brain interprets pain signals, so the same amount of input into the brain may be felt differently by different people.

"It suggests the possibility of an underpinning biological relationship, at least in some people with headache and back pain, that could also be a target for treatment."

Currently, there are specific drug treatments for patients with persistent migraine. For back pain, treatment focuses on exercise and manual therapy, but can also include cognitive behavioural approaches and psychological support approaches for people who are very disabled with back pain. The researchers suggest that those types of behavioural support systems may also help people living with chronic headaches.

Professor Underwood added: "A joint approach would be appropriate because there are specific treatments for headaches and people with migraine. Many of the ways we approach chronic musculoskeletal pain, particularly back pain, are with supportive management by helping people to live better with their pain."

Read more at Science Daily

Sep 18, 2019

Six galaxies undergoing sudden, dramatic transitions

Galaxies come in a wide variety of shapes, sizes and brightnesses, ranging from humdrum ordinary galaxies to luminous active galaxies. While an ordinary galaxy is visible mainly because of the light from its stars, an active galaxy shines brightest at its center, or nucleus, where a supermassive black hole emits a steady blast of bright light as it voraciously consumes nearby gas and dust.

Sitting somewhere on the spectrum between ordinary and active galaxies is another class, known as low-ionization nuclear emission-line region (LINER) galaxies. While LINERs are relatively common, accounting for roughly one-third of all nearby galaxies, astronomers have fiercely debated the main source of light emission from LINERs. Some argue that weakly active galactic nuclei are responsible, while others maintain that star-forming regions outside the galactic nucleus produce the most light.

A team of astronomers observed six mild-mannered LINER galaxies suddenly and surprisingly transforming into ravenous quasars -- home to the brightest of all active galactic nuclei. The team reported their observations, which could help demystify the nature of both LINERs and quasars while answering some burning questions about galactic evolution, in the Astrophysical Journal on September 18, 2019. Based on their analysis, the researchers suggest they have discovered an entirely new type of black hole activity at the centers of these six LINER galaxies.

"For one of the six objects, we first thought we had observed a tidal disruption event, which happens when a star passes too close to a supermassive black hole and gets shredded," said Sara Frederick, a graduate student in the University of Maryland Department of Astronomy and the lead author of the research paper. "But we later found it was a previously dormant black hole undergoing a transition that astronomers call a 'changing look,' resulting in a bright quasar. Observing six of these transitions, all in relatively quiet LINER galaxies, suggests that we've identified a totally new class of active galactic nucleus."

All six of the surprising transitions were observed during the first nine months of the Zwicky Transient Facility (ZTF), an automated sky survey project based at Caltech's Palomar Observatory near San Diego, California, which began observations in March 2018. UMD is a partner in the ZTF effort, facilitated by the Joint Space-Science Institute (JSI), a partnership between UMD and NASA's Goddard Space Flight Center.

Changing look transitions have been documented in other galaxies -- most commonly in a class of active galaxies known as Seyfert galaxies. By definition, Seyfert galaxies all have a bright, active galactic nucleus, but Type 1 and Type 2 Seyfert galaxies differ in the amount of light they emit at specific wavelengths. According to Frederick, many astronomers suspect that the difference results from the angle at which astronomers view the galaxies.

Type 1 Seyfert galaxies are thought to face Earth head-on, giving an unobstructed view of their nuclei, while Type 2 Seyfert galaxies are tilted at an oblique angle, such that their nuclei are partially obscured by a donut-shaped ring of dense, dusty gas clouds. Thus, changing look transitions between these two classes present a puzzle for astronomers, since a galaxy's orientation towards Earth is not expected to change.

Frederick and her colleagues' new observations may call these assumptions into question.

"We started out trying to understand changing look transformations in Seyfert galaxies. But instead, we found a whole new class of active galactic nucleus capable of transforming a wimpy galaxy to a luminous quasar," said Suvi Gezari, an associate professor of astronomy at UMD, a co-director of JSI and a co-author of the research paper. "Theory suggests that a quasar should take thousands of years to turn on, but these observations suggest that it can happen very quickly. It tells us that the theory is all wrong. We thought that Seyfert transformation was the major puzzle. But now we have a bigger issue to solve."

Frederick and her colleagues want to understand how a previously quiet galaxy with a calm nucleus can suddenly transition to a bright beacon of galactic radiation. To learn more, they performed follow-up observations on the objects with the Discovery Channel Telescope, which is operated by the Lowell Observatory in partnership with UMD, Boston University, the University of Toledo and Northern Arizona University. These observations helped to clarify aspects of the transitions, including how the rapidly transforming galactic nuclei interacted with their host galaxies.

"Our findings confirm that LINERs can, in fact, host active supermassive black holes at their centers," Frederick said. "But these six transitions were so sudden and dramatic, it tells us that there is something altogether different going on in these galaxies. We want to know how such massive amounts of gas and dust can suddenly start falling into a black hole. Because we caught these transitions in the act, it opens up a lot of opportunities to compare what the nuclei looked like before and after the transformation."

Unlike most quasars, which light up the surrounding clouds of gas and dust far beyond the galactic nucleus, the researchers found that only the gas and dust closest to the nucleus had been turned on. Frederick, Gezari and their collaborators suspect that this activity gradually spreads from the galactic nucleus -- and may provide the opportunity to map the development of a newborn quasar.

Read more at Science Daily

Dust from a giant asteroid crash caused an ancient ice age

About 466 million years ago, long before the age of the dinosaurs, the Earth froze. The seas began to ice over at the Earth's poles, and the new range of temperatures around the planet set the stage for a boom of new species evolving. The cause of this ice age was a mystery, until now: a new study in Science Advances argues that it was triggered by global cooling from extra dust in the atmosphere, the debris of a giant asteroid collision in outer space.

There's always a lot of dust from outer space floating down to Earth -- little bits of asteroids and comets -- but this dust is normally only a tiny fraction of the other dust in our atmosphere, such as volcanic ash, dust from deserts and sea salt. But when a 93-mile-wide asteroid orbiting between Mars and Jupiter broke apart 466 million years ago, it created far more dust than usual. "Normally, Earth gains about 40,000 tons of extraterrestrial material every year," says Philipp Heck, a curator at the Field Museum, associate professor at the University of Chicago, and one of the paper's authors. "Imagine multiplying that by a factor of a thousand or ten thousand." To put that in context, in a typical year about one thousand semi trucks' worth of interplanetary dust falls to Earth. In the couple million years following the collision, it would have been more like ten million semis.
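As a rough back-of-envelope check on those numbers, the scaling works out as below. The 40-ton payload assumed for one semi truck is an illustrative figure introduced here, not a value from the study.

# Back-of-envelope check of the dust-influx comparison (illustrative only).
normal_influx_tons_per_year = 40_000            # quoted typical annual infall
boost_factors = (1_000, 10_000)                 # "a factor of a thousand or ten thousand"
semi_payload_tons = 40                          # assumed payload of one semi truck

print(normal_influx_tons_per_year / semi_payload_tons)
# ~1,000 truckloads per year today
for factor in boost_factors:
    print(normal_influx_tons_per_year * factor / semi_payload_tons)
# ~1 to 10 million truckloads per year in the aftermath of the breakup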

"Our hypothesis is that the large amounts of extraterrestrial dust over a timeframe of at least two million years played an important role in changing the climate on Earth, contributing to cooling," says Heck.

"Our results show for the first time that such dust, at times, has cooled Earth dramatically," says Birger Schmitz of Sweden's Lund University, the study's lead author and a research associate at the Field Museum. "Our studies can give a more detailed, empirical-based understanding of how this works, and this in turn can be used to evaluate if model simulations are realistic."

To figure it out, researchers looked for traces of space dust in 466-million-year-old rocks and compared them to tiny micrometeorites from Antarctica as a reference. "We studied extraterrestrial matter, meteorites and micrometeorites, in the sedimentary record of Earth, meaning rocks that were once sea floor," says Heck. "And then we extracted the extraterrestrial matter to discover what it was and where it came from."

Extracting the extraterrestrial matter -- the tiny meteorites and bits of dust from outer space -- involves taking the ancient rock and treating it with acid that eats away the stone and leaves the space stuff. The team then analyzed the chemical makeup of the remaining dust. The team also analyzed rocks from the ancient seafloor and looked for elements that rarely appear in Earth rocks and for isotopes -- different forms of atoms -- that show hallmarks of coming from outer space. For instance, helium atoms normally have two protons, two neutrons, and two electrons, but some that are shot out of the Sun and into space are missing a neutron. The presence of these special helium isotopes, along with rare metals often found in asteroids, proves that the dust originated from space.

Other scientists had already established that our planet was undergoing an ice age around this time. The amount of water in the Earth's oceans influences the way that rocks on the seabed form, and the rocks from this time period show signs of shallower oceans -- a hint that some of the Earth's water was trapped in glaciers and sea ice. Schmitz and his colleagues are the first to show that this ice age syncs up with the extra dust in the atmosphere. "The timing appears to be perfect," he says. The extra dust in the atmosphere helps explain the ice age -- by filtering out sunlight, the dust would have caused global cooling.

Since the dust floated down to Earth over at least two million years, the cooling was gradual enough for life to adapt and even benefit from the changes. An explosion of new species evolved as creatures adapted for survival in regions with different temperatures.

Heck notes that while this period of global cooling proved beneficial to life on Earth, fast-paced climate change can be catastrophic. "In the global cooling we studied, we're talking about timescales of millions of years. It's very different from the climate change caused by the meteorite 65 million years ago that killed the dinosaurs, and it's different from the global warming today -- this global cooling was a gentle nudge. There was less stress."

It's tempting to think that today's global warming could be solved by replicating the dust shower that triggered global cooling 466 million years ago. But Heck says he would be cautious: "Geoengineering proposals should be evaluated very critically and very carefully, because if something goes wrong, things could become worse than before."

While Heck isn't convinced that we've found the solution to climate change, he says it's a good idea for us to be thinking along these lines.

Read more at Science Daily

Towards better hand hygiene for flu prevention

Rubbing hands with ethanol-based sanitizers should provide a formidable defense against infection from flu viruses, which can thrive and spread in saliva and mucus. But findings published this week in mSphere challenge that notion -- and suggest that there's room for improvement in this approach to hand hygiene.

The influenza A virus (IAV) remains infectious in wet mucus from infected patients, even after being exposed to an ethanol-based disinfectant (EBD) for two full minutes, report researchers at Kyoto Prefectural University of Medicine in Japan. Fully deactivating the virus, they found, required nearly four minutes of exposure to the EBD.

The secret to the viral survival was the thick consistency of sputum, the researchers found. The substance's thick hydrogel structure kept the ethanol from reaching and deactivating the IAV.

"The physical properties of mucus protect the virus from inactivation," said physician and molecular gastroenterologist Ryohei Hirose, Ph.D, MD., who led the study with Takaaki Nakaya, PhD, an infectious disease researcher at the same school. "Until the mucus has completely dried, infectious IAV can remain on the hands and fingers, even after appropriate antiseptic hand rubbing."

The study suggests that a splash of hand sanitizer, quickly applied, isn't sufficient to stop IAV. Health care providers should be particularly cautious: If they don't adequately inactivate the virus between patients, they could enable its spread, Hirose said.

The researchers first studied the physical properties of mucus and found -- as they predicted -- that ethanol spreads more slowly through the viscous substance than it does through saline. Then, in a clinical component, they analyzed sputum that had been collected from IAV-infected patients and dabbed on human fingers. (The goal, said Hirose, was to simulate situations in which medical staff could transmit the virus.) After two minutes of exposure to EBD, the IAV virus remained active in the mucus on the fingertips. By four minutes, however, the virus had been deactivated.

Previous studies have suggested that ethanol-based disinfectants, or EBDs, are effective against IAV. The new work challenges those conclusions. Hirose suspects he knows why: Most studies on EBDs test the disinfectants on mucus that has already dried. When he and his colleagues repeated their experiments using fully dried mucus, they found that hand rubbing inactivated the virus within 30 seconds. In addition, the fingertip test used by Hirose and his colleagues may not exactly replicate the effects of hand rubbing, which through convection might be more effective at spreading the EBD.

For flu prevention, both the Centers for Disease Control and Prevention and the World Health Organization recommend hand hygiene practices that include using EBDs for 15-30 seconds. That's not enough rubbing to prevent IAV transmission, said Hirose.

Read more at Science Daily

Learning to read boosts the visual brain

How does learning to read change our brain? Does reading take up brain space dedicated to seeing objects such as faces, tools or houses? In a functional brain imaging study, a research team compared literate and illiterate adults in India. Reading recycles a brain region that is already sensitive to evolutionarily older visual categories, enhancing rather than destroying sensitivity to other visual input.

Reading is a recent invention in the history of human culture -- too recent for dedicated brain networks to have evolved specifically for it. How, then, do we accomplish this remarkable feat? As we learn to read, a brain region known as the 'visual word form area' (VWFA) becomes sensitive to script (letters or characters). However, some have claimed that the development of this area takes up (and thus detrimentally affects) space that is otherwise available for processing culturally relevant objects such as faces, houses or tools.

An international research team led by Falk Huettig (MPI and Radboud University Nijmegen) and Alexis Hervais-Adelman (MPI and University of Zurich) set out to test the effect of reading on the brain's visual system. The team scanned the brains of over ninety adults living in a remote part of Northern India with varying degrees of literacy (from people unable to read to skilled readers), using functional Magnetic Resonance Imaging (fMRI). While in the scanner, participants saw sentences, letters, and other visual categories such as faces.

If learning to read leads to 'competition' with other visual areas in the brain, readers should have different brain activation patterns from non-readers -- and not just for letters, but also for faces, tools, or houses. 'Recycling' of brain networks when learning to read has previously been thought to negatively affect evolutionarily old functions such as face processing. Huettig and Hervais-Adelman, however, hypothesised that reading, rather than negatively affecting brain responses to non-orthographic (non-letter) objects, may, conversely, result in increased brain responses to visual stimuli in general.

"When we learn to read, we exploit the brain's capacity to form category-selective patches in visual brain areas. These arise in the same cortical territory as specialisations for other categories that are important to people, such as faces and houses. A long-standing question has been whether learning to read is detrimental to those other categories, given that there is limited space in the brain," explains Alexis Hervais-Adelman.

Reading-induced recycling did not detrimentally affect brain areas for faces, houses, or tools -- neither in location nor size. Strikingly, the brain activation for letters and faces was more similar in readers than in non-readers, particularly in the left hemisphere (the left ventral temporal lobe).

Read more at Science Daily

Sep 17, 2019

Harnessing tomato jumping genes could help speed-breed drought-resistant crops

Tomato plant
Once dismissed as 'junk DNA' that served no purpose, a family of 'jumping genes' found in tomatoes has the potential to accelerate crop breeding for traits such as improved drought resistance.

Researchers from the University of Cambridge's Sainsbury Laboratory (SLCU) and Department of Plant Sciences have discovered that drought stress triggers the activity of a family of jumping genes (Rider retrotransposons) previously known to contribute to fruit shape and colour in tomatoes. Their characterisation of Rider, published today in the journal PLOS Genetics, revealed that the Rider family is also present and potentially active in other crops, highlighting its potential as a source of new trait variations that could help plants better cope with more extreme conditions driven by our changing climate.

"Transposons carry huge potential for crop improvement. They are powerful drivers of trait diversity, and while we have been harnessing these traits to improve our crops for generations, we are now starting to understand the molecular mechanisms involved," said Dr Matthias Benoit, the paper's first author, formerly at SLCU.

Transposons, more commonly called jumping genes, are mobile snippets of DNA code that can copy themselves into new positions within the genome -- the genetic code of an organism. They can change, disrupt or amplify genes, or have no effect at all. Although transposons were discovered in corn kernels by Nobel Prize-winning scientist Barbara McClintock in the 1940s, only now are scientists realising that they are not junk at all but actually play an important role in the evolutionary process, and in altering gene expression and the physical characteristics of plants.

Using the jumping genes already present in plants to generate new characteristics would be a significant leap forward from traditional breeding techniques, making it possible to rapidly generate new traits in crops that have traditionally been bred to produce uniform shapes, colours and sizes to make harvesting more efficient and maximise yield. They would enable production of an enormous diversity of new traits, which could then be refined and optimised by gene targeting technologies.

"In a large population size, such as a tomato field, in which transposons are activated in each individual we would expect to see an enormous diversity of new traits. By controlling this 'random mutation' process within the plant we can accelerate this process to generate new phenotypes that we could not even imagine," said Dr Hajk Drost at SLCU, a co-author of the paper.

Today's gene targeting technologies are very powerful, but often require some functional understanding of the underlying gene to yield useful results and usually only target one or a few genes. Transposon activity is a native tool already present within the plant, which can be harnessed to generate new phenotypes or resistances and complement gene targeting efforts. Using transposons also offers a transgene-free method of breeding that is compatible with current EU legislation on genetically modified organisms.

The work also revealed that Rider is present in several plant species, including economically important crops such as rapeseed, beetroot and quinoa. This wide abundance encourages further investigation into how Rider can be activated in a controlled way, or reactivated or re-introduced into plants that currently carry silenced Rider elements, so that its potential can be regained. Such an approach has the potential to significantly reduce breeding time compared to traditional methods.

Read more at Science Daily

Carp aquaculture in Neolithic China dating back 8,000 years

Carp
In a recent study, an international team of researchers analyzed fish bones excavated from the Early Neolithic Jiahu site in Henan Province, China. By comparing the body-length distributions and species-composition ratios of the bones with findings from East Asian sites with present aquaculture, the researchers provide evidence of managed carp aquaculture at Jiahu dating back to 6200-5700 BC.

Despite the growing importance of farmed fish for economies and diets around the world, the origins of aquaculture remain unknown. The Shijing, the oldest surviving collection of ancient Chinese poetry, mentions carp being reared in a pond circa 1140 BC, and historical records describe carp being raised in artificial ponds and paddy fields in East Asia by the first millennium BC. But considering that rice paddy fields in China date all the way back to the fifth millennium BC, researchers from Lake Biwa Museum in Kusatsu, Japan, the Max Planck Institute for the Science of Human History in Jena, Germany, the Sainsbury Institute for the Study of Japanese Arts and Cultures in Norwich, U.K., and an international team of colleagues set out to discover whether carp aquaculture in China was practiced earlier than previously thought.

Carp farming goes way back in Early Neolithic Jiahu

Jiahu, located in Henan, China, is known for the early domestication of rice and pigs, as well as the early development of fermented beverages, bone flutes, and possibly writing. This history of early development, combined with archaeological findings suggesting the presence of large expanses of water, made Jiahu an ideal location for the present study.

Researchers measured 588 pharyngeal carp teeth extracted from fish remains in Jiahu corresponding with three separate Neolithic periods, and compared the body-length distributions with findings from other sites and a modern sample of carp raised in Matsukawa Village, Japan. While the remains from the first two periods revealed unimodal patterns of body-length distribution peaking at or near carp maturity, the remains of Period III (6200-5700 BC) displayed bimodal distribution, with one peak at 350-400 mm corresponding with sexual maturity, and another at 150-200 mm.

This bimodal distribution identified by researchers was similar to that documented at the Iron Age Asahi site in Japan (circa 400 BC -- AD 100), and is indicative of a managed system of carp aquaculture that until now was unidentified in Neolithic China. "In such fisheries," the study notes, "a large number of cyprinids were caught during the spawning season and processed as preserved food. At the same time, some carp were kept alive and released into confined, human regulated waters where they spawned naturally and their offspring grew by feeding on available resources. In autumn, water was drained from the ponds and the fish harvested, with body-length distributions showing two peaks due to the presence of both immature and mature individuals."
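One way to picture the distinction the researchers draw is to fit one- and two-component mixtures to a set of body lengths and see which describes the data better. The sketch below uses made-up numbers loosely echoing the reported peaks (roughly 150-200 mm and 350-400 mm); it is only an illustration of the idea, not the analysis performed in the study.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic body lengths (mm): juveniles near ~175 mm and mature carp near
# ~375 mm, loosely echoing the two Period III peaks reported at Jiahu.
lengths = np.concatenate([
    rng.normal(175, 25, 300),
    rng.normal(375, 30, 300),
]).reshape(-1, 1)

# Compare a single-peak (unimodal) fit against a two-peak (bimodal) fit.
for k in (1, 2):
    gm = GaussianMixture(n_components=k, random_state=0).fit(lengths)
    print(k, "component(s), BIC =", round(gm.bic(lengths)))
# A clearly lower BIC for two components is the kind of signature a managed
# pond stock -- immature and mature fish harvested together -- would leave.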

Species-composition ratios support findings, indicate cultural preferences

The size of the fish wasn't the only piece of evidence researchers found supporting carp management at Jiahu. In East Asian lakes and rivers, crucian carp are typically more abundant than common carp, but common carp comprised roughly 75% of cyprinid remains found at Jiahu. This high proportion of less-prevalent fish indicates a cultural preference for common carp and the presence of aquaculture sophisticated enough to provide it.

Based on the analysis of carp remains from Jiahu and data from previous studies, researchers hypothesize three stages of aquaculture development in prehistoric East Asia. In Stage 1, humans fished the marshy areas where carp gather during spawning season. In Stage 2, these marshy ecotones were managed by digging channels and controlling water levels and circulation so the carp could spawn and the juveniles could later be harvested. Stage 3 involved constant human management, including using spawning beds to control reproduction and fish ponds or paddy fields to manage adolescents.

Read more at Science Daily

Most massive neutron star ever detected, almost too massive to exist

Neutron star illustration
Neutron stars -- the compressed remains of massive stars gone supernova -- are the densest "normal" objects in the known universe. (Black holes are technically denser, but far from normal.) Just a single sugar-cube worth of neutron-star material would weigh 100 million tons here on Earth, or about the same as the entire human population. Though astronomers and physicists have studied and marveled at these objects for decades, many mysteries remain about the nature of their interiors: Do crushed neutrons become "superfluid" and flow freely? Do they break down into a soup of subatomic quarks or other exotic particles? What is the tipping point when gravity wins out over matter and forms a black hole?

A team of astronomers using the National Science Foundation's (NSF) Green Bank Telescope (GBT) has brought us closer to finding the answers.

The researchers, members of the NANOGrav Physics Frontiers Center, discovered that a rapidly rotating millisecond pulsar, called J0740+6620, is the most massive neutron star ever measured, packing 2.17 times the mass of our Sun into a sphere only 30 kilometers across. This measurement approaches the limits of how massive and compact a single object can become without crushing itself down into a black hole. Recent work involving gravitational waves observed from colliding neutron stars by LIGO suggests that 2.17 solar masses might be very near that limit.
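For a sense of scale, the quoted mass and size imply an average density of a few hundred trillion times that of water. The quick calculation below uses plain spherical averaging and ignores the star's real density profile, so it reproduces the sugar-cube comparison only to within an order of magnitude.

import math

M_SUN = 1.989e30                  # solar mass in kg
mass = 2.17 * M_SUN               # reported mass of J0740+6620
radius = 15e3                     # metres (30 km diameter)

volume = 4.0 / 3.0 * math.pi * radius**3
density = mass / volume           # kg per cubic metre, roughly 3e17

sugar_cube_kg = density * 1e-6    # mass of one cubic centimetre
print(f"mean density ~ {density:.1e} kg/m^3")
print(f"one cm^3 ~ {sugar_cube_kg / 1e3:.1e} metric tons")   # ~1e8 tons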

"Neutron stars are as mysterious as they are fascinating," said Thankful Cromartie, a graduate student at the University of Virginia and Grote Reber pre-doctoral fellow at the National Radio Astronomy Observatory in Charlottesville, Virginia. "These city-sized objects are essentially ginormous atomic nuclei. They are so massive that their interiors take on weird properties. Finding the maximum mass that physics and nature will allow can teach us a great deal about this otherwise inaccessible realm in astrophysics."

Pulsars get their name because of the twin beams of radio waves they emit from their magnetic poles. These beams sweep across space in a lighthouse-like fashion. Some rotate hundreds of times each second. Since pulsars spin with such phenomenal speed and regularity, astronomers can use them as the cosmic equivalent of atomic clocks. Such precise timekeeping helps astronomers study the nature of spacetime, measure the masses of stellar objects, and improve their understanding of general relativity.

In the case of J0740+6620 -- a pulsar orbiting a white dwarf companion in a binary system that is nearly edge-on in relation to Earth -- this cosmic precision provided a pathway for astronomers to calculate the mass of the two stars.

As the ticking pulsar passes behind its white dwarf companion, there is a subtle (on the order of 10 millionths of a second) delay in the arrival time of the signals. This phenomenon is known as "Shapiro Delay." In essence, gravity from the white dwarf star slightly warps the space surrounding it, in accordance with Einstein's general theory of relativity. This warping means the pulses from the rotating neutron star have to travel just a little bit farther as they wend their way around the distortions of spacetime caused by the white dwarf.

Astronomers can use the amount of that delay to calculate the mass of the white dwarf. Once the mass of one of the co-orbiting bodies is known, it is a relatively straightforward process to accurately determine the mass of the other.
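The size of that effect can be sketched with the leading-order Shapiro-delay expression, delta_t = -2 (G m_c / c^3) ln(1 - sin i sin phi), where m_c is the companion mass, i the orbital inclination and phi the orbital phase. The companion mass (about 0.26 solar masses) and inclination (about 87 degrees) plugged in below are illustrative values of the right order for a nearly edge-on system like this one, not figures taken from the paper.

import numpy as np

T_SUN = 4.925490947e-6        # G * M_sun / c^3, in seconds
m_c = 0.26                    # companion white dwarf mass in solar masses (assumed)
incl = np.radians(87.0)       # orbital inclination (assumed, nearly edge-on)

# Orbital phase, with the pulsar passing behind the companion at phase pi/2.
phase = np.linspace(0.0, 2.0 * np.pi, 1000)
delay = -2.0 * T_SUN * m_c * np.log(1.0 - np.sin(incl) * np.sin(phase))

print(f"peak extra light-travel time ~ {delay.max() * 1e6:.0f} microseconds")
# Of order ten millionths of a second, as described above.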

Cromartie is the principal author on a paper accepted for publication in Nature Astronomy. The GBT observations were research related to her doctoral thesis, which proposed observing this system at two special points in their mutual orbit to accurately calculate the mass of the neutron star.

"The orientation of this binary star system created a fantastic cosmic laboratory," said Scott Ransom, an astronomer at NRAO and coauthor on the paper. "Neutron stars have this tipping point where their interior densities get so extreme that the force of gravity overwhelms even the ability of neutrons to resist further collapse. Each "most massive" neutron star we find brings us closer to identifying that tipping point and helping us to understand the physics of matter at these mindboggling densities."

Read more at Science Daily

Transplanted brain stem cells survive without anti-rejection drugs in mice

Neurons illustration
In experiments in mice, Johns Hopkins Medicine researchers say they have developed a way to successfully transplant certain protective brain cells without the need for lifelong anti-rejection drugs.

A report on the research, published Sept. 16 in the journal Brain, details the new approach, which selectively circumvents the immune response against foreign cells, allowing transplanted cells to survive, thrive and protect brain tissue long after stopping immune-suppressing drugs.

The ability to successfully transplant healthy cells into the brain without the need for conventional anti-rejection drugs could advance the search for therapies that help children born with a rare but devastating class of genetic diseases in which myelin, the protective coating around neurons that helps them send messages, does not form normally. Approximately 1 of every 100,000 children born in the U.S. will have one of these diseases, such as Pelizaeus-Merzbacher disease. This disorder is characterized by infants missing developmental milestones such as sitting and walking, having involuntary muscle spasms, and potentially experiencing partial paralysis of the arms and legs, all caused by a genetic mutation in the genes that form myelin.

"Because these conditions are initiated by a mutation causing dysfunction in one type of cell, they present a good target for cell therapies, which involve transplanting healthy cells or cells engineered to not have a condition to take over for the diseased, damaged or missing cells," says Piotr Walczak, M.D., Ph.D., associate professor of radiology and radiological science at the Johns Hopkins University School of Medicine.

A major obstacle to our ability to replace these defective cells is the mammalian immune system. The immune system works by rapidly identifying 'self' or 'nonself' tissues, and mounting attacks to destroy nonself or "foreign" invaders. While beneficial when targeting bacteria or viruses, it is a major hurdle for transplanted organs, tissue or cells, which are also flagged for destruction. Traditional anti-rejection drugs, which broadly and unspecifically tamp down the immune system, frequently work to fend off tissue rejection, but they leave patients vulnerable to infection and other side effects. Patients need to remain on these drugs indefinitely.

In a bid to stop the immune response without the side effects, the Johns Hopkins Medicine team sought ways to manipulate T cells, the system's elite infection-fighting force that attacks foreign invaders.

Specifically, Walczak and his team focused on the series of so-called "costimulatory signals" that T cells must encounter in order to begin an attack.

"These signals are in place to help ensure these immune system cells do not go rogue, attacking the body's own healthy tissues," says Gerald Brandacher, M.D., professor of plastic and reconstructive surgery and scientific director of the Vascularized Composite Allotransplantation Research Laboratory at the Johns Hopkins University School of Medicine and co-author of this study.

The idea, he says, was to exploit the natural tendencies of these costimulatory signals as a means of training the immune system to eventually accept transplanted cells as "self" permanently.

To do that, the investigators used two antibodies, CTLA4-Ig and anti-CD154, which keep T cells from beginning an attack when encountering foreign particles by binding to the T cell surface, essentially blocking the 'go' signal. This combination has previously been used successfully to block rejection of solid organ transplants in animals, but had not yet been tested for cell transplants to repair myelin in the brain, says Walczak.

In a key set of experiments, Walczak and his team injected mouse brains with the protective glial cells that produce the myelin sheath that surrounds neurons. These specific cells were genetically engineered to glow so the researchers could keep tabs on them.

The researchers then transplanted the glial cells into three types of mice: mice genetically engineered to not form the glial cells that create the myelin sheath, normal mice and mice bred to be unable to mount an immune response.

Then the researchers used the antibodies to block an immune response, stopping treatment after six days.

Each day, the researchers used a specialized camera that could detect the glowing cells and capture pictures of the mouse brains, looking for the relative presence or absence of the transplanted glial cells. Cells transplanted into control mice that did not receive the antibody treatment immediately began to die off, and their glow was no longer detected by the camera by day 21.

The mice that received the antibody treatment maintained significant levels of transplanted glial cells for over 203 days, showing they were not killed by the mouse's T cells even in the absence of treatment.

"The fact that any glow remained showed us that cells had survived transplantation, even long after stopping the treatment," says Shen Li, M.D., lead author of the study. "We interpret this result as a success in selectively blocking the immune system's T cells from killing the transplanted cells."

The next step was to see whether the transplanted glial cells survived well enough to do what glial cells normally do in the brain -- create the myelin sheath. To do this, the researchers looked for key structural differences between mouse brains with thriving glial cells and those without, using MRI images. In the images, the researchers saw that the cells in the treated animals were indeed populating the appropriate parts of the brain.

Their results confirmed that the transplanted cells were able to thrive and assume their normal function of protecting neurons in the brain.

Walczak cautioned that these results are preliminary; so far, the team has been able to deliver these cells and allow them to thrive in a localized portion of the mouse brain.

Read more at Science Daily

Sep 16, 2019

Climate signature identified in rivers globally

For decades geoscientists have been trying to detect the influence of climate on the formation of rivers, but up to now there has been no systematic evidence.

A new study, led by scientists from the University of Bristol and published today in the journal Nature, discovers a clear climatic signature on rivers globally that challenges existing theories.

If you walk from a river's source to its mouth, you walk a path that descends in elevation. In some rivers, this path will descend steeply out of the uplands, and then flatten out in the lowlands. This results in an elevational profile (which we call the long profile) that has a concave up shape, similar to the shape of the inside of a bowl as you trace it from the inside rim to the bottom. In contrast, a straight long profile descends evenly in elevation, like a ramp, along the path as you walk from the source to the mouth.

The new research by Chen et al. shows that while river long profiles tend to be concave up in humid regions, they become progressively straighter in drier regions.

Lead author Shiuan-An Chen, from the University of Bristol's School of Geographical Sciences, said: "The long profile is formed gradually over tens of thousands to millions of years, so it tells a bigger story about the climate history of a region. We would expect climate to affect the river long profile because it controls how much water flows in rivers and the associated force of water to move sediment along the riverbed."

Until now, scientists have lacked a large, systematic dataset of rivers spanning the range of climate zones on Earth that would allow full exploration of the links between climate and river form. The research team produced a new, freely available database of river long profiles, generated from elevation data originally collected by NASA's space shuttle. They used specialist software developed by co-author Dr Stuart Grieve at Queen Mary University of London to build the database, which includes over 330,000 rivers across the globe.

The study shows for the first time at the global scale that there are distinct differences in river long profile shapes across climate zones, and that the reason behind these differences lies in the expression of aridity in streamflow in rivers.

In humid regions, rivers tend to have flow in them all year round which continually moves sediment and erodes the overall profile into a concave up shape.

As the climate becomes progressively arid (from semi-arid, to arid, to hyper-arid), rivers only flow a few times per year when it rains, moving sediment infrequently.

Additionally, arid rivers tend to experience brief, intense rainstorms, which do not create flow over the entire river length.

These links between climate, streamflow and long profile shape are explained in the paper using a numerical model which simulates the evolution of river profiles over time in response to streamflow characteristics.

The authors show that regardless of all other potential controls on river profiles, streamflow characteristics have a dominant effect on the final profile shape. They demonstrate that the differences in the climatic expression of streamflow explain the variations in profile shape across climatic regions in their database.
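A minimal sketch of that kind of simulation is shown below. It is not the authors' model: it assumes a standard detachment-limited stream power law, dz/dt = U - I*K*A^m*S^n, with an intermittency factor I standing in for how often the channel actually carries flow, and drainage area growing downstream via Hack's law. All parameter values are placeholders chosen only to make the humid-versus-arid contrast visible.

import numpy as np

# Sketch of long-profile evolution under the stream power law with an
# intermittency factor I (fraction of time the channel carries flow).
def evolve_profile(I, n_nodes=200, length=100e3, years=2e6, dt=50.0,
                   U=1e-4, K=2e-6, m=0.5, n=1.0, hack_k=1.0, hack_h=1.8):
    x = np.linspace(1.0, length, n_nodes)        # distance from divide (m)
    A = hack_k * x**hack_h                       # drainage area via Hack's law (m^2)
    z = np.linspace(1000.0, 0.0, n_nodes)        # initial straight ramp (m)
    dx = x[1] - x[0]
    for _ in range(int(years / dt)):
        S = np.maximum(-np.diff(z) / dx, 0.0)    # downstream slope at each node
        incision = I * K * A[:-1]**m * S**n      # erosion rate (m/yr)
        z[:-1] += (U - incision) * dt            # uplift minus incision
        z[-1] = 0.0                              # fixed base level at the mouth
    return x, z

# Frequent flow (humid, I near 1) lets incision reshape the lower reaches into
# a concave-up profile; rare flow (arid, I << 1) leaves the ramp much straighter.
x, z_humid = evolve_profile(I=1.0)
_, z_arid = evolve_profile(I=0.05)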

Dr Katerina Michaelides, also from Bristol's School of Geographical Sciences, who led the research, added:

"Traditional theory included in textbooks for decades describes that river long profiles evolve to be concave up. Existing theories are biased towards observations made in humid rivers, which are far better studied and more represented in published research than dryland rivers.

"Our study shows that many river profiles around the world are not concave up and that straighter profiles tend to be more common in arid environments."

"I think dryland rivers have been understudied and under-appreciated, especially given that drylands cover ~40% of the global land surface. Their streamflow expression gives unique insights into the climatic influence on land surface topography."

Read more at Science Daily