Sep 19, 2019

Long lost human relative unveiled

If you could travel back in time to 100,000 years ago, you'd find yourself living among several different groups of humans, including Modern Humans (those anatomically similar to us), Neanderthals, and Denisovans. We know quite a bit about Neanderthals, thanks to numerous remains found across Europe and Asia. But exactly what our Denisovan relatives might have looked like had been anyone's guess for a simple reason: the entire collection of Denisovan remains includes three teeth, a pinky bone and a lower jaw. Now, as reported in the scientific journal Cell, a team led by Hebrew University of Jerusalem (HUJI) researchers Professor Liran Carmel and Dr. David Gokhman (currently a postdoc at Stanford) has produced reconstructions of these long-lost relatives based on patterns of methylation (chemical changes) in their ancient DNA.

"We provide the first reconstruction of the skeletal anatomy of Denisovans," says lead author Carmel of HUJI's Institute of Life Sciences. "In many ways, Denisovans resembled Neanderthals but in some traits they resembled us and in others they were unique."

Denisovan remains were first discovered in 2008 and have fascinated human evolution researchers ever since. They lived in Siberia and Eastern Asia, and went extinct approximately 50,000 years ago. We don't yet know why. That said, up to 6% of the DNA of present-day Melanesians and Aboriginal Australians is Denisovan in origin. Further, Denisovan DNA likely contributed to modern Tibetans' ability to live at high altitude and to the Inuit's ability to withstand freezing temperatures.

Overall, Carmel and his team identified 56 anatomical features in which Denisovans differ from modern humans and/or Neanderthals, 34 of them in the skull. For example, the Denisovan skull was probably wider than those of modern humans and Neanderthals. Denisovans likely also had a longer dental arch and no chin.

The researchers came to these conclusions after three years of intense work studying DNA methylation maps. DNA methylation refers to chemical modifications that affect a gene's activity but not its underlying DNA sequence. The researchers first compared DNA methylation patterns among the three human groups to find regions in the genome that were differentially methylated. Next, they looked for evidence about what those differences might mean for anatomical features -- based on what's known about human disorders in which those same genes lose their function.
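The logic of that pipeline can be sketched in a few lines of code. The snippet below is a minimal illustration, not the authors' actual method: the gene names, methylation values and the gene-to-phenotype table are hypothetical stand-ins for the published data and the clinical loss-of-function databases the team drew on.

```python
# A minimal sketch, not the authors' pipeline: compare two per-gene methylation maps,
# keep genes whose methylation differs by a chosen threshold, and look up the skeletal
# phenotype reported when that gene loses function. All names, values and the
# gene-to-phenotype table below are hypothetical.

def differentially_methylated(map_a, map_b, min_diff=0.5):
    """Return genes whose methylation level differs between the two maps."""
    return {gene: map_a[gene] - map_b[gene]
            for gene in map_a.keys() & map_b.keys()
            if abs(map_a[gene] - map_b[gene]) >= min_diff}

def predict_skeletal_changes(dmr_genes, loss_of_function_phenotypes):
    """Translate differentially methylated genes into predicted anatomical shifts."""
    return [f"{gene}: {loss_of_function_phenotypes[gene]}"
            for gene in dmr_genes if gene in loss_of_function_phenotypes]

denisovan = {"GENE_A": 0.9, "GENE_B": 0.2}        # fraction methylated (hypothetical)
modern    = {"GENE_A": 0.1, "GENE_B": 0.3}
phenotypes = {"GENE_A": "longer dental arch"}      # from human loss-of-function disorders

print(predict_skeletal_changes(differentially_methylated(denisovan, modern), phenotypes))
# -> ['GENE_A: longer dental arch']
```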

"In doing so, we got a prediction as to what skeletal parts are affected by differential regulation of each gene and in what direction that skeletal part would change -- for example, a longer or shorter femur bone," Dr. Gokhman explained.

To test this ground-breaking method, the researchers applied it to two species whose anatomy is known: the Neanderthal and the chimpanzee. They found that roughly 85% of their trait reconstructions were accurate in predicting which traits diverged and in which direction they diverged. Then, they applied this method to the Denisovan and were able to produce the first reconstructed anatomical profile of the mysterious Denisovan.
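That validation step amounts to asking, trait by trait, whether the predicted direction of change matches the known anatomy. A toy version, with invented trait names and values:

```python
# Toy version of the validation step: compare predicted directions of trait divergence
# with the known anatomy and report the fraction that match. Trait names and values
# are invented for illustration.
predicted = {"femur_length": "shorter", "skull_width": "wider", "dental_arch": "longer"}
observed  = {"femur_length": "shorter", "skull_width": "wider", "dental_arch": "shorter"}

accuracy = sum(predicted[t] == observed[t] for t in predicted) / len(predicted)
print(f"directional accuracy: {accuracy:.0%}")   # 67% in this made-up example
```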

As for the accuracy of their Denisovan profile, Carmel shared, "One of the most exciting moments happened a few weeks after we sent our paper to peer-review. Scientists had discovered a Denisovan jawbone! We quickly compared this bone to our predictions and found that it matched perfectly. Without even planning on it, we received independent confirmation of our ability to reconstruct whole anatomical profiles using DNA that we extracted from a single fingertip."

In their Cell paper, Carmel and his colleagues predict many Denisovan traits that resemble Neanderthals', such as a sloping forehead, long face and large pelvis, and others that are unique among humans, for example, a large dental arch and very wide skull. Do these traits shed light on the Denisovan lifestyle? Could they explain how Denisovans survived the extreme cold of Siberia?

Read more at Science Daily

Study of ancient climate suggests future warming could accelerate

The rate at which the planet warms in response to the ongoing buildup of heat-trapping carbon dioxide gas could increase in the future, according to new simulations of a comparable warm period more than 50 million years ago.

Researchers at the University of Michigan and the University of Arizona used a state-of-the-art climate model to successfully simulate -- for the first time -- the extreme warming of the Early Eocene Period, which is considered an analog for Earth's future climate.

They found that the rate of warming increased dramatically as carbon dioxide levels rose, a finding with far-reaching implications for Earth's future climate, the researchers report in a paper scheduled for publication Sept. 18 in the journal Science Advances.

Another way of stating this result is that the climate of the Early Eocene became increasingly sensitive to additional carbon dioxide as the planet warmed.
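In conventional terms, climate sensitivity relates warming to the radiative forcing produced by added CO2, which grows roughly with the logarithm of concentration. The toy calculation below is illustrative only, not the CESM1.2 simulation: the forcing formula is a standard approximation, and the sensitivity values are invented to show how a sensitivity that increases in a warmer state makes each successive CO2 doubling count for more.

```python
# Illustrative toy, not the CESM1.2 simulation: warming per CO2 doubling when the
# climate's sensitivity to forcing is fixed versus when it grows in a warmer state.
# The logarithmic forcing formula is a standard approximation (~3.7 W/m^2 per doubling);
# the sensitivity values are invented.
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Radiative forcing (W/m^2) from raising CO2 from c0_ppm to c_ppm."""
    return 5.35 * math.log(c_ppm / c0_ppm)

constant_sensitivity = 0.8                    # deg C of warming per W/m^2, fixed
state_dependent = {560: 0.8, 1120: 1.2}       # hypothetical: warmer state, higher sensitivity

for co2 in (560, 1120):
    f = co2_forcing(co2)
    print(f"{co2} ppm: {constant_sensitivity * f:.1f} C (constant) "
          f"vs {state_dependent[co2] * f:.1f} C (state-dependent)")
```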

"We were surprised that the climate sensitivity increased as much as it did with increasing carbon dioxide levels," said first author Jiang Zhu, a postdoctoral researcher at the U-M Department of Earth and Environmental Sciences.

"It is a scary finding because it indicates that the temperature response to an increase in carbon dioxide in the future might be larger than the response to the same increase in CO2 now. This is not good news for us."

The researchers determined that the large increase in climate sensitivity they observed -- which had not been seen in previous attempts to simulate the Early Eocene using similar amounts of carbon dioxide -- is likely due to an improved representation of cloud processes in the climate model they used, the Community Earth System Model version 1.2, or CESM1.2.

Global warming is expected to change the distribution and types of clouds in the Earth's atmosphere, and clouds can have both warming and cooling effects on the climate. In their simulations of the Early Eocene, Zhu and his colleagues found a reduction in cloud coverage and opacity that amplified CO2-induced warming.

The same cloud processes responsible for increased climate sensitivity in the Eocene simulations are active today, according to the researchers.

"Our findings highlight the role of small-scale cloud processes in determining large-scale climate changes and suggest a potential increase in climate sensitivity with future warming," said U-M paleoclimate researcher Christopher Poulsen, a co-author of the Science Advances paper.

"The sensitivity we're inferring for the Eocene is indeed very high, though it's unlikely that climate sensitivity will reach Eocene levels in our lifetimes," said Jessica Tierney of the University of Arizona, the paper's third author.

The Early Eocene (roughly 48 million to 56 million years ago) was the warmest period of the past 66 million years. It began with the Paleocene-Eocene Thermal Maximum, which is known as the PETM, the most severe of several short, intensely warm events.

The Early Eocene was a time of elevated atmospheric carbon dioxide concentrations and surface temperatures at least 14 degrees Celsius (25 degrees Fahrenheit) warmer, on average, than today. Also, the difference between temperatures at the equator and the poles was much smaller.

Geological evidence suggests that atmospheric carbon dioxide levels reached 1,000 parts per million in the Early Eocene, more than twice the present-day level of 412 ppm. If nothing is done to limit carbon emissions from the burning of fossil fuels, CO2 levels could once again reach 1,000 ppm by the year 2100, according to climate scientists.

Until now, climate models have been unable to simulate the extreme surface warmth of the Early Eocene -- including the sudden and dramatic temperature spikes of the PETM -- by relying solely on atmospheric CO2 levels. Unsubstantiated changes to the models were required to make the numbers work, said Poulsen, a professor in the U-M Department of Earth and Environmental Sciences and associate dean for natural sciences.

"For decades, the models have underestimated these temperatures, and the community has long assumed that the problem was with the geological data, or that there was a warming mechanism that hadn't been recognized," he said.

But the CESM1.2 model was able to simulate both the warm conditions and the low equator-to-pole temperature gradient seen in the geological records.

"For the first time, a climate model matches the geological evidence out of the box -- that is, without deliberate tweaks made to the model. It's a breakthrough for our understanding of past warm climates," Tierney said.

CESM1.2 was one of the climate models used in the authoritative Fifth Assessment Report from the Intergovernmental Panel on Climate Change, finalized in 2014. The model's ability to satisfactorily simulate Early Eocene warming provides strong support for CESM1.2's prediction of future warming, which is expressed through a key climate parameter called equilibrium climate sensitivity.

Read more at Science Daily

Even short-lived solar panels can be economically viable

A new study shows that, contrary to widespread belief within the solar power industry, new kinds of solar cells and panels don't necessarily have to last for 25 to 30 years in order to be economically viable in today's market.

Rather, solar panels with initial lifetimes of as little as 10 years can sometimes make economic sense, even for grid-scale installations -- thus potentially opening the door to promising new solar photovoltaic technologies that have been considered insufficiently durable for widespread use.

The new findings are described in a paper in the journal Joule, by Joel Jean, a former MIT postdoc and CEO of startup company Swift Solar; Vladimir Bulović, professor of electrical engineering and computer science and director of MIT.nano; and Michael Woodhouse of the National Renewable Energy Laboratory (NREL) in Colorado.

"When you talk to people in the solar field, they say any new solar panel has to last 25 years," Jean says. "If someone comes up with a new technology with a 10-year lifetime, no one is going to look at it. That's considered common knowledge in the field, and it's kind of crippling."

Jean adds that "that's a huge barrier, because you can't prove a 25-year lifetime in a year or two, or even 10." That presumption, he says, has left many promising new technologies stuck on the sidelines, as conventional crystalline silicon technologies overwhelmingly dominate the commercial solar marketplace. But, the researchers found, that does not need to be the case.

"We have to remember that ultimately what people care about is not the cost of the panel; it's the levelized cost of electricity," he says. In other words, it's the actual cost per kilowatt-hour delivered over the system's useful lifetime, including the cost of the panels, inverters, racking, wiring, land, installation labor, permitting, grid interconnection, and other system components, along with ongoing maintenance costs.

Part of the reason that the economics of the solar industry look different today than in the past is that the cost of the panels (also known as modules) has plummeted so far that now, the "balance of system" costs -- that is, everything except the panels themselves -- exceeds that of the panels. That means that, as long as newer solar panels are electrically and physically compatible with the racking and electrical systems, it can make economic sense to replace the panels with newer, better ones as they become available, while reusing the rest of the system.

"Most of the technology is in the panel, but most of the cost is in the system," Jean says. "Instead of having a system where you install it and then replace everything after 30 years, what if you replace the panels earlier and leave everything else the same? One of the reasons that might work economically is if you're replacing them with more efficient panels," which is likely to be the case as a wide variety of more efficient and lower-cost technologies are being explored around the world.

He says that what the team found in their analysis is that "with some caveats about financing, you can, in theory, get to a competitive cost, because your new panels are getting better, with a lifetime as short as 15 or even 10 years."

Although the costs of solar cells have come down year by year, Bulović says, "the expectation that one had to demonstrate a 25-year lifetime for any new solar panel technology has stayed as a tautology. In this study we show that as the solar panels get less expensive and more efficient, the cost balance significantly changes."

He says that one aim of the new paper is to alert the researchers that their new solar inventions can be cost-effective even if relatively short lived, and hence may be adopted and deployed more rapidly than expected. At the same time, he says, investors should know that they stand to make bigger profits by opting for efficient solar technologies that may not have been proven to last as long, knowing that periodically the panels can be replaced by newer, more efficient ones.

"Historical trends show that solar panel technology keeps getting more efficient year after year, and these improvements are bound to continue for years to come," says Bulović. Perovskite-based solar cells, for example, when first developed less than a decade ago, had efficiencies of only a few percent. But recently their record performance exceeded 25 percent efficiency, compared to 27 percent for the record silicon cell and about 20 percent for today's standard silicon modules, according to Bulović. Importantly, in novel device designs, a perovskite solar cell can be stacked on top of another perovskite, silicon, or thin-film cell, to raise the maximum achievable efficiency limit to over 40 percent, which is well above the 30 percent fundamental limit of today's silicon solar technologies. But perovskites have issues with longevity of operation and have not yet been shown to be able to come close to meeting the 25-year standard.

Bulović hopes the study will "shift the paradigm of what has been accepted as a global truth." Up to now, he says, "many promising technologies never even got a start, because the bar is set too high" on the need for durability.

For their analysis, the team looked at three different kinds of solar installations: a typical 6-kilowatt residential system, a 200-kilowatt commercial system, and a large 100-megawatt utility-scale system with solar tracking. They used NREL benchmark parameters for U.S. solar systems and a variety of assumptions about future progress in solar technology development, financing, and the disposal of the initial panels after replacement, including recycling of the used modules. The models were validated using four independent tools for calculating the levelized cost of electricity (LCOE), a standard metric for comparing the economic viability of different sources of electricity.

In all three installation types, they found, depending on the particulars of local conditions, replacement with new modules after 10 to 15 years could in many cases provide economic advantages while maintaining the many environmental and emissions-reduction benefits of solar power. The basic requirement for cost-competitiveness is that any new solar technology to be installed in the U.S. should start with a module efficiency of at least 20 percent, a cost of no more than 30 cents per watt, and a lifetime of at least 10 years, with the potential to improve on all three.
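To see how those thresholds could play out, here is a toy comparison of a conventional long-lived system against one that keeps its balance-of-system hardware and replaces cheaper, shorter-lived panels every ten years. It is not the study's NREL-benchmarked model; the dollar figures, yields and improvement rates are assumptions chosen only to illustrate the trade-off.

```python
# Toy comparison, not the study's NREL-benchmarked model: keep the balance-of-system
# hardware and swap in better panels every 10 years. All dollar figures, yields and
# improvement rates below are hypothetical.

def discounted(values, rate=0.07):
    """Sum a yearly series in present-value terms (year 1 onward)."""
    return sum(v / (1 + rate) ** t for t, v in enumerate(values, start=1))

years = 30
bos_cost = 1.00           # $/W for racking, inverter, wiring, labor, etc.
kwh_per_w_year = 1.5      # annual energy yield per watt of capacity (hypothetical site)

# Scenario A: conventional panels at $0.40/W lasting the full 30 years.
cost_a = bos_cost + 0.40
energy_a = discounted([kwh_per_w_year] * years)

# Scenario B: $0.30/W panels with a 10-year life, replaced twice; each new
# generation assumed 20% cheaper and 20% more productive than the last.
cost_b = bos_cost + 0.30 + 0.30 * 0.8 / 1.07**10 + 0.30 * 0.8**2 / 1.07**20
energy_b = discounted([kwh_per_w_year * 1.2 ** (t // 10) for t in range(years)])

print(f"A: {cost_a / energy_a:.3f} $/kWh   B: {cost_b / energy_b:.3f} $/kWh")
```

Whether the replacement scenario comes out ahead depends entirely on the assumed financing, degradation and rate of panel improvement, which is the caveat the authors emphasize.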

Jean points out that the solar technologies that are considered standard today, mostly silicon-based but also thin-film variants such as cadmium telluride, "were not very stable in the early years. The reason they last 25 to 30 years today is that they have been developed for many decades." The new analysis may now open the door for some of the promising newer technologies to be deployed at sufficient scale to build up similar levels of experience and improvement over time and to make an impact on climate change earlier than they could without module replacement, he says.

Read more at Science Daily

How people with psychopathic traits control their 'dark impulses'

People with psychopathic traits are predisposed toward antisocial behavior that can result in "unsuccessful" outcomes such as incarceration. However, many individuals with psychopathic traits are able to control their antisocial tendencies and avoid committing the antagonistic acts that might otherwise result.

A team of researchers at Virginia Commonwealth University and the University of Kentucky set out to explore what mechanisms might explain why certain people with psychopathic traits are able to successfully control their antisocial tendencies while others are not. Using neuroimaging technology, they investigated the possibility that "successful" psychopathic individuals -- those who control their antisocial tendencies -- have more developed neural structures that promote self-regulation.

Over two structural MRI studies of "successful" psychopathic individuals, the researchers found that participants had greater levels of gray matter density in the ventrolateral prefrontal cortex, one of the brain regions involved in self-regulatory processes, including the down-regulation of more primitive and reactive emotions, such as fear or anger.

"Our findings indicating that this region is denser in people higher on certain psychopathic traits suggests that these individuals may have a greater capacity for self-control," said Emily Lasko, a doctoral student in theDepartment of Psychologyin VCU'sCollege of Humanities and Sciences, who led the study. "This is important because it is some of the first evidence pointing us to a biological mechanism that can potentially explain how some psychopathic people are able to be 'successful' whereas others aren't."

The team's findings will be described in an article, "An Investigation of the Relationship Between Psychopathy and Greater Gray Matter Density in Lateral Prefrontal Cortex," that will be published in a forthcoming edition of the journal Personality Neuroscience.

The first study involved 80 adults in long-term relationships who were placed in an MRI scanner at VCU's Collaborative Advanced Research Imaging center, where researchers took a high-resolution scan of their brain. Afterwards, participants completed a battery of questionnaires, including one that measured the "dark triad" of personality traits, individually assessing psychopathy (e.g., "it's true that I can be mean to others"), narcissism (e.g., "I like to get acquainted with important people"), and Machiavellianism (e.g., "it's not wise to tell your secrets").

The second looked at another "successful" population: undergraduate students. The researchers recruited 64 undergraduate students who were assessed for psychopathic traits and tendencies using an assessment tool designed for use in community and student populations, measuring primary psychopathy (e.g., "I enjoy manipulating other people's feelings") and secondary psychopathy (e.g., "I quickly lose interest in the tasks I start"). The participants were then scanned at the University of Kentucky's Magnetic Resonance Imaging and Spectroscopy Center.

In both studies, the researchers observed that gray matter density in the ventrolateral prefrontal cortex -- which the researchers call "a hub for self-regulation" -- was positively associated with psychopathic traits.
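The reported association is, at its core, a positive correlation between a questionnaire score and a structural brain measure. A minimal sketch with invented numbers (the actual voxel-wise MRI analysis is far more involved):

```python
# Minimal sketch of the kind of association reported: a positive correlation between
# gray-matter density in a region of interest and psychopathy scores. All numbers are
# invented; the study's voxel-wise MRI analysis is far more involved.
from statistics import correlation   # Python 3.10+

psychopathy_scores = [1.2, 2.5, 3.1, 1.8, 2.9, 3.6]         # questionnaire scores (hypothetical)
vlpfc_density      = [0.41, 0.47, 0.52, 0.44, 0.50, 0.55]   # gray-matter density (hypothetical)

print(f"r = {correlation(psychopathy_scores, vlpfc_density):.2f}")   # positive association
```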

The researchers say their findings support a compensatory model of psychopathy, in which "successful" psychopathic individuals develop inhibitory mechanisms to compensate for their antisocial tendencies.

"Most neuroscientific models of psychopathy emphasize deficits in brain structure and function. These new findings lend preliminary support to the growing notion that psychopathic individuals have some advantages compared to others, not just deficiencies," said study co-authorDavid Chester, Ph.D., an assistant professor in the Department of Psychology who runs theSocial Psychology and Neuroscience Lab, which conducts research on psychopathy, aggression and why people try to harm others.

Across the two samples of individuals who varied widely in their psychopathic tendencies, Chester said, the team found greater structural integrity in brain regions that facilitate impulse control.

"Such neural advantages may allow psychopathic individuals to counteract their selfish and hostile tendencies, allowing them to coexist with others in spite of their antisocial impulses," he said. "To fully understand and effectively treat psychopathic traits in the human population, we need to understand both the shortfalls and the surpluses inherent in psychopathy. These new results are an important, though preliminary, step in that direction."

The compensatory model of psychopathy offers a more optimistic alternative to the traditional view that focuses more on the deficits associated with psychopathy, Lasko said. The finding that the ventrolateral prefrontal cortex is denser in these individuals lends support for the compensatory model because that region is linked to self-regulatory and inhibitory behaviors, she said.

"Psychopathy is a highly nuanced construct and this framework helps to acknowledge those nuances," she said. "People high in psychopathy have 'dark' impulses, but some of these individuals are able to either inhibit them or find a socially acceptable outlet for them. The compensatory model posits that these individuals have enhanced self-regulation abilities, which are able to compensate for their antisocial impulses and facilitate their 'success.'"

Past research has indicated that approximately 1% of the general population, and 15% to 25% of incarcerated people, would meet the clinical criteria for psychopathy. By gaining a deeper understanding of the neurological advantages associated with "successful" psychopathic individuals, researchers may unlock new treatments and rehabilitation strategies for them, Lasko said.

"We believe that it is critical to understand these potential 'advantages' because if we are able to identify biomarkers of psychopathy, and importantly, factors that could be informative in determining an individual's potential for violent behavior and potential for rehabilitation, we will be better equipped to develop effective intervention and treatment strategies," she said.

Lasko emphasized that the researchers' findings are preliminary.

"Although the findings are novel and definitely provide a promising avenue for future research, they still need to be replicated," she said. "They are also correlational so we currently aren't able to make any causal inferences about the [ventrolateral prefrontal cortex]-psychopathy relationship."

Read more at Science Daily

Persistent headache or back pain 'twice as likely' in the presence of the other

People with persistent back pain are twice as likely to also suffer from persistent headaches, and vice versa, a new study from the University of Warwick has revealed.

The results, published in the Journal of Headache and Pain, suggest an association between the two types of pain that could point to a shared treatment for both.

The researchers from Warwick Medical School, who are funded by the National Institute for Health Research (NIHR), conducted a systematic review of fourteen studies with a total of 460,195 participants that attempted to quantify the association between persistent headaches and persistent low back pain. They found an association between having persistent low back pain and having persistent (chronic) headaches, with patients experiencing one typically being twice as likely to experience the other compared with people who had neither. The association is also stronger for people affected by migraine.

The researchers focused on people with chronic headache disorders, meaning those who have had headaches on most days for at least three months, and on people with persistent low back pain who experience that pain day after day. These are two very common disorders that are leading causes of disability worldwide.

Around one in five people have persistent low back pain and one in 30 have chronic headaches. The researchers estimate that just over one in 100 people (or well over half a million people) in the UK have both.
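Those prevalence figures can be roughly reconciled with the "one in 100" estimate. The back-of-the-envelope arithmetic below is illustrative only and simply doubles the co-occurrence expected if the two conditions were unrelated, in line with the doubled odds reported:

```python
# Back-of-the-envelope check of the 'just over one in 100' figure, not the study's
# calculation: double the co-occurrence expected if the two conditions were unrelated.
p_back, p_headache = 1 / 5, 1 / 30
independent = p_back * p_headache        # ~0.7% if back pain and chronic headache were unrelated
with_doubled_odds = 2 * independent      # crude adjustment for the roughly doubled odds
uk_population = 66_000_000

print(f"{independent:.1%} if unrelated, ~{with_doubled_odds:.1%} with doubled odds")
print(f"roughly {with_doubled_odds * uk_population / 1e6:.1f} million people in the UK")
```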

Professor Martin Underwood, from Warwick Medical School, said: "In most of the studies we found that the odds were about double -- either way, you're about twice as likely to have headaches or chronic low back pain in the presence of the other. Which is very interesting because typically these have been looked as separate disorders and then managed by different people. But this makes you think that there might be, at least for some people, some commonality in what is causing the problem.

"There may be something in the relationship between how people react to the pain, making some people more sensitive to both the physical causes of the headache, particularly migraine, and the physical causes in the back, and how the body reacts to that and how you become disabled by it. There may also be more fundamental ways in how the brain interprets pain signals, so the same amount of input into the brain may be felt differently by different people.

"It suggests the possibility of an underpinning biological relationship, at least in some people with headache and back pain, that could also be a target for treatment."

Currently, there are specific drug treatments for patients with persistent migraine. For back pain, treatment focuses on exercise and manual therapy, but can also include cognitive behavioural approaches and psychological support approaches for people who are very disabled with back pain. The researchers suggest that those types of behavioural support systems may also help people living with chronic headaches.

Professor Underwood added: "A joint approach would be appropriate because there are specific treatments for headaches and people with migraine. Many of the ways we approach chronic musculoskeletal pain, particularly back pain, are with supportive management by helping people to live better with their pain."

Read more at Science Daily

Sep 18, 2019

Six galaxies undergoing sudden, dramatic transitions

Galaxies come in a wide variety of shapes, sizes and brightnesses, ranging from humdrum ordinary galaxies to luminous active galaxies. While an ordinary galaxy is visible mainly because of the light from its stars, an active galaxy shines brightest at its center, or nucleus, where a supermassive black hole emits a steady blast of bright light as it voraciously consumes nearby gas and dust.

Sitting somewhere on the spectrum between ordinary and active galaxies is another class, known as low-ionization nuclear emission-line region (LINER) galaxies. While LINERs are relatively common, accounting for roughly one-third of all nearby galaxies, astronomers have fiercely debated the main source of light emission from LINERs. Some argue that weakly active galactic nuclei are responsible, while others maintain that star-forming regions outside the galactic nucleus produce the most light.

A team of astronomers observed six mild-mannered LINER galaxies suddenly and surprisingly transforming into ravenous quasars -- home to the brightest of all active galactic nuclei. The team reported their observations, which could help demystify the nature of both LINERs and quasars while answering some burning questions about galactic evolution, in the Astrophysical Journal on September 18, 2019. Based on their analysis, the researchers suggest they have discovered an entirely new type of black hole activity at the centers of these six LINER galaxies.

"For one of the six objects, we first thought we had observed a tidal disruption event, which happens when a star passes too close to a supermassive black hole and gets shredded," said Sara Frederick, a graduate student in the University of Maryland Department of Astronomy and the lead author of the research paper. "But we later found it was a previously dormant black hole undergoing a transition that astronomers call a 'changing look,' resulting in a bright quasar. Observing six of these transitions, all in relatively quiet LINER galaxies, suggests that we've identified a totally new class of active galactic nucleus."

All six of the surprising transitions were observed during the first nine months of the Zwicky Transient Facility (ZTF), an automated sky survey project based at Caltech's Palomar Observatory near San Diego, California, which began observations in March 2018. UMD is a partner in the ZTF effort, facilitated by the Joint Space-Science Institute (JSI), a partnership between UMD and NASA's Goddard Space Flight Center.

Changing look transitions have been documented in other galaxies -- most commonly in a class of active galaxies known as Seyfert galaxies. By definition, Seyfert galaxies all have a bright, active galactic nucleus, but Type 1 and Type 2 Seyfert galaxies differ in the amount of light they emit at specific wavelengths. According to Frederick, many astronomers suspect that the difference results from the angle at which astronomers view the galaxies.

Type 1 Seyfert galaxies are thought to face Earth head-on, giving an unobstructed view of their nuclei, while Type 2 Seyfert galaxies are tilted at an oblique angle, such that their nuclei are partially obscured by a donut-shaped ring of dense, dusty gas clouds. Thus, changing look transitions between these two classes present a puzzle for astronomers, since a galaxy's orientation towards Earth is not expected to change.

Frederick and her colleagues' new observations may call these assumptions into question.

"We started out trying to understand changing look transformations in Seyfert galaxies. But instead, we found a whole new class of active galactic nucleus capable of transforming a wimpy galaxy to a luminous quasar," said Suvi Gezari, an associate professor of astronomy at UMD, a co-director of JSI and a co-author of the research paper. "Theory suggests that a quasar should take thousands of years to turn on, but these observations suggest that it can happen very quickly. It tells us that the theory is all wrong. We thought that Seyfert transformation was the major puzzle. But now we have a bigger issue to solve."

Frederick and her colleagues want to understand how a previously quiet galaxy with a calm nucleus can suddenly transition to a bright beacon of galactic radiation. To learn more, they performed follow-up observations on the objects with the Discovery Channel Telescope, which is operated by the Lowell Observatory in partnership with UMD, Boston University, the University of Toledo and Northern Arizona University. These observations helped to clarify aspects of the transitions, including how the rapidly transforming galactic nuclei interacted with their host galaxies.

"Our findings confirm that LINERs can, in fact, host active supermassive black holes at their centers," Frederick said. "But these six transitions were so sudden and dramatic, it tells us that there is something altogether different going on in these galaxies. We want to know how such massive amounts of gas and dust can suddenly start falling into a black hole. Because we caught these transitions in the act, it opens up a lot of opportunities to compare what the nuclei looked like before and after the transformation."

Unlike most quasars, which light up the surrounding clouds of gas and dust far beyond the galactic nucleus, the researchers found that only the gas and dust closest to the nucleus had been turned on. Frederick, Gezari and their collaborators suspect that this activity gradually spreads from the galactic nucleus -- and may provide the opportunity to map the development of a newborn quasar.

Read more at Science Daily

Dust from a giant asteroid crash caused an ancient ice age

About 466 million years ago, long before the age of the dinosaurs, the Earth froze. The seas began to ice over at the Earth's poles, and the new range of temperatures around the planet set the stage for a boom of new species evolving. The cause of this ice age was a mystery until now: a new study in Science Advances argues that the global cooling was triggered by extra dust in the atmosphere from a giant asteroid collision in outer space.

There's always a lot of dust from outer space floating down to Earth, little bits of asteroids and comets, but this dust is normally only a tiny fraction of the other dust in our atmosphere such as volcanic ash, dust from deserts and sea salt. But when a 93-mile-wide asteroid between Mars and Jupiter broke apart 466 million years ago, it created way more dust than usual. "Normally, Earth gains about 40,000 tons of extraterrestrial material every year," says Philipp Heck, a curator at the Field Museum, associate professor at the University of Chicago, and one of the paper's authors. "Imagine multiplying that by a factor of a thousand or ten thousand." To contextualize that, in a typical year, one thousand semi trucks' worth of interplanetary dust falls to Earth. In the couple million years following the collision, it'd be more like ten million semis.
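The truck comparison follows from simple arithmetic, assuming a payload of roughly 40 tons per semi (an assumption made here purely for illustration):

```python
# The quick arithmetic behind the semi-truck comparison, assuming ~40 tons of payload
# per truck (an assumption for illustration).
tons_per_year = 40_000                             # typical annual extraterrestrial dust input
tons_per_semi = 40
normal_semis = tons_per_year / tons_per_semi       # about 1,000 truckloads per year
after_breakup = normal_semis * 10_000              # a factor of ten thousand more dust
print(f"{normal_semis:,.0f} semis per year normally, "
      f"~{after_breakup / 1e6:.0f} million after the breakup")
```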

"Our hypothesis is that the large amounts of extraterrestrial dust over a timeframe of at least two million years played an important role in changing the climate on Earth, contributing to cooling," says Heck.

"Our results show for the first time that such dust, at times, has cooled Earth dramatically," says Birger Schmitz of Sweden's Lund University, the study's lead author and a research associate at the Field Museum. "Our studies can give a more detailed, empirical-based understanding of how this works, and this in turn can be used to evaluate if model simulations are realistic."

To figure it out, researchers looked for traces of space dust in 466-million-year-old rocks, and compared it to tiny micrometeorites from Antarctica as a reference. "We studied extraterrestrial matter, meteorites and micrometeorites, in the sedimentary record of Earth, meaning rocks that were once sea floor," says Heck. "And then we extracted the extraterrestrial matter to discover what it was and where it came from."

Extracting the extraterrestrial matter -- the tiny meteorites and bits of dust from outer space -- involves taking the ancient rock and treating it with acid that eats away the stone and leaves the space stuff. The team then analyzed the chemical makeup of the remaining dust. The team also analyzed rocks from the ancient seafloor and looked for elements that rarely appear in Earth rocks and for isotopes -- different forms of atoms -- that show hallmarks of coming from outer space. For instance, helium atoms normally have two protons, two neutrons, and two electrons, but some that are shot out of the Sun and into space are missing a neutron. The presence of these special helium isotopes, along with rare metals often found in asteroids, proves that the dust originated from space.

Other scientists had already established that our planet was undergoing an ice age around this time. The amount of water in the Earth's oceans influences the way that rocks on the seabed form, and the rocks from this time period show signs of shallower oceans -- a hint that some of the Earth's water was trapped in glaciers and sea ice. Schmitz and his colleagues are the first to show that this ice age syncs up with the extra dust in the atmosphere. "The timing appears to be perfect," he says. The extra dust in the atmosphere helps explain the ice age -- by filtering out sunlight, the dust would have caused global cooling.

Since the dust floated down to Earth over at least two million years, the cooling was gradual enough for life to adapt and even benefit from the changes. An explosion of new species evolved as creatures adapted for survival in regions with different temperatures.

Heck notes that while this period of global cooling proved beneficial to life on Earth, fast-paced climate change can be catastrophic. "In the global cooling we studied, we're talking about timescales of millions of years. It's very different from the climate change caused by the meteorite 65 million years ago that killed the dinosaurs, and it's different from the global warming today -- this global cooling was a gentle nudge. There was less stress."

It's tempting to think that today's global warming could be solved by replicating the dust shower that triggered global cooling 466 million years ago. But Heck says he would be cautious: "Geoengineering proposals should be evaluated very critically and very carefully, because if something goes wrong, things could become worse than before."

While Heck isn't convinced that we've found the solution to climate change, he says it's a good idea for us to be thinking along these lines.

Read more at Science Daily

Towards better hand hygiene for flu prevention

Rubbing hands with ethanol-based sanitizers should provide a formidable defense against infection from flu viruses, which can thrive and spread in saliva and mucus. But findings published this week in mSphere challenge that notion -- and suggest that there's room for improvement in this approach to hand hygiene.

The influenza A virus (IAV) remains infectious in wet mucus from infected patients, even after being exposed to an ethanol-based disinfectant (EBD) for two full minutes, report researchers at Kyoto Prefectural University of Medicine in Japan. Fully deactivating the virus, they found, required nearly four minutes of exposure to the EBD.

The secret to the viral survival was the thick consistency of sputum, the researchers found. The substance's thick hydrogel structure kept the ethanol from reaching and deactivating the IAV.

"The physical properties of mucus protect the virus from inactivation," said physician and molecular gastroenterologist Ryohei Hirose, Ph.D, MD., who led the study with Takaaki Nakaya, PhD, an infectious disease researcher at the same school. "Until the mucus has completely dried, infectious IAV can remain on the hands and fingers, even after appropriate antiseptic hand rubbing."

The study suggests that a splash of hand sanitizer, quickly applied, isn't sufficient to stop IAV. Health care providers should be particularly cautious: If they don't adequately inactivate the virus between patients, they could enable its spread, Hirose said.

The researchers first studied the physical properties of mucus and found -- as they predicted -- that ethanol spreads more slowly through the viscous substance than it does through saline. Then, in a clinical component, they analyzed sputum that had been collected from IAV-infected patients and dabbed on human fingers. (The goal, said Hirose, was to simulate situations in which medical staff could transmit the virus.) After two minutes of exposure to EBD, the IAV virus remained active in the mucus on the fingertips. By four minutes, however, the virus had been deactivated.

Previous studies have suggested that ethanol-based disinfectants, or EBDs, are effective against IAV. The new work challenges those conclusions. Hirose suspects he knows why: Most studies on EBDs test the disinfectants on mucus that has already dried. When he and his colleagues repeated their experiments using fully dried mucus, they found that hand rubbing inactivated the virus within 30 seconds. In addition, the fingertip test used by Hirose and his colleagues may not exactly replicate the effects of hand rubbing, which through convection might be more effective at spreading the EBD.

For flu prevention, both the Centers for Disease Control and Prevention and the World Health Organization recommend hand hygiene practices that include using EBDs for 15-30 seconds. That's not enough rubbing to prevent IAV transmission, said Hirose.

Read more at Science Daily

Learning to read boosts the visual brain

How does learning to read change our brain? Does reading take up brain space dedicated to seeing objects such as faces, tools or houses? In a functional brain imaging study, a research team compared literate and illiterate adults in India. Reading recycles a brain region that is already sensitive to evolutionarily older visual categories, enhancing rather than destroying sensitivity to other visual input.

Reading is a recent invention in the history of human culture -- too recent for dedicated brain networks to have evolved specifically for it. How, then, do we accomplish this remarkable feat? As we learn to read, a brain region known as the 'visual word form area' (VWFA) becomes sensitive to script (letters or characters). However, some have claimed that the development of this area takes up (and thus detrimentally affects) space that is otherwise available for processing culturally relevant objects such as faces, houses or tools.

An international research team led by Falk Huettig (MPI and Radboud University Nijmegen) and Alexis Hervais-Adelman (MPI and University of Zurich) set out to test the effect of reading on the brain's visual system. The team scanned the brains of over ninety adults living in a remote part of Northern India with varying degrees of literacy (from people unable to read to skilled readers), using functional Magnetic Resonance Imaging (fMRI). While in the scanner, participants saw sentences, letters, and other visual categories such as faces.

If learning to read leads to 'competition' with other visual areas in the brain, readers should have different brain activation patterns from non-readers -- and not just for letters, but also for faces, tools, or houses. 'Recycling' of brain networks when learning to read has previously been thought to negatively affect evolutionary old functions such as face processing. Huettig and Hervais-Adelman, however, hypothesised that reading, rather than negatively affecting brain responses to non-orthographic (non-letter) objects, may, conversely, result in increased brain responses to visual stimuli in general.

"When we learn to read, we exploit the brain's capacity to form category-selective patches in visual brain areas. These arise in the same cortical territory as specialisations for other categories that are important to people, such as faces and houses. A long-standing question has been whether learning to read is detrimental to those other categories, given that there is limited space in the brain," explains Alexis Hervais-Adelman.

Reading-induced recycling did not detrimentally affect brain areas for faces, houses, or tools -- neither in location nor size. Strikingly, the brain activation for letters and faces was more similar in readers than in non-readers, particularly in the left hemisphere (the left ventral temporal lobe).

Read more at Science Daily

Sep 17, 2019

Harnessing tomato jumping genes could help speed-breed drought-resistant crops

Once dismissed as 'junk DNA' that served no purpose, a family of 'jumping genes' found in tomatoes has the potential to accelerate crop breeding for traits such as improved drought resistance.

Researchers from the University of Cambridge's Sainsbury Laboratory (SLCU) and Department of Plant Sciences have discovered that drought stress triggers the activity of a family of jumping genes (Rider retrotransposons) previously known to contribute to fruit shape and colour in tomatoes. Their characterisation of Rider, published today in the journal PLOS Genetics, revealed that the Rider family is also present and potentially active in other crops, highlighting its potential as a source of new trait variations that could help plants better cope with more extreme conditions driven by our changing climate.

"Transposons carry huge potential for crop improvement. They are powerful drivers of trait diversity, and while we have been harnessing these traits to improve our crops for generations, we are now starting to understand the molecular mechanisms involved," said Dr Matthias Benoit, the paper's first author, formerly at SLCU.

Transposons, more commonly called jumping genes, are mobile snippets of DNA code that can copy themselves into new positions within the genome -- the genetic code of an organism. They can change, disrupt or amplify genes, or have no effect at all. Discovered in corn kernels by Nobel prize-winning scientist Barbara McClintock in the 1940s, only now are scientists realising that transposons are not junk at all but actually play an important role in the evolutionary process, and in altering gene expression and the physical characteristics of plants.

Using the jumping genes already present in plants to generate new characteristics would be a significant leap forward from traditional breeding techniques, making it possible to rapidly generate new traits in crops that have traditionally been bred to produce uniform shapes, colours and sizes to make harvesting more efficient and maximise yield. They would enable production of an enormous diversity of new traits, which could then be refined and optimised by gene targeting technologies.

"In a large population size, such as a tomato field, in which transposons are activated in each individual we would expect to see an enormous diversity of new traits. By controlling this 'random mutation' process within the plant we can accelerate this process to generate new phenotypes that we could not even imagine," said Dr Hajk Drost at SLCU, a co-author of the paper.

Today's gene targeting technologies are very powerful, but often require some functional understanding of the underlying gene to yield useful results and usually only target one or a few genes. Transposon activity is a native tool already present within the plant, which can be harnessed to generate new phenotypes or resistances and complement gene targeting efforts. Using transposons offers a transgene-free method of breeding that acknowledges the current EU legislation on Genetically Modified Organisms.

The work also revealed that Rider is present in several plant species, including economically important crops such as rapeseed, beetroot and quinoa. This wide abundance encourages further investigations into how it can be activated in a controlled way, or reactivated or re-introduced into plants that currently have mute Rider elements so that their potential can be regained. Such an approach has the potential to significantly reduce breeding time compared to traditional methods.

Read more at Science Daily