Dec 15, 2018

Physical activity in the evening does not cause sleep problems

Moderate-intensity exercise shortly before bedtime does not negatively affect sleep; at most, vigorous exercise close to bedtime might have a negative effect.
Even among sleep researchers, it is a widely held belief that sleep quality can be improved by avoiding exercise in the evening. However, as researchers from the Institute of Human Movement Sciences and Sport at ETH Zurich have demonstrated, this is not generally true.

The scientists combed through the literature on the subject and analysed all 23 studies that met their quality requirements. They concluded that doing exercise in the four hours before going to bed does not have a negative effect on sleep. "If doing sport in the evening has any effect on sleep quality at all, it's rather a positive effect, albeit only a mild one," says Christina Spengler, head of the Exercise Physiology Lab at ETH Zurich.

By combining the data from the different studies, the researchers showed that in the night after study participants had done some sport in the evening, they spent 21.2 percent of their sleeping time in deep sleep. Following an evening without exercise, the average figure was 19.9 percent. While the difference is small, it is statistically significant. Deep sleep phases are especially important for physical recovery.

Intensive training late in the evening: an exception to the rule

Vigorous training within an hour before bedtime is an exception to the rule. According to this analysis, it is the only type of evening exercise that may have a negative effect on sleep. "However, this preliminary observation is based on just one study," Spengler says.

"As a rule of thumb, vigorous training is defined as training in which a person is unable to talk. Moderate training is physical activity of an intensity high enough that a person would no longer be able to sing, but they could speak," Spengler says. One example of vigorous training is the kind of high-intensity interval training that competitive athletes often perform. In many cases, though, a longer endurance run or a longer ride on a racing bike would fall into the moderate training category.

As the analysis showed, it took study participants who completed an intensive training session shortly before bedtime longer to fall asleep. The study also provided insight into why this is the case: the test subjects were not able to recover sufficiently in the hour before they went to bed. Their hearts were still beating more than 20 beats per minute faster than their resting heart rate.

Possible sleep problems are no excuse

According to the official recommendations of sport physicians, people should do at least 150 minutes of moderate exercise each week. Many may ask themselves: should I exercise in the evening if I didn't have time during the day, or will that have a negative effect on my sleep? "People can do exercise in the evening without hesitation. The data shows that moderate exercise in the evening is no problem at all," says Jan Stutz, a doctoral student in Spengler's research group and lead author of the analysis, which was published in the journal Sports Medicine. Moderate exercise did not cause sleep problems in any of the studies examined, not even when the training session ended just 30 minutes before bedtime. "However, vigorous training or competitions should be scheduled earlier in the day, if possible," Stutz says.

Stutz and Spengler point out that they examined average values over the course of their analysis, which made only general statements possible. "Not everyone reacts to exercise in the same way, and people should keep listening to their bodies. If they notice they are having problems falling asleep after doing sport, they should try to work out a little earlier," Stutz says.

Read more at Science Daily

Biologists turn eavesdropping viruses into bacterial assassins

These E. coli bacteria harbor proteins from the eavesdropping virus. One of the viral proteins has been tagged with a red marker. When the virus is in the 'stay' mode (left), the bacteria grow and the red protein is spread throughout each cell. When the virus overhears that its hosts have achieved a quorum (right), the kill-stay decision protein is flipped to 'kill' mode. A second viral protein binds the red protein and sends it to the cell poles (yellow dots). All the cells in the right panel will soon die.
Princeton molecular biologist Bonnie Bassler and graduate student Justin Silpe have identified a virus, VP882, that can listen in on bacterial conversations -- and then, in a twist like something out of a spy novel, they found a way to exploit that ability, turning the virus against bacterial pathogens such as E. coli and the cholera bacterium.

"The idea that a virus is detecting a molecule that bacteria use for communication -- that is brand-new," said Bassler, the Squibb Professor of Molecular Biology. "Justin found this first naturally occurring case, and then he re-engineered that virus so that he can provide any sensory input he chooses, rather than the communication molecule, and then the virus kills on demand." Their paper will appear in the Jan. 10 issue of the journal Cell.

A virus can only ever make one decision, Bassler said: Stay in the host or kill the host. That is, either remain under the radar inside its host or activate the kill sequence that creates hundreds or thousands of offspring that burst out, killing the current host and launching themselves toward new hosts.

There's an inherent risk in choosing the kill option: "If there are no other hosts nearby, then the virus and all its kin just died," she said. VP882 has found a way to take the risk out of the decision. It listens for the bacteria to announce that they are in a crowd, upping the chances that when the virus kills, the released viruses immediately encounter new hosts. "It's brilliant and insidious!" said Bassler.

"This paper provides an entirely new perspective on the dynamic relationship between viruses and their hosts," said Graham Hatfull, the Eberly Family Professor of Biotechnology at the University of Pittsburgh, who was not involved in this research. "This study tells us for the first time … that when a phage is in the lysogenic [stay] state, it is not 'fast asleep', but rather catnapping, with one eye open and ears alert, ready to respond when it 'hears' signals that cells are getting ready to respond to changes in their environment."

Bassler, who is also the chair of molecular biology and a Howard Hughes Medical Institute Investigator, had discovered years before that bacteria can communicate and sense one another's presence and that they wait to establish a quorum before they act in concert. But she had never imagined that a virus could eavesdrop on this quorum-sensing communication.

"The bugs are getting bugged," she said with a laugh. "Plus, Justin's work shows that these quorum-sensing molecules are conveying information across kingdom boundaries." Viruses are not in the same kingdom as bacteria -- in fact, they are not in any kingdom, because they are not technically alive. But for such radically different organisms to be able to detect and interpret each other's signals is simply mind-boggling, she said. It's not like enemy nations spying on each other, or even like a human communicating with a dog -- those at least are members of the same kingdom (animal) and phylum (vertebrate).

After finding the first evidence of this cross-kingdom eavesdropping, Silpe started looking for more -- and found it.

"He just started a brand-new field," Bassler said. "The idea that there's only one example of this cross-domain communication made no sense to us. Justin discovered the first case, and then, with his discovery in hand, he went looking more deeply and he found a whole set of viruses that harbor similar capabilities. They may not all be listening in to this quorum-sensing information, but it is clear that these viruses can listen in to their hosts' information and then use that information to kill them."

Silpe said he was drawn to work in Bassler's lab because of her research on bacterial communication. "Communication seems like such an evolved trait," he said. "To hear that bacteria can do it -- her discovery -- it was just mind-blowing that organisms you think of as so primitive could actually be capable of communication. And viruses are even simpler than bacteria. The one I studied, for example, only has about 70 genes. It's really remarkable that it devotes one of those genes to quorum sensing. Communication is clearly not something higher organisms created."

Once Silpe demonstrated that VP882 was eavesdropping, he began experimenting with feeding it misinformation to trick the virus into killing on command -- to turn the predator into an assassin.

VP882 is not the first virus used as an antimicrobial treatment. Viruses that prey on bacteria are called "phages," and "phage therapy" -- targeting a bacterial disease with a phage -- is a known medical strategy. But VP882 is the first phage that uses eavesdropping to know when it is optimal to kill its targets, making Silpe's experiments with salmonella and other disease-causing bacteria the first time that phage therapy has used trans-kingdom communication.

In addition, this virus holds enormous promise as a therapeutic tool because it does not act like a typical virus, Bassler said. Most viruses can only infect a very specific type of cell. Flu viruses, for example, only infect lung cells; HIV only targets specific immune-system cells. But the virus VP882 has an "exceptionally broad host range," Bassler said. So far, Silpe has only performed "proof of principle" tests with three unrelated bacteria: Vibrio cholerae (cholera), salmonella and E. coli. Those bacteria have evolved separately for hundreds of millions of years, so the fact that they are all susceptible to this bacterial assassin suggests that many, many more are as well.

Hatfull is also optimistic about the utility of this re-engineered virus for antibiotic-resistant bacteria. "Antibiotic resistance is clearly a major global health threat, and there is a clear and evident demand for new strategies and approaches to this problem," he said. "Although we have admittedly found it tricky even to reach 'first base' with basic therapeutic use of naturally occurring phages, we can envisage the possibility of a 'home run' if we can engineer phages for therapeutic use that have very specific targeting." These viral assassins might even slow down the emergence of antibiotic resistant strains, he said.

Bassler gives all credit for the discovery to Silpe. After identifying a new quorum-sensing gene in V. cholerae, he made the choice to search genome databases for that gene. It showed up in some cholera-related strains and exactly one virus. Bassler wondered if that could be a meaningless data artifact, but Silpe wanted to get a specimen of the virus and run experiments.

"He was gung-ho, and I thought, 'What the heck, give this kid a little rope. If this isn't working soon, we can always move on,'" she said. "His was a crazy idea, because there's never, ever been evidence of a virus listening in on bacterial host information to decide whether to stay put or kill. But this lab was built on crazy ideas, like bacteria talking to each other, and we've kind of made a living out of it. … Of course, that's the beauty of science, and science at Princeton, that you have enough resources to play those hunches out, and see if there's a 'there' there. And this time, there was a big 'there' there."

Read more at Science Daily

Dec 14, 2018

How complexity science can quickly detect climate record anomalies

This is a segment of the West Antarctic Ice Core.
The history of our climate is written in ice. Reading it is a matter of deciphering the complex signals pulled from tens of thousands of years of accumulated isotopes frozen miles below the surface of Antarctica.

When making sense of the massive amount of information packed into an ice core, scientists face a forensic challenge: how best to separate the useful information from the corrupt.

A new paper published in the journal Entropy shows how tools from information theory, a branch of complexity science, can address this challenge by quickly homing in on portions of the data that require further investigation.

"With this kind of data, we have limited opportunities to get it right," says Joshua Garland, a mathematician at the Santa Fe Institute who works with 68,000 years of data from the West Antarctic Ice Sheet Divide ice Core. "Extracting the ice and processing the data takes hundreds of people, and tons of processing and analysis. Because of resource constraints, replicate cores are rare. "

By the time Garland and his team got ahold of the data, more than 10 years had passed from the initial drilling of the ice core to the publishing of the dataset it contained. The two-mile ice core was extracted over five seasons, from 2007 to 2012, by teams from multiple universities funded by the National Science Foundation. From the field camp in West Antarctica, the core was packaged, then shipped to the National Science Foundation Ice Core Facility in Colorado, and finally to the University of Colorado. At the Stable Isotope Lab at the Institute of Arctic and Alpine Research, a state-of-the-art processing facility helped scientists pull water isotope records from the ice.

The result is a highly resolved, complex dataset. Compared to previous ice core data, which allowed for analysis every 5 centimeters, the WAIS Divide core permits analysis at millimeter resolution.

"One of the exciting thing about ice core research in the last decade is we've developed these lab systems to analyze the ice in high resolution," says Tyler Jones, a paleoclimatologist at the University of Colorado Boulder. "Quite a while back we were limited in our ability to analyze climate because we couldn't get enough data points, or if we could it would take too long. These new techniques have given us millions of data points, which is rather difficult to manage and interpret without some new advances in our [data] processing."

Garland notes that in previous cores, decades, even centuries, were aggregated into a single point. The WAIS data, by contrast, sometimes gives more than forty data points per year. But as scientists move to analyze the data at shorter time scales, even small anomalies can be problematic.

"As fine-grained data becomes available, fine-grained analyses can be performed," Garland notes. "But it also makes the analysis susceptible to fine-grained anomalies."

To quickly identify which anomalies require further investigation, the team uses information theoretic techniques to measure how much complexity appears at each point in the time sequence. A sudden spike in the complexity could mean that there was either a major, unexpected climate event, like a super volcano, or that there was an issue in the data or the data processing pipeline.
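To make the idea concrete, here is a minimal sketch of one common information-theoretic screen of this kind: compute an entropy measure over a sliding window and flag windows whose values spike. The permutation-entropy estimator, window sizes, threshold, and synthetic data below are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np
from math import factorial

def permutation_entropy(window, order=3):
    """Normalised entropy of ordinal patterns (Bandt-Pompe) in a window."""
    counts = {}
    for i in range(len(window) - order + 1):
        pattern = tuple(np.argsort(window[i:i + order]))
        counts[pattern] = counts.get(pattern, 0) + 1
    probs = np.array(list(counts.values()), dtype=float)
    probs /= probs.sum()
    # Normalise by log(order!) so the value lies in [0, 1].
    return -np.sum(probs * np.log(probs)) / np.log(factorial(order))

def flag_anomalies(series, window=256, step=16, z_thresh=3.0):
    """Entropy profile over sliding windows; flag windows that spike."""
    starts = range(0, len(series) - window, step)
    ent = np.array([permutation_entropy(series[s:s + window]) for s in starts])
    z = (ent - ent.mean()) / ent.std()
    return [(s, float(e)) for s, e, zi in zip(starts, ent, z) if abs(zi) > z_thresh]

# Synthetic record: smooth quasi-periodic signal with an injected glitch,
# standing in for a corrupted stretch of isotope data.
rng = np.random.default_rng(42)
signal = np.sin(np.linspace(0.0, 500.0, 10000)) + 0.002 * rng.standard_normal(10000)
signal[6000:6050] = rng.standard_normal(50)   # simulated processing error
print(flag_anomalies(signal))                 # flags windows overlapping index ~6000
```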

"This kind of anomaly would be invisible without a highly detailed, fine-grained, point-by-point analysis of the data, which would take a human expert many months to perform," says Elizabeth Bradley, a computer scientist at the University of Colorado Boulder and External Professor at the Santa Fe Institute. "Even though information theory can't tell us the underlying cause of an anomaly, we can use these techniques to quickly flag the segments of the data set that should be investigated by paleoclimate experts."

She compares the ice core dataset to a Google search that returns a million pages. "It's not that you couldn't go through those million pages," Bradley says. "But imagine if you had a technique that could point you toward the ones that were potentially meaningful?" When analyzing large, real-world datasets, information theory can spot differences in the data that signal either a processing error or a significant climate event.

In their Entropy paper, the scientists detail how they used information theory to identify and repair a problematic stretch of data from the original ice core. Their investigation eventually prompted a resampling of the archival ice core -- the longest resampling of a high-resolution ice core to date. When that portion of the ice was resampled and reprocessed, the team was able to resolve an anomalous spike in entropy from roughly 5,000 years ago.

"It's vitally important to get this area right," Garland notes, "because it contains climate information from the dawn of human civilization."

Read more at Science Daily

A young star caught forming like a planet

This is an artist's impression of the disc of dust and gas surrounding the massive protostar MM 1a, with its companion MM 1b forming in the outer regions.
Astronomers have captured one of the most detailed views of a young star taken to date, and revealed an unexpected companion in orbit around it.

While observing the young star, astronomers led by Dr John Ilee from the University of Leeds discovered it was not in fact one star, but two.

The main object, referred to as MM 1a, is a young massive star surrounded by a rotating disc of gas and dust that was the focus of the scientists' original investigation.

A faint object, MM 1b, was detected just beyond the disc in orbit around MM 1a. The team believe this is one of the first examples of a "fragmented" disc to be detected around a massive young star.

"Stars form within large clouds of gas and dust in interstellar space," said Dr Ilee, from the School of Physics and Astronomy at Leeds.

"When these clouds collapse under gravity, they begin to rotate faster, forming a disc around them. In low mass stars like our Sun, it is in these discs that planets can form."

"In this case, the star and disc we have observed is so massive that, rather than witnessing a planet forming in the disc, we are seeing another star being born."

By measuring the amount of radiation emitted by the dust, and subtle shifts in the frequency of light emitted by the gas, the researchers were able to calculate the mass of MM 1a and MM 1b.

Their work, published today in the Astrophysical Journal Letters, found that MM 1a has a mass around 40 times that of our Sun. The smaller orbiting star MM 1b was calculated to weigh less than half the mass of our Sun.
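The underlying estimate is Keplerian: gas orbiting at radius r with speed v (inferred from the Doppler shifts mentioned above) implies an enclosed mass of roughly M ≈ v^2 r / G. A back-of-envelope sketch, with v and r invented purely to illustrate the scale of the reported ~40 solar masses:

```python
# Hedged back-of-envelope: for disc gas on a circular orbit of radius r moving
# at speed v (inferred from Doppler shifts), the enclosed mass is M ~ v**2 * r / G.
# The v and r values below are illustrative assumptions, not measured values.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
AU = 1.496e11        # astronomical unit, m

v = 4.2e3            # assumed orbital speed of disc gas, m/s
r = 2000 * AU        # assumed orbital radius within the disc, m

mass = v**2 * r / G
print(f"enclosed mass ~ {mass / M_SUN:.0f} solar masses")  # ~40 with these inputs
```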

"Many older massive stars are found with nearby companions," added Dr Ilee. "But binary stars are often very equal in mass, and so likely formed together as siblings. Finding a young binary system with a mass ratio of 80:1 is very unusual, and suggests an entirely different formation process for both objects."

The favoured formation process for MM 1b occurs in the outer regions of cold, massive discs. These "gravitationally unstable" discs are unable to hold themselves up against the pull of their own gravity, collapsing into one -- or more -- fragments.

Dr Duncan Forgan, a co-author from the Centre for Exoplanet Science at the University of St Andrews, added: "I've spent most of my career simulating this process to form giant planets around stars like our Sun. To actually see it forming something as large as a star is really exciting."

The researchers note that newly-discovered young star MM 1b could also be surrounded by its own circumstellar disc, which may have the potential to form planets of its own -- but it will need to be quick.

Dr Ilee added: "Stars as massive as MM 1a only live for around a million years before exploding as powerful supernovae, so while MM 1b may have the potential to form its own planetary system in the future, it won't be around for long."

The astronomers made this surprising discovery by using a unique new instrument situated high in the Chilean desert -- the Atacama Large Millimetre/submillimetre Array (ALMA).

Using the 66 individual dishes of ALMA together in a process called interferometry, the astronomers were able to simulate the power of a single telescope nearly 4 km across, allowing them to image the material surrounding the young stars for the first time.
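The "nearly 4 km across" figure matters because an interferometer's sharpness is set by its longest baseline: the diffraction-limited resolution is roughly θ ≈ λ / B. A rough check, assuming a typical ALMA observing wavelength of about 1.3 mm (the article does not state the wavelength actually used):

```python
import math

# An interferometer resolves like a single dish the size of its longest
# baseline: theta ~ lambda / B. The wavelength is an assumption; the ~4 km
# baseline comes from the article.
wavelength = 1.3e-3              # metres (assumed ALMA band)
baseline = 4.0e3                 # metres

theta_rad = wavelength / baseline
theta_arcsec = math.degrees(theta_rad) * 3600.0
print(f"angular resolution ~ {theta_arcsec:.2f} arcseconds")  # ~0.07"
```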

Read more at Science Daily

Can stem cells help a diseased heart heal itself? Researcher achieves important milestone

A team of Rutgers scientists, including Leonard Lee and Shaohua Li, have taken an important step toward the goal of making diseased hearts heal themselves -- an advance that could reduce the need for bypass surgery, heart transplants or artificial pumping devices.

The study, recently published in Frontiers in Cell and Developmental Biology, involved removing connective tissue cells from a human heart, "reverse-engineering" them into heart stem cells, then "re-engineering" them into heart muscle cells.

The Rutgers team's true breakthrough, however, is that the newly created cardiac muscle cells clumped together into a single unit that visibly pumps under the microscope.

Senior author Leonard Y. Lee, chair of the Department of Surgery at Rutgers Robert Wood Johnson Medical School, said cardiac cells made in this way don't normally come together and beat as one. His team succeeded in making this happen by over-expressing a protein in the cells called CREG.

According to Lee, fibroblasts -- cells found in connective tissue -- were isolated from the heart tissue and reverse-engineered, or transformed, into stem cells. This was done so that when the CREG protein was over-expressed, the stem cells would differentiate into cardiac cells.

"Heart failure has reached epidemic proportions. Right now, the only option to treat it is surgery, transplant, or connecting the patient with a blood-pumping machine," Lee said. "But transplantable hearts are in short supply and mechanical devices limit the patient's quality of life. So, we are working for ways to help hearts heal themselves."

Though still far off, Lee's ultimate goal is to be able to remove small amounts of a patient's native heart tissue, use CREG to convert the tissue into cardiac muscles that will work together cohesively, and re-introduce them into the patient's heart allowing it to heal itself.

More than six million Americans are living with heart failure, according to the American Heart Association. Most people hear the term "heart failure" and think it means the heart is no longer working at all, but it actually means that the heart is not pumping as well as it should be. People with heart failure often experience fatigue and shortness of breath, and have difficulty with everyday activities such as walking and climbing stairs.

From Science Daily

Scientists overhaul corn domestication story with multidisciplinary analysis

Varieties of maize found near Cusco and Machu Picchu at Salineras de Maras in the Sacred Valley of the Incas, Peru, June 2007.
Smithsonian scientists and collaborators are revising the history of one of the world's most important crops. Drawing on genetic and archaeological evidence, researchers have found that a predecessor of today's corn plants, still bearing many features of its wild ancestor, was likely brought to South America from Mexico more than 6,500 years ago. Farmers in Mexico and the southwestern Amazon continued to improve the crop over thousands of years until it was fully domesticated in each region.

The findings, reported Dec. 13 in the journal Science, come from a multidisciplinary, international collaboration between scientists at 14 institutions. Their account deepens researchers' understanding of the long, shared history between humans and maize, which is critical for managing our fragile relationships with the plants that feed us, said Logan Kistler, curator of archaeogenomics and archaeobotany at the Smithsonian's National Museum of Natural History and lead author of the study. "It's the long-term evolutionary history of domesticated plants that makes them fit for the human environment today," he said. "Understanding that history gives us tools for assessing the future of corn as we continue to drastically reshape our global environment and increase our agricultural demands on land around the globe."

The history of maize begins with its wild ancestor, teosinte. Teosinte bears little resemblance to the corn eaten today: Its cobs are tiny and its few kernels are protected by a nearly impenetrable outer casing. In fact, Kistler said, it's not clear why people bothered with it at all. Over time, however, as early farmers selected for desirable traits, the descendants of the wild plant developed larger cobs and more tender, plentiful kernels, eventually becoming the staple crop that maize is today.

For years, geneticists and archaeologists have deduced that teosinte's transformation into maize began in the tropical lowlands of what is now southern Mexico about 9,000 years ago. The teosinte that grows wild in this region today is more genetically similar to maize than teosinte elsewhere in Mexico and Central America -- though all remain separated from the domesticated crop by hundreds of genes.

In the southwest Amazon and coastal Peru, microscopic pollen and other resilient plant remains found in ancient sediments indicate a history of fully domesticated maize use by around 6,500 years ago, and researchers initially reasoned that the fully domesticated plant must have been carried there from the north as people migrated south and across the Americas.

"As far as we could tell before conducting our study, it looked like there was a single domestication event in Mexico and that people then spread it further south after domestication had taken place," Kistler said.

But a few years ago, when geneticists sequenced the DNA of 5,000-year-old maize found in Mexico, the story got more complicated. The genetic results showed that what they had found was a proto-corn -- its genes were a mixture of those found in teosinte and those of the domesticated plant. According to the ancient DNA, that plant lacked teosinte's tough kernel casings, but this proto-corn had not yet acquired other traits that eventually made maize into a practical food crop.

"But you've got continuous cultivation of maize in the southwest Amazon from 6,500 years ago all the way up through European colonization," Kistler said. "How can you have this flourishing, fully domesticated maize complex in the southwest Amazon, and meanwhile, near the domestication center in Mexico the domestication process is still ongoing?"

To solve this mystery, Kistler's team reconstructed the plant's evolutionary history through a genetic comparison of more than 100 varieties of modern maize that grow throughout the Americas, including 40 newly sequenced varieties -- many from the eastern lowlands of South America, a region underrepresented in previous studies. Many of these varieties were collected in collaboration with indigenous and traditional farmers over the past 60 years and are curated in the genebank at Embrapa, the Brazilian government's agriculture enterprise. Fabio Freitas, an ethnobotanist and farm conservationist at Embrapa, said that his work conserving traditional cultivated plants with indigenous groups along the southern border of the Amazon forest helped guide the discussion of how maize diffusion may have played out in the past.

The genomes of 11 ancient plants, including nine newly sequenced archaeological samples, were also part of the analysis. The team mapped out the genetic relationships between the plants and discovered several distinct lineages, each with its own degree of similarity to their shared ancestor, teosinte. In other words, Kistler explained, the final stages of maize's domestication happened more than once in more than one place.

"This work fundamentally changes our understanding of maize origins," said study co-author Robin Allaby from the School of Life Sciences at the University of Warwick. "It shows that maize did not have a simple origin story, that it did not really form the crop as we know it until it left its homeland."

At first, Kistler said, the genetic evidence was puzzling. But as he and his collaborators began to integrate what each had learned about the history of South America, a picture of how maize may have spread across the continent emerged.

Read more at Science Daily

HIV vaccine protects non-human primates from infection

Researchers are making progress on a potential HIV vaccine.
For more than 20 years, scientists at Scripps Research have chipped away at the challenges of designing an HIV vaccine. Now new research, published in Immunity, shows that their experimental vaccine strategy works in non-human primates.

The new study shows that rhesus macaque monkeys can be prompted to produce neutralizing antibodies against one strain of HIV that resembles the resilient viral form that most commonly infects people, called a Tier 2 virus. The research also provides the first-ever estimate of vaccine-induced neutralizing antibody levels needed to protect against HIV.

"We found that neutralizing antibodies that have been induced by vaccination can protect animals against viruses that look a lot like real-world HIV," says Dennis Burton, PhD, chair of Scripps Research's Department of Immunology and Microbiology, and scientific director of the International AIDS Vaccine Initiative (IAVI) Neutralizing Antibody Center and of the National Institutes of Health's Center for HIV/AIDS Vaccine Immunology and Immunogen Discovery (CHAVI-ID).

Although the vaccine is far from human clinical trials, the study provides proof-of-concept for the HIV vaccine strategy Burton and his colleagues have been developing since the 1990s.

The goal of this strategy is to identify the rare, vulnerable areas on HIV and teach the immune system to make antibodies to attack those areas. Studies led by Scripps Research scientists have shown that the body needs to produce neutralizing antibodies that bind to the virus's outer envelope protein trimer. To support this idea, scientists found that they could protect animal models from HIV by injecting them with neutralizing antibodies that were produced in the lab.

The challenge then was to get animals to make the neutralizing antibodies themselves. To do this, scientists needed to expose the immune system to the envelope protein trimer, effectively training it how to spot this target and produce the right antibodies against it.

But there was a big problem. The HIV envelope trimer is unstable and tends to fall apart when isolated. How could scientists use it as an ingredient in a vaccine? A breakthrough came in 2013, when scientists genetically engineered a more stable trimer, known as SOSIP.

"For the first time, we had something that looked pretty much like the HIV envelope protein trimer," says Matthias Pauthner, PhD, a research associate at Scripps Research and co-first author of the new study.

The scientists quickly moved forward with designing an experimental HIV vaccine that contained this stable SOSIP trimer. Their goal with the new study was to see if this kind of vaccine could actually protect animals from infection.

The team tested the vaccine in two groups of rhesus macaques. A previous study using the same vaccine had shown that some immunized monkeys naturally developed low neutralizing antibody titers (antibody levels), while others developed high titers following vaccination. From this study, the researchers selected and re-vaccinated six low-titer monkeys and six high-titer monkeys. They also studied 12 unimmunized primates as a control group.

The primates were then exposed to a form of the virus called SHIV, an engineered simian version of HIV that contains the same envelope trimer as the human virus.

This particular strain of the virus is known as a Tier 2 virus because it has been shown to be hard to neutralize, much like the forms of HIV circulating in the human population.

The researchers found that the vaccination worked in the high titer animals. The monkeys could produce sufficient levels of neutralizing antibodies against the envelope protein trimer to prevent infection.

"Since HIV emerged, this is the first evidence we have of antibody-based protection from a Tier 2 virus following vaccination," says Pauthner. "One question now is how can we get such high titers into every animal?"

The focus on titers became especially important as the researchers saw HIV protection wane as the high titers fell in the weeks and months following vaccination. In tracking the titers while continuously exposing animals to the virus, the researchers determined the titers needed to keep HIV at bay.

Importantly, the study also showed that neutralizing antibodies, but not other aspects of the immune system, were the key to stopping the virus. Pauthner says this is an important finding, since other labs have focused on the potential for T cells and other immune system defenses to block infection.

Going forward, the scientists are looking to improve the vaccine design for human trials and keep titers high. "There are many immunological tricks that can be explored to make immunity last longer," says Pauthner.

Read more at Science Daily

Dec 13, 2018

First-ever look at complete skeleton of Thylacoleo, Australia's extinct 'marsupial lion'

Thylacoleo carnifex reconstructions. (A) Reconstruction of the skeleton of T. carnifex. (B) Body outline based on examination of musculature evident in x-ray imaging of marsupials (after Vogelnest and Allen).
Thylacoleo carnifex, the "marsupial lion" of Pleistocene Australia, was an adept hunter that got around with the help of a strong tail, according to a study released December 12, 2018 in the open-access journal PLOS ONE by Roderick T. Wells of Flinders University and Aaron B. Camens of the South Australian Museum, Adelaide. These insights come after newly-discovered remains, including one nearly complete fossil specimen, allowed these researchers to reconstruct this animal's entire skeleton for the first time.

A marsupial predator with an estimated weight of over 100kg, Thylacoleo was unlike any living animal, and paleontologists have long tried to interpret its lifestyle from incomplete remains. The new fossils, discovered in Komatsu Cave in Naracoorte and Flight Star Cave in the Nullarbor Plain, include the first known remains of the tail and collarbone of this animal. The authors used this new information to re-assess the biomechanics of Thylacoleo, and by comparing its anatomy to living marsupials, reach new conclusions about the biology and behavior of the "marsupial lion."

The tail of Thylacoleo appears to have been stiff and heavily-muscled, probably allowing it to be used along with the hind limbs as a "tripod" to brace the body while freeing up the forelimbs for handling food or climbing, as many living marsupials do. The analysis suggests that Thylacoleo had a rigid lower back and powerful forelimbs anchored by strong collarbones, likely making it poorly suited for chasing prey, but well-adapted for ambush hunting and/or scavenging. These features also add to a list of evidence that Thylacoleo was an adept climber, perhaps of trees or steep-walled caverns. Among living marsupials, the anatomy of Thylacoleo appears most similar to the Tasmanian devil, a small carnivore that exhibits many of these inferred behaviors.

The authors add: "The extinct marsupial lion, Thylacoleo carnifex has intrigued scientists since it was first described in 1859 from skull and jaw fragments collected at Lake Colongulac in Victoria Australia and sent to Sir Richard Owen at the British Museum. Although Australia's largest marsupial carnivore it retains many features indicative of its diprotodont herbivore ancestry and its niche has been a matter of considerable debate for more than 150yrs. Recent cave finds have for the first time enabled a description and reconstruction of the complete skeleton including the hitherto unrecognised tail and clavicles. In this study, Wells and Camens compare the Thylacoleo skeleton with those of range of extant Australian arboreal and terrestrial marsupials in which behaviour and locomotion is well documented. They conclude that the nearest structural and functional analogue to Thylacoleo is to be found in the unrelated and much smaller Tasmanian Devil, Sarcophilus harrisii, a scavenger /hunter. They draw attention to the prevalence of all age classes within individual cave deposits as suggestive of a high degree of sociality. Those ancestral features Thylacoleo shares with arboreal forms are equally well suited to climbing or grasping a prey. They conclude that Thylacoleo is a scavenger, ambush predator of large prey."

From Science Daily

Early animals: Death near the shoreline, not life on land

Close-up of a looping millipede death-trail.
Our understanding of when the very first animals started living on land is helped by identifying trace fossils -- the tracks and trails left by ancient animals -- in sedimentary rocks that were deposited on the continents.

Geoscientists Anthony P. Shillito and Neil S. Davies of the University of Cambridge studied the site of what has widely been accepted as the earliest set of non-marine trackways, in Ordovician (ca. 455 million-year-old) strata from the Lake District, England.

What they discovered is that the trackways occur within volcanic ash that settled under water, and not within freshwater lake and sub-aerial sands (as previously thought). This means that the site is not the oldest evidence for animal communities on land, but instead "is actually a remarkable example of a 'prehistoric Pompeii'," says Shillito -- a suite of rocks that preserve trails made by distressed and dying millipede-like arthropods as they were overcome by ash from volcanic events.

Shillito and Davies directed their research at this site in particular because it seemed unusual -- at every other known trackway site in the world the evidence for when animals came onto land dates to the latest Silurian (ca. 420 million years ago), so something about the Borrowdale site didn't seem right. Further investigation proved that this was the case. In the course of their study, they found 121 new millipede trackways, all within volcanic ash with evidence for underwater or shoreline deposition.

Volcanic ash is known to cause mass death in some modern arthropod communities, particularly in water, because the ash particles are so tiny that they can get inside arthropod exoskeletons and stick to their breathing and digestive apparatus. Shillito and Davies noticed that most of the trails were extremely tightly looping -- a feature commonly associated with "death dances" in modern and ancient arthropods.

This study, published in Geology, overturns what is known about the earliest life on land and casts new light on one of the key evolutionary events in the history of life on Earth. Shillito notes, "It reveals how even surprising events can be preserved in the ancient rock record, but -- by removing the 'earliest' outlier of evidence -- suggests that the invasion of the continents happened globally at the same time."

Read more at Science Daily

Organic food worse for the climate

Yields per hectare are significantly lower in organic farming, which, according to the study, leads to much greater indirect carbon dioxide emissions from deforestation. Although direct emissions from organic agriculture are often lower -- due to less use of fossil energy, among other things -- the overall climate footprint is definitely greater than for conventionally farmed foods.
Organically farmed food has a bigger climate impact than conventionally farmed food, due to the greater areas of land required. This is the finding of a new international study involving Chalmers University of Technology, Sweden, published in the journal Nature.

The researchers developed a new method for assessing the climate impact from land-use, and used this, along with other methods, to compare organic and conventional food production. The results show that organic food can result in much greater emissions.

"Our study shows that organic peas, farmed in Sweden, have around a 50 percent bigger climate impact than conventionally farmed peas. For some foodstuffs, there is an even bigger difference -- for example, with organic Swedish winter wheat the difference is closer to 70 percent," says Stefan Wirsenius, an associate professor from Chalmers, and one of those responsible for the study.

The reason why organic food is so much worse for the climate is that the yields per hectare are much lower, primarily because mineral fertilisers are not used. To produce the same amount of organic food, you therefore need a much bigger area of land.

The ground-breaking aspect of the new study is the conclusion that this difference in land usage results in organic food causing a much larger climate impact.

"The greater land-use in organic farming leads indirectly to higher carbon dioxide emissions, thanks to deforestation," explains Stefan Wirsenius. "The world's food production is governed by international trade, so how we farm in Sweden influences deforestation in the tropics. If we use more land for the same amount of food, we contribute indirectly to bigger deforestation elsewhere in the world."

Even organic meat and dairy products are -- from a climate point of view -- worse than their conventionally produced equivalents, claims Stefan Wirsenius.

"Because organic meat and milk production uses organic feed-stock, it also requires more land than conventional production. This means that the findings on organic wheat and peas in principle also apply to meat and milk products. We have not done any specific calculations on meat and milk, however, and have no concrete examples of this in the article," he explains.

A new metric: Carbon Opportunity Cost

The researchers used a new metric, which they call "Carbon Opportunity Cost," to evaluate the effect of greater land-use contributing to higher carbon dioxide emissions from deforestation. This metric takes into account the amount of carbon that is stored in forests, and thus released as carbon dioxide as an effect of deforestation. The study is among the first in the world to make use of this metric.
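In spirit, the metric charges each kilogram of food for the forest carbon that its farmland could otherwise hold. A toy sketch of that accounting, with every number invented for illustration rather than taken from the Nature study:

```python
# Toy "carbon opportunity cost" accounting: charge each kilogram of crop for
# the forest carbon its farmland could otherwise store. All parameters below
# are illustrative assumptions, not the study's values.
FOREST_CARBON_T_PER_HA = 100.0   # tonnes of carbon stored per hectare of forest
AMORTISE_YEARS = 30.0            # years over which the land-use cost is spread
C_TO_CO2 = 44.0 / 12.0           # kg of CO2 per kg of carbon

def land_use_co2_per_kg(yield_t_per_ha):
    """kg of CO2 attributed to each kg of crop via the land it occupies."""
    hectares_per_kg = 1.0 / (yield_t_per_ha * 1000.0)
    forgone_c_per_ha_yr = FOREST_CARBON_T_PER_HA / AMORTISE_YEARS
    return hectares_per_kg * forgone_c_per_ha_yr * 1000.0 * C_TO_CO2

conventional = land_use_co2_per_kg(4.0)   # assumed conventional yield, t/ha
organic = land_use_co2_per_kg(2.7)        # assumed lower organic yield, t/ha
print(f"conventional: {conventional:.1f} kg CO2 per kg of crop")
print(f"organic: {organic:.1f} kg CO2 per kg of crop "
      f"({organic / conventional:.0%} of conventional)")
```

With these made-up inputs, the lower-yielding crop carries roughly 1.5 times the land-use CO2 of the higher-yielding one, which is the shape of the peas comparison described above.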

"The fact that more land use leads to greater climate impact has not often been taken into account in earlier comparisons between organic and conventional food," says Stefan Wirsenius. "This is a big oversight, because, as our study shows, this effect can be many times bigger than the greenhouse gas effects, which are normally included. It is also serious because today in Sweden, we have politicians whose goal is to increase production of organic food. If that goal is implemented, the climate influence from Swedish food production will probably increase a lot."

So why have earlier studies not taken into account land-use and its relationship to carbon dioxide emissions?

"There are surely many reasons. An important explanation, I think, is simply an earlier lack of good, easily applicable methods for measuring the effect. Our new method of measurement allows us to make broad environmental comparisons, with relative ease," says Stefan Wirsenius.

The results of the study are published in the article "Assessing the efficiency of changes in land use for mitigating climate change" in the journal Nature. The article is written by Timothy Searchinger, Princeton University, Stefan Wirsenius, Chalmers University of Technology, Tim Beringer, Humboldt Universität zu Berlin, and Patrice Dumas, Cired.

More on: The consumer perspective

Stefan Wirsenius notes that the findings do not mean that conscientious consumers should simply switch to buying non-organic food. "The type of food is often much more important. For example, eating organic beans or organic chicken is much better for the climate than to eat conventionally produced beef," he says. "Organic food does have several advantages compared with food produced by conventional methods," he continues. "For example, it is better for farm animal welfare. But when it comes to the climate impact, our study shows that organic food is a much worse alternative, in general."

For consumers who want to contribute to the positive aspects of organic food production, without increasing their climate impact, an effective way is to focus instead on the different impacts of different types of meat and vegetables in our diet. Replacing beef and lamb, as well as hard cheeses, with vegetable proteins such as beans, has the biggest effect. Pork, chicken, fish and eggs also have a substantially lower climate impact than beef and lamb.

More on: The conflict between different environmental goals

In organic farming, mineral fertilisers are not used, and only naturally derived pesticides are permitted. The goal is to use resources like energy, land and water in a long-term, sustainable way. Crops are primarily nurtured through nutrients present in the soil. The main aims are greater biological diversity and a balance between animal and plant sustainability.

The arguments for organic food focus on consumers' health, animal welfare, and different aspects of environmental policy. There is good justification for these arguments, but at the same time, there is a lack of scientific evidence to show that organic food is in general healthier and more environmentally friendly than conventionally farmed food, according to the National Food Administration of Sweden and others. The variation between farms is large, and the interpretation differs depending on which environmental goals one prioritises, while current analysis methods cannot fully capture every aspect.

The authors of the study now claim that organically farmed food is worse for the climate, due to its greater land use. For this argument they use statistics from the Swedish Board of Agriculture on total production in Sweden, and on the yields per hectare for organic versus conventional farming, for the years 2013-2015.

More on biofuels: "The investment in biofuels increases carbon dioxide emissions"

Today's major investments in biofuels are also harmful to the climate because they require large areas of land suitable for crop cultivation, and thus -- according to the same logic -- increase deforestation globally, the researchers in the same study argue.

For all common biofuels (ethanol from wheat, sugar cane and corn, as well as biodiesel from palm oil, rapeseed and soya), the carbon dioxide cost is greater than the emissions from fossil petrol and diesel, the study shows. Biofuels from waste and by-products do not have this effect, but their potential is small, the researchers say.

Read more at Science Daily

Neanderthal genes give clues to human brain evolution

This image shows a CT scan of a Neanderthal fossil (left) with the typical elongated endocranial imprint (red), and a CT scan of a modern human (right) showing the characteristic globular endocranial shape (blue).
A distinctive feature of modern humans is our round (globular) skulls and brains. On December 13, in the journal Current Biology, researchers report that present-day humans who carry particular Neanderthal DNA fragments have heads that are slightly less rounded, revealing genetic clues to the evolution of modern brain shape and function.

"We captured subtle variations in endocranial shape that likely reflect changes in the volume and connectivity of certain brain areas," says Philipp Gunz, a paleoanthropologist at the Max Planck Institute for Evolutionary Anthropology, who co-led the study with Amanda Tilot of the Max Planck Institute for Psycholinguistics.

"Our aim was to identify potential candidate genes and biological pathways that are related to brain globularity," says Amanda Tilot.

To tightly focus their search, they took advantage of the fact that living humans with European ancestry carry rare fragments of Neanderthal DNA buried in their genomes, as a result of interbreeding between Neanderthals and the ancestors of modern Europeans. Different people carry different fragments, which are scattered through the genome.

Gunz, Tilot, and colleagues analyzed cranial shape and identified stretches of Neanderthal DNA in a large sample of modern humans, relying on MRI brain scans and genetic information for about 4,500 people. From computed tomography scans of Neanderthal fossils and modern human skulls, they quantified the endocranial shape differences between the two groups, then used this contrast to assess endocranial shape in thousands of MRI brain scans of living people.
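A minimal sketch of that scoring logic on simulated data: build an axis in shape space from the average modern endocast toward the average Neanderthal endocast, project each person onto it, and compare carriers of a given fragment with non-carriers. The feature count, carrier frequency, effect size, and the simple two-group comparison are all illustrative assumptions; the published analysis is far richer.

```python
import numpy as np

# Toy version of the shape-scoring idea (not the authors' pipeline).
rng = np.random.default_rng(0)
n_people, n_features = 4500, 20                  # feature count is illustrative

modern_mean = rng.normal(size=n_features)        # average modern endocast shape
neanderthal_mean = modern_mean + rng.normal(scale=0.5, size=n_features)
axis = neanderthal_mean - modern_mean            # modern -> Neanderthal direction
axis /= np.linalg.norm(axis)

# Simulated living people, with a subtle shape effect in fragment carriers.
shapes = modern_mean + rng.normal(size=(n_people, n_features))
carriers = rng.random(n_people) < 0.05           # rare fragment, ~5% carriers
shapes[carriers] += 0.15 * axis                  # assumed carrier effect

score = shapes @ axis                            # higher = more Neanderthal-like
shift = score[carriers].mean() - score[~carriers].mean()
print(f"mean shift of carriers along the Neanderthal axis: {shift:.3f}")
```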

They used information from sequenced genomes of ancient Neanderthal DNA to identify Neanderthal DNA fragments in living humans on chromosomes 1 and 18 that correlated with reduced cranial roundness. These fragments contained two genes already linked to brain development: UBR4, involved in the generation of neurons, and PHLPP1, involved in the development of myelin insulation around nerve cell projections.

"We know from other studies that completely disrupting UBR4 or PHLPP1 can have major consequences for brain development," says senior author Simon Fisher, a geneticist at the Max Planck Institute for Psycholinguistics. "Here we found that, in carriers of the relevant Neanderthal fragment, UBR4 is slightly down-regulated in the putamen. For carriers of the Neanderthal PHLPP1 fragment, gene expression is slightly higher in the cerebellum, which would be predicted to have a dampening effect on cerebellar myelination."

The putamen -- part of a network of brain structures called the basal ganglia -- and the cerebellum are thought to be important in movement.

"Both brain regions receive direct input from the motor cortex and are involved in the preparation, learning, and sensorimotor coordination of movements," says Gunz. "The basal ganglia also contribute to diverse cognitive functions, in memory, attention, planning, skill learning, and potentially speech and language evolution."

The researchers stress that the effects of carrying these rare Neanderthal fragments are subtle and only detectable in a very large sample size.

"The Neanderthal variants lead to small changes in gene activity and only push people slightly towards a less globular brain shape," says Fisher. "This is just our first glimpse of the molecular underpinnings of this phenotype, which is likely to involve many other genes."

The researchers are preparing to scale up their approach and apply it to tens of thousands of people. That will enable them to carry out a fully genome-wide screen to reveal additional genes associated with cranial roundness and other biological characteristics.

Read more at Science Daily

Where did the hot Neptunes go? A shrinking planet holds the answer

This artist's illustration shows a giant cloud of hydrogen streaming off a warm, Neptune-sized planet just 97 light-years from Earth. The exoplanet is tiny compared to its star, a red dwarf named GJ 3470. The star's intense radiation is heating the hydrogen in the planet's upper atmosphere to a point where it escapes into space. The alien world is losing hydrogen at a rate 100 times faster than a previously observed warm Neptune whose atmosphere is also evaporating away.
"But where did the hot Neptunes go?" This is the question astronomers have been asking for a long time, faced with the mysterious absence of planets the size of Neptunes very close to their star. A team of researchers, led by astronomers from the University of Geneva (UNIGE), Switzerland, has just discovered that one of these planets is losing its atmosphere at a frantic pace. This observation strengthens the theory that hot Neptunes have lost much of their atmosphere and turned into smaller planets called super-Earths, which are much more numerous. Results to read in the journal Astronomy & Astrophysics.

Fishermen would be puzzled if they netted only big and little fish, but few medium-sized fish. Something similar happens to astronomers hunting exoplanets. They have found a large number of hot planets the size of Jupiter and numerous others a little larger than the Earth (called super-Earths, whose diameter does not exceed 1.5 times that of Earth), but no Neptune-sized planets close to their star. This mysterious "desert" of hot Neptunes suggests two explanations: either such alien worlds are rare, or they were plentiful at one time but have since disappeared.

A few years ago, UNIGE astronomers using NASA's Hubble Space Telescope discovered that a warm Neptune on the edge of the desert, GJ 436b, was losing hydrogen from its atmosphere. This loss is not enough to threaten the atmosphere of GJ 436b, but it suggested that Neptunes receiving more energy from their star could evolve more dramatically. This has just been confirmed by the same astronomers, members of the national research center PlanetS. They observed with Hubble that another warm Neptune at the edge of the desert, named GJ 3470b, is losing its hydrogen 100 times faster than GJ 436b. The two planets reside about 6 million kilometres (3.7 million miles) from their star, one-tenth the distance between Mercury and the Sun, but the star hosting GJ 3470b is much younger and more energetic. "This is the first time that a planet has been observed to lose its atmosphere so quickly that it can impact its evolution," says Vincent Bourrier, researcher in the Astronomy Department of the Faculty of Science of the UNIGE, member of the European project FOUR ACES and first author of the study. The team estimates that GJ 3470b has already lost more than a third of its mass.

"Until now we were not sure of the role played by the evaporation of atmospheres in the formation of the desert," states Vincent Bourrier. The discovery of several warm Neptunes at the edge of the desert losing their atmosphere supports the idea that the hotter version of these planets is short-lived. Hot Neptunes would have shrunk to become mini-Neptunes, or would have eroded completely to leave only their rocky core. "This could explain the abundance of hot super-Earths that have been discovered," says David Ehrenreich, associate professor in the astronomy department of the science faculty at UNIGE and co-author of the study.

Read more at Science Daily

Dec 12, 2018

Dracula ants possess fastest known animal appendage: The snap-jaw

The mandibles of the Dracula ant, Mystrium camillae, are the fastest known moving animal appendages, snapping shut at speeds of up to 90 meters per second.
Move over, trap-jaw ants and mantis shrimp: There's a faster appendage in town. According to a new study, the Dracula ant, Mystrium camillae, can snap its mandibles at speeds of up to 90 meters per second (more than 200 mph), making it the fastest animal movement on record.
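A quick unit check on that headline figure, since the article quotes it in both metric and imperial units:

```python
# Sanity check: convert the reported 90 m/s mandible speed to miles per hour.
speed = 90.0                       # metres per second
mph = speed * 3600.0 / 1609.344    # seconds per hour, metres per mile
print(f"{speed:.0f} m/s is about {mph:.0f} mph")  # ~201 mph, "more than 200 mph"
```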

"The high accelerations of Mystrium strikes likely result in high-impact forces necessary for predatory or defensive behaviors," the researchers wrote in a report of their findings in the journal Royal Society Open Science.

"These ants are fascinating as their mandibles are very unusual," said University of Illinois animal biology and entomology professor Andrew Suarez, who led the research with Fredrick J. Larabee, a postdoctoral researcher at the Smithsonian National Museum of Natural History; and Adrian A. Smith, of the North Carolina Museum of Natural Sciences and North Carolina State University, Raleigh. "Even among ants that power-amplify their jaws, the Dracula ants are unique: Instead of using three different parts for the spring, latch and lever arm, all three are combined in the mandible."

Unlike trap-jaw ants, whose powerful jaws snap closed from an open position, Dracula ants power up their mandibles by pressing the tips together, spring-loading them with internal stresses that release when one mandible slides across the other, similar to a human finger snap, the researchers said.

"The ants use this motion to smack other arthropods, likely stunning them, smashing them against a tunnel wall or pushing them away. The prey is then transported back to the nest, where it is fed to the ants' larvae," Suarez said.

"Scientists have described many different spring-loading mechanisms in ants, but no one knew the relative speed of each of these mechanisms," Larabee said. "We had to use incredibly fast cameras to see the whole movement. We also used X-ray imaging technology to be able to see their anatomy in three dimensions, to better understand how the movement works."

The team also conducted computer simulations of the mandible snaps of different castes of Dracula ants to test how the shape and structural characteristics of the mandibles affected the power of their snap.

"Our main findings are that snap-jaws are the fastest of the spring-loaded ant mouthparts, and the fastest currently known animal movement," Larabee said. "By comparing the jaw shape of snapping ants with biting ants, we also learned that it only took small changes in shape for the jaws to evolve a new function: acting as a spring."

The team's future work includes examining how the ants use their mandibles in the field.

Read more at Science Daily

Why deep oceans gave life to the first big, complex organisms

In the beginning, life was small. For billions of years, all life on Earth was microscopic, consisting mostly of single cells. Then suddenly, about 570 million years ago, complex organisms including animals with soft, sponge-like bodies up to a meter long sprang to life. And for 15 million years, life at this size and complexity existed only in deep water.

Scientists have long questioned why these organisms appeared when and where they did: in the deep ocean, where light and food are scarce, in a time when oxygen in Earth's atmosphere was in particularly short supply. A new study from Stanford University, published Dec. 12 in the peer-reviewed Proceedings of the Royal Society B, suggests that the more stable temperatures of the ocean's depths allowed the burgeoning life forms to make the best use of limited oxygen supplies.

All of this matters in part because understanding the origins of these marine creatures from the Ediacaran period is about uncovering missing links in the evolution of life, and even our own species. "You can't have intelligent life without complex life," explained Tom Boag, lead author on the paper and a doctoral candidate in geological sciences at Stanford's School of Earth, Energy & Environmental Sciences (Stanford Earth).

The new research comes as part of a small but growing effort to apply knowledge of animal physiology to understand the fossil record in the context of a changing environment. The information could shed light on the kinds of organisms that will be able to survive in different environments in the future.

"Bringing in this data from physiology, treating the organisms as living, breathing things and trying to explain how they can make it through a day or a reproductive cycle is not a way that most paleontologists and geochemists have generally approached these questions," said Erik Sperling, senior author on the paper and an assistant professor of geological sciences.

Goldilocks and temperature change

Previously, scientists had theorized that animals have an optimum temperature at which they can thrive with the least amount of oxygen. According to the theory, oxygen requirements are higher at temperatures either colder or warmer than a happy medium. To test that theory in an animal reminiscent of those flourishing in the Ediacaran ocean depths, Boag measured the oxygen needs of sea anemones, whose gelatinous bodies and ability to breathe through the skin closely mimic the biology of fossils collected from the Ediacaran oceans.

"We assumed that their ability to tolerate low oxygen would get worse as the temperatures increased. That had been observed in more complex animals like fish and lobsters and crabs," Boag said. The scientists weren't sure whether colder temperatures would also strain the animals' tolerance. But indeed, the anemones needed more oxygen when temperatures in an experimental tank veered outside their comfort zone.

Together, these factors made Boag and his colleagues suspect that, like the anemones, Ediacaran life would also require stable temperatures to make the most efficient use of the ocean's limited oxygen supplies.
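
The reasoning can be made concrete with a toy model. The numbers below are assumptions chosen only to show the shape of the argument, not the study's measured values:

```python
# Toy "Goldilocks" curve: an animal's relative oxygen requirement is
# lowest at an optimum temperature and climbs as the water gets either
# warmer or colder. The U-shape is the point; the constants are invented.
def critical_o2(temp_c, t_opt=12.0, curvature=0.01):
    return 1.0 + curvature * (temp_c - t_opt) ** 2

for t in (2, 7, 12, 17, 22):
    print(f"{t:2d} C -> {critical_o2(t):.2f}x baseline O2 requirement")
```

In this toy model, a 10-degree swing doubles the oxygen requirement, while a swing of less than 1 degree barely moves it -- the contrast between shallow and deep water that the study emphasizes.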

Refuge at depth

It would have been harder for Ediacaran animals to use the little oxygen present in cold, deep ocean waters than in warmer shallows because the gas diffuses into tissues more slowly in colder seawater. Animals in the cold have to expend a larger portion of their energy just to move oxygenated seawater through their bodies.

But what it lacked in usable oxygen, the deep Ediacaran ocean made up for with stability. In the shallows, the passing of the sun and seasons can deliver wild swings in temperature -- as much as 10 degrees Celsius (18 degrees Fahrenheit) in the modern ocean, compared to seasonal variations of less than 1 degree Celsius at depths below one kilometer (0.62 mile). "Temperatures change much more rapidly on a daily and annual basis in shallow water," Sperling explained.

In a world with low oxygen levels, animals unable to regulate their own body temperature couldn't have withstood an environment that so regularly swung outside their Goldilocks temperature.

Read more at Science Daily

Deep-learning technique reveals 'invisible' objects in the dark

From an original transparent etching (far right), engineers produced a photograph in the dark (top left), then attempted to reconstruct the object using first a physics-based algorithm (top right) and then a trained neural network (bottom left), before combining the neural network with the physics-based algorithm to produce the clearest, most accurate reproduction (bottom right) of the original object.
Small imperfections in a wine glass or tiny creases in a contact lens can be tricky to make out, even in good light. In almost total darkness, images of such transparent features or objects are nearly impossible to decipher. But now, engineers at MIT have developed a technique that can reveal these "invisible" objects, in the dark.

In a study published today in Physical Review Letters, the researchers reconstructed transparent objects from images of those objects, taken in almost pitch-black conditions. They did this using a "deep neural network," a machine-learning technique that involves training a computer to associate certain inputs with specific outputs -- in this case, dark, grainy images of transparent objects and the objects themselves.

The team trained a computer to recognize more than 10,000 transparent glass-like etchings, based on extremely grainy images of those patterns. The images were taken in very low lighting conditions, with about one photon per pixel -- far less light than a camera would register in a dark, sealed room. They then showed the computer a new grainy image, not included in the training data, and found that the network could reconstruct the transparent object that the darkness had obscured.

The results demonstrate that deep neural networks may be used to illuminate transparent features such as biological tissues and cells, in images taken with very little light.

"In the lab, if you blast biological cells with light, you burn them, and there is nothing left to image," says George Barbastathis, professor of mechanical engineering at MIT. "When it comes to X-ray imaging, if you expose a patient to X-rays, you increase the danger they may get cancer. What we're doing here is, you can get the same image quality, but with a lower exposure to the patient. And in biology, you can reduce the damage to biological specimens when you want to sample them."

Barbastathis' co-authors on the paper are lead author Alexandre Goy, Kwabena Arthur, and Shuai Li.

Deep dark learning

Neural networks are computational schemes designed to loosely emulate the way the brain's neurons work together to process complex data inputs. A neural network works by performing successive "layers" of mathematical manipulations: each layer transforms the output of the one before it, and the final layer assigns a probability to each possible output for a given input. For instance, given an image of a dog, a neural network may identify features reminiscent first of an animal, then more specifically a dog, and ultimately a beagle. A "deep" neural network simply stacks many such layers of computation between input and output.

A researcher can "train" such a network to perform computations faster and more accurately, by feeding it hundreds or thousands of images, not just of dogs, but other animals, objects, and people, along with the correct label for each image. Given enough data to learn from, the neural network should be able to correctly classify completely new images.
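
A minimal sketch of that layered idea, using invented weights rather than anything trained on real data:

```python
import numpy as np

# Two "layers": a linear map plus nonlinearity, then a linear map plus
# softmax that assigns a probability to each of three made-up labels.
rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

pixels = rng.random(64)             # stand-in for an input image
W1 = rng.normal(size=(32, 64))      # first-layer weights (untrained)
W2 = rng.normal(size=(3, 32))       # output-layer weights (untrained)
hidden = relu(W1 @ pixels)          # intermediate features
probs = softmax(W2 @ hidden)        # probability for each candidate label
print(probs, probs.sum())           # three probabilities summing to 1
```

Training amounts to nudging W1 and W2 so that, over thousands of labeled examples, the probability assigned to the correct label goes up.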

Deep neural networks have been widely applied in the field of computer vision and image recognition, and recently, Barbastathis and others developed neural networks to reconstruct transparent objects in images taken with plenty of light. Now his team is the first to use deep neural networks in experiments to reveal invisible objects in images taken in the dark.

"Invisible objects can be revealed in different ways, but it usually requires you to use ample light," Barbastathis says. "What we're doing now is visualizing the invisible objects, in the dark. So it's like two difficulties combined. And yet we can still do the same amount of revelation."

The law of light

The team consulted a database of 10,000 integrated circuits (IC), each of which is etched with a different intricate pattern of horizontal and vertical bars.

"When we look with the naked eye, we don't see much -- they each look like a transparent piece of glass," Goy says. "But there are actually very fine and shallow structures that still have an effect on light."

Instead of etching each of the 10,000 patterns onto as many glass slides, the researchers used a "phase spatial light modulator," an instrument that displays the pattern on a single glass slide in a way that recreates the same optical effect that an actual etched slide would have.

The researchers set up an experiment in which they pointed a camera at a small aluminum frame containing the light modulator. They then used the device to reproduce each of the 10,000 IC patterns from the database. The researchers covered the entire experiment so it was shielded from light, and then used the light modulator to rapidly rotate through each pattern, similarly to a slide carousel. They took images of each transparent pattern, in near total darkness, producing "salt-and-pepper" images that resembled little more than static on a television screen.
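
The character of those "salt-and-pepper" images is easy to mimic. A hedged sketch, assuming photon arrivals follow Poisson statistics (standard for shot noise, though the actual optics were more involved):

```python
import numpy as np

rng = np.random.default_rng(1)

clean = rng.random((128, 128))        # stand-in for a pattern's brightness
scale = 1.0 / clean.mean()            # aim for ~1 photon per pixel on average
noisy = rng.poisson(clean * scale)    # shot noise dominates at this light level
print(f"mean counts: {noisy.mean():.2f}, dark pixels: {(noisy == 0).mean():.0%}")
```

At roughly one photon per pixel, a large fraction of pixels register nothing at all, which is why the raw frames look like television static.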

The team developed a deep neural network to identify transparent patterns from dark images, then fed the network each of the 10,000 grainy photographs taken by the camera, along with their corresponding patterns, or what the researchers called "ground-truths."

"You tell the computer, 'If I put this in, you get this out,'" Goy says. "You do this 10,000 times, and after the training, you hope that if you give it a new input, it can tell you what it sees."

"It's a little worse than a baby," Barbastathis quips. "Usually babies learn a bit faster."

The researchers set their camera to take images slightly out of focus. As counterintuitive as it seems, this actually works to bring a transparent object into focus. Or, more precisely, defocusing provides some evidence, in the form of ripples in the detected light, that a transparent object may be present. Such ripples are a visual flag that a neural network can detect as a first sign that an object is somewhere in an image's graininess.

But defocusing also creates blur, which can muddy a neural network's computations. To deal with this, the researchers incorporated into the neural network a law in physics that describes the behavior of light, and how it creates a blurring effect when a camera is defocused.

"What we know is the physical law of light propagation between the sample and the camera," Barbastathis says. "It's better to include this knowledge in the model, so the neural network doesn't waste time learning something that we already know."

Sharper image

After training the neural network on 10,000 images of different IC patterns, the team created a completely new pattern, not included in the original training set. When they took an image of the pattern, again in darkness, and fed this image into the neural network, they compared the patterns that the neural network reconstructed, both with and without the physical law embedded in the network.

They found that both methods reconstructed the original transparent pattern reasonably well, but the "physics-informed reconstruction" produced a sharper, more accurate image. What's more, this reconstructed pattern, from an image taken in near total darkness, was more defined than a physics-informed reconstruction of the same pattern, imaged in light that was more than 1,000 times brighter.

The team repeated their experiments with a totally new dataset, consisting of more than 10,000 images of more general and varied objects, including people, places, and animals. After training, the researchers fed the neural network a completely new image, taken in the dark, of a transparent etching of a scene with gondolas docked at a pier. Again, they found that the physics-informed reconstruction produced a more accurate image of the original, compared to reproductions without the physical law embedded.

"We have shown that deep learning can reveal invisible objects in the dark," Goy says. "This result is of practical importance for medical imaging to lower the exposure of the patient to harmful radiation, and for astronomical imaging."

Read more at Science Daily

The epoch of planet formation, times twenty

ALMA's high-resolution images of nearby protoplanetary disks, which are results of the Disk Substructures at High Angular Resolution Project (DSHARP).
Astronomers have cataloged nearly 4,000 exoplanets in orbit around distant stars. Though the discovery of these newfound worlds has taught us much, there is still a great deal we do not know about the birth of planets and the precise cosmic recipes that spawn the wide array of planetary bodies we have already uncovered, including so-called hot Jupiters, massive rocky worlds, icy dwarf planets, and -- hopefully someday soon -- distant analogs of Earth.

To help answer these and other intriguing questions, a team of astronomers has conducted ALMA's first large-scale, high-resolution survey of protoplanetary disks, the belts of dust and gas around young stars.

Known as the Disk Substructures at High Angular Resolution Project (DSHARP), this "Large Program" of the Atacama Large Millimeter/submillimeter Array (ALMA) has yielded stunning, high-resolution images of 20 nearby protoplanetary disks and given astronomers new insights into the variety of features they contain and the speed with which planets can emerge.

The results of this survey will appear in a special focus issue of the Astrophysical Journal Letters.

According to the researchers, the most compelling interpretation of these observations is that large planets, likely similar in size and composition to Neptune or Saturn, form quickly, much faster than current theory would allow. Such planets also tend to form in the outer reaches of their solar systems at tremendous distances from their host stars.

Such precocious formation could also help explain how rocky, Earth-size worlds are able to evolve and grow, surviving their presumed self-destructive adolescence.

"The goal of this months-long observing campaign was to search for structural commonalities and differences in protoplanetary disks. ALMA's remarkably sharp vision has revealed previously unseen structures and unexpectedly complex patterns," said Sean Andrews, an astronomer at the Harvard-Smithsonian Center for Astrophysics (CfA) and a leader of the ALMA observing campaign, along with Andrea Isella of Rice University, Laura Pérez of the University of Chile, and Cornelis Dullemond of Heidelberg University. "We are seeing distinct details around a wide assortment of young stars of various masses. The most compelling interpretation of these highly diverse, small-scale features is that there are unseen planets interacting with the disk material."

The leading models for planet formation hold that planets are born by the gradual accumulation of dust and gas inside a protoplanetary disk, beginning with grains of icy dust that coalesce to form larger and larger rocks, until asteroids, planetesimals, and planets emerge. This hierarchical process should take many millions of years to unfold, suggesting that its impact on protoplanetary disks would be most prevalent in older, more mature systems. Mounting evidence, however, indicates that is not always the case.

ALMA's early observations of young protoplanetary disks, some only about one million years old, reveal surprisingly well-defined structures, including prominent rings and gaps, which appear to be the hallmarks of planets. Astronomers were initially cautious about ascribing these features to the actions of planets, however, since other natural processes could be at play.

"It was surprising to see possible signatures of planet formation in the very first high-resolution images of young disks. It was important to find out whether these were anomalies or if those signatures were common in disks," said Jane Huang, a graduate student at CfA and a member of the research team.

Since the initial sample of disks that astronomers could study was so small, however, it was impossible to draw any overarching conclusions. It could have been that astronomers were observing atypical systems. More observations on a variety of protoplanetary disks were needed to determine the most likely causes of the features they were seeing.

The DSHARP campaign was designed to do precisely that by studying the relatively small-scale distribution of dust particles around 20 nearby protoplanetary disks. These dust particles naturally glow in millimeter-wavelength light, enabling ALMA to precisely map the density distribution of small, solid particles around young stars.

Depending on the star's distance from Earth, ALMA was able to distinguish features as small as a few Astronomical Units. (An Astronomical Unit is the average distance from the Earth to the Sun -- about 150 million kilometers -- which makes it a convenient yardstick for distances within star systems.) Using these observations, the researchers were able to image an entire population of nearby protoplanetary disks and study their AU-scale features.
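
That resolution figure follows directly from the definition of the parsec. A quick check, assuming a typical DSHARP disk distance of about 140 parsecs (a round number, not a value from the survey):

```python
# By definition: angle in arcseconds = size in AU / distance in parsecs.
size_au, distance_pc = 5.0, 140.0
theta_mas = size_au / distance_pc * 1000
print(f"{theta_mas:.0f} milliarcseconds")   # -> ~36 mas for 5 AU at 140 pc
```

Features a few AU across at such distances subtend only tens of milliarcseconds, which gives a sense of how sharp ALMA's vision had to be.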

The researchers found that many substructures -- concentric gaps, narrow rings -- are common to nearly all the disks, while large-scale spiral patterns and arc-like features are also present in some of the cases. Also, the disks and gaps are present at a wide range of distances from their host stars, from a few AU to more than 100 AU, which is more than three times the distance of Neptune from our Sun.

These features, which could be the imprint of large planets, may explain how rocky Earth-size planets are able to form and grow. For decades, astronomers have puzzled over a major hurdle in planet-formation theory: Once dusty bodies grow to a certain size -- about one centimeter in diameter -- the dynamics of a smooth protoplanetary disk would induce them to fall in on their host star, never acquiring the mass necessary to form planets like Mars, Venus, and Earth.

The dense rings of dust we now see with ALMA would produce a safe haven for rocky worlds to fully mature. Their higher densities and the concentration of dust particles would create perturbations in the disk, forming zones where planetesimals would have more time to grow into fully fledged planets.

"When ALMA truly revealed its capabilities with its iconic image of HL Tau, we had to wonder if that was an outlier since the disk was comparatively massive and young," noted Pérez. "These latest observations show that, though striking, HL Tau is far from unusual and may actually represent the normal evolution of planets around young stars."

Read more at Science Daily

Dec 11, 2018

NASA's Voyager 2 probe enters interstellar space

This illustration shows the position of NASA's Voyager 1 and Voyager 2 probes, outside of the heliosphere, a protective bubble created by the Sun that extends well past the orbit of Pluto. Voyager 1 exited the heliosphere in August 2012. Voyager 2 exited at a different location in November 2018.
For the second time in history, a human-made object has reached the space between the stars. NASA's Voyager 2 probe now has exited the heliosphere -- the protective bubble of particles and magnetic fields created by the Sun.

Members of NASA's Voyager team will discuss the findings at a news conference at 11 a.m. EST (8 a.m. PST) today at the meeting of the American Geophysical Union (AGU) in Washington. The news conference will stream live on the agency's website (https://www.nasa.gov/live).

Comparing data from different instruments aboard the trailblazing spacecraft, mission scientists determined the probe crossed the outer edge of the heliosphere on Nov. 5. This boundary, called the heliopause, is where the tenuous, hot solar wind meets the cold, dense interstellar medium. Its twin, Voyager 1, crossed this boundary in 2012, but Voyager 2 carries a working instrument that will provide first-of-its-kind observations of the nature of this gateway into interstellar space.

Voyager 2 now is slightly more than 11 billion miles (18 billion kilometers) from Earth. Mission operators still can communicate with Voyager 2 as it enters this new phase of its journey, but information -- moving at the speed of light -- takes about 16.5 hours to travel from the spacecraft to Earth. By comparison, light traveling from the Sun takes about eight minutes to reach Earth.
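
The quoted light-travel time is straightforward to reproduce:

```python
# 18 billion kilometers divided by the speed of light, in hours.
distance_km = 18e9
c_km_per_s = 299_792.458
hours = distance_km / c_km_per_s / 3600
print(f"{hours:.1f} hours")   # -> ~16.7 hours, matching "about 16.5"
```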

The most compelling evidence of Voyager 2's exit from the heliosphere came from its onboard Plasma Science Experiment (PLS), an instrument that stopped working on Voyager 1 in 1980, long before that probe crossed the heliopause. Until recently, the space surrounding Voyager 2 was filled predominantly with plasma flowing out from our Sun. This outflow, called the solar wind, creates a bubble -- the heliosphere -- that envelops the planets in our solar system. The PLS uses the electrical current of the plasma to detect the speed, density, temperature, pressure and flux of the solar wind. The PLS aboard Voyager 2 observed a steep decline in the speed of the solar wind particles on Nov. 5. Since that date, the plasma instrument has observed no solar wind flow in the environment around Voyager 2, which makes mission scientists confident the probe has left the heliosphere.

"Working on Voyager makes me feel like an explorer, because everything we're seeing is new," said John Richardson, principal investigator for the PLS instrument and a principal research scientist at the Massachusetts Institute of Technology in Cambridge. "Even though Voyager 1 crossed the heliopause in 2012, it did so at a different place and a different time, and without the PLS data. So we're still seeing things that no one has seen before."

In addition to the plasma data, Voyager's science team members have seen evidence from three other onboard instruments -- the cosmic ray subsystem, the low energy charged particle instrument and the magnetometer -- that is consistent with the conclusion that Voyager 2 has crossed the heliopause. Voyager's team members are eager to continue to study the data from these other onboard instruments to get a clearer picture of the environment through which Voyager 2 is traveling.

"There is still a lot to learn about the region of interstellar space immediately beyond the heliopause," said Ed Stone, Voyager project scientist based at Caltech in Pasadena, California.

Together, the two Voyagers provide a detailed glimpse of how our heliosphere interacts with the constant interstellar wind flowing from beyond. Their observations complement data from NASA's Interstellar Boundary Explorer (IBEX), a mission that is remotely sensing that boundary. NASA also is preparing an additional mission -- the upcoming Interstellar Mapping and Acceleration Probe (IMAP), due to launch in 2024 -- to capitalize on the Voyagers' observations.

"Voyager has a very special place for us in our heliophysics fleet," said Nicola Fox, director of the Heliophysics Division at NASA Headquarters. "Our studies start at the Sun and extend out to everything the solar wind touches. To have the Voyagers sending back information about the edge of the Sun's influence gives us an unprecedented glimpse of truly uncharted territory."

While the probes have left the heliosphere, Voyager 1 and Voyager 2 have not yet left the solar system, and won't be leaving anytime soon. The boundary of the solar system is considered to be beyond the outer edge of the Oort Cloud, a collection of small objects that are still under the influence of the Sun's gravity. The width of the Oort Cloud is not known precisely, but it is estimated to begin at about 1,000 astronomical units (AU) from the Sun and to extend to about 100,000 AU. One AU is the distance from the Sun to Earth. It will take about 300 years for Voyager 2 to reach the inner edge of the Oort Cloud and possibly 30,000 years to fly beyond it.
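
Those timescales are consistent with Voyager 2's cruising speed, which is roughly 15 kilometers per second (a commonly cited figure, used here as an assumption):

```python
AU_KM = 1.496e8                # kilometers per astronomical unit
SECONDS_PER_YEAR = 3.156e7
speed_au_per_year = 15.0 * SECONDS_PER_YEAR / AU_KM   # about 3.2 AU per year
print(f"{1_000 / speed_au_per_year:,.0f} years to the inner edge")
print(f"{100_000 / speed_au_per_year:,.0f} years to pass beyond")
```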

The Voyager probes are powered using heat from the decay of radioactive material, contained in a device called a radioisotope thermoelectric generator (RTG). The power output of the RTGs diminishes by about four watts per year, which means that various parts of the Voyagers, including the cameras on both spacecraft, have been turned off over time to manage power.
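
The four-watts-per-year figure is roughly what radioactive decay alone predicts. A hedged check, assuming a plutonium-238 heat source (87.7-year half-life) and an initial output near 470 watts -- standard round numbers, not values from this article:

```python
import math

P0_watts, half_life_years = 470.0, 87.7
loss_per_year = P0_watts * math.log(2) / half_life_years
print(f"{loss_per_year:.1f} W/yr")   # -> ~3.7 W/yr, near "about four watts"
```

The true decline is slightly steeper because the thermocouples that convert heat to electricity also degrade with age.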

"I think we're all happy and relieved that the Voyager probes have both operated long enough to make it past this milestone," said Suzanne Dodd, Voyager project manager at NASA's Jet Propulsion Laboratory (JPL) in Pasadena, California. "This is what we've all been waiting for. Now we're looking forward to what we'll be able to learn from having both probes outside the heliopause."

Voyager 2 launched in 1977, 16 days before Voyager 1, and both have traveled well beyond their original destinations. The spacecraft were built to last five years and conduct close-up studies of Jupiter and Saturn. However, as the mission continued, additional flybys of the two outermost giant planets, Uranus and Neptune, proved possible. As the spacecraft flew across the solar system, remote-control reprogramming was used to endow the Voyagers with greater capabilities than they possessed when they left Earth. Their two-planet mission became a four-planet mission. Their five-year lifespans have stretched to 41 years, making Voyager 2 NASA's longest-running mission.

The Voyager story has impacted not only generations of current and future scientists and engineers, but also Earth's culture, including film, art and music. Each spacecraft carries a Golden Record of Earth sounds, pictures and messages. Since the spacecraft could last billions of years, these circular time capsules could one day be the only traces of human civilization.

Voyager's mission controllers communicate with the probes using NASA's Deep Space Network (DSN), a global system for communicating with interplanetary spacecraft. The DSN consists of three clusters of antennas in Goldstone, California; Madrid, Spain; and Canberra, Australia.

Read more at Science Daily

Tiny droplets of early universe matter created

Visualization of expanding drops of quark gluon plasma in three geometric shapes.
Researchers have created tiny droplets of the ultra-hot matter that once filled the early universe, forming three distinct shapes and sizes: circles, ellipses and triangles.

The study, published today in Nature Physics, stems from the work of an international team of scientists and focuses on a liquid-like state of matter called a quark gluon plasma. Physicists believe that this matter filled the entire universe during the first few microseconds after the Big Bang when the universe was still too hot for particles to come together to make atoms.

CU Boulder Professor Jamie Nagle and colleagues on an experiment known as PHENIX used a massive collider at Brookhaven National Laboratory in Upton, New York, to recreate that plasma. In a series of tests, the researchers smashed packets of protons and neutrons in different combinations into much bigger atomic nuclei.

They discovered that, by carefully controlling conditions, they could generate droplets of quark gluon plasma that expanded to form three different geometric patterns.

"Our experimental result has brought us much closer to answering the question of what is the smallest amount of early universe matter that can exist," Nagle said.

Researchers from CU Boulder and Vanderbilt University lead the data analysis efforts for the PHENIX experiment.

Scientists first started studying such matter at Brookhaven's Relativistic Heavy Ion Collider (RHIC) in 2000. They crashed together the heavy nuclei of gold atoms, generating temperatures of trillions of degrees Celsius. In the resulting boil, quarks and gluons, the subatomic particles that make up all protons and neutrons, broke free from their atomic chains and flowed almost freely.

Several years later, another group of researchers reported that they seemed to have created a quark gluon plasma not by slamming together two atoms, but by crashing together just two protons.

That was surprising because most scientists assumed that lone protons could not deliver enough energy to make anything that could flow like a fluid.

Nagle and his colleagues devised a way to test those results in 2014: If such tiny droplets behaved like liquid, then they should hold their shape.

As he explained, "Imagine that you have two droplets that are expanding into a vacuum. If the two droplets are really close together, then as they're expanding out, they run into each other and push against each other, and that's what creates this pattern."

In other words, if you toss two stones into a pond close together, the ripples from those impacts will flow into each other, forming a pattern that resembles an ellipse. The same could be true if you smashed a proton-neutron pair, called a deuteron, into something bigger, Nagle and his CU Boulder colleague Paul Romatschke reasoned. Likewise, a proton-proton-neutron trio, also known as a helium-3 atom, might expand out into something akin to a triangle.

And that's exactly what the PHENIX experiment found: collisions of deuterons formed short-lived ellipses, helium-3 atoms formed triangles, and a single proton exploded in the shape of a circle.
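
In practice, such shapes are read off statistically rather than photographed. A simplified sketch of the standard Fourier-decomposition approach, with the symmetry plane fixed at zero for convenience (real analyses must estimate it event by event):

```python
import numpy as np

# Particles stream out with an angular distribution
#   dN/dphi ~ 1 + 2*v2*cos(2*phi) + 2*v3*cos(3*phi),
# where v2 measures elliptical flow and v3 triangular flow.
rng = np.random.default_rng(3)

def sample_angles(v2, v3, n=200_000):
    phi = rng.uniform(0, 2 * np.pi, size=4 * n)
    w = 1 + 2 * v2 * np.cos(2 * phi) + 2 * v3 * np.cos(3 * phi)
    keep = rng.uniform(0, w.max(), size=phi.size) < w   # rejection sampling
    return phi[keep][:n]

phi = sample_angles(v2=0.10, v3=0.03)
print(np.cos(2 * phi).mean())   # ~0.10: the elliptical signal is recovered
print(np.cos(3 * phi).mean())   # ~0.03: the triangular signal is recovered
```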

The results, the researchers said, could help theorists better understand how the universe's original quark gluon plasma cooled over microseconds, giving birth to the first atoms in existence.

Read more at Science Daily