Jan 19, 2019

How musicians communicate non-verbally during performance

A team of researchers from McMaster University has developed a new technique to examine how musicians intuitively coordinate with one another during a performance, silently predicting how each will express the music.

The findings, published today in the journal Scientific Reports, provide new insights into how musicians synchronize their movements so they are playing exactly in time, as one single unit.

"Successfully performing music with a group is a highly complex endeavor," explains Laurel Trainor, the senior author on the study and director of the LIVELab at McMaster University where the work was conducted.

"How do musicians coordinate with each other to perform expressive music that has changes in tempo and dynamics? Accomplishing this relies on predicting what your fellow musicians will do next so that you can plan the motor movements so as to express the same emotions in a coordinated way. If you wait to hear what your fellow musicians will do, it is too late," she says.

For this study, researchers turned to the Gryphon Trio, an acclaimed chamber music ensemble. Each performer was fitted with motion capture markers to track their movements while the musicians played happy or sad musical excerpts, once with musical expression, once without.

Using mathematical techniques, investigators measured how much the movements of each musician were predicting the movements of the others.

Whether they were portraying joy or sadness, the musicians predicted each other's movements to a greater extent when they played expressively, compared to when they played with no emotion.
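The article doesn't name the "mathematical techniques," but a standard way to quantify how much one motion signal predicts another is a Granger-causality-style comparison: predict musician B's next movement from B's own past alone, then from B's past plus A's past, and compare the residual errors. The sketch below is illustrative, with toy data rather than anything from the study:

```python
import numpy as np

def prediction_strength(x, y, lag=3):
    """Granger-style measure of how much x's past improves prediction of
    y's next value beyond y's own past. Returns log(restricted residual
    variance / full residual variance); values > 0 mean x helps predict y."""
    t_idx = range(lag, len(y))
    target = y[lag:]
    own = np.array([y[t - lag:t] for t in t_idx])                  # y's history only
    both = np.array([np.r_[y[t - lag:t], x[t - lag:t]] for t in t_idx])

    def resid_var(X):
        X = np.c_[X, np.ones(len(X))]                              # intercept column
        beta, *_ = np.linalg.lstsq(X, target, rcond=None)
        return np.mean((target - X @ beta) ** 2)

    return np.log(resid_var(own) / resid_var(both))

# Toy data: "musician" y follows x's movement one step later.
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

print(prediction_strength(x, y))  # large: x's past predicts y
print(prediction_strength(y, x))  # near zero: y's past does not predict x
```

An asymmetry like this, computed on motion-capture traces, is the kind of directed "who is predicting whom" measure the study describes.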

"Our work shows we can measure communication of emotion between musicians by analyzing their movements in detail, and that achieving a common emotion expression as a group requires a lot of communication," says Andrew Chang, the lead author on the study.

Researchers suggest this novel technique can be applied to other situations, such as communication between non-verbal patients and their family and caregivers. They are also testing the technique in a study on romantic attraction.

"The early results indicate that communication measured in body sway can predict which couples will want to see each other again," says Chang.

From Science Daily

Waves in Saturn's rings give precise measurement of planet's rotation rate

This image of Saturn's rings was taken by NASA's Cassini spacecraft on Sept. 13, 2017. It is among the last images Cassini sent back to Earth.
Saturn's distinctive rings were observed in unprecedented detail by NASA's Cassini spacecraft, and scientists have now used those observations to probe the interior of the giant planet and obtain the first precise determination of its rotation rate. The length of a day on Saturn, according to their calculations, is 10 hours 33 minutes and 38 seconds.

The researchers studied wave patterns created within Saturn's rings by the planet's internal vibrations. In effect, the rings act as an extremely sensitive seismograph by responding to vibrations within the planet itself.

Similar to Earth's vibrations from an earthquake, Saturn responds to perturbations by vibrating at frequencies determined by its internal structure. Heat-driven convection in the interior is the most likely source of the vibrations. These internal oscillations cause the density at any particular place within the planet to fluctuate, which makes the gravitational field outside the planet oscillate at the same frequencies.

"Particles in the rings feel this oscillation in the gravitational field. At places where this oscillation resonates with ring orbits, energy builds up and gets carried away as a wave," explained Christopher Mankovich, a graduate student in astronomy and astrophysics at UC Santa Cruz.

Mankovich is lead author of a paper, published January 17 in the Astrophysical Journal, comparing the wave patterns in the rings with models of Saturn's interior structure.

Most of the waves observed in Saturn's rings are due to the gravitational effects of the moons orbiting outside the rings, said coauthor Jonathan Fortney, professor of astronomy and astrophysics at UC Santa Cruz. "But some of the features in the rings are due to the oscillations of the planet itself, and we can use those to understand the planet's internal oscillations and internal structure," he said.

Mankovich developed a set of models of the internal structure of Saturn, used them to predict the frequency spectrum of Saturn's internal vibrations, and compared those predictions with the waves observed by Cassini in Saturn's C ring. One of the main results of his analysis is the new calculation of Saturn's rotation rate, which has been surprisingly difficult to measure.

As a gas giant planet, Saturn has no solid surface with landmarks that could be tracked as it rotates. Saturn is also unusual in having its magnetic axis nearly perfectly aligned with its rotational axis. Jupiter's magnetic axis, like Earth's, is not aligned with its rotational axis, which means the magnetic pole swings around as the planet rotates, enabling astronomers to measure a periodic signal in radio waves and calculate the rotation rate. Because Saturn's field is so closely aligned with its spin, it produces no such periodic signal to track.

The rotation rate of 10:33:38 determined by Mankovich's analysis is several minutes faster than previous estimates based on radiometry from the Voyager and Cassini spacecraft.
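The arithmetic behind "several minutes faster" is straightforward; the Voyager-era value of about 10h 39m 23s used below is a widely quoted figure supplied here for illustration, not stated in the article:

```python
def hms_to_seconds(h, m, s):
    """Convert an hours/minutes/seconds rotation period to total seconds."""
    return h * 3600 + m * 60 + s

ring_seismology = hms_to_seconds(10, 33, 38)  # Mankovich's ring-wave result
voyager_era = hms_to_seconds(10, 39, 23)      # assumed earlier radio estimate

diff = voyager_era - ring_seismology
print(divmod(diff, 60))  # (5, 45): about 5 minutes 45 seconds shorter
```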

"We now have the length of Saturn's day, when we thought we wouldn't be able to find it," said Cassini Project Scientist Linda Spilker. "They used the rings to peer into Saturn's interior, and out popped this long-sought, fundamental quality of the planet. And it's a really solid result. The rings held the answer."

The idea that Saturn's rings could be used to study the seismology of the planet was first suggested in 1982, long before the necessary observations were possible. Coauthor Mark Marley, now at NASA's Ames Research Center in Silicon Valley, subsequently fleshed out the idea for his Ph.D. thesis in 1990, showed how the calculations could be done, and predicted where features in Saturn's rings would be. He also noted that the Cassini mission, then in the planning stages, would be able to make the observations needed to test the idea.

Read more at Science Daily

Jan 18, 2019

Big Bang query: Mapping how a mysterious liquid became all matter

The leading theory about how the universe began is the Big Bang, which says that 14 billion years ago the universe existed as a singularity, a single point of effectively infinite density, with a vast array of fundamental particles contained within it. Extremely high heat and energy caused it to inflate and then expand into the cosmos as we know it -- and the expansion continues to this day.

The initial result of the Big Bang was an intensely hot and energetic liquid, around 10 billion degrees Fahrenheit (5.5 billion Celsius), that existed for mere microseconds. This liquid contained nothing less than the building blocks of all matter. As the universe cooled, the particles decayed or combined, giving rise to...well, everything.

Quark-gluon plasma (QGP) is the name for this mysterious substance, so called because it was made up of quarks -- the fundamental particles -- and gluons, which physicist Rosi J. Reed describes as "what quarks use to talk to each other."

Scientists like Reed, an assistant professor in Lehigh University's Department of Physics whose research includes experimental high-energy physics, cannot go back in time to study how the universe began. So they re-create the circumstances by colliding heavy ions, such as gold, at nearly the speed of light, generating an environment that is 100,000 times hotter than the interior of the sun. The collision mimics how quark-gluon plasma became matter after the Big Bang, but in reverse: the heat melts the ions' protons and neutrons, releasing the quarks and gluons hidden inside them.
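As a sanity check on "100,000 times hotter than the interior of the sun": taking the Sun's core temperature as roughly 15 million kelvin (my assumption; the article gives no figure) puts the collision environment in the trillion-kelvin range, which in particle-physics units lands near the ~150 MeV scale where QCD predicts the plasma forms:

```python
K_B_EV_PER_K = 8.617e-5  # Boltzmann constant in eV per kelvin

sun_core_K = 15e6                     # assumed solar core temperature, kelvin
collision_K = 100_000 * sun_core_K    # per the article's comparison: 1.5e12 K

energy_MeV = collision_K * K_B_EV_PER_K / 1e6
print(f"{collision_K:.1e} K  ~  {energy_MeV:.0f} MeV")  # ~1.5e12 K, ~129 MeV
```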

There are currently only two operational accelerators in the world capable of colliding heavy ions -- and only one in the U.S.: Brookhaven National Lab's Relativistic Heavy Ion Collider (RHIC). It is about a three-hour drive from Lehigh, on Long Island, New York.

Reed is part of the STAR Collaboration, an international group of scientists and engineers running experiments on the Solenoidal Tracker at RHIC (STAR). The STAR detector is massive and is actually made up of many detectors. It is as large as a house and weighs 1,200 tons. STAR's specialty is tracking the thousands of particles produced by each ion collision at RHIC in search of the signatures of quark-gluon plasma.

"When running experiments there are two 'knobs' we can change: the species -- such as gold on gold or proton on proton -- and the collision energy," says Reed. "We can accelerate the ions differently to achieve different energy-to-mass ratio."

Using the various STAR detectors, the team collides ions at different collision energies. The goal is to map quark-gluon plasma's phase diagram, or the different points of transition as the material changes under varying pressure and temperature conditions. Mapping quark-gluon plasma's phase diagram is also mapping the nuclear strong force, otherwise known as Quantum Chromodynamics (QCD), which is the force that holds positively charged protons together.

"There are a bunch of protons and neutrons in the center of an ion," explains Reed. "These are positively charged and should repel, but there's a 'strong force' that keeps them together --  strong enough to overcome their tendency to come apart."

Understanding quark-gluon plasma's phase diagram, and the location and existence of the phase transition between the plasma and normal matter is of fundamental importance, says Reed.

"It's a unique opportunity to learn how one of the four fundamental forces of nature operates at temperature and energy densities similar to those that existed only microseconds after the Big Bang," says Reed.

Upgrading the RHIC detectors to better map the "strong force"

The STAR team uses a Beam Energy Scan (BES) to do the phase transition mapping. During the first part of the project, known as BES-I, the team collected observable evidence with "intriguing results." Reed presented these results at the 5th Joint Meeting of the APS Division of Nuclear Physics and the Physical Society of Japan in Hawaii in October 2018 in a talk titled: "Testing the quark-gluon plasma limits with energy and species scans at RHIC."

However, limited statistics, acceptance, and poor event plane resolution did not allow firm conclusions for a discovery. The second phase of the project, known as BES-II, is going forward and includes an improvement that Reed is working on with STAR team members: an upgrade of the Event Plane Detector. Collaborators include scientists at Brookhaven as well as at Ohio State University.

The STAR team plans to continue to run experiments and collect data in 2019 and 2020, using the new Event Plane Detector. According to Reed, the new detector is designed to precisely locate where the collision happens and will help characterize the collision, specifically how "head on" it is.

"It will also help improve the measurement capabilities of all the other detectors," says Reed.

The STAR collaboration expects to run their next experiments at RHIC in March 2019.

Read more at Science Daily

Scientists find increase in asteroid impacts on ancient Earth by studying the Moon

Image depicts the change in impact rate modeled in this paper. Some of the craters used in the study on both the moon and Earth are highlighted in the background.
An international team of scientists is challenging our understanding of a part of Earth's history by looking at the Moon, the most complete and accessible chronicle of the asteroid collisions that carved our solar system.

In a study published today in Science, the team shows the number of asteroid impacts on the Moon and Earth increased by two to three times starting around 290 million years ago.

"Our research provides evidence for a dramatic change in the rate of asteroid impacts on both Earth and the Moon that occurred around the end of the Paleozoic era," said lead author Sara Mazrouei, who recently earned her PhD in the Department of Earth Sciences in the Faculty of Arts & Science at the University of Toronto (U of T). "The implication is that since that time we have been in a period of relatively high rate of asteroid impacts that is 2.6 times higher than it was prior to 290 million years ago."

It had previously been assumed that most of Earth's older craters produced by asteroid impacts had been erased by erosion and other geologic processes. But the new research shows otherwise.

"The relative rarity of large craters on Earth older than 290 million years and younger than 650 million years is not because we lost the craters, but because the impact rate during that time was lower than it is now," said Rebecca Ghent, an associate professor in U of T's Department of Earth Sciences and one of the paper's co-authors. "We expect this to be of interest to anyone interested in the impact history of both Earth and the Moon, and the role that it might have played in the history of life on Earth."

Scientists have for decades tried to understand the rate at which asteroids hit Earth by using radiometric dating of the rocks around craters to determine their ages. But because it was believed erosion caused some craters to disappear, it was difficult to find an accurate impact rate and determine whether it had changed over time.

A way to sidestep this problem is to examine the Moon, which is hit by asteroids in the same proportions over time as Earth. But there was no way to determine the ages of lunar craters until NASA's Lunar Reconnaissance Orbiter (LRO) started circling the Moon a decade ago and studying its surface.

"The LRO's instruments have allowed scientists to peer back in time at the forces that shaped the Moon," said Noah Petro, an LRO project scientist based at NASA Goddard Space Flight Center.

Using LRO data, the team was able to assemble a list of ages of all lunar craters younger than about a billion years. They did this by using data from LRO's Diviner instrument, a radiometer that measures the heat radiating from the Moon's surface, to monitor the rate of degradation of young craters.

During the lunar night, rocks radiate much more heat than fine-grained soil called regolith. This allows scientists to distinguish rocks from fine particles in thermal images. Ghent had previously used this information to calculate the rate at which large rocks around the Moon's young craters -- ejected onto the surface during asteroid impact -- break down into soil as a result of a constant rain of tiny meteorites over tens of millions of years. By applying this idea, the team was able to calculate ages for previously un-dated lunar craters.

When the researchers compared this lunar timeline to a similar record of Earth's craters, they found the two bodies had recorded the same history of asteroid bombardment.

"It became clear that the reason why Earth has fewer older craters on its most stable regions is because the impact rate was lower up until about 290 million years ago," said William Bottke, an asteroid expert at the Southwest Research Institute in Boulder, Colorado and another of the paper's coauthors. "The answer to Earth's impact rate was staring everyone right in the face."

The reason for the jump in the impact rate is unknown, though the researchers speculate it might be related to large collisions taking place more than 300 million years ago in the main asteroid belt between the orbits of Mars and Jupiter. Such events can create debris that can reach the inner solar system.

Ghent and her colleagues found strong supporting evidence for their findings through a collaboration with Thomas Gernon, an Earth scientist based at the University of Southampton in England who works on a terrestrial feature called kimberlite pipes. These underground pipes are long-extinct volcanoes that stretch, in a carrot shape, a couple of kilometers below the surface, and are found on some of the least eroded regions of Earth in the same places preserved impact craters are found.

"The Canadian shield hosts some of the best-preserved and best-studied of this terrain -- and also some of the best-studied large impact craters," said Mazrouei.

Gernon showed that kimberlite pipes formed since about 650 million years ago had not experienced much erosion, indicating that the large impact craters younger than this on stable terrains must also be intact.

"This is how we know those craters represent a near-complete record," Ghent said.

While the researchers weren't the first to propose that the rate of asteroid strikes to Earth has fluctuated over the past billion years, they are the first to show it statistically and to quantify the rate.

"The findings may also have implications for the history of life on Earth, which is punctuated by extinction events and rapid evolution of new species," said Ghent. "Though the forces driving these events are complicated and may include other geologic causes, such as large volcanic eruptions, combined with biological factors, asteroid impacts have surely played a role in this ongoing saga.

"The question is whether the predicted change in asteroid impacts can be directly linked to events that occurred long ago on Earth."

Read more at Science Daily

Mechanism helps explain the ear's exquisite sensitivity

This image, taken through an optical microscope, shows a cross-section of the tectorial membrane, a gelatinous structure that lies atop the tiny hairs that line the inner ear.
The human ear, like those of other mammals, is so extraordinarily sensitive that it can detect sound-wave-induced vibrations of the eardrum that move by less than the width of an atom. Now, researchers at MIT have discovered important new details of how the ear achieves this amazing ability to pick up faint sounds.

The new findings help explain how our ears can detect vibrations a million times less intense than those we can detect through the sense of touch, for example. The results appear in the journal Physical Review Letters, in a paper by visiting scientist and lead author Jonathan Sellon, professor of electrical engineering and senior author Dennis Freeman, visiting scientist Roozbeh Ghaffari, and members of the Grodzinsky group at MIT.

Both the ear's sensitivity and its selectivity -- its ability to distinguish different frequencies of sound -- depend crucially on the behavior of a minuscule gelatinous structure in the inner ear called the tectorial membrane, which Freeman and his students have been studying for more than a decade. Now, they have found that the way the gel membrane gives our hearing its extreme sensitivity has to do with the size, stiffness, and distribution of nanoscale pores in that membrane, and the way those nanopores control the movement of water within the gel.

The tectorial membrane lies atop the tiny hairs that line the inner ear, or cochlea. These sensory receptors are arranged in tufts that are each sensitive to different frequencies of sound, in a progression along the length of the tightly curled structure. The fact that the tips of those hairs are embedded in the tectorial membrane means its behavior strongly affects the way those hairs respond to sound.

"Mechanically, it's Jell-O," Freeman says, describing the tiny tectorial membrane, which is thinner than a hair. Though it's essentially a saturated sponge-like structure made mostly of water, "if you squeeze it as hard as you can, you can't get the water out. It's held together by electrostatic forces," he explains. But though there are many gel-based materials in the body, including cartilage, elastin and tendons, the tectorial membrane develops from a different set of genetic instructions.

The purpose of the structure was a puzzle initially. "Why would you want that?" Sellon says. Sitting right on top of the sensitive sound-pickup structure, "it's the kind of thing that muffles most kinds of microphones," he says. "Yet it's essential for hearing," and any defects in its structure caused by gene variations can significantly degrade a person's hearing.

After detailed tests of the microscopic structure, the team found that the size and arrangement of pores within it, and the way those properties affect how water within the gel moves back and forth between pores in response to vibration, make the response of the whole system highly selective. Both the highest and lowest tones coming into the ear are less affected by the amplification provided by the tectorial membrane, while the middle frequencies are more strongly amplified.

"It's tuned just right to get the signal you need," Sellon says, to amplify the sounds that are most useful.

The team found that the tectorial membrane's structure "looked like a solid but behaved like a liquid," Freeman says -- which makes sense since it is composed mostly of liquid. "What we're finding is that the tectorial membrane is less solid than we thought." The key finding, which he says the team hadn't anticipated, was that "for middle frequencies, the structure moves as a liquid, but for high and low frequencies, it only behaves as a solid."

Overall, the researchers hope that a better understanding of these mechanisms may help in devising ways to counteract various kinds of hearing impairment -- either through mechanical aids such as improved cochlear implants, or medical interventions such as drugs that may alter the nanopores or the properties of the fluid in the tectorial membrane. "If the size of the pores is important for the functioning of hearing, there are things you could do," Freeman says.

Read more at Science Daily

How to rapidly image entire brains at nanoscale resolution

A forest of dendritic spines protrudes from the branches of neurons in the mouse cortex.
Eric Betzig didn't expect the experiment to work.

Two scientists, Ruixuan Gao and Shoh Asano, wanted to use his team's microscope on brain samples expanded to four times their usual size -- blown up like balloons. The duo, part of Howard Hughes Medical Institute (HHMI) Investigator Ed Boyden's lab at the Massachusetts Institute of Technology (MIT), uses a chemical technique to make small specimens bigger so scientists can more easily see molecular details.

Their technique, called expansion microscopy, worked well on single cells or thin tissue sections imaged in conventional light microscopes, but Boyden's team wanted to image vastly larger chunks of tissue. They wanted to see complete neural circuits spanning millimeters or more. The scientists needed a microscope that was high-speed, high resolution, and relatively gentle -- something that didn't destroy a sample before they could finish imaging it.

So, they turned to Betzig. His team at HHMI's Janelia Research Campus had used their lattice light-sheet microscope to image the rapid subcellular dynamics of sensitive living cells in 3-D. Combining the two microscopy techniques could potentially offer rapid, detailed images of wide swaths of brain tissue.

"I thought they were full of it," Betzig remembers. "The idea does sound a bit crude," Gao says. "We're stretching tissues apart." But Betzig invited Gao and Asano to try the lattice scope out.

"I was going to show them," Betzig laughs. Instead, he was blown away. "I couldn't believe the quality of the data I was seeing. You could have knocked me over with a feather."

Now, he and his Janelia colleagues have teamed up with Boyden's group and imaged the entire fruit fly brain and sections of mouse brain the thickness of the cortex. Their combined method offers high resolution with the ability to visualize any desired protein -- and it's fast, too. Imaging the fly brain in multiple colors took just 62.5 hours, compared to the years it would take using an electron microscope, Boyden, Betzig, and their colleagues report January 17, 2019, in the journal Science.

"I can see us getting to the point of imaging at least 10 fly brains per day," says Betzig, now an HHMI investigator at the University of California, Berkeley. Such speed and resolution will let scientists ask new questions, he says, like how brains differ between males and females, or how brain circuits vary between flies of the same type.

Boyden's group dreams of making a map of the brain so detailed you can simulate it in a computer. "We've crossed a threshold in imaging performance," he says. "That's why we're so excited. We're not just scanning incrementally more brain tissue, we're scanning entire brains."

Expanding the brain

Making detailed maps of the brain requires charting its activity and wiring -- in humans, the thousands of connections made by each of more than 80 billion neurons. Such maps could help scientists spot where brain disease begins, build better artificial intelligence, or even explain behavior. "That's like the holy grail for neuroscience," Boyden says.

Years ago, his group had an idea to figure out how everything was organized: What if they could actually make the brain bigger -- big enough to look inside? By infusing samples with swellable gels -- like the stuff in baby diapers -- the team invented a way to expand tissues, making the molecules inside less crowded and easier to see under a microscope. Molecules lock into a gel scaffold, keeping the same relative positions even after expansion.

But it wasn't easy to image large tissue volumes. The thicker a specimen gets, the harder it is to illuminate only the parts you want to see. Shining too much light on samples can photobleach them, burning out the fluorescent "bulbs" scientists use to light up cells.

Expanding a sample just four-fold increases its volume 64-fold, so imaging speed also becomes paramount, Gao says. "We needed something that was fast and didn't have much photobleaching, and we knew there was a fantastic microscope at Janelia."
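The volume figure follows from the cube law: expanding a sample by a linear factor f multiplies the volume to be imaged, and hence the number of image tiles, by f**3.

```python
def volume_factor(linear_expansion):
    """Imaging volume grows as the cube of the linear expansion factor."""
    return linear_expansion ** 3

print(volume_factor(4))   # 64: the four-fold expansion used in the article
print(volume_factor(10))  # 1000: why higher expansion makes speed paramount
```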

The lattice light-sheet microscope sweeps an ultrathin sheet of light through a specimen, illuminating only that part in the microscope's plane of focus. That helps out-of-focus areas stay dark, keeping a specimen's fluorescence from being extinguished.

When Gao and Asano first tested their expanded mouse tissues on the lattice scope, they saw a thicket of glowing nubs protruding from neurons' branches. These nubs, called dendritic spines, often look like mushrooms, with bulbous heads on skinny necks that can be hard to measure. But the scientists were able to see even "the smallest necks possible," Asano says, while simultaneously imaging synaptic proteins nearby.

"It was incredibly impressive," says Betzig. The team was convinced that they should explore the combined technique further. "And that's what we've been doing ever since," he says.

The brain and beyond


Over the last two years, Gao and Asano have spent months at Janelia, teaming up with biologists, microscopists, physicists, and computer scientists across the campus to capture and analyze images. "This is like an Avengers-level collaboration," Gao says, referring to the crew of comic book superheroes.

Yoshinori Aso and the FlyLight team provided high-quality fly brain specimens, which Gao and Asano expanded and used to collect some 50,000 cubes of data across each brain -- forming a kind of 3-D jigsaw puzzle. Those images required complicated computational stitching to put the pieces back together, work led by Stephan Saalfeld and Igor Pisarev. "Stephan and Igor saved our bacon," Betzig says. "They dealt with all the horrible little details of image processing and made it work on each multi-terabyte data set."

Next, Srigokul Upadhyayula from Harvard Medical School, a co-first author of the report, analyzed the combined 200 terabytes of data and created the stunning movies that showcase the brain's intricacies in vivid color. He and his coauthors investigated more than 1,500 dendritic spines, imaged fatty sheaths that insulate mouse nerve cells, highlighted all of the dopaminergic neurons, and counted all the synapses across the entire fly brain.

The nuances of the Boyden team's expansion technique make it well suited for the lattice scope; the technique produces nearly transparent samples. For the microscope, it's almost like looking through water rather than a turbid sea of molecular gunk. "The result is that we get crystal clear images at blazingly fast speeds over very large volumes compared to earlier microscopy techniques," Boyden says.

Still, challenges remain. As with any kind of super resolution fluorescence microscopy, Betzig says, it can be hard to decorate proteins with enough fluorescent bulbs to see them clearly at high resolution. And since expansion microscopy requires many processing steps, there's still the potential for artifacts to be introduced. Because of this, he says, "we worked very hard to validate what we've done, and others would be well advised to do the same."

Read more at Science Daily

Jan 17, 2019

Evidence of changing seasons, rain on Saturn's moon Titan's north pole

New research provides evidence of rainfall on the north pole of Titan, the largest of Saturn’s moons, shown here. The rainfall would be the first indication of the start of a summer season in the moon’s northern hemisphere, according to the researchers.
An image from the international Cassini spacecraft provides evidence of rainfall on the north pole of Titan, the largest of Saturn's moons. The rainfall would be the first indication of the start of a summer season in the moon's northern hemisphere.

"The whole Titan community has been looking forward to seeing clouds and rains on Titan's north pole, indicating the start of the northern summer, but despite what the climate models had predicted, we weren't even seeing any clouds," said Rajani Dhingra, a doctoral student in physics at the University of Idaho in Moscow, and lead author of the new study accepted for publication in Geophysical Research Letters, a journal of the American Geophysical Union. "People called it the curious case of missing clouds."

Dhingra and her colleagues identified a reflective feature near Titan's north pole on an image taken June 7, 2016, by Cassini's near-infrared instrument, the Visual and Infrared Mapping Spectrometer. The reflective feature covered approximately 46,332 square miles, roughly half the size of the Great Lakes, and did not appear on images from previous and subsequent Cassini passes.
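The quoted area checks out against the Great Lakes comparison; the combined Great Lakes surface area of about 94,250 square miles used below is my figure, not one given in the article:

```python
SQ_MI_TO_SQ_KM = 2.589988  # square miles to square kilometers

feature_sq_mi = 46_332         # reflective feature near Titan's north pole
great_lakes_sq_mi = 94_250     # assumed combined Great Lakes surface area

print(round(feature_sq_mi * SQ_MI_TO_SQ_KM))        # ~120,000 sq km
print(round(feature_sq_mi / great_lakes_sq_mi, 2))  # ~0.49: "roughly half"
```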

Analyses of the short-term reflective feature suggested it likely resulted from sunlight reflecting off a wet surface. The study attributes the reflection to a methane rainfall event, followed by a probable period of evaporation.

"It's like looking at a sunlit wet sidewalk," Dhingra said.

This reflective surface represents the first observations of summer rainfall on the moon's northern hemisphere. If compared to Earth's yearly cycle of four seasons, a season on Titan lasts seven Earth years. Cassini arrived at Titan during the southern summer and observed clouds and rainfall in the southern hemisphere. Climate models of Titan predicted similar weather would occur in the northern hemisphere in the years leading up to the northern summer solstice in 2017. But, by 2016, the expected cloud cover in the northern hemisphere had not appeared. This observation may help scientists gain a more complete understanding of Titan's seasons.
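The seven-year season comes straight from Saturn's orbit; the orbital period of 29.46 Earth years below is a standard value supplied here as an assumption, not one stated in the article:

```python
saturn_orbit_years = 29.46              # assumed orbital period in Earth years
season_years = saturn_orbit_years / 4   # four seasons per orbit, as on Earth

print(season_years)  # each Titan season spans roughly 7 Earth years
```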

"We want our model predictions to match our observations. This rainfall detection proves Cassini's climate follows the theoretical climate models we know of," Dhingra said. "Summer is happening. It was delayed, but it's happening. We will have to figure out what caused the delay, though."

Additional analyses suggest the methane rain fell across a relatively pebble-like surface, Dhingra said. A rougher surface generates an amorphous pattern as the liquid settles in crevasses and gullies, while liquid falling on a smooth surface would puddle in a relatively circular pattern.

Read more at Science Daily

More animal species under threat of extinction, new method shows

Verreaux's Sifaka from Madagascar, a threatened species on the Red List.
Approximately 600 species may currently be inaccurately assessed as non-threatened on the Red List of Threatened Species, and more than a hundred others that could not be assessed before also appear to be threatened. These are the findings of a new, more efficient, systematic and comprehensive approach to assessing the extinction risk of animals. The method, designed by Radboud University ecologist Luca Santini and colleagues, is described in Conservation Biology on January 17th.

The researchers' predictions of extinction risk under the new method are quite consistent with the current published Red List assessments, and even a bit more optimistic overall. However, they found that 20% of the 600 species that Red List experts had been unable to assess before are likely under threat of extinction, such as the brown-banded rail and Williamson's mouse-deer. In addition, some 600 species that were previously assessed as non-threatened are actually likely to be threatened, such as the red-breasted pygmy parrot and the Ethiopian striped mouse. "This indicates that urgent re-assessment is needed of the current statuses of animal species on the Red List," Santini says.

The Red List

Once every few years, specialized researchers voluntarily assess the conservation status of animal species in the world, which is then recorded in the International Union for Conservation of Nature (IUCN) Red List of Threatened Species. Species are classified into five extinction risk categories ranging from Least Concern to Critically Endangered, based on data such as species distribution, population size and recent trends.

"While this process is extremely important for conservation, experts often have only limited data with which to apply the criteria to the more than 90,000 species currently covered by the Red List," Santini says. "Often these data are of poor quality: they may be outdated, or inaccurate because certain species that live in very remote areas have not been properly studied. This might lead to species being misclassified or not assessed at all."

Greater efficiency needed

It's time for a more efficient, systematic and comprehensive approach, according to Santini and his colleagues. They designed a new method that provides Red List experts with additional independent information, which should help them to better assess species.

The method uses information from land cover maps, which show how the distribution of species in the world has changed over time. It couples this information with statistical models to estimate a number of additional parameters, such as species' abilities to move through fragmented landscapes, in order to classify species into a Red List extinction risk category.

Early warning system

The new approach is meant to complement the traditional methods of Red List assessments. "As the Red List grows, keeping it updated becomes a daunting task. Algorithms that use near-real time remote sensing products to scan across vast species lists, and flag those that may be nearing extinction, can improve dramatically the timeliness and effectiveness of the Red List," says Carlo Rondinini, Director of the Global Mammal Assessment Programme for the Red List.

Santini: "Our vision is that our new method will soon be automated, so that the data are refreshed every year with new land cover information. Thus, our method really can speed up the process and provide an early warning system by pointing specifically to species that should be re-assessed quickly."

Read more at Science Daily

Saturn hasn't always had rings

Artist’s concept of the Cassini orbiter crossing Saturn’s ring plane.
One of the last acts of NASA's Cassini spacecraft before its death plunge into Saturn's hydrogen and helium atmosphere was to coast between the planet and its rings and let them tug it around, essentially acting as a gravity probe.

Precise measurements of Cassini's final trajectory have now allowed scientists to make the first accurate estimate of the amount of material in the planet's rings, weighing them based on the strength of their gravitational pull.

That estimate -- about 40 percent of the mass of Saturn's moon Mimas, which itself has only about one two-thousandth the mass of Earth's moon -- tells them that the rings are relatively recent, having originated less than 100 million years ago and perhaps as recently as 10 million years ago.
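
As a quick sanity check, the quoted ratios pin down the absolute ring mass once one outside constant is supplied. The lunar mass below is an assumed standard value, not a figure from the article:

```python
# Back-of-envelope ring mass from the ratios quoted above.
moon_mass_kg = 7.35e22               # Earth's moon (assumed standard value)
mimas_mass_kg = moon_mass_kg / 2000  # Mimas: "2,000 times smaller" than the Moon
ring_mass_kg = 0.4 * mimas_mass_kg   # rings: "about 40 percent" of Mimas's mass

print(f"ring mass ~ {ring_mass_kg:.1e} kg")  # on the order of 1.5e19 kg
```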

Their young age puts to rest a long-running argument among planetary scientists. Some thought that the rings formed along with the planet 4.5 billion years ago from icy debris remaining in orbit after the formation of the solar system. Others thought the rings were very young and that Saturn had, at some point, captured an object from the Kuiper belt or a comet and gradually reduced it to orbiting rubble.

The new mass estimate is based on a measurement of how much the flight path of Cassini was deflected by the gravity of the rings when the spacecraft flew between the planet and the rings on its final set of orbits in September 2017. Initially, however, the deflection did not match predictions based on models of the planet and rings. Only when the team accounted for very deep flowing winds in Saturn's atmosphere -- something impossible to observe from space -- did the measurements make sense, allowing them to calculate the mass of the rings.

"The first time I looked at the data I didn't believe it, because I trusted our models and it took a while to sink in that there was some effect that changed the gravity field that we had not considered," said Burkhard Militzer, a professor of earth and planetary science at the University of California, Berkeley, who models planetary interiors. "That turned out to be massive flows in the atmosphere at least 9,000 kilometers deep around the equatorial region. We thought preliminarily that these clouds were like clouds on Earth, which are confined to a thin layer and contain almost no mass. But on Saturn they are really massive."

They also calculated that the surface clouds at Saturn's equator rotate 4 percent faster than the layer 9,000 kilometers (about 6,000 miles) deep. That deeper layer takes 9 minutes longer to rotate than do the cloud tops at the equator, which go around the planet once every 10 hours, 33 minutes.

"The discovery of deeply rotating layers is a surprising revelation about the internal structure of the planet," said Cassini project scientist Linda Spilker of NASA's Jet Propulsion Laboratory in Pasadena, California. "The question is what causes the more rapidly rotating part of the atmosphere to go so deep and what does that tell us about Saturn's interior."

Militzer also was able to calculate that the rocky core of the planet must be between 15 and 18 times the mass of Earth, which is similar to earlier estimates.

The team, led by Luciano Iess at the Sapienza University of Rome, Italy, reported their results today in the journal Science.

Did rings come from icy comet?

Earlier estimates of the mass of Saturn's rings -- between one-half and one-third the mass of Mimas -- came from studying the density waves that travel around the rocky, icy rings. These waves are caused by the planet's 62 satellites, including Mimas, which creates the so-called Cassini division between the two largest rings, A and B. Mimas is smooth and round, 396 kilometers (246 miles) in diameter, and has a big impact crater that makes it resemble the Death Star from the Star Wars movies.

"People didn't trust the wave measurements because there might be particles in the rings that are massive but are not participating in the waves," Militzer said. "We always suspected there was some hidden mass that we could not see in the waves."

Luckily, as Cassini approached the end of its life, NASA programmed it to perform 22 dives between the planet and the rings to probe Saturn's gravity field. Earth-based radio telescopes measured the spacecraft's velocity to within a fraction of a millimeter per second.

The new ring mass value is in the range of earlier estimates and allows the researchers to determine the rings' age.

These age calculations, led by Philip Nicholson of Cornell University and Iess, built on a connection that scientists had previously made between the mass of the rings and their age. Lower mass points to a younger age, because the rings are initially made of ice and are bright but over time become contaminated and darkened by interplanetary debris.

Read more at Science Daily

High-speed supernova reveals earliest moments of a dying star

Artist's impression of the cocoon.
An international team of scientists, including astronomers from the Universities of Leicester, Bath and Warwick, has found evidence for the existence of a 'hot cocoon' of material enveloping a relativistic jet escaping a dying star. The research was published online today (Wednesday 16 January) and appears in print in Nature tomorrow (Thursday 17 January).

A relativistic jet is a very powerful phenomenon in which plasma is shot out of a black hole at close to the speed of light; such jets can extend across millions of light years.

Observations of supernova SN2017iuk taken shortly after its onset showed it expanding rapidly, at one third of the speed of light. This is the fastest supernova expansion measured to date. Monitoring of the outflow over many weeks revealed a clear difference between the initial chemical composition and that at later times.

Taken together, these are indicators of the presence of the much theorised hot cocoon, filling a gap in our knowledge of how a jet of material escaping a star interacts with the stellar envelope around it and providing a potential link between two previously distinct classes of supernovae.

The supernova signals the final demise of a massive star, in which the stellar core collapses and the outer layers are violently blown off. SN2017iuk belongs to a class of extreme supernovae, sometimes called hypernovae or GRB-SNe, that accompany a yet more dramatic event known as a gamma-ray burst (GRB).

At stellar death, a highly relativistic, narrow beam of material can be ejected from the poles of the star. It glows brightly first in gamma radiation and then across the entire electromagnetic spectrum, and is known as a GRB.

Until now, astronomers have been unable to study the earliest moments in the development of a supernova of this kind (a GRB-SN), but SN2017iuk was fortuitously close-by -- at roughly 500 million light years from Earth -- and the GRB light was underluminous, allowing the SN itself to be detectable at early times.

Dr Rhaana Starling, Associate Professor in the University of Leicester's Department of Physics and Astronomy said: "This immediately looked like an event worth chasing, as it happened in a grand-design spiral galaxy at very close proximity, cosmologically speaking.

"When the first sets of data came in there was an unusual component to the light that looked very blue, prompting a monitoring campaign to see if we could determine its origin by following the evolution and taking detailed spectra.

"The gamma-ray burst itself looked quite weak, so we could see other processes that were going on around the newly-formed jet which are normally drowned out. The idea of a cocoon of thermalised gas created by the relativistic jet as it drills out of the star had been proposed and implied in other cases, but here was the evidence that we needed to pin down the existence of such a structure."

A coordinated approach using a suite of space- and ground-based observatories was required to monitor the supernova over 30 days and at many wavelengths. The event was first detected using the Neil Gehrels Swift Observatory. Swift is a NASA space mission in which the University of Leicester is one of three partners, and hosts its UK data centre.

Data obtained with the Gravitational-wave Optical Transient Observatory (GOTO) helped to track the supernova light, while spectroscopy was obtained through dedicated observing programmes including initiatives by the STARGATE Collaboration headed up by Professor Nial Tanvir at the University of Leicester, which uses 8-m telescopes at the European Southern Observatory.

Professor Tanvir, of the University of Leicester's Department of Physics and Astronomy, said: "The relativistic jet punches out through the star as if it were a bullet being fired out from the inside of an apple. What we've seen for the first time is all the apple debris that explodes out after the bullet."

Speeds of up to 115,000 kilometres per second were measured for the expanding supernova for approximately one hour after its onset. A different chemical composition was found for the early expanding supernova when compared with the more iron-rich later ejecta. The team concluded that just hours after the onset the ejecta is coming from the interior, from a hot cocoon created by the jet.

Existing supernova production models proved insufficient to account for the large amount of high velocity material measured. The team developed new models which incorporated the cocoon component and found these were an excellent match.

SN2017iuk also provides a long-sought link between the supernovae that accompany GRBs and those that do not: in lone supernovae, high-speed outflows with velocities reaching 50,000 kilometres per second have also been seen. These could originate in the same cocoon scenario, but with the escape of the relativistic GRB jet somehow thwarted.

Core-collapse supernovae without GRBs are usually found much later after their onset, giving scientists very little chance of detecting any signatures of a hot cocoon, whilst cocoon features in GRB-associated supernovae are usually hidden by the bright, relativistic jet.

Read more at Science Daily

Artificial intelligence applied to the genome identifies an unknown human ancestor

Genome sequencing
By combining deep learning algorithms and statistical methods, investigators from the Institute of Evolutionary Biology (IBE), the Centro Nacional de Análisis Genómico (CNAG-CRG) of the Centre for Genomic Regulation (CRG) and the Institute of Genomics at the University of Tartu have identified, in the genomes of Asian individuals, the footprint of a previously unknown hominid that interbred with their ancestors tens of thousands of years ago.

Computational analysis of modern human DNA suggests that the extinct species was a hybrid of Neanderthals and Denisovans and interbred with Out of Africa modern humans in Asia. This finding would explain why the hybrid found this summer in the caves of Denisova -- the offspring of a Neanderthal mother and a Denisovan father -- was not an isolated case, but rather part of a more general introgression process.

The study, published in Nature Communications, uses deep learning for the first time ever to account for human evolution, paving the way for the application of this technology in other questions in biology, genomics and evolution.

Humans had descendants with a species that is unknown to us

One of the ways of distinguishing between two species is that, while their members may crossbreed, they do not generally produce fertile descendants. However, this concept becomes much more complex when extinct species are involved. In fact, the story told by current human DNA blurs these lines, preserving fragments of hominids from other species, such as the Neanderthals and the Denisovans, who coexisted with modern humans more than 40,000 years ago in Eurasia.

Now, investigators of the Institute of Evolutionary Biology (IBE), the Centro Nacional de Análisis Genómico (CNAG-CRG) of the Centre for Genomic Regulation (CRG), and the University of Tartu have used deep learning algorithms to identify a new and hitherto-unknown ancestor of humans that would have interbred with modern humans tens of thousands of years ago. "About 80,000 years ago, the so-called Out of Africa occurred, when part of the human population, which already consisted of modern humans, abandoned the African continent and migrated to other continents, giving rise to all the current populations," explained Jaume Bertranpetit, principal investigator at the IBE and head of Department at the UPF. "We know that from that time onwards, modern humans crossbred with Neanderthals in all the continents, except Africa, and with the Denisovans in Oceania and probably in South-East Asia, although the evidence of cross-breeding with a third extinct species had not been confirmed with any certainty."

Deep learning: deciphering the keys to human evolution in ancient DNA

Hitherto, the existence of the third ancestor was only a theory that would explain the origin of some fragments of the current human genome (part of the team involved in this study had already posited the existence of the extinct hominid in a previous study). However, deep learning has made it possible to make the transition from DNA to the demographics of ancestral populations.

The problem the investigators had to contend with is that the demographic models they analysed are much more complex than anything considered to date, and no statistical tools were available to analyse them. Deep learning "is an algorithm that imitates the way in which the nervous system of mammals works, with different artificial neurons that specialise and learn to detect, in data, patterns that are important for performing a given task," stated Òscar Lao, principal investigator at the CNAG-CRG and an expert in this type of simulation. "We have used this property to get the algorithm to learn to predict human demographics using genomes obtained through hundreds of thousands of simulations. Whenever we run a simulation we are travelling along a possible path in the history of humankind. Of all simulations, deep learning allows us to observe what makes the ancestral puzzle fit together."
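
The approach Lao describes -- train a model on many simulated histories, then ask which history best explains observed genomes -- can be sketched in miniature. Everything below is a toy illustration under assumed details (Gaussian summary statistics, a logistic-regression classifier standing in for the deep network), not the study's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(scenario, n=500, n_stats=8):
    """Simulate genome summary statistics under one demographic scenario.
    (Toy model: each scenario simply shifts the mean of the statistics.)"""
    shift = 0.0 if scenario == 0 else 1.0
    return rng.normal(loc=shift, scale=1.0, size=(n, n_stats))

# Training set: many labelled "simulated histories."
X = np.vstack([simulate(0), simulate(1)])
y = np.array([0] * 500 + [1] * 500)

# Minimal classifier trained by gradient descent on the simulations.
w = np.zeros(X.shape[1])
b = 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(scenario 1)
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * np.mean(p - y)

# The trained model can now assign an observed genome's statistics
# to the demographic scenario that best explains them.
pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
accuracy = float(np.mean(pred == y))
```

The real study replaces the toy simulator with demographic simulations of human history and the linear classifier with a deep network, but the logic is the same: learn the mapping from genetic data to demography entirely from simulated examples.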

It is the first time that deep learning has been used successfully to explain human history, paving the way for this technology to be applied in other questions in biology, genomics and evolution.

Read more at Science Daily

Jan 15, 2019

Antarctica losing six times more ice mass annually now than 40 years ago

Researchers from UCI and NASA JPL recently conducted an assessment of 40 years' worth of ice mass balance in Antarctica, finding accelerating deterioration of its ice cover.
Antarctica experienced a sixfold increase in yearly ice mass loss between 1979 and 2017, according to a study published today in Proceedings of the National Academy of Sciences. Glaciologists from the University of California, Irvine, NASA's Jet Propulsion Laboratory and the Netherlands' Utrecht University additionally found that the accelerated melting caused global sea levels to rise more than half an inch during that time.

"That's just the tip of the iceberg, so to speak," said lead author Eric Rignot, Donald Bren Professor and chair of Earth system science at UCI. "As the Antarctic ice sheet continues to melt away, we expect multi-meter sea level rise from Antarctica in the coming centuries."

For this study, Rignot and his collaborators conducted what he called the longest-ever assessment of remaining Antarctic ice mass. Spanning four decades, the project was also geographically comprehensive; the research team examined 18 regions encompassing 176 basins, as well as surrounding islands.

Techniques used to estimate ice sheet balance included a comparison of snowfall accumulation in interior basins with ice discharge by glaciers at their grounding lines, where ice begins to float in the ocean and detach from the bed. Data was derived from fairly high-resolution aerial photographs taken from a distance of about 350 meters via NASA's Operation IceBridge; satellite radar interferometry from multiple space agencies; and the ongoing Landsat satellite imagery series, begun in the early 1970s.

The team was able to discern that between 1979 and 1990, Antarctica shed an average of 40 gigatons of ice mass annually. (A gigaton is 1 billion tons.) From 2009 to 2017, about 252 gigatons per year were lost.

The pace of melting rose dramatically over the four-decade period. From 1979 to 2001, ice loss accelerated by an average of 48 gigatons per year each decade. That figure jumped 280 percent, to 134 gigatons per year per decade, for 2001 to 2017.
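
The headline numbers can be checked with simple arithmetic. A minimal sketch, assuming the standard conversion of roughly 360 gigatons of ice per millimeter of global sea level (a value not given in the article):

```python
# Sanity-check the "sixfold" claim and the recent rate's sea-level contribution.
rate_1979_1990_gt = 40.0   # average gigatons of ice lost per year, 1979-1990
rate_2009_2017_gt = 252.0  # average gigatons of ice lost per year, 2009-2017

increase = rate_2009_2017_gt / rate_1979_1990_gt  # ~6.3, i.e. roughly sixfold
sea_level_mm_per_yr = rate_2009_2017_gt / 360.0   # ~0.7 mm of sea level per year
```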

Rignot said that one of the key findings of the project is the contribution East Antarctica has made to the total ice mass loss picture in recent decades.

"The Wilkes Land sector of East Antarctica has, overall, always been an important participant in the mass loss, even as far back as the 1980s, as our research has shown," he said. "This region is probably more sensitive to climate [change] than has traditionally been assumed, and that's important to know, because it holds even more ice than West Antarctica and the Antarctic Peninsula together."

He added that the sectors losing the most ice mass are adjacent to warm ocean water.

Read more at Science Daily

Scientists identify two new species of fungi in retreating Arctic glacier

Two new species of fungi isolated from sediments and soil in the Canadian Arctic: (A) micrographic image of Vishniacozyma ellesmerensis, (B) colonies of V. ellesmerensis, (C) micrographic image of Mrakia hoshinonis, (D) colonies of M. hoshinonis.
Two new species of fungi have made an appearance in a rapidly melting glacier on Ellesmere Island in the Canadian Arctic, just west of Greenland. A collaborative team of researchers from Japan's National Institute of Polar Research, The Graduate University for Advanced Studies in Tokyo, Japan, and Laval University in Québec, Canada made the discovery.

The scientists published their results on DATE in two separate papers, one for each new species, in the International Journal of Systematic and Evolutionary Microbiology.

"The knowledge of fungi inhabiting the Arctic is still fragmentary. We set out to survey the fungal diversity in the Canadian High Arctic," said Masaharu Tsuji, a project researcher at the National Institute of Polar Research in Japan and first author on both papers. "We found two new fungal species in the same investigation on Ellesmere Island."

One species is the 10th to join the genus Mrakia, with the proposed name M. hoshinonis, in honor of Tamotsu Hoshino, a senior researcher at the National Institute of Advanced Science and Technology in Japan. Hoshino has made significant contributions to the study of fungi in polar regions. The other species is the 12th to join the genus Vishniacozyma, with the proposed name V. ellesmerensis as a nod to the island where it was found. Both species are types of yeast that are well-adapted to the cold and can even grow below 0°C.

The samples of fungi were collected from the unofficially named Walker Glacier. The designation honors Paul T. Walker, who in 1959 installed the datum pole that measures the glacier's growth and shrinkage. At the time of sample collection in 2016, measurements showed that the glacier was receding at a rate two-and-a-half times faster than its retreat over the previous 50 years.

"Climate-related effects have been observed in this region over the last 20 years," Tsuji said. "Soon, some of the glaciers may completely melt and disappear."

Only about five percent of fungal species have been discovered, but their function across ecological climates is well understood -- from the tropics to the Arctic, fungi decompose dead organic material. Each species operates a little differently, but their general role is to reintroduce nutrients from dead plant material back into the ecosystem. If the glaciers melt, the fungi lose their habitat, which could have catastrophic knock-on effects throughout the ecosystem, according to Tsuji, although more research is needed to understand exactly how the changing climate is influencing fungi beyond destroying their habitat.

Next, Tsuji and his team plan to survey the fungi in Ward Hunt Lake, the northernmost lake in the world. It is on Ward Hunt Island, just off the northern coast of Ellesmere Island, and less than 500 miles from the North Pole.

Read more at Science Daily

11,500-year-old animal bones in Jordan suggest early dogs helped humans hunt

Selection of gazelle bones from Space 3 at Shubayqa 6 displaying evidence for having been in the digestive tract of a carnivore.
11,500 years ago in what is now northeast Jordan, people began to live alongside dogs and may also have used them for hunting, a new study from the University of Copenhagen shows. The archaeologists suggest that the introduction of dogs as hunting aids may explain the dramatic increase of hares and other small prey in the archaeological remains at the site.

Dogs were domesticated by humans as early as 14,000 years ago in the Near East, but whether this happened accidentally or on purpose is not yet clear. New research published in the Journal of Anthropological Archaeology by a team of archaeologists from the University of Copenhagen and University College London suggests that humans may have valued the tracking and hunting abilities of early dogs more than previously known.

A study of animal bones from the 11,500 year old settlement Shubayqa 6 in northeast Jordan not only suggests that dogs were present in this region at the start of the Neolithic period, but that humans and dogs likely hunted animals together:

"The study of the large assemblage of animal bones from Shubayqa 6 revealed a large proportion of bones with unmistakable signs of having passed through the digestive tract of another animal; these bones are so large that they cannot have been swallowed by humans, but must have been digested by dogs," explained zooarchaeologist and the study's lead author Lisa Yeomans.

Lisa Yeomans and her colleagues have been able to show that Shubayqa 6 was occupied year round, which suggests that the dogs were living together with the humans rather than visiting the site when there were no inhabitants:

"The dogs were not kept at the fringes of the settlement, but must have been closely integrated into all aspects of day-to-day life and allowed to freely roam around the settlement, feeding on discarded bones and defecating in and around the site."

Can new hunting techniques account for the increase in small prey?

When Yeomans and her co-authors sifted through the analysed data, they also noted a curious increase in the number of hares at the time that dogs appeared at Shubayqa 6. Hares were hunted for their meat, but Shubayqa 6's inhabitants also used the hare bones to make beads. The team think that it is likely that the appearance of dogs and the increase in hares are related.

"The use of dogs for hunting smaller, fast prey such as hares and foxes, perhaps driving them into enclosures, could provide an explanation that is in line with the evidence we have gathered. The long history of dog use, to hunt both small as well as larger prey, in the region is well known, and it would be strange not to consider hunting aided by dogs as a likely explanation for the sudden abundance of smaller prey in the archaeological record," said Lisa Yeomans.

Read more at Science Daily

3,000-year-old eastern North American quinoa discovered in Ontario

The color photo of seeds shows the domesticated crop (left) and the wild/weedy relative (right).
A mass of charred seeds found while clearing a home construction site in Brantford, Ontario, has been identified as ancient, domesticated goosefoot (C. berlandieri spp. jonesianum), a form of quinoa native to Eastern North America. The seeds date back to 900 B.C., and have never previously been found north of Kentucky this early in history, says Professor Gary Crawford of the Department of Anthropology at the University of Toronto Mississauga (UTM), who was brought in by Archaeological Services Inc. (ASI), the archaeological consulting firm that excavated the site.

Archaeological discoveries don't normally shock Crawford but this one comes close. "Finding domesticated seeds that are so old in Ontario is special," Crawford says. "The next time we find a crop in the province is about 500 A.D., and it's corn. All previous research on this species of quinoa, which is now extinct, has taken place in the central United States: Arkansas, Illinois and Kentucky."

The charred seeds, about 140,000 in total, were discovered in Brantford in 2010 during a required archaeological assessment conducted by ASI prior to site development. The Tutela Heights site, which has since become a housing development, yielded some stone tools, post holes, debris and the chenopod seeds. Jessica Lytle, a co-author of the resulting research paper, was one of the assessors who did the initial seed analysis and brought the seeds to Crawford for further analysis, having studied with him at UTM. Their findings are published in the October 2018 issue of American Antiquity. The analysis took time, especially given the number of seeds and the need to document whether the whole collection was from the same crop.

"This discovery raises more questions than it answers. We had to consider whether the seeds were only traded here or grown locally," says Ron Williamson, PhD, of ASI, another co-author. "We also had to consider whether this was the beginning of agriculture in the province. It appears not, because we don't see any evidence of local cultivation. If it were grown in the region, we would have expected to see seeds of the crop in other pits around the site, but they were confined to this specific pit. We also don't see any sign of agricultural weeds or stone tools that may have been used for cultivation."

Indigenous peoples at the time exchanged certain kinds of minerals and finished stone objects over long distances, but this is the first evidence of a crop circulating in this exchange system. What meaning this plant had for local Indigenous people nearly 3,000 years ago is still not clear.

Professor Crawford notes: "We always wondered if they were also exchanging perishable materials. We're taking the conservative view that these seeds were traded; it would make sense that it wasn't only stone and minerals being moved around. In Kentucky, Illinois and Arkansas, this was a very important foodstuff; its nutrient value was probably similar to that of modern quinoa, which comes from South America."

The researchers also explored how and why the seeds were charred. They speculate that it may have happened accidentally when the local inhabitants were attempting to parch them.

"You can lightly parch seeds so they don't sprout and store them," Crawford says. "It could have been a mistake to have burned them. There was a slight oxidization of the surrounding sediment, so the soil was heated; we think they were burned in place in the pit."

For Crawford, the next step in answering some of the questions will be to review seeds in his lab that were collected at other sites in Ontario to see if there are other charred seeds that may be variations of this subspecies and to examine other Ontario seed collections. Today, there is a weedy version that grows locally and he is curious whether this is a holdover from Indigenous agriculture.

Read more at Science Daily

Jan 14, 2019

DNA tool allows you to trace your ancient ancestry

Scientists at the University of Sheffield studying ancient DNA have created a tool allowing them to more accurately identify ancient Eurasian populations, which can be used to test an individual's similarity to ancient people who once roamed the earth.

Currently the study of ancient DNA requires a lot of information to classify a skeleton to a population or find its biogeographical origins.

Now scientists have defined a new concept called Ancient Ancestry Informative Markers (aAIMs) -- a group of mutations that are sufficiently informative to identify and classify ancient populations.

The research, led by Dr Eran Elhaik, from the University of Sheffield's Department of Animal and Plant Sciences, saw the identification of a small group of aAIMs that can be used to classify skeletons to ancient populations.

Dr Elhaik said: "We developed a new method that finds aAIMs efficiently and have proved that it is accurate."

AIMs (Ancestry Informative Markers) have a long history in science and have been employed for the past decade by health and forensic experts.

But Dr Elhaik said that when his team applied traditional AIMs-finding tools to ancient DNA data, they were disappointed with their low accuracy.

"Ancient populations are much more diverse than modern ones," he said. "Their diversity was reduced over the years following events such as the Neolithic revolution and the Black Death.

"Although we have many more people today they are all far more similar to each other than ancient people. In addition, the ancient data themselves are problematic due to the large amount of degraded DNA."

To overcome these challenges, Dr Elhaik developed a specialised tool that identifies aAIMs by combining traditional methodology with a novel one that takes admixture into account.

"Ancient genomes typically consist of hundreds of thousands and sometimes millions of markers. We demonstrated that only 13,000 markers are needed to make accurate population classifications for ancient genomes and while the field of ancient forensics does not exist yet, these aAIMs can help us get much closer to ancient people."
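The classification idea can be sketched with a toy example: given per-population allele frequencies at a few informative markers, assign a genotype to the most likely population. The populations, frequencies, and genotypes below are invented for illustration; the actual study uses roughly 13,000 markers and admixture-aware methods rather than this simple likelihood rule.

```python
import math

# Hypothetical allele frequencies at three "aAIM-like" markers
# (these numbers are made up for illustration).
POP_FREQS = {
    "Steppe":   [0.80, 0.10, 0.55],
    "Anatolia": [0.20, 0.70, 0.40],
}

def log_likelihood(genotype, freqs):
    """Log-likelihood of a diploid genotype (0/1/2 copies of the
    reference allele per marker) under Hardy-Weinberg proportions."""
    ll = 0.0
    for g, p in zip(genotype, freqs):
        probs = {0: (1 - p) ** 2, 1: 2 * p * (1 - p), 2: p ** 2}
        ll += math.log(probs[g])
    return ll

def classify(genotype):
    """Return the population under which the genotype is most likely."""
    return max(POP_FREQS, key=lambda pop: log_likelihood(genotype, POP_FREQS[pop]))

print(classify([2, 0, 1]))  # prints "Steppe"
```

The point of a small marker panel is exactly this: if the markers are chosen to be maximally informative, even a short genotype vector separates the candidate populations cleanly.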

He added: "Until now you couldn't test people for ancient DNA ancestry because commercial microarrays, such as the ones used for genetic genealogy, don't have a lot of markers relevant for paleogenomics -- people could not study their primeval origins.

"This finding of aAIMs is like finding the fingerprints of ancient people. It allows testing of a small number of markers -- that can be found in a commonly available array -- and you can ask what part of your genome is from Roman Britons, Vikings, Chumash Indians, ancient Israelites, etc.

"We can ask any question we want about these ancient people as long as someone sequenced these ancient markers. So this paper brings the field of paleogenomics to the public."

Read more at Science Daily

Upper-ocean warming is changing the global wave climate, making waves stronger

Increasing wave energy with climate change means more challenges for coastal risk and adaptation.
Sea level rise puts coastal areas at the forefront of the impacts of climate change, but new research shows they face other climate-related threats as well. In a study published January 14 in Nature Communications, researchers report that the energy of ocean waves has been growing globally, and they found a direct association between ocean warming and the increase in wave energy.

A wide range of long-term trends and projections carry the fingerprint of climate change, including rising sea levels, increasing global temperatures, and declining sea ice. Analyses of the global marine climate thus far have identified increases in wind speeds and wave heights in localized areas of the ocean in the high latitudes of both hemispheres. These increases have been larger for the most extreme values (e.g., winter waves) than for the mean conditions. However, a global signal of change and a correlation between the localized increases in wave heights and global warming had remained undetected.

The new study focused on the energy contained in ocean waves, which is transmitted from the wind and transformed into wave motion. This metric, called wave power, has been increasing in direct association with historical warming of the ocean surface. The upper ocean warming, measured as a rising trend in sea-surface temperatures, has influenced wind patterns globally, and this, in turn, is making ocean waves stronger.
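The article does not give the formula, but the standard oceanographic definition of deep-water wave power flux per metre of wave crest grows with the square of the significant wave height and linearly with the energy period. A minimal sketch, with illustrative input values rather than data from the study:

```python
import math

RHO = 1025.0  # seawater density, kg/m^3
G = 9.81      # gravitational acceleration, m/s^2

def wave_power(hs, te):
    """Deep-water wave power flux in kW per metre of wave crest,
    for significant wave height hs (m) and energy period te (s):
    P = rho * g^2 * hs^2 * te / (64 * pi)."""
    return RHO * G ** 2 * hs ** 2 * te / (64 * math.pi) / 1000.0

# A 2 m swell with a 10 s energy period carries roughly 20 kW
# per metre of crest.
print(round(wave_power(2.0, 10.0), 1))  # → 19.6
```

The quadratic dependence on wave height is why modest long-term increases in sea state translate into a measurable upward trend in transmitted energy.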

"For the first time, we have identified a global signal of the effect of global warming in wave climate. In fact, wave power has increased globally by 0.4 percent per year since 1948, and this increase is correlated with the increasing sea-surface temperatures, both globally and by ocean regions," said lead author Borja G. Reguero, a researcher in the Institute of Marine Sciences at the University of California, Santa Cruz.

Climate change is modifying the oceans in different ways, including changes in ocean-atmosphere circulation and water warming, according to coauthor Inigo J. Losada, director of research at the Environmental Hydraulics Institute at the University of Cantabria (IHCantabria), where the study was developed.

"This study shows that the global wave power can be a potentially valuable indicator of global warming, similarly to carbon dioxide concentration, the global sea level rise, or the global surface atmospheric temperature," Losada said.

Understanding how the energy of ocean waves responds to ocean warming has important implications for coastal communities, including anticipating impacts on infrastructure, coastal cities, and small island states. Ocean waves determine where people build infrastructure, such as ports and harbors, and where protection is required through coastal defenses such as breakwaters and levees. Indeed, wave action is one of the main drivers of coastal change and flooding, and as wave energy increases, its effects can become more profound. Sea level rise will further aggravate these effects by allowing more wave energy to reach farther inshore.

While the study reveals a long-term trend of increasing wave energy, the effects of this increase are particularly apparent during the most energetic storm seasons. The winter of 2013-14 in the North Atlantic, which battered the west coast of Europe, and the devastating 2017 hurricane season in the Caribbean offered harsh reminders of the destructive power and economic impacts of coastal storms.

Read more at Science Daily

The orderly chaos of black holes

The dedicated Gamma-ray Burst Polarimetry experiment POLAR atop China's Tiangong-2 space lab, launched on September 15, 2016. The glowing green light mimics the scintillation light produced when a gamma-ray photon hits one of the 1600 specially made scintillator bars. The artwork is based on a picture taken by a camera located several meters behind the instrument.
The formation of a black hole produces a bright burst of very energetic light in the form of gamma-rays; these events are called gamma-ray bursts. The physics behind this phenomenon involves some of the least understood fields in physics today: general relativity, extreme temperatures, and the acceleration of particles far beyond the energy of the most powerful particle accelerators on Earth. To analyze these gamma-ray bursts, researchers from the University of Geneva (UNIGE), in collaboration with the Paul Scherrer Institute (PSI) of Villigen, Switzerland, the Institute of High Energy Physics in Beijing and the National Center for Nuclear Research of Swierk in Poland, built the POLAR instrument, sent in 2016 to the Chinese Tiangong-2 space laboratory. Contrary to established theories, the first results from POLAR reveal that the high-energy photons coming from gamma-ray bursts are neither completely chaotic nor completely organized, but a mixture of the two: within short time slices, the photons oscillate in the same direction, but that direction changes over time. These unexpected results are reported in a recent issue of the journal Nature Astronomy.

When two neutron stars collide or a supermassive star collapses in on itself, a black hole is created. This birth is accompanied by a bright burst of gamma-rays -- very energetic light such as that emitted by radioactive sources -- called a gamma-ray burst (GRB).

Is black hole birth environment organized or chaotic?

How and where the gamma-rays are produced is still a mystery, two different schools of thought on their origin exist. The first predicts that photons from GRBs are polarized, meaning the majority of them oscillate in the same direction. If this were the case, the source of the photons would likely be a strong and well organized magnetic field formed during the violent aftermath of the black hole production. A second theory suggests that the photons are not polarized, implying a more chaotic emission environment. But how to check this?

"Together, our international teams have built the first powerful dedicated detector, called POLAR, capable of measuring the polarization of gamma-rays from GRBs. This instrument allows us to learn more about their source," said Xin Wu, professor in the Department of Nuclear and Particle Physics of the Faculty of Sciences of UNIGE. Its operating principle is rather simple: it is a 50 x 50 cm square made up of 1600 scintillator bars, in which incoming gamma-rays collide with the atoms of the bars. When a photon collides in a bar we can measure it; it can then produce a second photon, which can cause a second visible collision. "If the photons are polarized, we observe a directional dependency between the impact positions of the photons," continues Nicolas Produit, researcher at the Department of Astronomy of the Faculty of Sciences of UNIGE. "On the contrary, if there is no polarization, the second photon resulting from the first collision will leave in a fully random direction."
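In Compton polarimeters of this kind, polarization shows up as a sinusoidal modulation of the azimuthal scattering angle: polarized photons preferentially scatter perpendicular to their oscillation direction. A toy Monte Carlo illustrates the signature; the modulation amplitude and angles below are invented, not POLAR's actual instrument response:

```python
import math
import random

random.seed(1)

def sample_azimuth(mu, phi0):
    """Draw a Compton-scatter azimuth (radians) from the modulation
    curve N(phi) ∝ 1 + mu*cos(2*(phi - phi0)) by rejection sampling."""
    while True:
        phi = random.uniform(0, 2 * math.pi)
        if random.uniform(0, 1 + mu) <= 1 + mu * math.cos(2 * (phi - phi0)):
            return phi

def modulation(angles):
    """Estimate the modulation amplitude from the 2-phi Fourier component."""
    n = len(angles)
    c = sum(math.cos(2 * a) for a in angles) / n
    s = sum(math.sin(2 * a) for a in angles) / n
    return 2 * math.hypot(c, s)

polarized = [sample_azimuth(0.4, 0.0) for _ in range(20000)]
unpolarized = [sample_azimuth(0.0, 0.0) for _ in range(20000)]
print(round(modulation(polarized), 2), round(modulation(unpolarized), 2))
```

This also hints at POLAR's surprise result: if the polarization angle phi0 drifts during a burst, the time-integrated distribution flattens out and the burst looks unpolarized as a whole, even though each short slice is polarized.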

Order within chaos

In six months, POLAR detected 55 gamma-ray bursts, and scientists analyzed the polarization of gamma-rays from the 5 brightest ones. The results are surprising, to say the least. "When we analyse the polarization of a gamma-ray burst as a whole, we see at most a very weak polarization, which seems to clearly favour several theories," says Merlin Kole, a researcher at the Department of Nuclear and Particle Physics of the Faculty of Sciences of UNIGE and one of the main authors of the paper. Faced with this first result, the scientists looked in more detail at a very powerful 9-second-long gamma-ray burst and cut it into time slices, each 2 seconds long. "There, we discovered with surprise that, on the contrary, the photons are polarized within each slice, but the oscillation direction is different from one slice to the next!" Xin Wu enthuses. It is this changing direction that makes the full GRB appear chaotic and unpolarized. "The results show that as the explosion takes place, something happens which causes the photons to be emitted with a different polarization direction. What this could be, we really don't know," continues Merlin Kole.

Read more at Science Daily

Double star system flips planet-forming disk into pole position

Artist's impression of a view of the double star system and surrounding disc.
New research led by an astronomer at the University of Warwick has found the first confirmed example of a double star system that has flipped its surrounding disc into a position that leaps over the orbital plane of those stars. The international team of astronomers used the Atacama Large Millimeter/submillimeter Array (ALMA) to obtain high-resolution images of the asteroid-belt-sized disc.

The overall system presents the unusual sight of a thick hoop of gas and dust circling at right angles to the binary star orbit. Until now this setup only existed in theorists' minds, but the ALMA observation proves that polar discs of this type exist, and may even be relatively common.

The new research is published today (14 January) by Royal Society University Research Fellow Dr Grant M. Kennedy of the University of Warwick's Department of Physics and Centre for Exoplanets and Habitability in Nature Astronomy in a paper entitled "A circumbinary protoplanetary disc in a polar configuration."

Dr Grant M. Kennedy of the University of Warwick said:

"Discs rich in gas and dust are seen around nearly all young stars, and we know that at least a third of the ones orbiting single stars form planets. Some of these planets end up being misaligned with the spin of the star, so we've been wondering whether a similar thing might be possible for circumbinary planets. A quirk of the dynamics means that a so-called polar misalignment should be possible, but until now we had no evidence of misaligned discs in which these planets might form."

Dr Kennedy and his fellow researchers used ALMA to pin down the orientation of the ring of gas and dust in the system. The orbit of the binary was previously known, from observations that quantified how the stars move in relation to each other. By combining these two pieces of information they were able to establish that the dust ring was consistent with a perfectly polar orbit. This means that while the stars orbit each other in one plane, like two horses going around on a carousel, the disc surrounds them at right angles to their orbits, like a giant ferris wheel with the carousel at the centre.

Dr Grant M. Kennedy of the University of Warwick added:

"Perhaps the most exciting thing about this discovery is that the disc shows some of the same signatures that we attribute to dust growth in discs around single stars. We take this to mean planet formation can at least get started in these polar circumbinary discs. If the rest of the planet formation process can happen, there might be a whole population of misaligned circumbinary planets that we have yet to discover, and things like weird seasonal variations to consider."

If there were a planet or planetoid present at the inner edge of the dust ring, the ring itself would appear from the surface as a broad band rising almost perpendicularly from the horizon. The polar configuration means that the stars would appear to move in and out of the disc plane, giving objects two shadows at times. Seasons on planets in such systems would also be different. On Earth they vary throughout the year as we orbit the Sun. A polar circumbinary planet would have seasons that also vary as different latitudes receive more or less illumination throughout the binary orbit.

Co-author Dr Daniel Price of Monash University's Centre for Astrophysics (MoCA) and School of Physics and Astronomy added:

"We used to think other solar systems would form just like ours, with the planets all orbiting in the same direction around a single sun. But with the new images we see a swirling disc of gas and dust orbiting around two stars. It was quite surprising to also find that that disc orbits at right angles to the orbit of the two stars.

"Incredibly, two more stars were seen orbiting that disc. So if planets were born here there would be four suns in the sky!

"ALMA is just a fantastic telescope, it is teaching us so much about how planets in other solar systems are born."

Read more at Science Daily

Jan 13, 2019

Technique identifies electricity-producing bacteria

A microfluidic technique quickly sorts bacteria based on their capability to generate electricity.
Living in extreme conditions requires creative adaptations. For certain species of bacteria that exist in oxygen-deprived environments, this means finding a way to breathe that doesn't involve oxygen. These hardy microbes, which can be found deep within mines, at the bottom of lakes, and even in the human gut, have evolved a unique form of breathing that involves excreting and pumping out electrons. In other words, these microbes can actually produce electricity.

Scientists and engineers are exploring ways to harness these microbial power plants to run fuel cells and purify sewage water, among other uses. But pinning down a microbe's electrical properties has been a challenge: The cells are much smaller than mammalian cells and extremely difficult to grow in laboratory conditions.

Now MIT engineers have developed a microfluidic technique that can quickly process small samples of bacteria and gauge a specific property that is highly correlated with bacteria's ability to produce electricity. They say that this property, known as polarizability, can be used to assess a bacterium's electrochemical activity in a safer, more efficient manner than current techniques allow.

"The vision is to pick out those strongest candidates to do the desirable tasks that humans want the cells to do," says Qianru Wang, a postdoc in MIT's Department of Mechanical Engineering.

"There is recent work suggesting there might be a much broader range of bacteria that have [electricity-producing] properties," adds Cullen Buie, associate professor of mechanical engineering at MIT. "Thus, a tool that allows you to probe those organisms could be much more important than we thought. It's not just a small handful of microbes that can do this."

Buie and Wang have published their results today in Science Advances.

Just between frogs

Bacteria that produce electricity do so by generating electrons within their cells, then transferring those electrons across their cell membranes via tiny channels formed by surface proteins, in a process known as extracellular electron transfer, or EET.

Existing techniques for probing bacteria's electrochemical activity involve growing large batches of cells and measuring the activity of EET proteins -- a meticulous, time-consuming process. Other techniques require rupturing a cell in order to purify and probe the proteins. Buie looked for a faster, less destructive method to assess bacteria's electrical function.

For the past 10 years, his group has been building microfluidic chips etched with small channels, through which they flow microliter samples of bacteria. Each channel is pinched in the middle to form an hourglass configuration. When a voltage is applied across a channel, the pinched section -- about 100 times smaller than the rest of the channel -- squeezes the electric field, making it 100 times stronger than the surrounding field. The gradient of the electric field gives rise to a phenomenon known as dielectrophoresis, a force that pushes against a cell's field-induced motion. As a result, dielectrophoresis can repel a particle or stop it in its tracks at different applied voltages, depending on that particle's surface properties.
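The trapping behaviour can be illustrated with the standard point-dipole expression for the time-averaged dielectrophoretic force on a sphere, whose sign and strength are set by the Clausius-Mossotti factor, a measure of the particle's polarizability relative to the medium. All numerical values below are illustrative, not taken from the study:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def clausius_mossotti(eps_particle, eps_medium):
    """Clausius-Mossotti factor in the static limit. Its sign decides
    whether DEP pulls the cell toward (positive) or pushes it away
    from (negative) the strong-field region at the channel pinch."""
    return (eps_particle - eps_medium) / (eps_particle + 2 * eps_medium)

def dep_force(radius, eps_medium_rel, cm, grad_e2):
    """Time-averaged point-dipole DEP force (N) on a sphere of the
    given radius (m), for a field-squared gradient grad_e2 (V^2/m^3):
    F = 2*pi*eps_m*r^3*CM*grad(|E|^2)."""
    return 2 * math.pi * eps_medium_rel * EPS0 * radius ** 3 * cm * grad_e2

# Illustrative numbers: a 1-um-radius cell in water (relative
# permittivity 80); the field-gradient value is invented.
cm = clausius_mossotti(2.5, 80.0)
force = dep_force(1e-6, 80.0, cm, 1e15)
print(cm < 0, abs(force))  # negative DEP: pushed away from the pinch
```

Because the force scales with the Clausius-Mossotti factor, cells that differ only slightly in polarizability get trapped at different applied voltages, which is exactly the handle the MIT team exploits.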

Researchers including Buie have used dielectrophoresis to quickly sort bacteria according to general properties, such as size and species. This time around, Buie wondered whether the technique could suss out bacteria's electrochemical activity -- a far more subtle property.

"Basically, people were using dielectrophoresis to separate bacteria that were as different as, say, a frog from a bird, whereas we're trying to distinguish between frog siblings -- tinier differences," Wang says.

An electric correlation

In their new study, the researchers used their microfluidic setup to compare various strains of bacteria, each with a different, known electrochemical activity. The strains included a "wild-type" or natural strain of bacteria that actively produces electricity in microbial fuel cells, and several strains that the researchers had genetically engineered. In general, the team aimed to see whether there was a correlation between a bacteria's electrical ability and how it behaves in a microfluidic device under a dielectrophoretic force.

The team flowed very small, microliter samples of each bacterial strain through the hourglass-shaped microfluidic channel and slowly amped up the voltage across the channel, one volt per second, from 0 to 80 volts. Through an imaging technique known as particle image velocimetry, they observed that the resulting electric field propelled bacterial cells through the channel until they approached the pinched section, where the much stronger field acted to push back on the bacteria via dielectrophoresis and trap them in place.

Some bacteria were trapped at lower applied voltages, and others at higher voltages. Wang took note of the "trapping voltage" for each bacterial cell, measured their cell sizes, and then used a computer simulation to calculate a cell's polarizability -- how easy it is for a cell to form electric dipoles in response to an external electric field.

From her calculations, Wang discovered that bacteria that were more electrochemically active tended to have a higher polarizability. She observed this correlation across all species of bacteria that the group tested.

"We have the necessary evidence to see that there's a strong correlation between polarizability and electrochemical activity," Wang says. "In fact, polarizability might be something we could use as a proxy to select microorganisms with high electrochemical activity."

Wang says that, at least for the strains they measured, researchers can gauge their electricity production by measuring their polarizability -- something that the group can easily, efficiently, and nondestructively track using their microfluidic technique.

Collaborators on the team are currently using the method to test new strains of bacteria that have recently been identified as potential electricity producers.

"If the same trend of correlation stands for those newer strains, then this technique can have a broader application, in clean energy generation, bioremediation, and biofuels production," Wang says.

Read more at Science Daily

3D printing 100 times faster with light

Rather than building up plastic filaments layer by layer, a new approach to 3D printing lifts complex shapes from a vat of liquid up to 100 times faster than conventional 3D printing processes, University of Michigan researchers have shown.

3D printing could change the game for relatively small manufacturing jobs, producing fewer than 10,000 identical items, because it would mean that the objects could be made without the need for a mold costing upwards of $10,000. But the most familiar form of 3D printing, which is sort of like building 3D objects with a series of 1D lines, hasn't been able to fill that gap on typical production timescales of a week or two.

"Using conventional approaches, that's not really attainable unless you have hundreds of machines," said Timothy Scott, U-M associate professor of chemical engineering who co-led the development of the new 3D printing approach with Mark Burns, the T.C. Chang Professor of Engineering at U-M.

Their method solidifies the liquid resin using two lights to control where the resin hardens -- and where it stays fluid. This enables the team to solidify the resin in more sophisticated patterns. They can make a 3D bas-relief in a single shot rather than in a series of 1D lines or 2D cross-sections. Their printing demonstrations include a lattice, a toy boat and a block M.

"It's one of the first true 3D printers ever made," said Burns, professor of chemical engineering and biomedical engineering.

But the true 3D approach is no mere stunt -- it was necessary to overcome the limitations of earlier vat-printing efforts. Namely, the resin tends to solidify on the window that the light shines through, stopping the print job just as it gets started.

By creating a relatively large region where no solidification occurs, thicker resins -- potentially with strengthening powder additives -- can be used to produce more durable objects. The method also bests the structural integrity of filament 3D printing, as those objects have weak points at the interfaces between layers.

"You can get much tougher, much more wear-resistant materials," Scott said.

An earlier solution to the solidification-on-window problem was a window that lets oxygen through. The oxygen penetrates into the resin and halts the solidification near the window, leaving a film of fluid that will allow the newly printed surface to be pulled away.

But because this gap is only about as thick as a piece of transparent tape, the resin must be very runny to flow fast enough into the tiny gap between the newly solidified object and the window as the part is pulled up. This has limited vat printing to small, customized products that will be treated relatively gently, such as dental devices and shoe insoles.

By replacing the oxygen with a second light to halt solidification, the Michigan team can produce a much larger gap between the object and the window -- millimeters thick -- allowing resin to flow in thousands of times faster.

The key to success is the chemistry of the resin. In conventional systems, there is only one reaction. A photoactivator hardens the resin wherever light shines. In the Michigan system, there is also a photoinhibitor, which responds to a different wavelength of light.

Rather than merely controlling solidification in a 2D plane, as current vat-printing techniques do, the Michigan team can pattern the two kinds of light to harden the resin at essentially any 3D place near the illumination window.
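A toy one-dimensional model illustrates the two-light idea: resin hardens only where the activating light outcompetes the inhibiting light, and because the inhibiting wavelength is absorbed more strongly, it dominates only near the window, leaving a fluid dead zone there. All decay rates and thresholds below are invented, not the actual resin chemistry:

```python
# Toy 1-D model of the two-light scheme: one wavelength drives a
# photoactivator that hardens the resin, a second drives a
# photoinhibitor that suppresses hardening. All numbers are invented.

def hardens(depth_mm, activation, inhibition, threshold=0.9):
    """Resin solidifies where net initiation (activation minus
    inhibition) exceeds a threshold at a given depth from the window."""
    return activation(depth_mm) - inhibition(depth_mm) > threshold

# Both beams enter through the window (depth 0) and decay with depth;
# the inhibiting light is absorbed more strongly, so it dominates
# only close to the window.
activation = lambda z: 3.0 * 0.8 ** z
inhibition = lambda z: 6.0 * 0.3 ** z

fluid = [z for z in range(6) if not hardens(z, activation, inhibition)]
solid = [z for z in range(6) if hardens(z, activation, inhibition)]
print(fluid, solid)  # fluid dead zone near the window, solid beyond it
```

Patterning both beams independently is what lets the Michigan approach carve out a millimetres-thick fluid gap, and, in principle, harden resin at arbitrary 3D positions near the window rather than one thin plane at a time.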

U-M has filed three patent applications to protect the multiple inventive aspects of the approach, and Scott is preparing to launch a startup company.

Read more at Science Daily