Mar 9, 2019

Music captivates listeners and synchronizes their brainwaves

Music has the ability to captivate us; when listeners engage with music, they follow its sounds closely, connecting to what they hear in an affective and invested way. But what is it about music that keeps the audience engaged? A study by researchers from The City College of New York and the University of Arkansas charts new ground in understanding the neural responses to music.

Despite its importance, engagement with music has been difficult to study given the limits of self-report. This led Jens Madsen and Lucas Parra, from CCNY's Grove School of Engineering, to measure the synchronization of brainwaves across an audience. When a listener is engaged with music, their neural responses are in sync with those of other listeners, so the inter-subject correlation of brainwaves serves as a measure of engagement.
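
The study's analysis pipeline is not reproduced here, but the core idea of inter-subject correlation is simple to sketch: treat each listener's neural response as a time series and average the pairwise correlations across listeners. The snippet below is a minimal illustration under simplifying assumptions (one response channel per subject and plain Pearson correlation), not the authors' EEG pipeline, which works with multi-channel recordings and correlated components.

```python
import numpy as np

def inter_subject_correlation(responses: np.ndarray) -> float:
    """Average pairwise Pearson correlation across subjects.

    responses: array of shape (n_subjects, n_timepoints), one neural
    response time series per listener (a simplification of multi-channel EEG).
    """
    n_subjects = responses.shape[0]
    corrs = []
    for i in range(n_subjects):
        for j in range(i + 1, n_subjects):
            corrs.append(np.corrcoef(responses[i], responses[j])[0, 1])
    return float(np.mean(corrs))

# Toy example: a shared stimulus-driven signal plus subject-specific noise.
rng = np.random.default_rng(0)
stimulus = rng.standard_normal(1000)
engaged = stimulus + 0.5 * rng.standard_normal((10, 1000))    # responses track the music
disengaged = 2.0 * rng.standard_normal((10, 1000))            # responses are idiosyncratic
print(inter_subject_correlation(engaged))      # high ISC
print(inter_subject_correlation(disengaged))   # ISC near zero
```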

According to their findings, published in the latest issue of "Scientific Reports," a listener's engagement decreases with repetition of music, but only for familiar music pieces. However, unfamiliar musical styles can sustain an audience's interest, in particular for individuals with some musical training.

"Across repeated exposures to instrumental music, inter-subject correlation decreased for music written in a familiar style," Parra and his collaborators write in "Scientific Reports."

In addition, participants with formal musical training showed more inter-subject correlation, and sustained it across exposures to music in an unfamiliar style. This distinguishes music from other domains, where interest drops with repetition.

"What is so cool about this, is that by measuring people's brainwaves we can study how people feel about music and what makes it so special." says Madsen.

Elizabeth Hellmuth Margulis and Rhimmon Simchy-Gross, both from the University of Arkansas, were among the other researchers. The study involved 60 graduate and undergraduate students from the City College of New York and the University of Arkansas.

From Science Daily

Horseshoe crabs are really relatives of spiders, scorpions

University of Wisconsin-Madison postdoctoral researcher Jesús Ballesteros holds a small horseshoe crab. A study he led with Integrative Biology Professor Prashant Sharma used robust genetic analysis to demonstrate that horseshoe crabs are arachnids like spiders, scorpions and ticks.
Blue-blooded and armored with 10 spindly legs, horseshoe crabs have perhaps always seemed a bit out of place.

First thought to be closely related to crabs, lobsters and other crustaceans, in 1881 evolutionary biologist E. Ray Lankester placed them solidly in a group more similar to spiders and scorpions. Horseshoe crabs have since been thought to be ancestors of the arachnids, but molecular sequence data have always been sparse enough to cast doubt.

University of Wisconsin-Madison evolutionary biologists Jesús Ballesteros and Prashant Sharma hope, then, that their recent study published in the journal Systematic Biology helps firmly plant ancient horseshoe crabs within the arachnid family tree.

By analyzing troves of genetic data and considering a vast number of possible ways to examine it, the scientists now have a high degree of confidence that horseshoe crabs do indeed belong within the arachnids.

"By showing that horseshoe crabs are part of the arachnid radiation, instead of a lineage closely related to but independent of arachnids, all previous hypotheses on the evolution of arachnids need to be revised," says Ballesteros, a postdoctoral researcher in Sharma's lab. "It's a major shift in our understanding of arthropod evolution."

Arthropods are often considered the most successful animals on the planet since they occupy land, water and sky and include more than a million species. This grouping includes insects, crustaceans and arachnids.

Horseshoe crabs have been challenging to classify within the arthropods because analysis of the animals' genome has repeatedly shown them to be related to arachnids like spiders, scorpions, mites, ticks and lesser-known creatures such as vinegaroons. Yet, "scientists assumed it was an error, that there was a problem with the data," says Ballesteros.

Moreover, horseshoe crabs possess a mix of physical characteristics observed among a variety of arthropods. They are hard-shelled like crabs but are the only marine animals known to breathe with book gills, which resemble the book lungs spiders and scorpions use to survive on land.

Only four species of horseshoe crabs are alive today, but the group first appeared in the fossil record about 450 million years ago, together with mysterious, extinct lineages like sea scorpions. These living fossils have survived major mass extinction events and today their blood is used by the biomedical industry to test for bacterial contamination.

Age is just one of the problems inherent in tracing their evolution, say Ballesteros and Sharma, since searching back through time to find a common ancestor is not easy to accomplish. And evidence from the fossil record and genetics indicates evolution happened quickly among these groups of animals, convoluting their relationships to one another.

"One of the most challenging aspects of building the tree of life is differentiating old radiations, these ancient bursts of speciation," says Sharma, a professor of integrative biology. "It is difficult to resolve without large amounts of genetic data."

Even then, genetic comparisons become tricky when looking at the histories of genes that can either unite or separate species. Some genetic changes can be misleading, suggesting relationships where none exist or dismissing connections that do. This is owed to phenomena such as incomplete lineage sorting or lateral gene transfer, by which assortments of genes aren't cleanly made across the evolution of species.

Ballesteros tested the complicated relationships between the trickiest genes by comparing the complete genomes of three out of the four living horseshoe crab species against the genome sequences of 50 other arthropod species, including water fleas, centipedes and harvestmen.

Using a complex set of matrices, taking care not to introduce biases in his analysis, he painstakingly teased the data apart. Still, no matter which way Ballesteros conducted his analysis, he found horseshoe crabs nested within the arachnid family tree.

He says his approach serves as a cautionary tale to other evolutionary biologists who may be inclined to cherry-pick the data that seem most reliable, or to toss out data that don't seem to fit. Researchers could, for example, "force" their data to place horseshoe crabs among crustaceans, says Sharma, but it wouldn't be accurate. The research team tried this and found hundreds of genes supporting incorrect trees.

Ballesteros encourages others to subject their evolutionary data to this kind of rigorous methodology, because "evolution is complicated."

Why horseshoe crabs are water dwellers while other arachnids colonized land remains an open question. These animals belong to a group called Chelicerata, which also includes sea spiders. Sea spiders are marine arthropods like horseshoe crabs, but they are not arachnids.

"What the study concludes is that the conquest of the land by arachnids is more complex than a single tradition event," says Ballesteros.

It's possible the common ancestor of arachnids evolved in water and only groups like spiders and scorpions made it to land. Or, a common ancestor may have evolved on land and then horseshoe crabs recolonized the sea.

"The big question we are after is the history of terrestrialization," says Sharma.

For Ballesteros, who is now studying the evolution of blindness in spiders living deep within caves in Israel, his motivations get to the heart of human nature itself.

Read more at Science Daily

Mar 8, 2019

Moderate alcohol consumption linked with high blood pressure

A study of more than 17,000 U.S. adults shows that moderate alcohol consumption -- seven to 13 drinks per week -- substantially raises one's risk of high blood pressure, or hypertension, according to research being presented at the American College of Cardiology's 68th Annual Scientific Session.

The findings contrast with some previous studies that have associated moderate drinking with a lower risk of some forms of heart disease. Most previous studies, however, have not assessed high blood pressure among moderate drinkers. Since hypertension is a leading risk factor for heart attack and stroke, the new study calls into question the notion that moderate alcohol consumption benefits heart health.

"I think this will be a turning point for clinical practice, as well as for future research, education and public health policy regarding alcohol consumption," said Amer Aladin, MD, a cardiology fellow at Wake Forest Baptist Health and the study's lead author. "It's the first study showing that both heavy and moderate alcohol consumption can increase hypertension."

Alcohol's impact on blood pressure could stem from a variety of factors, according to researchers. Because alcohol increases appetite and is, itself, very energy-dense, drinking often leads to greater caloric intake overall. Alcohol's activities in the brain and liver could also contribute to spikes in blood pressure.

Data for the research came from the National Health and Nutrition Examination Survey (NHANES), a large, decades-long study led by the Centers for Disease Control and Prevention. Specifically, the researchers analyzed data from 17,059 U.S. adults who enrolled in the NHANES study between 1988 and 1994, the NHANES phase with data that is considered most complete and representative of the U.S. population.

Participants reported their drinking behavior on several questionnaires administered by mail and in person. Their blood pressure was recorded by trained personnel during visits in participants' homes and at a mobile examination center.

The researchers split participants into three groups: those who never drank alcohol, those who had seven to 13 drinks per week (moderate drinkers) and those who had 14 or more drinks per week (heavy drinkers). They assessed hypertension according to the 2017 ACC/AHA high blood pressure guideline, which defines stage 1 hypertension as a systolic blood pressure of 130-139 mm Hg or a diastolic pressure of 80-89 mm Hg, and stage 2 hypertension as a systolic pressure of 140 mm Hg or higher or a diastolic pressure of 90 mm Hg or higher.
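
As a concrete illustration of those cut-offs, the sketch below classifies a single blood pressure reading using the stage definitions quoted above. It is a toy check of one reading against the thresholds, not the study's methodology.

```python
def classify_bp(systolic: float, diastolic: float) -> str:
    """Classify a single blood pressure reading (mm Hg) using the
    2017 ACC/AHA stage cut-offs quoted in the article."""
    if systolic >= 140 or diastolic >= 90:
        return "stage 2 hypertension"
    if 130 <= systolic <= 139 or 80 <= diastolic <= 89:
        return "stage 1 hypertension"
    return "below hypertension thresholds"

# Average readings reported in the study:
print(classify_bp(109, 67))   # never-drinkers    -> below thresholds
print(classify_bp(128, 79))   # moderate drinkers -> below thresholds
print(classify_bp(153, 82))   # heavy drinkers    -> stage 2 hypertension
```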

Compared with those who never drank, moderate drinkers were 53 percent more likely to have stage 1 hypertension and twice as likely to have stage 2 hypertension. The pattern among heavy drinkers was even more pronounced; relative to those who never drank, heavy drinkers were 69 percent more likely to have stage 1 hypertension and 2.4 times as likely to have stage 2 hypertension. Overall, the average blood pressure was about 109/67 mm Hg among never-drinkers, 128/79 mm Hg among moderate drinkers and 153/82 mm Hg among heavy drinkers.

In their analysis, researchers adjusted for age, sex, race, income and cardiovascular risk to separate the effects of alcohol consumption from those of other factors with known links to hypertension.

Aladin said the study's large sample size likely helps explain why the findings appear to contrast with previous studies in this area. Studies involving fewer participants or only one medical center would not have the same statistical power as one using a large, national data set such as NHANES.

"This study is not only large but diverse in terms of race and gender," Aladin said. "The results are very informative for future research and practice. If you are drinking a moderate or large amount of alcohol, ask your provider to check your blood pressure at each visit and help you cut down your drinking and eventually quit."

Read more at Science Daily

Chimpanzees lose their behavioral and cultural diversity

Male chimpanzees of the Rekambo community groom one another at Loango National Park, Gabon.
Chimpanzees exhibit exceptionally high levels of behavioral diversity compared to all other non-human species. This diversity has been documented in a variety of contexts, including the extraction of food resources, communication and thermoregulation. Many of these behaviors are assumed to be socially learned and group-specific, supporting the existence of chimpanzee cultures. Like all other great apes, chimpanzees have come under enormous pressure from human activities, which are changing their natural environment. Their prime habitats, tropical rainforests and savanna woodlands, are increasingly converted to agricultural farmland, plantations and settlements, or otherwise degraded by the extraction of natural resources and infrastructure development.

Much of the empirical work and resulting debate on the loss of wildlife biodiversity has been conducted in the context of species decline or loss of genetic diversity and ecosystem functions. However, behavioral diversity is also a facet of biodiversity. Due to limited empirical data, until now it had been unclear whether behavioral diversity would similarly be negatively affected by human impact.

Data from 15 countries

An international research team, led by Hjalmar Kühl and Ammie Kalan of the Department of Primatology at the Max Planck Institute for Evolutionary Anthropology and the German Centre for Integrative Biodiversity Research (iDiv), compiled an unprecedented dataset on 31 chimpanzee behaviors across 144 social groups or communities, located throughout the entire geographic range of wild chimpanzees. Whereas part of this information was already available in the scientific literature, the international research team also conducted extensive field work at 46 locations, as part of the Pan African Programme, across 15 chimpanzee range countries over the last nine years. The particular set of behaviors considered in this study included the extraction and consumption of termites, ants, algae, nuts and honey; the use of tools for hunting or digging for tubers, and the use of stones, pools and caves among several others.

The occurrence of behaviors at a given site was investigated with respect to an aggregate measure of human impact. This measure integrates multiple levels of human impact, including human population density, roads, rivers and forest cover, all indicators for the level of disturbance and the degree of land cover change found in chimpanzee habitats. "The analysis revealed a strong and robust pattern: chimpanzees had reduced behavioral diversity at sites where human impact was high," explains Kalan, a researcher at the Max Planck Institute for Evolutionary Anthropology. "This pattern was consistent, independent of the grouping or categorization of behaviors. On average, chimpanzee behavioral diversity was reduced by 88 percent when human impact was highest compared to locations with the least human impact."

Potential mechanisms for loss of behaviors

As is known for humans, population size plays a major role in maintaining cultural traits and a similar mechanism may function in chimpanzees. Chimpanzees may also avoid conspicuous behaviors that inform hunters about their presence, such as nut cracking. Habitat degradation and resource depletion may also reduce opportunities for social learning and thus prevent the transfer of local traditions from one generation to the next. Lastly, climate change may also be important, as it may influence the production of important food resources and make their availability unpredictable. Very likely a combination of these potential mechanisms has caused the observed reduction in chimpanzee behavioral diversity.

Read more at Science Daily

'Goldilocks' stars may be 'just right' for finding habitable worlds

Artist's concept of a planet orbiting in the habitable zone of a K star.
Scientists looking for signs of life beyond our solar system face major challenges, one of which is that there are hundreds of billions of stars in our galaxy alone to consider. To narrow the search, they must figure out: What kinds of stars are most likely to host habitable planets?

A new study finds a particular class of stars called K stars, which are dimmer than the Sun but brighter than the faintest stars, may be particularly promising targets for searching for signs of life.

Why? First, K stars live a very long time -- 17 billion to 70 billion years, compared to 10 billion years for the Sun -- giving plenty of time for life to evolve. Also, K stars have less extreme activity in their youth than the universe's dimmest stars, called M stars or "red dwarfs."

M stars do offer some advantages in the search for habitable planets. They are the most common star type in the galaxy, comprising about 75 percent of all the stars in the universe. They are also frugal with their fuel and could shine on for over a trillion years. One example of an M star, TRAPPIST-1, is known to host seven Earth-size rocky planets.

But the turbulent youth of M stars presents problems for potential life. Stellar flares -- explosive releases of magnetic energy -- are much more frequent and energetic from young M stars than young Sun-like stars. M stars are also much brighter when they are young, for up to a billion years after they form, with energy that could boil off oceans on any planets that might someday be in the habitable zone.

"I like to think that K stars are in a 'sweet spot' between Sun-analog stars and M stars," said Giada Arney of NASA's Goddard Space Flight Center in Greenbelt, Maryland.

Arney wanted to find out what biosignatures, or signs of life, might look like on a hypothetical planet orbiting a K star. Her analysis is published in the Astrophysical Journal Letters.

Scientists consider the simultaneous presence of oxygen and methane in a planet's atmosphere to be a strong biosignature because these gases readily react with and destroy each other. So if both are seen together in an atmosphere, something must be producing them both quickly, quite possibly life, according to Arney.

However, because planets around other stars (exoplanets) are so remote, there need to be significant amounts of oxygen and methane in an exoplanet's atmosphere for them to be seen by observatories at Earth. Arney's analysis found that the oxygen-methane biosignature is likely to be stronger around a K star than around a Sun-like star.

Arney used a computer model that simulates the chemistry and temperature of a planetary atmosphere, and how that atmosphere responds to different host stars. These synthetic atmospheres were then run through a model that simulates the planet's spectrum to show what it might look like to future telescopes.

"When you put the planet around a K star, the oxygen does not destroy the methane as rapidly, so more of it can build up in the atmosphere," said Arney. "This is because the K star's ultraviolet light does not generate highly reactive oxygen gases that destroy methane as readily as a Sun-like star."

This stronger oxygen-methane signal has also been predicted for planets around M stars, but their high activity levels might make M stars unable to host habitable worlds. K stars can offer the advantage of a higher probability of simultaneous oxygen-methane detection compared to Sun-like stars without the disadvantages that come along with an M star host.

Additionally, exoplanets around K stars will be easier to see than those around Sun-like stars simply because K stars are dimmer. "The Sun is 10 billion times brighter than an Earthlike planet around it, so that's a lot of light you have to suppress if you want to see an orbiting planet. A K star might be 'only' a billion times brighter than an Earth around it," said Arney.

Arney's research also includes discussion of which of the nearby K stars may be the best targets for future observations. Since we don't have the ability to travel to planets around other stars due to their enormous distances from us, we are limited to analyzing the light from these planets to search for a signal that life might be present. By separating this light into its component colors, or spectrum, scientists can identify the constituents of a planet's atmosphere, since different compounds emit and absorb distinct colors of light.

Read more at Science Daily

Listening to quantum radio

This quantum chip (1 x 1 cm) allows the researchers to listen to the smallest radio signal allowed by quantum mechanics.
Researchers at Delft University of Technology have created a quantum circuit that enables them to listen to the weakest radio signal allowed by quantum mechanics. This new quantum circuit opens the door to possible future applications in areas such as radio astronomy and medicine (MRI). It also enables researchers to do experiments that can shed light on the interplay between quantum mechanics and gravity.

We have all been annoyed by weak radio signals at some point in our lives: our favourite song in the car turning to noise, being too far away from our wifi router to check our email. Our usual solution is to make the signal bigger, for instance by picking a different radio station or by moving to the other side of the living room. What if, however, we could just listen more carefully?

Weak radio signals are not just a challenge for people trying to find their favourite radio station, but also for magnetic resonance imaging (MRI) scanners at hospitals, as well as for the telescopes scientists use to peer into space.

In a quantum 'leap' in radio frequency detection, researchers in the group of Prof. Gary Steele in Delft demonstrated the detection of photons or quanta of energy, the weakest signals allowed by the theory of quantum mechanics.

Quantum chunks

One of the strange predictions of quantum mechanics is that energy comes in tiny little chunks called 'quanta'. What does this mean? "Say I am pushing a kid on a swing," lead researcher Mario Gely said. "In the classical theory of physics, if I want the kid to go a little bit faster I can give them a small push, giving them more speed and more energy. Quantum mechanics says something different: I can only increase the kid's energy one 'quantum step' at a time. Pushing by half of that amount is not possible."

For a kid on a swing these 'quantum steps' are so tiny that they are too small to notice. Until recently, the same was true for radio waves. However, the research team in Delft developed a circuit that can actually detect these chunks of energy in radio frequency signals, opening up the potential for sensing radio waves at the quantum level.
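
To get a sense of how small these radio-frequency quanta are, Planck's relation gives the energy of a single quantum; the 100 MHz frequency below is an illustrative choice, since the article does not state the circuit's operating frequency.

```latex
E = h\nu \approx (6.63\times 10^{-34}\ \mathrm{J\,s})\times(100\times 10^{6}\ \mathrm{Hz}) \approx 6.6\times 10^{-26}\ \mathrm{J}
```

For comparison, the thermal energy scale at room temperature, about 4 x 10^-21 J, is roughly five orders of magnitude larger, which is why circuits of this kind are typically cooled to millikelvin temperatures before single radio-frequency quanta can be distinguished from thermal noise.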

From quantum radio to quantum gravity?

Beyond applications in quantum sensing, the group in Delft is interested in taking quantum mechanics to the next level: mass. While the theory of quantum electromagnetism was developed nearly 100 years ago, physicists are still puzzled today about how to fit gravity into quantum mechanics.

Read more at Science Daily

Mar 7, 2019

New surprises from Jupiter and Saturn

Jupiter.
The latest data sent back by the Juno and Cassini spacecraft from giant gas planets Jupiter and Saturn have challenged a lot of current theories about how planets in our solar system form and behave.

The detailed magnetic and gravity data have been "invaluable but also confounding," said David Stevenson from Caltech, who will present an update of both missions this week at the 2019 American Physical Society March Meeting in Boston.

"Although there are puzzles yet to be explained, this is already clarifying some of our ideas about how planets form, how they make magnetic fields and how the winds blow," Stevenson said.

Cassini orbited Saturn for 13 years before its dramatic final dive into the planet's interior in 2017, while Juno has been orbiting Jupiter for two and a half years.

Juno's success as a mission to Jupiter is a tribute to innovative design. Its instruments are powered by solar energy alone and protected so as to withstand the fierce radiation environment.

Stevenson says the inclusion of a microwave sensor on Juno was a good decision.

"Using microwaves to figure out the deep atmosphere was the right, but unconventional, choice," he said. The microwave data have surprised the scientists, in particular by showing that the atmosphere is evenly mixed, something conventional theories did not predict.

"Any explanation for this has to be unorthodox," Stevenson said.

Researchers are exploring weather events that concentrate significant amounts of ice, liquid and gas in different parts of the atmosphere as possible explanations, but the matter is far from settled.

Other instruments on board Juno, gravity and magnetic sensors, have also sent back perplexing data. The magnetic field has spots (regions of anomalously high or low magnetic field) and also a striking difference between the northern and southern hemispheres.

"It's unlike anything we have seen before," Stevenson said.

The gravity data have confirmed that deep within Jupiter, which is at least 90 percent hydrogen and helium by mass, there are heavier elements amounting to more than 10 times the mass of Earth. However, they are not concentrated in a core but are mixed in with the hydrogen above, most of which is in the form of a metallic liquid.

The data have provided rich information about the outer parts of both Jupiter and Saturn. The abundance of heavier elements in these regions is still uncertain, but the outer layers play a larger-than-expected role in the generation of the two planets' magnetic fields. Experiments mimicking the gas planets' pressures and temperatures are now needed to help the scientists understand the processes that are going on.

For Stevenson, who has studied gas giants for 40 years, the puzzles are the hallmark of a good mission.

Read more at Science Daily

Some worms recently evolved the ability to regrow a complete head

Ribbonworm (Tubulanus sexlineatus) re-growing a head, seen as the lighter pigmented section on the left.
An international group of researchers including biologists from the University of Maryland found that at least four species of marine ribbon worms independently evolved the ability to regrow a head after amputation.

Regeneration of amputated body parts is uncommon but does exist throughout the animal world -- from salamanders, spiders and sea stars that can regrow appendages to a species of ribbon worm that can regenerate an entire individual from just a small sliver of tissue. But regenerative abilities were broadly assumed to be an ancient trait that some species managed to hold on to while most others lost through evolution.

This new study, which was published in the March 6, 2019 issue of Proceedings of the Royal Society B, turns that assumption on its head. In a survey of 35 species of marine ribbon worms, the researchers found that the ability to regenerate an entire head, including a brain, evolved relatively recently in four different species.

"This means that when we compare animal groups we cannot assume that similarities in their ability to regenerate are old and reflect shared ancestry," said Alexandra Bely, associate professor of biology at UMD and one of the study's authors. "We need to be more careful when comparing regeneration findings across different groups of animals."

All animals have some degree of regenerative ability. Even humans re-grow damaged skin over a wound. However, animal lineages that diverged very early in evolutionary history -- such as sponges, hydroids and ctenophores -- are often able to regrow entire individuals from even small amputated parts. As animals evolved greater complexity, regenerative abilities have become less dramatic and common.

Estimating where and when changes in regenerative abilities occurred on the tree of life is fundamental to understanding how regeneration evolves and what factors influence the trait. Until now, scientific understanding of how regeneration evolved was based solely on studies of animals that lost regenerative abilities. That's because all known gains in regenerative ability occurred too far in the distant past for comparative studies.

This new research presents the clearest documentation of animals gaining regenerative abilities and could shed light on the characteristics necessary for the trait to evolve.

To conduct the study, the researchers collected ribbon worms along the coasts of the U.S., Argentina, Spain and New Zealand from 2012 to 2014. They performed regeneration experiments on 22 species, bisecting them into front and back halves and observing their ability to regenerate. They also obtained information on 13 other marine ribbon worm species from previous studies.

All of the species were able to restore themselves to complete individuals by re-growing back ends. Only eight species were able to regrow their heads and restore an entire individual from just the back portion of the body. Four of these were known from previous studies and four were new.

More surprising than the number of ribbon worms that could re-grow heads was that the majority of them could not. Studies from the 1930s of the ribbon worm Lineus sanguineus showed it to be a champion of animal regeneration, able to regrow a whole body and head from the equivalent of just one two-hundred-thousandth of an individual (that's like re-growing a 150-pound person from just 0.012 ounces of tissue, or roughly 1/16th of a teaspoon). That example saddled the entire phylum of marine ribbon worms (known as Nemertea) with a reputation for being super regenerators.
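
The parenthetical comparison can be checked directly from the fraction quoted above (using 16 ounces per pound):

```latex
\frac{150\ \text{lb} \times 16\ \text{oz/lb}}{200{,}000} = \frac{2400\ \text{oz}}{200{,}000} = 0.012\ \text{oz}
```

which also matches the teaspoon comparison, since a teaspoon of water weighs roughly 0.17 ounces, and 0.17/16 is about 0.011 ounces.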

The natural assumption was that it was an ancient trait passed down from a common ancestor that some ribbon worms began to lose as species diverged. However, with their survey of 35 species, the researchers reconstructed the evolutionary pattern of regeneration across the phylum and found it to be a more recently evolved ability, even among super regenerators like Lineus sanguineus.

"The ancestor of this group of worms is inferred to have been unable to regenerate a head, but four separate groups subsequently evolved the ability to do so," Bely said. "One of these origins is inferred to have occurred just 10 to 15 million years ago."

In evolutionary terms, that's recent history given that regenerative abilities are thought to have first evolved before the Cambrian Period more than 500 million years ago.

Opportunities to study gains in regenerative abilities can greatly improve scientists' understanding of the developmental strategies that enable and enhance regeneration. For example, some of the non-head regenerating worms in the study survived months without heads. That could indicate a possible precursor to evolving the ability to regenerate a head, because surviving an amputation long enough for regeneration might be the first evolutionary step.

Read more at Science Daily

Stars exploding as supernovae lose their mass to companion stars during their lives

A massive star evolving and becoming a red supergiant, and finally exploding as a supernova. A binary companion may strip the star's hydrogen away (producing supernova type IIb/Ib), and for a more massive star the stellar wind expels the remaining helium layer (producing supernova type Ic).
Stars over eight times more massive than the Sun end their lives in supernovae explosions. The composition of the star influences what happens during the explosion.

A considerable number of massive stars have a close companion star. An international team led by researchers at Kyoto University observed that some stars exploding as supernovae may release part of their hydrogen layers to their companion stars before the explosion.

"In a binary star system, the star can interact with the companion during its evolution. When a massive star evolves, it swells to become a red supergiant star, and the presence of a companion star may disrupt the outer layers of this supergiant star, which is rich in hydrogen. Therefore, binary interaction may remove the hydrogen layer of the evolved star either partially or completely," says Postdoctoral Researcher Hanindyo Kuncarayakti from the Department of Physics and Astronomy at the University of Turku in Finland and the Finnish Centre for Astronomy with ESO. Kuncarayakti is a member of the researcher team that made the observations.

As the star has released a significant part of its hydrogen layer due to the close companion star, its explosion can be observed as a type Ib or IIb supernova.

A more massive star explodes as a type Ic supernova after having also lost its helium layer to so-called stellar winds. Stellar winds are massive streams of energetic particles from the surface of the star that may remove the helium layer beneath the hydrogen layer.

"However, the companion star does not have a significant role in what happens to the exploding star's helium layer. Instead, stellar winds play a key role in the process as their intensity is dependent on the star's own initial mass. According to theoretical models and our observations, the effects of stellar winds on the mass loss of the exploding star are significant only for stars above a certain mass range," says Kuncarayakti.

The research group's observations show that the so-called hybrid mechanism is a potential model in describing the evolution of massive stars. The hybrid mechanism indicates that during its lifespan, the star may gradually lose part of its mass both to its companion star as a result of interaction as well as due to stellar winds.

Read more at Science Daily

What does the Milky Way weigh? Hubble and Gaia investigate

This illustration shows the fundamental architecture of our island city of stars, the Milky Way galaxy: a spiral disk, central bulge, and diffuse halo of stars and globular star clusters. Not shown is the vast halo of dark matter surrounding our galaxy.
We can't put the whole Milky Way on a scale, but astronomers have been able to come up with one of the most accurate measurements yet of our galaxy's mass, using NASA's Hubble Space Telescope and the European Space Agency's Gaia satellite.

The Milky Way weighs in at about 1.5 trillion solar masses (one solar mass is the mass of our Sun), according to the latest measurements. Only a few percent of this is contributed by the approximately 200 billion stars in the Milky Way and includes a 4-million-solar-mass supermassive black hole at the center. Most of the rest of the mass is locked up in dark matter, an invisible and mysterious substance that acts like scaffolding throughout the universe and keeps the stars in their galaxies.

Earlier research dating back several decades used a variety of observational techniques that provided estimates for our galaxy's mass ranging from 500 billion to 3 trillion solar masses. The improved measurement is near the middle of this range.

"We want to know the mass of the Milky Way more accurately so that we can put it into a cosmological context and compare it to simulations of galaxies in the evolving universe," said Roeland van der Marel of the Space Telescope Science Institute (STScI) in Baltimore, Maryland. "Not knowing the precise mass of the Milky Way presents a problem for a lot of cosmological questions."

The new mass estimate puts our galaxy on the beefier side, compared to other galaxies in the universe. The lightest galaxies are around a billion solar masses, while the heaviest are 30 trillion, or 30,000 times more massive. The Milky Way's mass of 1.5 trillion solar masses is fairly normal for a galaxy of its brightness.

Astronomers used Hubble and Gaia to measure the three-dimensional movement of globular star clusters -- isolated spherical islands, each containing hundreds of thousands of stars, that orbit the center of our galaxy.

Although we cannot see it, dark matter is the dominant form of matter in the universe, and it can be weighed through its influence on visible objects like the globular clusters. The more massive a galaxy, the faster its globular clusters move under the pull of gravity. Most previous measurements have been along the line of sight to globular clusters, so astronomers know the speed at which a globular cluster is approaching or receding from Earth. However, Hubble and Gaia record the sideways motion of the globular clusters, from which a more reliable speed (and therefore gravitational acceleration) can be calculated.
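
The logic connecting cluster speeds to the galaxy's mass can be sketched with a crude virial-style tracer estimator, M ~ <v^2 r>/G. The snippet below is a deliberately simplified stand-in for the study's full dynamical modeling (which accounts for orbital anisotropy and the shape of the mass profile), and the speeds and radii are made-up illustrative values, not data from the paper.

```python
import numpy as np

G = 4.30091e-6  # gravitational constant in kpc * (km/s)^2 per solar mass

def crude_enclosed_mass(speeds_km_s, radii_kpc):
    """Rough tracer-mass estimate M ~ <v^2 * r> / G from the 3D speeds and
    galactocentric radii of orbiting clusters (a toy stand-in for the
    study's full dynamical modeling)."""
    speeds = np.asarray(speeds_km_s, dtype=float)
    radii = np.asarray(radii_kpc, dtype=float)
    return np.mean(speeds**2 * radii) / G

# Illustrative (made-up) numbers: clusters moving ~200 km/s at ~30 kpc.
print(f"{crude_enclosed_mass([180, 220, 200, 210], [25, 35, 30, 40]):.2e} solar masses")
```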

The Hubble and Gaia observations are complementary. Gaia was exclusively designed to create a precise three-dimensional map of astronomical objects throughout the Milky Way and track their motions. It made exacting all-sky measurements that include many globular clusters. Hubble has a smaller field of view, but it can measure fainter stars and therefore reach more distant clusters. The new study augmented Gaia measurements for 34 globular clusters out to 65,000 light-years, with Hubble measurements of 12 clusters out to 130,000 light-years that were obtained from images taken over a 10-year period.

When the Gaia and Hubble measurements are combined as anchor points, like pins on a map, astronomers can estimate the distribution of the Milky Way's mass out to nearly 1 million light-years from Earth.

"We know from cosmological simulations what the distribution of mass in the galaxies should look like, so we can calculate how accurate this extrapolation is for the Milky Way," said Laura Watkins of the European Southern Observatory in Garching, Germany, lead author of the combined Hubble and Gaia study, to be published in The Astrophysical Journal. These calculations based on the precise measurements of globular cluster motion from Gaia and Hubble enabled the researchers to pin down the mass of the entire Milky Way.

The earliest homesteaders of the Milky Way, globular clusters contain the oldest known stars, dating back to a few hundred million years after the big bang, the event that created the universe. They formed prior to the construction of the Milky Way's spiral disk, where our Sun and solar system reside.

"Because of their great distances, globular star clusters are some of the best tracers astronomers have to measure the mass of the vast envelope of dark matter surrounding our galaxy far beyond the spiral disk of stars," said Tony Sohn of STScI, who led the Hubble measurements.

The international team of astronomers in this study are Laura Watkins (European Southern Observatory, Garching, Germany), Roeland van der Marel (Space Telescope Science Institute, and Johns Hopkins University Center for Astrophysical Sciences, Baltimore, Maryland), Sangmo Tony Sohn (Space Telescope Science Institute, Baltimore, Maryland), and N. Wyn Evans (University of Cambridge, Cambridge, United Kingdom).

Read more at Science Daily

Dinosaurs were thriving before asteroid strike that wiped them out

Artist's concept of asteroid striking Earth (stock illustration).
Dinosaurs were unaffected by long-term climate changes and flourished before their sudden demise by asteroid strike.

Scientists largely agree that an asteroid impact, possibly coupled with intense volcanic activity, wiped out the dinosaurs at the end of the Cretaceous period 66 million years ago.

However, there is debate about whether dinosaurs were flourishing before this, or whether they had been in decline due to long-term changes in climate over millions of years.

Previously, researchers used the fossil record and some mathematical predictions to suggest dinosaurs may have already been in decline, with the number and diversity of species falling before the asteroid impact.

Now, in a new analysis that models the changing environment and dinosaur species distribution in North America, researchers from Imperial College London, University College London and University of Bristol have shown that dinosaurs were likely not in decline before the meteorite.

Lead researcher Alessandro Chiarenza, a PhD student in the Department of Earth Science and Engineering at Imperial, said: "Dinosaurs were likely not doomed to extinction until the end of the Cretaceous, when the asteroid hit, declaring the end of their reign and leaving the planet to animals like mammals, lizards and a minor group of surviving dinosaurs: birds.

"The results of our study suggest that dinosaurs as a whole were adaptable animals, capable of coping with the environmental changes and climatic fluctuations that happened during the last few million years of the Late Cretaceous. Climate change over prolonged time scales did not cause a long-term decline of dinosaurs through the last stages of this period."

The study, published today in Nature Communications, shows how changing conditions for fossilisation mean that previous analyses have underestimated the number of species at the end of the Cretaceous.

The team focused their study on North America, where many Late Cretaceous dinosaurs are preserved, such as Tyrannosaurus rex and Triceratops. During this period, the continent was split in two by a large inland sea.

In the western half there was a steady supply of sediment from the newly forming Rocky Mountains, which created perfect conditions for fossilising dinosaurs once they died. The eastern half of the continent was instead characterised by conditions far less suitable for fossilisation.

This means that far more dinosaur fossils are found in the western half, and it is this fossil record that is often used to suggest dinosaurs were in decline for the few million years before the asteroid strike.

Co-author Dr Philip Mannion, from University College London, commented: "Most of what we know about Late Cretaceous North American dinosaurs comes from an area smaller than one-third of the present-day continent, and yet we know that dinosaurs roamed all across North America, from Alaska to New Jersey and down to Mexico."

Instead of using this known record exclusively, the team employed 'ecological niche modelling'. This approach models which environmental conditions, such as temperature and rainfall, each species needs to survive.

The team then mapped where these conditions would occur both across the continent and over time. This allowed them to create a picture of where groups of dinosaur species could survive as conditions changed, rather than just where their fossils had been found.

The team found habitats that could support a range of dinosaur groups were actually more widespread at the end of the Cretaceous, but that these were in areas less likely to preserve fossils.

Read more at Science Daily

Mar 6, 2019

Swifts are born to eat and sleep in the air

Apus apus (Common Swift).
"They eat and sleep while they are airborne. This is something that researchers have believed since the 1950s, and now we can show that it's true," says Anders Hedenström, professor at the Department of Biology at Lund University.

Three years ago, the same research team at Lund University observed that within the species common swift (Apus apus), there were individuals that live in the air for up to ten consecutive months without landing -- a world record for being airborne. A different research team has also shown that the alpine swift can live largely in the air.

In the current study, Anders Hedenström and his colleagues Susanne Åkesson, Gabriel Norevik, Arne Andersson and Johan Bäckman at Lund University, and Giovanni Boano from Italy, studied four individuals of the species pallid swift (Apus pallidus). The results show that the birds are in the air without landing for between two and three and a half months, depending on the individual.

Using micro-data loggers attached to the birds, the researchers measured movement when the wings flap. The loggers record activity every five minutes, and the bird's location once a month. Using this method, the researchers have been able to ascertain that the birds live for months at a time in the air during the winter months, the period of the year they spend in West Africa after the breeding season in Italy.

"They land when they breed under a roof tile or in a hole, otherwise they live in the air. They eat insects while they fly, and when they have reached a high altitude and start gliding, they actually sleep for short periods," says Anders Hedenström.

The breeding season explains why pallid swifts cannot fly for as many months in a row as the common swift, i.e. ten months. Pallid swifts lay two clutches in one season; the common swift lays only one.

"However, it doesn't actually matter if a species spends three or ten months in the air. Both are adapted to live in that element, they are designed to fly with maximised energy efficiency, regardless of whether they are flapping or gliding," says Anders Hedenström, continuing:

"It's always said, of course, that flying is birds' most energy-intensive activity. I have calculated that a nightingale, which doesn't live in the air in the same way at all, expends as much energy as a pallid swift, which is in the air all the time."

Read more at Science Daily

Australian dingo is a unique Australian species in its own right

Dingo on the beach in Great Sandy National Park, Fraser Island Waddy Point, QLD, Australia.
Since the arrival of British settlers over 230 years ago, most Australians have assumed dingoes are a breed of wild dog. But 20 leading researchers have confirmed in a new study that the dingo is actually a unique, Australian species in its own right.

Following previous analyses of dingo skull and skin specimens that came to the same conclusion, these latest findings provide further evidence of specific characteristics that differentiate dingoes from domestic dogs, feral dogs, and other wild canids such as wolves.

The finding that a dingo is a dingo, and not a dog, stands in opposition to another recent study that the Government of Western Australia used to justify its attempt to declare the dingo 'non-fauna', which would have given landowners more freedom to kill them anywhere without a license.

Co-author Professor Corey Bradshaw of Flinders University in South Australia says the classification of dingoes has serious consequences for the fragile ecosystems they inhabit, and state governments are required to develop and implement management strategies for species considered native fauna.

"In fact, dingoes play a vital ecological role in Australia by outcompeting and displacing noxious introduced predators like feral cats and foxes. When dingoes are left alone, there are fewer feral predators eating native marsupials, birds and lizards."

"Dingoes can also increase profits for cattle graziers, because they target and eat kangaroos that otherwise compete with cattle for grass in semi-arid pasture lands,," says Professor Bradshaw.

Lead author, Dr Bradley Smith from Central Queensland University, says the scientific status of the dingo has remained contentious, resulting in inconsistency in government policy.

"The dingo has been geographically isolated from all other canids, and genetic mixing driven mainly by human interventions has only been occurring recently," Dr Smith says.

"Further evidence in support of dingoes being considered a 'wild type' capable of surviving in the absence of human intervention and under natural selection is demonstrated by the consistent return of dog-dingo hybrids to a dingo-like canid throughout the Australian mainland and on several islands."

"We have presented scientifically valid arguments to support the ongoing recognition of the dingo as a distinct species (Canis dingo), as was originally proposed by Meyer in 1793."

Dr Smith says little evidence exists to support the notion that any canid species are interchangeable with dingoes, despite the fact that most canids can successfully interbreed.

"There is no historical evidence of domestication once the dingo arrived in Australia, and the degree of domestication prior to arrival is uncertain and likely to be low, certainly compared to modern domestic dogs."

"We show that dingoes have survived in Australia for thousands of years, subject to the rigours of natural selection, thriving in all terrestrial habitats, and largely in the absence of human intervention or aid.

Read more at Science Daily

As sea level rises, wetlands crank up their carbon storage

This is a tidal marsh in Maryland, on a tributary of Chesapeake Bay. Wetlands store carbon more efficiently than any other natural ecosystem, and a new study shows they store even more when sea level rises.
Some wetlands perform better under pressure. A new study revealed that when faced with sea-level rise, coastal wetlands respond by burying even more carbon in their soils.

Coastal wetlands, which include marshes, mangroves and seagrasses, already store carbon more efficiently than any other natural ecosystem, including forests. The latest study, published March 7 in the journal Nature, looked at how coastal wetlands worldwide react to rising seas and discovered they can rise to the occasion, offering additional protection against climate change.

"Scientists know a fair amount about the carbon stored in our local tidal wetlands, but we didn't have enough data to see global patterns," said Pat Megonigal, a co-author and soil scientist at the Smithsonian Environmental Research Center.

To get a global picture, scientists from Australia, China, South Africa and the U.S. pooled data from 345 wetland sites on six continents. They looked at how those wetlands stored carbon for up to 6,000 years and compared whether sea levels rose, fell or stayed mostly the same over the millennia.

For wetlands that had faced rising seas, carbon concentrations in just the top 20 centimeters of soil were double to nearly quadruple those of wetlands with stable or falling sea levels. When the scientists looked deeper, at 50 to 100 centimeters beneath the surface, concentrations were five to nine times higher.

The extra boost comes because the carbon added to wetland soils by plant growth and sediment is buried faster as wetlands become wetter. Trapped underwater with little to no oxygen, the organic detritus does not decompose and release carbon dioxide as quickly. And the higher the waters rise, the more underwater storage space exists for the carbon to get buried.

North America and Europe faced the most sea-level rise over the past 6,000 years. Melting glaciers from the last ice age caused water levels to rise, increasing coastal flooding. Continents in the southern hemisphere, by contrast, were largely glacier-free and experienced stable or even falling sea levels.

However, the scene is changing now. The steady march of climate change is exposing even wetlands farther south to accelerated sea-level rise.

"They may be the sleeping giants of global carbon sequestration," said lead author Kerrylee Rogers of the University of Wollongong in Australia. Half of the world's tidal marshland grows along the coastlines of southern Africa, Australia, China and South America. If those wetlands doubled their carbon sequestration -- as other wetlands in the study did in response to sea-level rise -- they could sequester another 5 million tons of atmospheric carbon every year. That is the equivalent of taking more than a million cars off the road.

The trick, of course, is to ensure wetlands do not drown and disappear if waters rise too quickly.

"Preservation of coastal wetlands is critical if they are to play a role in sequestering carbon and mitigating climate change," Rogers said.

Read more at Science Daily

HIV remission achieved in second patient

Scanning electron microscopic (SEM) image revealing the presence of numerous human immunodeficiency virus-1 (HIV-1) virions budding from a cultured lymphocyte.
A second person has experienced sustained remission from HIV-1 after ceasing treatment, reports a paper led by researchers at UCL and Imperial College London.

The case report, published in Nature and carried out with partners at the University of Cambridge and the University of Oxford, comes ten years after the first such case, known as the 'Berlin Patient.'

Both patients were treated with stem cell transplants from donors carrying a genetic mutation that prevents expression of the HIV receptor CCR5.

The subject of the new study has been in remission for 18 months after his antiretroviral therapy (ARV) was discontinued. The authors say it is too early to say with certainty that he has been cured of HIV, and will continue to monitor his condition.

"At the moment the only way to treat HIV is with medications that suppress the virus, which people need to take for their entire lives, posing a particular challenge in developing countries," said the study's lead author, Professor Ravindra Gupta (UCL, UCLH and University of Cambridge).

"Finding a way to eliminate the virus entirely is an urgent global priority, but is particularly difficult because the virus integrates into the white blood cells of its host."

Close to 37 million people are living with HIV worldwide, but only 59% are receiving ARV, and drug-resistant HIV is a growing concern. Almost one million people die annually from HIV-related causes.

The report describes a male patient in the UK, who prefers to remain anonymous; he was diagnosed with HIV infection in 2003 and has been on antiretroviral therapy since 2012.

Later in 2012, he was diagnosed with advanced Hodgkin's Lymphoma. In addition to chemotherapy, he underwent a haematopoietic stem cell transplant from a donor with two copies of the CCR5Δ32 allele in 2016.

CCR5 is the receptor most commonly used by HIV-1. People who have two mutated copies of the CCR5 allele are resistant to the HIV-1 virus strain that uses this receptor, as the virus cannot enter host cells.

Chemotherapy can be effective against HIV as it kills cells that are dividing. Replacing immune cells with those that don't have the CCR5 receptor appears to be key in preventing HIV from rebounding after the treatment.

The transplant was relatively uncomplicated, but with some side effects including mild graft-versus-host disease, a complication of transplants wherein the donor immune cells attack the recipient's immune cells.

The patient remained on ARV for 16 months after the transplant, at which point the clinical team and the patient decided to interrupt ARV therapy to test if the patient was truly in HIV-1 remission.

Regular testing confirmed that the patient's viral load remained undetectable, and he has been in remission for 18 months since ceasing ARV therapy (35 months post-transplant). The patient's immune cells remain unable to express the CCR5 receptor.

He is only the second person documented to be in sustained remission without ARV. The first, the Berlin Patient, also received a stem cell transplant from a donor with two CCR5Δ32 alleles, but to treat leukaemia. Notable differences were that the Berlin Patient was given two transplants, and underwent total body irradiation, while the UK patient received just one transplant and less intensive chemotherapy.

Both patients experienced mild graft-versus-host disease, which may also have played a role in the loss of HIV-infected cells.

"By achieving remission in a second patient using a similar approach, we have shown that the Berlin Patient was not an anomaly, and that it really was the treatment approaches that eliminated HIV in these two people," said Professor Gupta.

The researchers caution that the approach is not appropriate as a standard HIV treatment due to the toxicity of chemotherapy, but it offers hope for new treatment strategies that might eliminate HIV altogether.

"Continuing our research, we need to understand if we could knock out this receptor in people with HIV, which may be possible with gene therapy," said Professor Gupta.

"The treatment we used was different from that used on the Berlin Patient, because it did not involve radiotherapy. Its effectiveness underlines the importance of developing new strategies based on preventing CCR5 expression," said co-author Dr Ian Gabriel (Imperial College Healthcare NHS Trust).

"While it is too early to say with certainty that our patient is now cured of HIV, and doctors will continue to monitor his condition, the apparent success of haematopoietic stem cell transplantation offers hope in the search for a long-awaited cure for HIV/AIDS," said Professor Eduardo Olavarria (Imperial College Healthcare NHS Trust and Imperial College London).

The research was funded by Wellcome, the Medical Research Council, the Foundation for AIDS Research, and National Institute for Health Research (NIHR) Biomedical Research Centres at University College London Hospitals, Oxford, Cambridge and Imperial.

Read more at Science Daily

Mar 5, 2019

How new species arise in the sea

A barred Hamlet (Hypoplectrus puella) off the coast of Panama.
For a new species to evolve, two things are essential: a characteristic -- such as a colour -- unique to one species and a mating preference for this characteristic. For example, individuals from a blue fish species prefer blue mates and individuals from a red fish species prefer red mates. If the two species interbreed, the process of sexual recombination is expected to destroy the coupling between colour and mate preferences, producing red individuals with a preference for blue mates and vice versa. This will prevent the two species from diverging, and it is one of the reasons why it has long been thought that new species can only evolve in absolute isolation, without interbreeding.
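
To make that intuition concrete, here is a minimal sketch of a standard textbook result (not the study's model): under random mating, the statistical association D between a colour allele and a preference allele shrinks by a factor of (1 - r) every generation, where r is the recombination rate between the two loci. For loci on different chromosomes, r = 0.5, so any colour-preference coupling is halved each generation.

    # Toy illustration: decay of linkage disequilibrium (the colour-preference
    # association D) under random mating with recombination rate r.
    def ld_decay(d0, r, generations):
        """Return D after each generation, using D_t = D_0 * (1 - r)**t."""
        return [d0 * (1 - r) ** t for t in range(generations + 1)]

    # Loci on different chromosomes recombine freely (r = 0.5), so any initial
    # colour-preference association is halved every generation.
    for t, d in enumerate(ld_decay(0.25, 0.5, 6)):
        print(f"generation {t}: D = {d:.4f}")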

However, the dynamics of this process depend on the exact number and location of genes underlying species characteristics and mate preferences, the strength of natural selection acting on these genes, and the amount of interbreeding between species. In a new study, Professor Oscar Puebla from GEOMAR Helmholtz Centre for Ocean Research Kiel in Germany, together with colleagues from the Smithsonian Tropical Research Institute in Panama, has found that natural selection can couple the evolution of genes for colour pattern and mate preferences while species still interbreed. The study has been published today in the international journal Nature Ecology and Evolution.

"To address this question, the first challenge was to identify an animal group in which species are still young and interbreed, with clear species characteristics, and in which the bases of reproductive isolation are well understood," Oscar Puebla explains. The hamlets, a group of closely related reef fishes from the wider Caribbean, constitute exactly such a group. The hamlets are extremely close genetically, differ essentially in terms of colour pattern, and are reproductively isolated through strong visually-based mate preferences.

A second difficulty lies in identifying the genes that underlie species differences and mate preferences. The authors of the new study have assembled a reference genome for the hamlets and sequenced the whole genomes of 110 individuals from three species in Panama, Belize and Honduras. "This powerful dataset allowed us to identify four narrow regions of the genome that are highly and consistently differentiated among species against a backdrop of almost no genetic differentiation in the rest of the genome," co-author Kosmas Hench from GEOMAR says. In line with the ecology and reproductive biology of the hamlets, these four intervals include genes involved in vision and colour pattern.
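
As a rough illustration of what such a genome scan involves (a simplified Nei-style estimator on made-up allele frequencies, not the study's actual pipeline), one can compute a per-site differentiation statistic FST between two species and average it in windows along a chromosome; narrow, highly differentiated intervals then stand out against a near-zero background.

    def fst(p1, p2):
        """Simplified Nei-style FST for one biallelic site from two allele frequencies."""
        p_bar = (p1 + p2) / 2
        h_total = 2 * p_bar * (1 - p_bar)          # expected heterozygosity, pooled
        h_within = p1 * (1 - p1) + p2 * (1 - p2)   # mean within-species heterozygosity
        return 0.0 if h_total == 0 else (h_total - h_within) / h_total

    def windowed_fst(freqs1, freqs2, window=5):
        """Mean FST in consecutive non-overlapping windows of `window` sites."""
        per_site = [fst(a, b) for a, b in zip(freqs1, freqs2)]
        return [sum(per_site[i:i + window]) / window
                for i in range(0, len(per_site), window)]

    # Made-up frequencies: an undifferentiated background with one sharply
    # differentiated interval (sites 10-14), loosely mimicking the reported pattern.
    species_a = [0.50] * 10 + [0.95] * 5 + [0.50] * 10
    species_b = [0.52] * 10 + [0.05] * 5 + [0.48] * 10
    print(windowed_fst(species_a, species_b))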

The data also show that vision and colour pattern genes remain coupled despite the fact that they are located on three different chromosomes and that species still interbreed. Such a coupling had been previously reported when the two sets of genes are very close to each other on chromosomes, in which case they are protected from sexual recombination, but not when they are on different chromosomes. By capturing the very earliest stages of speciation in hamlets, the team shows how selection can contribute to the creation of new species.

Read more at Science Daily

Electrical signals kick off flatworm regeneration

This photograph shows a planarian flatworm with two heads.
Unlike most multicellular animals, planarian flatworms can regrow all their body parts after they are removed. This makes them a good model for studying the phenomenon of tissue regeneration. They are also useful for exploring fundamental questions in developmental biology about what underlies large-scale anatomical patterning.

In a study publishing March 5 in Biophysical Journal, scientists report that electrical activity is the first known step in the tissue-regeneration process, starting before the earliest known genetic machinery kicks in and setting off the downstream activities of gene transcription needed to construct new heads or tails.

"It's incredibly important to understand how cells make decisions about what to build," says senior author Michael Levin, director of the Allen Discovery Center at Tufts University. "We've found that endogenous electrical signals enable cells to communicate and make decisions about their position and overall organ structure, so they know which genes to turn on."

The species used in the study was Dugesia japonica. When parts of this flatworm are removed, the remaining tissues regrow the missing pieces at the correct ends -- whether a head or a tail. Previous studies had shown that about six hours after amputation, the first genes associated with regrowing a missing part are turned on. But until now, it wasn't known what happened before that or what mechanisms control which genes get turned on.

In the current experiments, led by Fallon Durant, who was a graduate student at the time, the heads and tails of the flatworms were removed. The researchers used voltage-sensitive fluorescent dyes that were able to indicate the various electrical potentials of the different regions. "You can literally see the electrical activity in the tissue," Levin says. "Within a few hours of when this activity is seen, we can start to measure changes in gene expression."

To show that a specific voltage pattern was responsible for turning on the correct genes at each wound site, the team altered the resting potentials of cells at the different ends of the worms and observed the effects. By inducing ion flows that set each wound site to a head- or tail-specific voltage pattern, they could create flatworms with two heads and no tail. They also studied the relationship between this electrical signal and the well-known Wnt protein signaling pathway, which functions downstream of the voltage-mediated decision machinery.

"Most of the people working on this problem study genetic and biochemical signals like transcription factors or growth factors," Levin says. "We've decided to focus on electrical signals, which are a very important part of cell-to-cell communication." He compares the electrical signals his group studies to those that occur in the brain. "A stimulus comes in and an electrical event triggers biochemical second-messenger events in the cells and downstream activity of the electrical network, such as decision making or forming a memory," he notes. "This electrical system is super ancient and very highly conserved."

Future research will focus on breaking down these signals in much more detail. For example, researchers would like to know how regenerated tissues make decisions about the size, shape, and scale of the new parts that they grow and how the bioelectric circuits store changes in body patterning, as is seen in two-headed worms that continue to make two-headed animals in subsequent rounds of regeneration.

Read more at Science Daily

Asteroids are stronger, harder to destroy than previously thought

A frame-by-frame sequence showing how gravity causes asteroid fragments to reaccumulate in the hours following impact.
A popular theme in the movies is that of an incoming asteroid that could extinguish life on the planet, and our heroes are launched into space to blow it up. But incoming asteroids may be harder to break than scientists previously thought, finds a Johns Hopkins study that used a new understanding of rock fracture and a new computer modeling method to simulate asteroid collisions.

The findings, to be published in the March 15 print issue of Icarus, can aid in the creation of asteroid impact and deflection strategies, increase understanding of solar system formation and help design asteroid mining efforts.

"We used to believe that the larger the object, the more easily it would break, because bigger objects are more likely to have flaws. Our findings, however, show that asteroids are stronger than we used to think and require more energy to be completely shattered," says Charles El Mir, a recent Ph.D graduate from the Johns Hopkins University's Department of Mechanical Engineering and the paper's first author.

Researchers understand physical materials like rocks at a laboratory scale (about the size of your fist), but it has been difficult to translate this understanding to city-size objects like asteroids. In the early 2000s, a different research team created a computer model into which they input various factors such as mass, temperature, and material brittleness, and simulated an asteroid about a kilometer in diameter striking head-on into a 25-kilometer diameter target asteroid at an impact velocity of five kilometers per second. Their results suggested that the target asteroid would be completely destroyed by the impact.

In the new study, El Mir and his colleagues, K.T. Ramesh, director of the Hopkins Extreme Materials Institute and Derek Richardson, professor of astronomy at the University of Maryland, entered the same scenario into a new computer model called the Tonge-Ramesh model, which accounts for the more detailed, smaller-scale processes that occur during an asteroid collision. Previous models did not properly account for the limited speed of cracks in the asteroids.

"Our question was, how much energy does it take to actually destroy an asteroid and break it into pieces?" says El Mir.

The simulation was separated into two phases: a short-timescale fragmentation phase and a long-timescale gravitational reaccumulation phase. The first phase considered the processes that begin immediately after an asteroid is hit, processes that occur within fractions of a second. The second phase considered the effect of gravity on the pieces that fly off the asteroid's surface, with gravitational reaccumulation occurring over the many hours that follow the impact.

In the first phase, after the asteroid was hit, millions of cracks formed and rippled throughout the asteroid, parts of the asteroid flowed like sand, and a crater was created. This phase of the model examined the individual cracks and predicted overall patterns of how those cracks propagate. The new model showed that the entire asteroid is not broken by the impact, unlike what was previously thought. Instead, the impacted asteroid had a large damaged core that then exerted a strong gravitational pull on the fragments in the second phase of the simulation.

The research team found that the end result of the impact was not just a "rubble pile" -- a collection of weak fragments loosely held together by gravity. Instead, the impacted asteroid retained significant strength because it had not cracked completely, indicating that more energy would be needed to destroy asteroids. Meanwhile, the damaged fragments were now redistributed over the large core, providing guidance to those who might want to mine asteroids during future space ventures.
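
The gravitational side of that second phase can be illustrated with a back-of-the-envelope sketch (illustrative numbers only, not the paper's inputs): fragments ejected more slowly than the damaged core's escape speed eventually fall back and reaccumulate, while faster fragments are lost.

    import math

    G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

    def escape_speed(mass_kg, radius_m):
        """Escape speed from the surface of a body with the given mass and radius."""
        return math.sqrt(2 * G * mass_kg / radius_m)

    # Illustrative stand-in for a 25-km-diameter target: a sphere of density 2500 kg/m^3.
    radius = 12.5e3                                  # metres
    mass = 2500 * (4 / 3) * math.pi * radius ** 3    # kg
    v_esc = escape_speed(mass, radius)
    print(f"escape speed ~ {v_esc:.0f} m/s")

    # Slow fragments fall back onto the damaged core; fast ones escape.
    for v in (1.0, 5.0, 10.0, 20.0):
        print(f"fragment ejected at {v:4.0f} m/s: "
              f"{'reaccumulates' if v < v_esc else 'escapes'}")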

"It may sound like science fiction but a great deal of research considers asteroid collisions. For example, if there's an asteroid coming at earth, are we better off breaking it into small pieces, or nudging it to go a different direction? And if the latter, how much force should we hit it with to move it away without causing it to break? These are actual questions under consideration," adds El Mir.

Read more at Science Daily

Physicists analyze rotational dynamics of galaxies and influence of the photon mass

The spiral structure of our galaxy, the Milky Way.
The rotation of stars in galaxies such as our Milky Way is puzzling. The orbital speeds of stars should decrease with their distance from the center of the galaxy, but in fact stars in the middle and outer regions of galaxies have the same rotational speed. This may be due to the gravitational effect of matter that we can't see. But although researchers have been seeking it for decades, the existence of such 'dark matter' has yet to be definitively proven and we still don't know what it might be made of. With this in mind, the physicists Dmitri Ryutov, Dmitry Budker, and Victor Flambaum have suggested that the rotational dynamics of galaxies might be explained by other factors. They hypothesize that the mass of photons, which are particles of light, might be responsible.

Professor Dmitri Ryutov, who recently retired from the Lawrence Livermore National Laboratory in California, USA, is an expert in plasma physics. He was awarded the American Physical Society's (APS) 2017 Maxwell Prize for Plasma Physics for his achievements in the field. Physicists generally credit Ryutov with establishing the upper limit for the mass of the photon. As this mass, even if it is nonzero, is extremely small, it is usually ignored when analyzing atomic and nuclear processes. But even a vanishingly tiny mass of the photon could, according to the scientists' collaborative proposal, have an effect on large-scale astrophysical phenomena.

While visiting Johannes Gutenberg University Mainz (JGU), Ryutov, his host Professor Dmitry Budker of the Helmholtz Institute Mainz (HIM), and Professor Victor Flambaum, Fellow of the Gutenberg Research College of Mainz University, decided to take a closer look at the idea. They were interested in how the infinitesimally small mass of the photon could have an effect on massive galaxies. The mechanism at the core of the physicists' assumption is a consequence of what are known as the Maxwell-Proca equations. These would allow additional centripetal forces to be generated as a result of the electromagnetic stresses in a galaxy.

Are the effects as strong as those exerted by dark matter?

"The hypothetical effect we are investigating is not the result of increased gravity," explained Dmitry Budker. This effect may occur concurrently with the assumed influence of dark matter. It may even -- under certain circumstances -- completely eliminate the need to evoke dark matter as a factor when it comes to explaining rotation curves. Rotation curves express the relationship between the orbital speeds of stars in a galaxy and their radial distance from the galaxy's center. "By assuming a certain photon mass, much smaller than the current upper limit, we can show that this mass would be sufficient to generate additional forces in a galaxy and that these forces would be roughly large enough to explain the rotation curves," said Budker. "This conclusion is extremely exciting."

Read more at Science Daily

Mar 4, 2019

Physicists solve 35-year-old mystery about quarks

Quarks, the smallest particles in the universe, are far smaller and operate at much higher energy levels than the protons and neutrons in which they are found. In 1983, physicists at CERN, as part of the European Muon Collaboration (EMC), observed for the first time what would become known as "the EMC effect": In the nucleus of an iron atom containing many protons and neutrons, quarks move significantly more slowly than quarks in deuterium, which contains a single proton and neutron.

Now physicists from Tel Aviv University, the Massachusetts Institute of Technology (MIT) and the Thomas Jefferson National Accelerator Facility know why quarks, the building blocks of the universe, move more slowly inside atomic nuclei.

"Researchers have been seeking an answer to this for 35 years," says Prof. Eli Piasetzky of TAU's Raymond and Beverly Sackler School of Physics & Astronomy. Prof. Piasetzky; Meytal Duer, also of TAU's School of Physics; and Prof. Or Hen, Dr. Barak Schmookler and Dr. Axel Schmidt of MIT have now led the international CLAS Collaboration at the Thomas Jefferson National Accelerator Facility to identify an explanation for the EMC effect. Their conclusions were published on February 20 in the journal Nature.

The researchers discovered that the speed of a quark depends on the number of protons and neutrons forming short-ranged correlated pairs in an atom's nucleus. The more such pairs there are in a nucleus, the larger the number of slow-moving quarks within the atom's protons and neutrons.

Atoms with larger nuclei intrinsically have more protons and neutrons, so they are more likely to have a higher number of proton-neutron pairs. The team concluded that the larger the atom, the more pairs it is likely to contain. This results in slower-moving quarks in that particular atom.

"In short-range correlated or SRC pairs, an atom's protons and neutrons can pair up constantly, but only momentarily, before splitting apart and going their separate ways," Duer explains. "During this brief, high-energy interaction, quarks, in their respective particles, may have a larger space to play in."

The team's new explanation can help to illuminate subtle yet important differences in the behavior of quarks, the most basic building blocks of the visible world.

For the research, the scientists harnessed the CEBAF Large Acceptance Spectrometer, or CLAS detector, a four-story spherical particle detector, in an experiment conducted over several months at the Continuous Electron Beam Accelerator Facility (CEBAF) at the Thomas Jefferson National Accelerator Facility. The experiment amassed billions of interactions between electrons and quarks, allowing the researchers to calculate the speed of the quark in each interaction based on the electron's energy after it scattered, and to compare the average quark speed among the various atoms.
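
The kind of reconstruction described here follows standard deep-inelastic-scattering kinematics; the sketch below uses textbook formulas with illustrative numbers (not the collaboration's analysis code) to show how the momentum fraction x carried by the struck quark follows from the electron's beam energy, its energy after scattering, and the scattering angle.

    import math

    M_NUCLEON = 0.938  # GeV, approximate nucleon mass

    def bjorken_x(beam_energy_gev, scattered_energy_gev, angle_rad):
        """Momentum fraction x of the struck quark from electron kinematics
        (Q^2 = 4 E E' sin^2(theta/2), nu = E - E', x = Q^2 / (2 M nu));
        the electron mass is neglected."""
        q2 = 4 * beam_energy_gev * scattered_energy_gev * math.sin(angle_rad / 2) ** 2
        nu = beam_energy_gev - scattered_energy_gev
        return q2 / (2 * M_NUCLEON * nu)

    # Illustrative kinematics: a 10.6 GeV electron scattered to 4 GeV at 25 degrees.
    print(f"x = {bjorken_x(10.6, 4.0, math.radians(25)):.2f}")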

"These high-momentum pairs are the reason for these slow-moving quarks," Prof. Hen explains. "How much a quark's speed is slowed depends on the number of SRC pairs in an atomic nucleus. Quarks in lead, for instance, were far slower than those in aluminum, which themselves were slower than iron, and so on."

Read more at Science Daily

New portal of entry for influenza viruses

Researchers from the Medical Center -- University of Freiburg and the University of Zurich have discovered an entirely new infection route for influenza A viruses. While all previously known influenza A viruses bind sialic acid moieties on the host cell surface, the recently discovered bat-derived influenza A virus subtypes infect human and animal cells by utilizing MHC class II proteins. The immunologically relevant MHC class II molecules are found in a wide range of animal species, which is why the discovery will play an important role in assessing the risk of spill-over infections from bats to other species. The study, published on 20 February 2019 in the journal Nature, also provides new insights into the evolutionary genesis of influenza viruses.

"In the lab, bat viruses can use the MHC class II complexes of mice, pigs, chickens, or humans to enter the cell. It is thus not unlikely that these bat-derived influenza viruses could be transmitted naturally from bats to other vertebrates and even humans," says Prof. Dr. Martin Schwemmle, study and research group leader at the Institute of Virology at the University Medical Center Freiburg.

Gene Expression Analysis and Gene Scissors Lead to Success

With a two-pronged strategy and a lot of effort, the researchers from Freiburg and Zurich finally succeeded in finding the cellular factor mediating the virus's entry into the host cell. First, the group of Prof. Dr. Silke Sterz from the Institute of Medical Virology of the University of Zurich compared the proteins produced in cells that can be infected with those produced in cells that cannot. Using a technique called transcriptomic profiling, the researchers estimated the amounts of cellular proteins via their mRNA copies. This approach already provided strong indications that the MHC class II complex was the receptor candidate. Then, the team from Freiburg led by Prof. Schwemmle conducted a screening experiment in which they cut one of a total of 20,000 genes in each single cell using the molecular scissors CRISPR-Cas. "Cells in which we switched off MHC class II were immune to infection. That was the final proof that the virus enters the cell with the help of MHC class II molecules," says the virologist.

The discovery of this second, sialic acid-independent mechanism also raises the question of which strategy came first in evolutionary terms. "It is quite possible that the newly discovered route of infection via MHC class II originates from the already known sialic acid pathway," says Prof. Schwemmle. The current study also raises new research questions: Are there other influenza viruses that use MHC class II proteins as their host cell receptor? How easily can influenza viruses switch their receptors, and could influenza viruses emerge that can infect target cells via both receptors? "These are all questions that we are now aiming to investigate, because influenza viruses are evidently more versatile than previously thought," says the virologist Prof. Schwemmle.

From Science Daily

The case of the over-tilting exoplanets

Yale researchers have discovered a surprising link between the tilting of exoplanets and their orbit in space. The discovery may help explain a long-standing puzzle about exoplanetary orbital architectures.
For almost a decade, astronomers have tried to explain why so many pairs of planets outside our solar system have an odd configuration -- their orbits seem to have been pushed apart by a powerful unknown mechanism. Yale researchers say they've found a possible answer, and it implies that the planets' spin axes are substantially tilted.

The finding could have a big impact on how researchers estimate the structure, climate, and habitability of exoplanets as they try to identify planets that are similar to Earth. The research appears in the March 4 online edition of the journal Nature Astronomy.

NASA's Kepler mission revealed that about 30% of stars similar to our Sun harbor "Super-Earths." Their sizes are somewhere between that of Earth and Neptune; they have nearly circular and coplanar orbits; and it takes them fewer than 100 days to go around their star. Yet curiously, a great number of these planets exist in pairs with orbits that lie just outside natural points of stability.

That's where obliquity -- the amount of tilting between a planet's axis and its orbit -- comes in, according to Yale astronomers Sarah Millholland and Gregory Laughlin.

"When planets such as these have large axial tilts, as opposed to little or no tilt, their tides are exceedingly more efficient at draining orbital energy into heat in the planets," said first author Millholland, a graduate student at Yale. "This vigorous tidal dissipation pries the orbits apart."

A similar, but not identical, situation exists between Earth and its moon: tidal dissipation is slowly pushing the moon's orbit outward while gradually lengthening Earth's day.

Laughlin, who is a professor of astronomy at Yale, said there is a direct connection between the over-tilting of these exoplanets and their physical characteristics. "It impacts several of their physical features, such as their climate, weather, and global circulations," Laughlin said. "The seasons on a planet with a large axial tilt are much more extreme than those on a well-aligned planet, and their weather patterns are probably non-trivial."

Millholland said she and Laughlin already have started work on a follow-up study that will examine how these exoplanets' structures respond to large obliquities over time.

Read more at Science Daily

Chemical pollutants in the home degrade fertility in both men and dogs, study finds

Chemicals commonly found in homes, at concentrations relevant to environmental exposure, have the same damaging effect on sperm from both man and dog.
New research by scientists at the University of Nottingham suggests that environmental contaminants found in the home and diet have the same adverse effects on male fertility in both humans and in domestic dogs.

There has been increasing concern over declining human male fertility in recent decades with studies showing a 50% global reduction in sperm quality in the past 80 years. A previous study by the Nottingham experts showed that sperm quality in domestic dogs has also sharply declined, raising the question of whether modern day chemicals in the home environment could be at least partly to blame.

In a new paper published in Scientific Reports, the Nottingham team set out to test the effects of two specific human-made chemicals: the common plasticizer DEHP, widely abundant in the home (e.g. carpets, flooring, upholstery, clothes, wires, toys), and the persistent industrial chemical polychlorinated biphenyl 153 (PCB153), which, although banned globally, remains widely detectable in the environment, including in food.

The researchers carried out identical experiments in both species using samples of sperm from donor men and stud dogs living in the same region of the UK. The results show that the chemicals, at concentrations relevant to environmental exposure, have the same damaging effect on sperm from both man and dog.

Leading the work, Associate Professor and Reader in Reproductive Biology at the School of Veterinary Medicine and Science, Richard Lea, said: "This new study supports our theory that the domestic dog is indeed a 'sentinel' or mirror for human male reproductive decline and our findings suggest that human-made chemicals that have been widely used in the home and working environment may be responsible for the fall in sperm quality reported in both man and dog that share the same environment."

"Our previous study in dogs showed that the chemical pollutants found in the sperm of adult dogs, and in some pet foods, had a detrimental effect on sperm function at the concentrations previously found in the male reproductive tract. This new study is the first to test the effect of two known environmental contaminants, DEHP and PCB153, on both dog and human sperm in vitro, in the same concentrations as found in vivo.

Rebecca Sumner, who carried out the experimental work as part of her PhD, said: "In both cases and in both subjects, the effect was reduced sperm motility and increased fragmentation of DNA."

Dr Sumner added: "We know that when human sperm motility is poor, DNA fragmentation is increased and that human male infertility is linked to increased levels of DNA damage in sperm. We now believe this is the same in pet dogs because they live in the same domestic environment and are exposed to the same household contaminants. This means that dogs may be an effective model for future research into the effects of pollutants on declining fertility, particularly because external influences such as diet are more easily controlled than in humans."

Read more at Science Daily

Mar 3, 2019

How listening to music 'significantly impairs' creativity

Student listening to music.
The popular view that music enhances creativity has been challenged by researchers who say it has the opposite effect.

Psychologists from the University of Central Lancashire, University of Gävle in Sweden and Lancaster University investigated the impact of background music on performance by presenting people with verbal insight problems that are believed to tap creativity.

They found that background music "significantly impaired" people's ability to complete tasks testing verbal creativity -- but there was no effect for background library noise.

For example, a participant was shown three words (e.g., dress, dial, flower), with the requirement being to find a single associated word (in this case "sun") that can be combined to make a common word or phrase (i.e., sundress, sundial and sunflower).
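
A minimal sketch of how such an item can be checked programmatically (the tiny word list is an illustrative stand-in, not the researchers' materials): the solution word must combine with every cue word to form a common compound.

    COMMON_COMPOUNDS = {"sundress", "sundial", "sunflower", "firefly", "firewood"}

    def is_solution(candidate, cues, compounds=COMMON_COMPOUNDS):
        """True if candidate + cue (or cue + candidate) is a known compound for every cue."""
        return all(candidate + cue in compounds or cue + candidate in compounds
                   for cue in cues)

    print(is_solution("sun", ["dress", "dial", "flower"]))   # True
    print(is_solution("fire", ["dress", "dial", "flower"]))  # False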

The researchers used three experiments involving verbal tasks in either a quiet environment or while exposed to:

  • Background music with foreign (unfamiliar) lyrics
  • Instrumental music without lyrics
  • Music with familiar lyrics

Dr Neil McLatchie of Lancaster University said: "We found strong evidence of impaired performance when playing background music in comparison to quiet background conditions."

Researchers suggest this may be because music disrupts verbal working memory.

The third experiment -- exposure to music with familiar lyrics -- impaired creativity regardless of whether the music also boosted mood, induced a positive mood, was liked by the participants, or whether participants typically studied in the presence of music.

However, there was no significant difference in performance of the verbal tasks between the quiet and library noise conditions.

Researchers say this is because library noise is a "steady state" environment which is not as disruptive.

Read more at Science Daily