Feb 8, 2020

Astronomers reveal rare double nucleus in nearby 'Cocoon Galaxy'

The so-called "Cocoon Galaxy" not only has a unique shape, it has a rare double-nucleus structure, astronomers report in a new paper.

After studying data from ground- and space-based optical and radio telescopes, a team of astronomers determined that a galaxy known as NGC 4490 (nicknamed the "Cocoon Galaxy" because of its shape) has "a clear double nucleus structure," according to their paper.

One nucleus can be seen in optical wavelengths. The other is hidden in dust and can only be seen in infrared and radio wavelengths.

The paper reporting the discovery is now online and has been accepted for publication in the Astrophysical Journal. First author is Allen Lawrence, who earned a master's degree in astronomy from Iowa State University in 2018 and continues to work with Iowa State astronomers.

Co-authors are Iowa State's Charles Kerton, an associate professor of physics and astronomy; and Curtis Struck, a professor of physics and astronomy; as well as East Tennessee State University's Beverly Smith, a professor of physics and astronomy.

Lawrence started the study in 2013 while taking astronomy classes at the University of Wisconsin-Madison. He had the chance to study one of two galaxy systems and picked NGC 4490, which is interacting with a smaller galaxy, NGC 4485. The system is about 20% the size of the Milky Way, visible from the Northern Hemisphere and about 30 million light years from Earth.

"I saw the double nucleus about seven years ago," Lawrence said. "It had never been observed -- or nobody had ever done anything with it before."

Some astronomers may have seen one nucleus with their optical telescopes. And others may have seen the other with their radio telescopes. But he said the two groups never compared notes to observe and describe the double nucleus.

The new paper says both nuclei are similar in size, mass and luminosity. It says both are similar in mass and luminosity to the nuclei observed in other interacting galaxy pairs. And, it says the double nucleus structure could also explain why the galaxy system is surrounded by an enormous plume of hydrogen.

"The most straightforward interpretation of the observations is that NGC 4490 is itself a late-stage merger remnant" of a much-earlier collision of two galaxies, the authors wrote. A merger could drive and extend the high level of star formation necessary to create such a large hydrogen plume.

The astronomers said there are other reasons they find the study of this system interesting:

Struck, who studies colliding galaxies, said double-nucleus galaxies are very rare, especially among smaller galaxies such as this one. And, he said, astronomers think a double nucleus could contribute to the buildup of the supermassive black holes found at the center of some galaxies.

Read more at Science Daily

Simulating a universe in which Newton's laws are only partially valid

For the first time, researchers from the Universities of Bonn and Strasbourg have simulated the formation of galaxies in a universe without dark matter. To replicate this process on the computer, they have instead modified Newton's laws of gravity. The galaxies that were created in the computer calculations are similar to those we actually see today. According to the scientists, their assumptions could solve many mysteries of modern cosmology. The results are published in the Astrophysical Journal.

Cosmologists nowadays assume that matter was not distributed entirely evenly after the Big Bang. The denser places attracted more and more matter from their surroundings due to their stronger gravitational forces. Over the course of several billion years, these accumulations of gas eventually formed the galaxies we see today.

An important ingredient of this theory is the so-called dark matter. On the one hand, it is said to be responsible for the initial uneven distribution that led to the agglomeration of the gas clouds. On the other hand, it also explains some puzzling observations. For instance, stars in rotating galaxies often move so fast that they should actually be ejected. It appears that there is an additional source of gravity in the galaxies that prevents this -- a kind of "star putty" that cannot be seen with telescopes: dark matter.

However, there is still no direct proof of its existence. "Perhaps the gravitational forces themselves simply behave differently than previously thought," explains Prof. Dr. Pavel Kroupa from the Helmholtz Institute for Radiation and Nuclear Physics at the University of Bonn and the Astronomical Institute of Charles University in Prague. This theory bears the abbreviation MOND (MOdified Newtonian Dynamics); it was proposed by the Israeli physicist Prof. Dr. Mordehai Milgrom. According to the theory, the attraction between two masses obeys Newton's laws only up to a certain point. At very low accelerations, as is the case in galaxies, it becomes considerably stronger. This is why galaxies do not break apart as a result of their rotational speed.
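Milgrom's idea can be sketched numerically. The snippet below is an illustrative Python sketch (not the Bonn group's simulation code), using the commonly quoted value a0 ≈ 1.2e-10 m/s² and one standard choice of interpolating function, mu(x) = x/(1+x), which happens to invert in closed form. It shows Newtonian gravity recovered at everyday accelerations and a much stronger effective pull at the tiny accelerations typical of galactic outskirts.

```python
import math

A0 = 1.2e-10  # Milgrom's acceleration constant, m/s^2 (commonly quoted value)

def mond_acceleration(a_newton):
    """Solve mu(a/A0) * a = a_N for the true acceleration a, using the
    'simple' interpolating function mu(x) = x / (1 + x).  That choice
    reduces to the quadratic a^2 - a_N*a - a_N*A0 = 0, whose positive
    root is taken below."""
    return a_newton / 2 + math.sqrt(a_newton**2 / 4 + a_newton * A0)

# High acceleration (inner Solar System scale): Newton recovered
print(mond_acceleration(1e-5) / 1e-5)    # ~1.0: Newtonian regime
# Very low acceleration (galactic outskirts): effective gravity boosted
print(mond_acceleration(1e-12) / 1e-12)  # ~11: considerably stronger
```

In the deep-MOND limit the result approaches sqrt(a_N * A0), which is exactly the behavior that keeps fast-rotating galaxies bound without invoking dark matter.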

Results close to reality

"In cooperation with Dr. Benoit Famaey in Strasbourg, we have now simulated for the first time whether galaxies would form in a MOND universe and if so, which ones," says Kroupa's doctoral student Nils Wittenburg. To do this he used a computer program for complex gravitational calculations which was developed in Kroupa's group. Because with MOND, the attraction of a body depends not only on its own mass, but also on whether other objects are in its vicinity.

The scientists then used this software to simulate the formation of stars and galaxies, starting from a gas cloud several hundred thousand years after the Big Bang. "In many aspects, our results are remarkably close to what we actually observe with telescopes," explains Kroupa. For instance, the distribution and velocity of the stars in the computer-generated galaxies follow the same pattern that can be seen in the night sky. "Furthermore, our simulation resulted mostly in the formation of rotating disk galaxies like the Milky Way and almost all other large galaxies we know," says the scientist. "Dark matter simulations, on the other hand, predominantly create galaxies without distinct matter disks -- a discrepancy to the observations that is difficult to explain."

Calculations based on the existence of dark matter are also very sensitive to changes in certain parameters, such as the frequency of supernovae and their effect on the distribution of matter in galaxies. In the MOND simulation, however, these factors hardly played a role.

Read more at Science Daily

Feb 7, 2020

Majority of US adults believe climate change is most important issue today

As the effects of climate change become more evident, more than half of U.S. adults (56%) say climate change is the most important issue facing society today, yet 4 in 10 have not made any changes in their behavior to reduce their contribution to climate change, according to a new poll by the American Psychological Association.

While 7 in 10 say they wish there were more they could do to combat climate change, 51% of U.S. adults say they don't know where to start. And as the election race heats up, 62% say they are willing to vote for a candidate because of his or her position on climate change.

The survey was conducted online from Dec. 12-16, 2019, by The Harris Poll on behalf of the American Psychological Association.

People are taking some steps to combat climate change, with 6 in 10 saying they have changed a behavior to reduce their contribution to climate change. Nearly three-quarters (72%) say they are very or somewhat motivated to make changes.

Among those who have already made behavior changes to reduce their contribution to climate change, 1 in 4 (26%) say they have not done more because they lack the resources, such as time, money or skills, to make changes. Others are unwilling to make any changes at all: when those who have not changed their behavior were asked if anything would motivate them to reduce their contribution to climate change, 29% said nothing would.

Concern about climate change may be having an impact on mental health, with more than two-thirds of adults (68%) saying that they have at least a little "eco-anxiety," defined as any anxiety or worry about climate change and its effects. These effects may be disproportionately affecting the country's youngest adults; nearly half of those ages 18-34 (47%) say the stress they feel about climate change affects their daily lives.

"The health, economic, political and environmental implications of climate change affect all of us. The tolls on our mental health are far reaching," said Arthur C. Evans Jr., PhD, APA's chief executive officer. "As climate change is created largely by human behavior, psychologists are continuing to study ways in which we can encourage people to make behavioral changes -- both large and small -- so that collectively we can help our planet."

Psychological research shows that when people learn about and experience local climate impacts, their understanding of the effects of climate change increases. Among those who have not yet made a behavior change, a quarter say that personally experiencing environmental impacts of climate change, such as natural disasters or extreme weather (25%), or seeing such impacts in their community (24%), would make them want to try to reduce their contribution to climate change.

The most common behavior changes people have already made or are willing to make include: reducing waste, including recycling (89%); upgrading insulation in their homes (81%); limiting utility use in their homes (79%); using renewable energy sources, such as solar panels or purchasing electricity from a renewable energy supplier (78%); consuming less in general (77%); or limiting air travel (75%).

Adults are less likely to say they have changed or are willing to change daily transportation habits (e.g., carpool, drive an electric or hybrid vehicle, use public transportation, walk or bike) (67%) or their diet (e.g., eat less red meat or switch to a vegetarian or vegan diet) (62%).

A majority (70%) also say that they have already or are willing to take action such as working with their community to reduce emissions, for example by installing bike paths, hosting farmers markets, or using community solar panels. And nearly 6 in 10 (57%) say that they have already or are willing to write or lobby elected officials about climate change action with a similar proportion (57%) saying they already have or are willing to join an organization or committee working on climate change action.

Read more at Science Daily

Why bumble bees are going extinct in time of 'climate chaos'

When you were young, were you the type of child who would scour open fields looking for bumble bees? Today, it is much harder for kids to spot them, since bumble bees are drastically declining in North America and in Europe.

A new study from the University of Ottawa found that in the course of a single human generation, the likelihood of a bumble bee population surviving in a given place has declined by an average of over 30%.

Peter Soroye, a PhD student in the Department of Biology at the University of Ottawa, Jeremy Kerr, professor at the University of Ottawa and head of the lab group Peter is in, along with Tim Newbold, research fellow at UCL (University College London), linked the alarming idea of ''climate chaos'' to extinctions, and showed that those extinctions began decades ago.

"We've known for a while that climate change is related to the growing extinction risk that animals are facing around the world," first author Peter Soroye explained. "In this paper, we offer an answer to the critical questions of how and why that is. We find that species extinctions across two continents are caused by hotter and more frequent extremes in temperatures."

"We have now entered the world's sixth mass extinction event, the biggest and most rapid global biodiversity crisis since a meteor ended the age of the dinosaurs." -- Peter Soroye

Massive decline of the most important pollinators on Earth

"Bumble bees are the best pollinators we have in wild landscapes and the most effective pollinators for crops like tomato, squash, and berries," Peter Soroye observed. "Our results show that we face a future with many less bumble bees and much less diversity, both in the outdoors and on our plates."

The researchers discovered that bumble bees are disappearing at rates "consistent with a mass extinction."

"If declines continue at this pace, many of these species could vanish forever within a few decades," Peter Soroye warned.

The technique

"We know that this crisis is entirely driven by human activities," Peter Soroye said. "So, to stop this, we needed to develop tools that tell us where and why these extinctions will occur."

The researchers looked at climate change and how it increases the frequency of really extreme events like heatwaves and droughts, creating a sort of "climate chaos" which can be dangerous for animals. Knowing that species all have different tolerances for temperature (what's too hot for some might not be for others), they developed a new measurement of temperature.

"We have created a new way to predict local extinctions that tells us, for each species individually, whether climate change is creating temperatures that exceed what the bumble bees can handle," Dr. Tim Newbold explained.

Using data on 66 different bumble bee species across North America and Europe that have been collected over a 115-year period (1900-2015) to test their hypothesis and new technique, the researchers were able to see how bumble bee populations have changed by comparing where bees are now to where they used to be historically.

"We found that populations were disappearing in areas where the temperatures had gotten hotter," Peter Soroye said. "Using our new measurement of climate change, we were able to predict changes both for individual species and for whole communities of bumble bees with a surprisingly high accuracy."

A new horizon of research

This study doesn't end here. In fact, it opens the doors to new research horizons to track extinction levels for other species like reptiles, birds and mammals.

"Perhaps the most exciting element is that we developed a method to predict extinction risk that works very well for bumble bees and could in theory be applied universally to other organisms," Peter Soroye indicated. "With a predictive tool like this, we hope to identify areas where conservation actions would be critical to stopping declines."

"Predicting why bumble bees and other species are going extinct in a time of rapid, human-caused climate change could help us prevent extinction in the 21st century." -- Dr. Jeremy Kerr

There is still time to act

"This work also holds out hope by implying ways that we might take the sting out of climate change for these and other organisms by maintaining habitats that offer shelter, like trees, shrubs, or slopes, that could let bumble bees get out of the heat," Dr. Kerr said. "Ultimately, we must address climate change itself and every action we take to reduce emissions will help. The sooner the better. It is in all our interests to do so, as well as in the interests of the species with whom we share the world."

Read more at Science Daily

One small grain of moon dust, one giant leap for lunar studies

Back in 1972, NASA sent their last team of astronauts to the Moon in the Apollo 17 mission. These astronauts brought some of the Moon back to Earth so scientists could continue to study lunar soil in their labs. Since we haven't returned to the Moon in almost 50 years, every lunar sample is precious. We need to make them count for researchers now and in the future. In a new study in Meteoritics & Planetary Science, scientists found a new way to analyze the chemistry of the Moon's soil using a single grain of dust. Their technique can help us learn more about conditions on the surface of the Moon and formation of precious resources like water and helium there.

"We're analyzing rocks from space, atom by atom," says Jennika Greer, the paper's first author and a PhD student at the Field Museum and University of Chicago. " It's the first time a lunar sample has been studied like this. We're using a technique many geologists haven't even heard of.

"We can apply this technique to samples no one has studied," Philipp Heck, a curator at the Field Museum, associate professor at the University of Chicago, and co-author of the paper, adds. "You're almost guaranteed to find something new or unexpected. This technique has such high sensitivity and resolution, you find things you wouldn't find otherwise and only use up a small bit of the sample."

The technique is called atom probe tomography (APT), and it's normally used by materials scientists working to improve industrial processes like making steel and nanowires. But its ability to analyze tiny amounts of material makes it a good candidate for studying lunar samples. The Apollo 17 sample contains 111 kilograms (245 pounds) of lunar rocks and soil -- in the grand scheme of things, not a whole lot, so researchers have to use it wisely. Greer's analysis required only a single grain of soil, about as wide as a human hair. In that tiny grain, she identified products of space weathering -- pure iron, water and helium -- that formed through the interaction of the lunar soil with the space environment. Extracting these precious resources from lunar soil could help future astronauts sustain their activities on the Moon.

To study the tiny grain, Greer used a focused beam of charged atoms to carve a tiny, super-sharp tip into its surface. This tip was only a few hundred atoms wide -- for comparison, a sheet of paper is hundreds of thousands of atoms thick. "We can use the expression nanocarpentry," says Philipp Heck. "Like a carpenter shapes wood, we do it at the nanoscale to minerals."

Once the sample was inside the atom probe at Northwestern University, Greer zapped it with a laser to knock atoms off one by one. As the atoms flew off the sample, they struck a detector plate. Heavier elements, like iron, take longer to reach the detector than lighter elements, like hydrogen. By measuring the time between the laser firing and the atom striking the detector, the instrument is able to determine the type of atom at that position and its charge. Finally, Greer reconstructed the data in three dimensions, using a color-coded point for each atom and molecule to make a nanoscale 3D map of the Moon dust.
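The identification step rests on the standard time-of-flight relation: an ion of mass m and charge q accelerated through voltage V gains energy qV = ½mv², so its travel time over flight path L is t = L·sqrt(m / 2qV), and heavier ions arrive later. The sketch below illustrates this; the flight path and voltage are hypothetical round numbers, not the specifications of the Northwestern instrument.

```python
import math

E = 1.602176634e-19      # elementary charge, C
AMU = 1.66053906660e-27  # atomic mass unit, kg

def flight_time(mass_amu, charge_state, voltage, path_m):
    """Time for an ion to cross the flight path after the laser pulse.
    From qV = (1/2) m v^2:  t = L * sqrt(m / (2 q V))."""
    m = mass_amu * AMU
    q = charge_state * E
    return path_m * math.sqrt(m / (2 * q * voltage))

# Hypothetical instrument: 10 cm flight path, 5 kV effective voltage.
for name, mass in [("H+", 1.008), ("O+", 15.999), ("Fe+", 55.845)]:
    t_ns = flight_time(mass, 1, 5000.0, 0.10) * 1e9
    print(f"{name}: {t_ns:.0f} ns")  # hydrogen arrives first, iron last
```

Inverting the same relation, m/q = 2V(t/L)², is how the instrument turns each measured arrival time into an element identification.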

It's the first time scientists can see both the type of atoms and their exact location in a speck of lunar soil. While APT is a well-known technique in material science, nobody had ever tried using it for lunar samples before. Greer and Heck encourage other cosmochemists to try it out. "It's great for comprehensively characterizing small volumes of precious samples," Greer says. "We have these really exciting missions like Hayabusa2 and OSIRIS-REx returning to Earth soon -- uncrewed spacecrafts collecting tiny pieces of asteroids. This is a technique that should definitely be applied to what they bring back because it uses so little material but provides so much information."

Studying soil from the Moon's surface gives scientists insight into an important force within our Solar System: space weathering. Space is a harsh environment, with tiny meteorites, streams of particles coming off the Sun, and radiation in the form of solar and cosmic rays. While Earth's atmosphere protects us from space weathering, other bodies like the Moon and asteroids don't have atmospheres. As a result, the soil on the Moon's surface has undergone changes caused by space weathering, making it fundamentally different from the rock that the rest of the Moon is composed of. It's kind of like a chocolate-dipped ice cream cone: the outer surface doesn't match what's inside. With APT, scientists can look for differences between space weathered surfaces and unexposed Moon dirt in a way that no other method can. By understanding the kinds of processes that make these differences happen, they can more accurately predict what's just under the surface of moons and asteroids that are too far away to bring to Earth.

Because Greer's study used a nanosized tip, her original grain of lunar dust is still available for future experiments. This means new generations of scientists can make new discoveries and predictions from the same precious sample. "Fifty years ago, no one anticipated that someone would ever analyze a sample with this technique, and only using a tiny bit of one grain," Heck states. "Thousands of such grains could be on the glove of an astronaut, and it would be sufficient material for a big study."

Greer and Heck emphasize the need for missions where astronauts bring back physical samples because of the variety of terrains in outer space. "If you only analyze space weathering from the one place on the Moon, it's like only analyzing weathering on Earth in one mountain range," Greer says. "We need to go to other places and objects to understand space weathering in the same way we need to check out different places on Earth, like the sand in deserts and outcrops in mountain ranges."

We don't yet know what surprises we might find from space weathering. "It's important to understand these materials in the lab so we understand what we're seeing when we look through a telescope," Greer says. "Because of something like this, we understand what the environment is like on the Moon. It goes way beyond what astronauts are able to tell us as they walk on the Moon. This little grain preserves millions of years of history."

Read more at Science Daily

Molecular 'switch' reverses chronic inflammation and aging

Chronic inflammation, which results when old age, stress or environmental toxins keep the body's immune system in overdrive, can contribute to a variety of devastating diseases, from Alzheimer's and Parkinson's to diabetes and cancer.

Now, scientists at the University of California, Berkeley, have identified a molecular "switch" that controls the immune machinery responsible for chronic inflammation in the body. The finding, which appears online Feb. 6 in the journal Cell Metabolism, could lead to new ways to halt or even reverse many of these age-related conditions.

"My lab is very interested in understanding the reversibility of aging," said senior author Danica Chen, associate professor of metabolic biology, nutritional sciences and toxicology at UC Berkeley. "In the past, we showed that aged stem cells can be rejuvenated. Now, we are asking: to what extent can aging be reversed? And we are doing that by looking at physiological conditions, like inflammation and insulin resistance, that have been associated with aging-related degeneration and diseases."

In the study, Chen and her team show that a bulky collection of immune proteins called the NLRP3 inflammasome -- responsible for sensing potential threats to the body and launching an inflammation response -- can be essentially switched off by removing a small bit of molecular matter in a process called deacetylation.

Overactivation of the NLRP3 inflammasome has been linked to a variety of chronic conditions, including multiple sclerosis, cancer, diabetes and dementia. Chen's results suggest that drugs targeted toward deacetylating, or switching off, this NLRP3 inflammasome might help prevent or treat these conditions and possibly age-related degeneration in general.

"This acetylation can serve as a switch," Chen said. "So, when it is acetylated, this inflammasome is on. When it is deacetylated, the inflammasome is off."

By studying mice and immune cells called macrophages, the team found that a protein called SIRT2 is responsible for deacetylating the NLRP3 inflammasome. Mice that were bred with a genetic mutation that prevented them from producing SIRT2 showed more signs of inflammation at the ripe old age of two than their normal counterparts. These mice also exhibited higher insulin resistance, a condition associated with type 2 diabetes and metabolic syndrome.

The team also studied older mice whose immune systems had been destroyed with radiation and then reconstituted with blood stem cells that produced either the deacetylated or the acetylated version of the NLRP3 inflammasome. Those who were given the deacetylated, or "off," version of the inflammasome had improved insulin resistance after six weeks, indicating that switching off this immune machinery might actually reverse the course of metabolic disease.

"I think this finding has very important implications in treating major human chronic diseases," Chen said. "It's also a timely question to ask, because in the past year, many promising Alzheimer's disease trials ended in failure. One possible explanation is that treatment starts too late, and it has gone to the point of no return. So, I think it's more urgent than ever to understand the reversibility of aging-related conditions and use that knowledge to aid a drug development for aging-related diseases."

Read more at Science Daily

Feb 6, 2020

Breathing may change your mind about free will

Have you ever gone ahead and eaten that piece of chocolate, despite yourself?

Do you inadvertently make decisions because you are hungry or cold? In other words, does the brain's processing of internal bodily signals interfere with your ability to act freely?

This line of thinking is at the heart of research that questions our ability to act on thoughts of free will. We already know that inner body signals, like the heartbeat, affect our mental states, can be used to reduce the perception of pain and are of fundamental importance for bodily self-consciousness.

Thanks to a new discovery, it turns out that these inner body signals do indeed affect acts of volition.

Scientists at EPFL in Switzerland have shown that you are more likely to initiate a voluntary decision as you exhale. Published in today's issue of Nature Communications, these findings propose a new angle on an almost 60-year-old neuroscientific debate about free will and the involvement of the human brain.

"We show that voluntary action is indeed linked to your body's inner state, especially with breathing and expiration but not with some other bodily signals, such as the heartbeat," explains Olaf Blanke, EPFL's Foundation Bertarelli Chair in Cognitive Neuroprosthetics and senior author.

At the center of these results is the readiness potential (RP), a signal of brain activity observed in the human cortex that appears not only before voluntary muscle movement, but also before one becomes aware of the intention to move. The RP is the signature of voluntary action since it consistently appears in brain activity measurements right before acts of free will (like being aware that one wants to reach for the chocolate).

Interpretations of the RP have been debated for decades. Some interpret the RP to show that free will is an illusion, since the RP precedes the conscious experience of free will. It seems to show that the brain commits to a decision (chocolate) before we are even consciously aware of having made that decision.

More recently, it was suggested that the RP could be an artefact of measurement, potentially putting free will back into our command.

But if we take on the view that our conscious decisions arise from a cascade of firing neurons, then the origin of the RP may actually provide insight into the mechanisms that lead to voluntary action and free will. The way the brain's neurons work together to come to a decision is still poorly understood. Our conscious experience of free will, our ability to make decisions freely, may then be intricately wired to the rest of our body.

The EPFL results suggest that the origin of the RP is linked to breathing, providing a new perspective on experiences of free will: the regular cycle of breathing is part of the mechanism that leads to conscious decision-making and acts of free will. Moreover, we are more likely to initiate voluntary movements as we exhale. (Did you reach for that piece of chocolate during an exhale?)

These findings suggest that the breathing pattern may be used to predict 'when' people begin voluntary action. Your breathing patterns could also be used to predict consumer behavior, like when you click on that button. Medical devices that use brain-computer interfaces could be tuned and improved according to breathing. The breathing-action coupling could be used in research and diagnostic tools for patients with deficits in voluntary action control, such as obsessive-compulsive disorder, Parkinson's disease, and Tourette syndrome. Blanke and Hyeong-Dong Park, first author of this research, have filed a patent based on these findings.

Free will hijacked by interoceptive signals?

More generally, the EPFL findings suggest that acts of free will are affected by signals from other systems of the body. Succumbing to that urge to eat chocolate may depend more on your body's internal signals than you may realize!

Blanke elaborates, "That voluntary action, an internally or self-generated action, is coupled with an interoceptive signal, breathing, may be just one example of how acts of free will are hostage to a host of inner body states and the brain's processing of these internal signals. Interestingly, such signals have also been shown to be of relevance for self-consciousness."

You may be tempted to blame acts of chocolate binging on interoceptive electrical signals hijacking your free will. The gut-mind connection is an active field of research and interoceptive messages sent to the brain certainly impact food cravings. For now, this latest EPFL research only improves predictions of when you will indulge in that craving, and not what you actually crave.

Acts of free will and inner states of the body

The prevailing view in neuroscience is that consciousness is an emergent phenomenon of the brain: the firing of the brain's neurons gives rise to consciousness and the feeling of free will or voluntary action. As part of the physical universe, the brain's electrical activity, operating within the constraints of anatomy, is subject to the laws of physics. In this sense, brain signals encoding the body, lungs and heart might naturally affect the brain's cognitive states too, and therefore influence acts of free will.

To test whether the RP depends on the body's inner state and the brain's representation thereof, Blanke and colleagues asked 52 subjects to press a button at will at Campus Biotech in Geneva. EEGs monitored brain activity, a belt around the chest measured breathing activity and cardiac activity was recorded.

The scientists found that the RP and voluntary action (pressing the button) is linked to the body's inner state -- the regular breathing cycle -- but not to the heartbeat. Participants initiated voluntary movements more frequently during an exhale than an inhale and were completely unaware of this breathing-action coupling. The RP was also modulated depending on the breathing cycle.
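One way such a coupling shows up in analysis is by classifying each button press by the phase of the breathing cycle in which it occurred. The sketch below uses synthetic press times with a built-in bias toward exhalation as a stand-in for the reported effect; the 4-second breathing period, the 70/30 bias, and the clean half-cycle split are all assumptions for illustration, not the study's numbers.

```python
import random

def breath_phase(t, period=4.0):
    """Phase of an idealized regular breathing cycle at time t (seconds):
    0.0-0.5 = inhalation, 0.5-1.0 = exhalation (simplified model)."""
    return (t % period) / period

random.seed(1)
# Synthetic button-press times with a built-in exhalation bias,
# standing in for the effect reported in the study.
presses = []
while len(presses) < 200:
    t = random.uniform(0, 600)
    accept = 0.7 if breath_phase(t) >= 0.5 else 0.3  # assumed bias
    if random.random() < accept:
        presses.append(t)

exhale = sum(1 for t in presses if breath_phase(t) >= 0.5)
print(f"{exhale}/{len(presses)} presses fell during exhalation")
```

In the real experiment the phase would come from the chest-belt respiration trace rather than an idealized clock, but the classification step is the same: tag each press with its respiratory phase and compare the counts per half-cycle.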

EPFL scientist and first author of the study Hyeong-Dong Park explains, "The RP no longer corresponds only to cortical activity 'unconsciously preparing' voluntary action. The RP, at least partly, reflects respiration-related cortical processing that is coupled to voluntary action. More generally, it further suggests that higher-level motor control, such as voluntary action, is shaped or affected by the involuntary and cyclic motor act of our internal body organs, in particular the lungs. Still the precise neural activity that controls breathing remains to be mapped."

The readiness potential and interpretations

Philosophers, psychologists, and more recently neuroscientists have long debated our ability to act freely. The meaning of the readiness potential (RP) has been questioned ever since its discovery by neuroscientists Hans Helmut Kornhuber and Lüder Deecke in 1965, and later regarding its relation to free will in neuroscientist Benjamin Libet's experiments.

The entire brain consists of approximately 100 billion neurons, and each individual neuron transmits electrical signals as the brain works. Electrodes placed on the head can measure the collective electrical activity of the brain's neurons, seen as wavy lines called an electroencephalogram (EEG).

In 1965, Kornhuber and Deecke conducted a seminal experiment on voluntary action and discovered a recurring pattern of brain activity. They placed EEG electrodes on top of each subject's head and asked the subject to press a button at will. The EEG consistently exhibited a rising slope -- the readiness potential -- beginning one second or more before the voluntary movement.

In the early 1980s, neuroscientist Benjamin Libet further tested the relationship between the RP and conscious awareness, or intention, of voluntary action. His highly influential results showed that approximately 200 ms before his subjects pressed the button, they became aware of an urge or intention to act -- something Libet referred to as the W time -- and yet the RP consistently preceded the W time.

Libet suggested that these findings showed that even before we make a conscious decision to act, the brain is already unconsciously activated and involved in planning the action.

Some have interpreted the relation between the RP and W time as an indication that human free will might be an illusion. The RP is viewed as the brain committing to a decision (to press the button) before the subject is even aware of having made that decision. If commitment to a decision is being made before we are even aware of it, then what mechanism is making the decision for us?

For the neuroscientist who considers consciousness to arise from brain activity (rather than brain activity arising from consciousness), Libet's results may not be surprising, since the conscious experience of free will is viewed as an emergent phenomenon of brain activity.

Read more at Science Daily

Study takes a stand against prolonged sitting

In many workplaces, standing desks and walking meetings are addressing the health dangers of sitting too long each day, but for universities, the natural question is how to make such adjustments for classrooms.

The question appealed to emerita dance professor Angelia Leung from the UCLA Department of World Arts & Cultures/Dance. Sitting too long was never an issue for Leung's students. But for most college students, desk time is more common than dance time. In an unusual collaboration between the arts and sciences, Leung partnered with Burt Cowgill, an assistant adjunct professor with the UCLA Fielding School of Public Health, to find ways to help students stand up.

The team's research, published in the Journal of American College Health on Feb. 6, hit upon solutions that students and faculty can agree on. However, all the solutions, the researchers said, would work best if joined with an effort to raise awareness about the health risks of extended sitting, aimed at shifting cultural expectations and norms about classroom etiquette.

Studies have linked prolonged sitting with health concerns such as heart disease, cancer, depression, diabetes and obesity. Research shows that breaking up long periods of sitting with movement at least once an hour reduces those risks, while regular exercise at other times of day does not. Despite those risks, the UCLA research found that more than half of students interviewed considered it socially unacceptable to stand up and stretch in the middle of class, and nearly two-thirds felt the same about doing so during smaller discussion sections.

"A cultural change has to take place -- that it's OK to take a stretch break, to stand up during a lecture, to fidget when needed -- it's 'good' for health's sake," Leung said. "My students have an advantage because dance classes naturally involve movement, but we can extend these benefits to any class on campus with something as simple as short stretching breaks -- no dancing required."

Some of the recommendations are simple: Take hourly breaks to stand and stretch during long classes; include more small-group activities that require moving to switch desks; and create more open classrooms with space to walk without squeezing past fellow students and room to install standing desk areas.

To overcome social stigma, the researchers emphasized that professors and instructors will have to take the lead in offering group breaks at specific times rather than suggesting students can get up any time they wish. They also recommended that professors encourage students to get up and move during their breaks; and suggested that university administrators establish policies that call for building more open classrooms and adding features such as adjustable desks.

The research was funded by the Semel Healthy Campus Initiative Center at UCLA, a campuswide effort to make the healthy choice the easy choice, and to promote wellness through education and research. For the study, moderators conducted eight focus-group interviews and guided discussions with 66 UCLA students, roughly half undergraduates and half graduate students. The researchers also interviewed eight faculty members. The researchers looked at how much students and faculty knew about the health risks of sitting, investigated whether the participants could avoid prolonged sitting in class, and gathered ideas for feasible solutions.

"We need to change the way we teach so that we can offer more standing breaks, create opportunities for in-class movement, and even change the built environment so that there's more room for moving around," Cowgill said.

But even though the study found that students and faculty were broadly supportive of making changes, Cowgill said he doubts people will, ahem, stand up against the status quo if there isn't also an effort to raise awareness about the health risks. Social norms and the physical classroom environment are barriers, but awareness is the biggest obstacle.

Cowgill said he was surprised to learn that many of the participants were not aware of the health problems that prolonged sitting can cause, even for people who are otherwise active. "Many people thought they would be fine if they also squeezed in a 30-minute jog, and that's just not what research shows us."

Read more at Science Daily

Chemical found in drinking water linked to tooth decay in children

Children with higher concentrations of a certain chemical in their blood are more likely to get cavities, according to a new study by West Virginia University School of Dentistry researchers.

Manufactured chemical groups called perfluoroalkyl and polyfluoroalkyl substances (PFAS) are ubiquitous as a result of extensive manufacturing and use. Although manufacturers no longer use PFAS to make nonstick cookware, carpet, cardboard and other products, the chemicals persist in the environment. Scientists have linked them to a range of health problems -- from heart disease to high cholesterol -- but now R. Constance Wiener and Christopher Waters are exploring how they affect dental health.

They investigated whether higher concentrations of PFAS were associated with greater tooth decay in children. One of them -- perfluorodecanoic acid -- was linked to dental cavities. Their findings appear in the Journal of Public Health Dentistry.

"Due to the strong chemical bonds of PFAS, it is difficult for them to breakdown, which makes them more likely to be persistent within the environment, especially in drinking water systems," said Waters, who directs the School of Dentistry's research labs. "A majority of people may not be aware that they are using water and other products that contain PFAS."

The 629 children who participated in the study were 3 to 11 years old and were part of the National Health and Nutrition Examination Survey. Samples of the children's blood were analyzed for PFAS in 2013 and 2014. Their tooth decay and other factors -- such as their race, their BMI and how often they brushed their teeth -- were assessed.

Of the seven PFAS that Wiener and Waters analyzed, perfluorodecanoic acid was the one that correlated with higher levels of tooth decay.

"Perfluorodecanoic acid, in particular, has a long molecular structure and strong chemical bonds; therefore, it remains in the environment longer. As a result, it is more likely to have negative health consequences such as dental caries," said Dr. Wiener, an associate professor in the Department of Dental Practice and Rural Health.

But how does that influence happen? Wiener and Waters have a hypothesis. According to other research, perfluorodecanoic acid may disrupt the healthy development of enamel, which is what makes teeth hard. That disruption can leave teeth susceptible to decay.

However, when it comes to cavities, scientists haven't parsed perfluorodecanoic acid's mechanism of action yet. The topic warrants further investigation.

"While the findings of this study are important, there are some study limitations, and more work is needed to fully understand how this molecule impacts normal tooth formation," said Fotinos Panagakos, the School of Dentistry's vice dean for administration and research.

"The good news is that, in our study, about half of the children did not have any measurable amount of PFAS. Perhaps this is due to certain PFAS no longer being made in the US," Wiener said.

Another piece of good news is that the study reaffirmed the importance of dental hygiene and checkups. Children who brushed once a day or less frequently had significantly more tooth decay than those who brushed at least twice daily.

Likewise, children who had not been to the dentist within the previous year were twice as likely to have higher rates of tooth decay than those who had.

So, even though parents cannot control what is in their children's drinking water, they can still protect their children's teeth by fostering thorough, regular brushing and scheduling dental exams.

Read more at Science Daily

Normal resting heart rate appears to vary widely from person to person

A person's normal resting heart rate is fairly consistent over time, but may vary from others' by up to 70 beats per minute, according to analysis of the largest dataset of daily resting heart rate ever collected. Giorgio Quer of the Scripps Research Translational Institute in La Jolla, California, and colleagues present these findings in the open-access journal PLOS ONE on February 5, 2020 as part of an upcoming PLOS Collection on Digital Health Technology.

A routine visit to the doctor usually involves a measurement of resting heart rate, but such measurements are rarely actionable unless they deviate significantly from a "normal" range established by population-level studies. However, wearables that track heart rate now provide the opportunity to continuously monitor heart rate over time, and identify normal resting heart rates at the individual level.

In the largest study of its kind to date, Quer and colleagues retrospectively analyzed de-identified heart rate data from wearables worn for a median of 320 days by 92,457 people from across the U.S. Nearly 33 million days' worth of heart rate data were collected in total. The researchers used the data to examine variations in resting heart rate for individuals over time, as well as between individuals with different characteristics.

The analysis showed that one person's mean daily resting heart rate may differ by up to 70 beats per minute from another person's normal rate. Taken together, age, sex, body mass index (BMI), and average daily sleep duration accounted for less than 10 percent of the observed variation between individuals.
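The "less than 10 percent" figure is the share of between-person variance in mean resting heart rate that those covariates explain -- essentially an R² from a regression of resting heart rate on age, sex, BMI and sleep. As an illustration only (synthetic data and made-up coefficients, not the study's actual pipeline), a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Synthetic covariates (illustrative only -- not the study's data):
age = rng.uniform(20, 70, n)
sex = rng.integers(0, 2, n)        # 0 = male, 1 = female
bmi = rng.uniform(18, 35, n)
sleep = rng.uniform(5, 9, n)       # hours per night

# Resting heart rate with a weak covariate signal and large
# person-to-person noise, mimicking a situation where the
# covariates explain well under 10% of the variation.
rhr = (65 - 0.05 * age + 2.0 * sex + 0.2 * bmi - 0.5 * sleep
       + rng.normal(0, 8, n))

# Ordinary least squares fit, then the coefficient of determination R^2.
X = np.column_stack([np.ones(n), age, sex, bmi, sleep])
beta, *_ = np.linalg.lstsq(X, rhr, rcond=None)
resid = rhr - X @ beta
r2 = 1 - resid.var() / rhr.var()
print(f"variance explained: {r2:.1%}")
```

With these made-up coefficients the covariate signal is small relative to the individual noise, so R² lands in the low single digits -- the same qualitative picture the study reports.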

The authors also observed a small seasonal trend in resting heart rate, with slightly higher values in January and slightly lower values in July. The researchers also found that some individuals may occasionally experience brief periods when their resting heart rate differs by 10 or more beats per minute from their normal range.

These findings suggest the potential value of further research to investigate whether tracking a person's daily resting heart rate could enable earlier detection of clinically important changes.

Read more at Science Daily

Feb 5, 2020

Why males pack a powerful punch

Elk have antlers. Rams have horns. In the animal kingdom, males develop specialized weapons for competition when winning a fight is critical. Humans do too, according to new research from the University of Utah. Males' upper bodies are built for more powerful punches than females', says the study, published in the Journal of Experimental Biology, suggesting that fighting may have long been a part of our evolutionary history.

"In mammals in general," says U professor David Carrier of the School of Biological Sciences, "the difference between males and females is often greatest in the structures that are used as weapons."

Assembling evidence

For years, Carrier has been exploring the hypothesis that generations of interpersonal male-male aggression long in the past have shaped structures in human bodies to specialize for success in fighting. Past work has shown that the proportions of the hand aren't just for manual dexterity -- they also protect the hand when it's formed into a fist. Other studies looked at the strength of the bones of the face (as a likely target of a punch) and how our heels, planted on the ground, can confer additional upper body power.

"One of the predictions that comes out of those," Carrier says, "is if we are specialized for punching, you might expect males to be particularly strong in the muscles that are associated with throwing a punch."

Jeremy Morris, then a doctoral student and now an assistant professor at Wofford College, designed an experiment with Carrier, doctoral student Jenna Link and associate professor James C. Martin to explore the sexual dimorphism, or physical differences between men and women, of punching strength. It's already known that males' upper bodies, on average, have 75% more muscle mass and 90% more strength than females'. But it's not known why.

"The general approach to understanding why sexual dimorphism evolves," Morris says, "is to measure the actual differences in the muscles or the skeletons of males and females of a given species, and then look at the behaviors that might be driving those differences."

Cranking through a punch

To test their hypothesis the researchers had to measure punching strength, but carefully. If participants directly punched a bag or other surface, they risked hand injury. Instead, the researchers rigged up a hand crank that would mimic the motions of a punch. They also measured participants' strength in pulling a line forward over their head, akin to the motion of throwing a spear. This tested an alternative hypothesis that males' upper body strength may have developed for the purpose of throwing or spear hunting.

Twenty men and 19 women participated. "We had them fill out an activity questionnaire," Morris says, "and they had to score in the 'active' range. So, we weren't getting couch potatoes, we were getting people that were very fit and active."

But even with roughly uniform levels of fitness, the males' average power during a punching motion was 162% greater than females', with the least-powerful man still stronger than the most powerful woman. Such a distinction between genders, Carrier says, develops with time and with purpose.

"It evolves slowly," he says, "and this is a dramatic example of sexual dimorphism that's consistent with males becoming more specialized for fighting, and males fighting in a particular way, which is throwing punches."

They didn't find the same magnitude of difference in overhead pulling strength, lending additional weight to the conclusion that males' upper body strength is specialized for punching rather than throwing weapons.

Breaking a legacy of violence

It's an uncomfortable thought to consider that men may be designed for fighting. That doesn't mean, however, that men today are destined to live their ancestors' violent lives.

"Human nature is also characterized by avoiding violence and finding ways to be cooperative and work together, to have empathy, to care for each other, right?" Carrier says. "There are two sides to who we are as a species. If our goal is to minimize all forms of violence in the future, then understanding our tendencies and what our nature really is, is going to help."

Read more at Science Daily

Fruit flies respond to rapid changes in the visual environment

Vision is fundamentally based on the perception of contrast. When light conditions change, the eye needs a certain period of time to adapt and restore its ability to estimate contrast correctly. These processes are relatively well understood. However, researchers at Johannes Gutenberg University Mainz (JGU) have now discovered a mechanism employed by the fruit fly Drosophila melanogaster that broadens our understanding of visual perception. Their results explain why the eye can correctly evaluate contrast, even in suddenly changing light conditions. "Fruit flies can do this because they have nerve cells in their visual system that react to luminance. These nerve cells make it possible for the flies to adjust their behavior when visual stimuli dynamically change," explained Professor Marion Silies, head of the research project at JGU.

Sensory systems of living organisms have evolved to register changes rather than absolute sensory inputs. "For example, you might well forget that you're wearing a necklace during the day, but if an insect lands on your skin you feel it immediately," added Silies. Vision works in the same way, as it is adaptable and designed to respond to changes in the environment. Many nerve cells respond to contrasts rather than to luminance itself. That is why many animals have visual systems that work particularly well at dawn, at dusk, in daylight, or in a rapidly changing environment.

The performance of photoreceptors in the retina plays a key role in both vertebrates and invertebrates. These photoreceptors ensure that contrast is detected regardless of the background luminance. However, this retinal adaptation alone cannot explain the mechanism that copes with sudden changes, such as when an animal moves rapidly or views an object passing from bright sunlight into shadow. In such cases, background luminance can change within milliseconds.

Contrast-sensitive lamina neurons alone are not enough / Luminance acts as a corrective signal

During their investigation of Drosophila, Professor Marion Silies and her team of neuroscientists focused on the processes that take place directly downstream of the photoreceptors in the nervous system. They paid particular attention to the pathways involving the lamina neurons that are specialized to detect an increase or decrease in contrast. "Here, we uncovered a luminance-sensitive pathway in the Drosophila visual system. Contrast-sensitive neuronal responses alone are insufficient to account for behavioral responses to changing visual stimuli, arguing for the presence of a corrective signal that scales contrast-sensitive responses when background luminance suddenly declines," the authors write in their article for Current Biology. "We have been able to show that information about luminance acts as a corrective signal which intervenes when it suddenly goes dim. This implies that information about luminance is needed in order to accurately recognize contrasts," added lead author Madhura Ketkar. To date, it had been assumed that the contrast signals conveyed by other lamina neurons were by themselves sufficient for accurate vision in rapidly changing light conditions -- making it possible, for instance, to correctly compute visual responses when a football moves from light into shade.

L3 neurons are sensitive to brightness and particularly active in low light conditions

The neurobiologists were able to demonstrate this by measuring the calcium signals in the nerve cells with the help of two-photon microscopy. This technique enabled them to determine the activity of individual nerve cells in live fruit flies. "Our measurements showed that there are cells which react to luminance and not contrast," emphasized Silies. The team confirmed these findings by behavioral experiments in which the flies were made to walk on a small air-cushioned ball in front of a dynamically changing background. "We were also able to clearly demonstrate that these luminance-sensitive cells are necessary for the fly to respond when the background quickly turned dim," Silies continued. When L3 lamina neurons were not active, there was no appropriate behavioral response.

Read more at Science Daily

Scientists unravel mystery of photosynthesis

Plants have been harnessing the sun's energy for hundreds of millions of years.

Algae and photosynthetic bacteria have been doing the same for even longer, all with remarkable efficiency and resiliency.

It's no wonder, then, that scientists have long sought to understand exactly how they do this, hoping to use this knowledge to improve human-made devices such as solar panels and sensors.

Scientists from the U.S. Department of Energy's (DOE) Argonne National Laboratory, working closely with collaborators at Washington University in St. Louis, recently solved a critical part of this age-old mystery, homing in on the initial, ultrafast events through which photosynthetic proteins capture light and use it to initiate a series of electron transfer reactions.

"In order to understand how biology fuels all of its engrained activities, you must understand electron transfer," said Argonne biophysicist Philip Laible. "The movement of electrons is crucial: it's how work is accomplished inside a cell."

In photosynthetic organisms, these processes begin with the absorption of a photon of light by pigments localized in proteins.

Each photon propels an electron across a membrane located inside specialized compartments within the cell.

"The separation of charge across a membrane -- and stabilization of it -- is critical as it generates energy that fuels cell growth," said Argonne biochemist Deborah Hanson.

The Argonne and Washington University research team has gained valuable insight on the initial steps in this process: the electron's journey.

Nearly 35 years ago, when the first structure of these types of complexes was unveiled, scientists were surprised to discover that after the absorption of light, the electron transfer processes faced a dilemma: there are two possible pathways for the electron to travel.

In nature, plants, algae and photosynthetic bacteria use just one of them -- and scientists had no idea why.

What they did know was that the propulsion of the electron across the membrane -- effectively harvesting the energy of the photon -- required multiple steps.

Argonne and Washington University scientists have managed to interfere with each one of them to change the electron's trajectory.

"We've been on this trail for more than three decades, and it is a great accomplishment that opens up many opportunities," said Dewey Holten, a chemist at Washington University.

The scientists' recent article, "Switching sides -- Reengineered primary charge separation in the bacterial photosynthetic reaction center," published in the Proceedings of the National Academy of Sciences, shows how they discovered an engineered version of this protein complex that switched the utilization of the pathways, enabling the one that was inactive while disabling the other.

"It is remarkable that we have managed to switch the direction of initial electron transfer," said Christine Kirmaier, Washington University chemist and project leader. "In nature, the electron chose one path 100 percent of the time. But through our efforts, we have been able to make the electron switch to an alternate path 90 percent of the time. These discoveries pose exciting questions for future research."

As a result of their efforts, the scientists are now closer than ever to being able to design electron transfer systems in which they can send an electron down a pathway of their choosing.

Read more at Science Daily

Astronomers discover unusual monster galaxy in the very early universe

An international team of astronomers led by scientists at the University of California, Riverside, has found an unusual monster galaxy that existed about 12 billion years ago, when the universe was only 1.8 billion years old.

Dubbed XMM-2599, the galaxy formed stars at a high rate and then died. Why it suddenly stopped forming stars is unclear.

"Even before the universe was 2 billion years old, XMM-2599 had already formed a mass of more than 300 billion suns, making it an ultramassive galaxy," said Benjamin Forrest, a postdoctoral researcher in the UC Riverside Department of Physics and Astronomy and the study's lead author. "More remarkably, we show that XMM-2599 formed most of its stars in a huge frenzy when the universe was less than 1 billion years old, and then became inactive by the time the universe was only 1.8 billion years old."

The team used spectroscopic observations from the W. M. Keck Observatory's powerful Multi-Object Spectrograph for Infrared Exploration, or MOSFIRE, to make detailed measurements of XMM-2599 and precisely quantify its distance.

Study results appear in the Astrophysical Journal.

"In this epoch, very few galaxies have stopped forming stars, and none are as massive as XMM-2599," said Gillian Wilson, a professor of physics and astronomy at UCR in whose lab Forrest works. "The mere existence of ultramassive galaxies like XMM-2599 proves quite a challenge to numerical models. Even though such massive galaxies are incredibly rare at this epoch, the models do predict them. The predicted galaxies, however, are expected to be actively forming stars. What makes XMM-2599 so interesting, unusual, and surprising is that it is no longer forming stars, perhaps because it stopped getting fuel or its black hole began to turn on. Our results call for changes in how models turn off star formation in early galaxies."

The research team found XMM-2599 formed more than 1,000 solar masses a year in stars at its peak of activity -- an extremely high rate of star formation. In contrast, the Milky Way forms about one new star a year.
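A back-of-envelope check makes the "huge frenzy" concrete: at its peak rate, assembling the reported stellar mass would take only a few hundred million years, consistent with most of the stars forming before the universe was 1 billion years old. A quick illustrative calculation (round numbers, not the paper's fitted values):

```python
# Back-of-envelope check (illustrative arithmetic, not from the paper):
# how long would XMM-2599 need at its peak rate to assemble its mass?
stellar_mass = 3.0e11   # solar masses ("more than 300 billion suns")
peak_sfr = 1.0e3        # solar masses formed per year at peak

years_needed = stellar_mass / peak_sfr
print(f"{years_needed:.1e} years")  # -> 3.0e+08 years, i.e. ~0.3 billion
```

At roughly a thousand times the Milky Way's present-day rate, a few hundred million years suffice -- comfortably inside the first billion years of cosmic history.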

"XMM-2599 may be a descendant of a population of highly star-forming dusty galaxies in the very early universe that new infrared telescopes have recently discovered," said Danilo Marchesini, an associate professor of astronomy at Tufts University and a co-author on the study.

The evolutionary pathway of XMM-2599 is unclear.

"We have caught XMM-2599 in its inactive phase," Wilson said. "We do not know what it will turn into by the present day. We know it cannot lose mass. An interesting question is what happens around it. As time goes by, could it gravitationally attract nearby star-forming galaxies and become a bright city of galaxies?"

Co-author Michael Cooper, a professor of astronomy at UC Irvine, said this outcome is a strong possibility.

"Perhaps during the following 11.7 billion years of cosmic history, XMM-2599 will become the central member of one of the brightest and most massive clusters of galaxies in the local universe," he said. "Alternatively, it could continue to exist in isolation. Or we could have a scenario that lies between these two outcomes."

The team has been awarded more time at the Keck Observatory to follow up on unanswered questions prompted by XMM-2599.

"We identified XMM-2599 as an interesting candidate with imaging alone," said co-author Marianna Annunziatella, a postdoctoral researcher at Tufts University. "We used Keck to better characterize and confirm its nature and help us understand how monster galaxies form and die. MOSFIRE is one of the most efficient and effective instruments in the world for conducting this type of research."

Read more at Science Daily

Feb 4, 2020

Grey seals discovered clapping underwater to communicate

Marine mammals like whales and seals usually communicate vocally using calls and whistles.

But now a Monash University-led international study has discovered that wild grey seals can also clap their flippers underwater during the breeding season, as a show of strength that warns off competitors and advertises to potential mates.

This is the first time a seal has been seen clapping completely underwater using its front flippers.

"The discovery of 'clapping seals' might not seem that surprising, after all, they're famous for clapping in zoos and aquaria," said lead study author Dr David Hocking from Monash University's School of Biological Sciences.

"But where zoo animals are often trained to clap for our entertainment -- these grey seals are doing it in the wild of their own accord."

The research, published today in the journal Marine Mammal Science, is based on video footage collected by naturalist Dr Ben Burville, a Visiting Researcher with Newcastle University, UK.

The footage -- which took Dr Burville 17 years of diving to catch on film -- shows a male grey seal clapping its paw-like flippers to produce a gunshot-like 'crack!' sound.

"The clap was incredibly loud and at first I found it hard to believe what I had seen," Dr Burville said.

"How could a seal make such a loud clap underwater with no air to compress between its flippers?"

"Other marine mammal species can produce similar types of percussive sound by slapping the water with their body or tail," said Associate Professor Alistair Evans from Monash University, who was also involved in the study.

The loud high-frequency noise produced by clapping cuts through background noise, sending out a clear signal to any other seals in the area.

"Depending on the context, the claps may help to ward off competitors and/or attract potential mates," Dr Hocking said.

"Think of a chest-beating male gorilla, for example. Like seal claps, those chest beats carry two messages: I am strong, stay away; and I am strong, my genes are good."

Dr Hocking said the clapping seals demonstrate just how much there still is to learn about the animals living around us.

Clapping appears to be an important social behaviour for grey seals, so anything that disturbs it could affect breeding success and survival for this species.

"Human noise pollution is known to interfere with other forms of marine mammal communication, including whale song," Dr Hocking said.

"But if we do not know a behaviour exists, we cannot easily act to protect it."

Read more at Science Daily

Sand dunes can 'communicate' with each other

Even though they are inanimate objects, sand dunes can 'communicate' with each other. A team from the University of Cambridge has found that as they move, sand dunes interact with and repel their downstream neighbours.

Using an experimental dune 'racetrack', the researchers observed that two identical dunes start out close together, but over time they get further and further apart. This interaction is controlled by turbulent swirls from the upstream dune, which push the downstream dune away. The results, reported in the journal Physical Review Letters, are key for the study of long-term dune migration, which threatens shipping channels, increases desertification, and can bury infrastructure such as highways.

When a pile of sand is exposed to wind or water flow, it forms a dune shape and starts moving downstream with the flow. Sand dunes, whether in deserts, on river bottoms or sea beds, rarely occur in isolation and instead usually appear in large groups, forming striking patterns known as dune fields or corridors.

It's well known that active sand dunes migrate. Generally speaking, a dune's speed is inversely related to its size: smaller dunes move faster and larger dunes move slower. What hasn't been understood is whether and how dunes within a field interact with each other.

"There are different theories on dune interaction: one is that dunes of different sizes will collide, and keep colliding, until they form one giant dune, although this phenomenon has not yet been observed in nature," said Karol Bacik, a PhD candidate in Cambridge's Department of Applied Mathematics and Theoretical Physics, and the paper's first author. "Another theory is that dunes might collide and exchange mass, sort of like billiard balls bouncing off one another, until they are the same size and move at the same speed, but we need to validate these theories experimentally."

Now, Bacik and his Cambridge colleagues have shown results that question these explanations. "We've discovered physics that hasn't been part of the model before," said Dr Nathalie Vriend, who led the research.

Most of the work in modelling the behaviour of sand dunes is done numerically, but Vriend and the members of her lab designed and constructed a unique experimental facility which enables them to observe their long-term behaviour. Water-filled flumes are common tools for studying the movement of sand dunes in a lab setting, but the dunes can only be observed until they reach the end of the tank. Instead, the Cambridge researchers have built a circular flume so that the dunes can be observed for hours as the flume rotates, while high-speed cameras allow them to track the flow of individual particles in the dunes.

Bacik hadn't originally meant to study the interaction between two dunes: "Originally, I put multiple dunes in the tank just to speed up data collection, but we didn't expect them to start interacting with each other," he said.

The two dunes started with the same volume and in the same shape. As the flow began to move across the two dunes, they started moving. "Since we know that the speed of a dune is related to its height, we expected that the two dunes would move at the same speed," said Vriend, who is based at the BP Institute for Multiphase Flow. "However, this is not what we observed."

Initially, the front dune moved faster than the back dune, but as the experiment continued, the front dune began to slow down, until the two dunes were moving at almost the same speed.

Crucially, the pattern of flow across the two dunes was observed to be different: the flow is deflected by the front dune, generating 'swirls' on the back dune and pushing it away. "The front dune generates the turbulence pattern which we see on the back dune," said Vriend. "The flow structure behind the front dune is like a wake behind a boat, and affects the properties of the next dune."

As the experiment continued, the dunes got further and further apart, until they formed an equilibrium on opposite sides of the circular flume, remaining 180 degrees apart.
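The dynamics described above can be caricatured with a toy model (this is not the authors' model; the exponential wake decay and all parameter values are illustrative assumptions). Suppose each dune gains a small wake-induced speed boost from its upstream neighbour that decays with separation. On a ring, each dune is downstream of the other, so the gap between them drifts until the two wake effects balance, at half the circumference:

```python
import math

TRACK = 2 * math.pi   # circumference of the circular flume, in radians
WAKE = 0.05           # speed boost from the upstream dune's wake (assumed)

def wake_boost(gap):
    # Extra migration speed a dune gains from the turbulent wake of the
    # dune upstream of it; assumed to decay exponentially with separation.
    return WAKE * math.exp(-gap)

def simulate(gap0=0.3, dt=0.01, steps=500_000):
    """Evolve the angular gap between two identical dunes on the ring.
    Each dune sits downstream of the other, so the gap g obeys
    dg/dt = wake_boost(g) - wake_boost(TRACK - g), whose stable fixed
    point is g = TRACK / 2, i.e. the dunes end up 180 degrees apart."""
    g = gap0
    for _ in range(steps):
        g += dt * (wake_boost(g) - wake_boost(TRACK - g))
    return math.degrees(g)

print(round(simulate()))  # -> 180: the dunes settle on opposite sides
```

However crude, the sketch captures the qualitative finding: the repulsion is strong when the dunes are close and vanishes by symmetry once they sit on opposite sides of the track.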

Read more at Science Daily

Pluto's icy heart makes winds blow

A "beating heart" of frozen nitrogen controls Pluto's winds and may give rise to features on its surface, according to a new study.

Pluto's heart-shaped structure, named Tombaugh Regio, became famous after NASA's New Horizons mission captured images of the dwarf planet in 2015 and revealed it isn't the barren world scientists had thought it was.

Now, new research shows Pluto's renowned nitrogen heart rules its atmospheric circulation. Uncovering how Pluto's atmosphere behaves provides scientists with another place to compare to our own planet. Such findings can pinpoint both similar and distinctive features between Earth and a dwarf planet billions of miles away.

Nitrogen gas -- an element also found in air on Earth -- comprises most of Pluto's thin atmosphere, along with small amounts of carbon monoxide and the greenhouse gas methane. Frozen nitrogen also covers part of Pluto's surface in the shape of a heart. During the day, a thin layer of this nitrogen ice warms and turns into vapor. At night, the vapor condenses and once again forms ice. Each sequence is like a heartbeat, pumping nitrogen winds around the dwarf planet.

New research in AGU's Journal of Geophysical Research: Planets suggests this cycle pushes Pluto's atmosphere to circulate in the opposite direction of its spin -- a unique phenomenon called retro-rotation. As air whips close to the surface, it transports heat, grains of ice and haze particles to create dark wind streaks and plains across the north and northwestern regions.

"This highlights the fact that Pluto's atmosphere and winds -- even if the density of the atmosphere is very low -- can impact the surface," said Tanguy Bertrand, an astrophysicist and planetary scientist at NASA's Ames Research Center in California and the study's lead author.

Most of Pluto's nitrogen ice is confined to Tombaugh Regio. Its left "lobe" is a 1,000-kilometer (620-mile) ice sheet located in a 3-kilometer (1.9-mile) deep basin named Sputnik Planitia -- an area that holds most of the dwarf planet's nitrogen ice because of its low elevation. The heart's right "lobe" comprises highlands and nitrogen-rich glaciers that extend into the basin.

"Before New Horizons, everyone thought Pluto was going to be a netball -- completely flat, almost no diversity," Bertrand said. "But it's completely different. It has a lot of different landscapes and we are trying to understand what's going on there."

Western winds

Bertrand and his colleagues set out to determine how circulating air -- which is 100,000 times thinner than Earth's -- might shape features on the surface. The team pulled data from New Horizons' 2015 flyby to depict Pluto's topography and its blankets of nitrogen ice. They then simulated the nitrogen cycle with a weather forecast model and assessed how winds blew across the surface.

The group discovered Pluto's winds above 4 kilometers (2.5 miles) blow to the west -- the opposite direction from the dwarf planet's eastern spin -- in a retro-rotation during most of its year. As nitrogen within Tombaugh Regio vaporizes in the north and becomes ice in the south, its movement triggers westward winds, according to the new study. No other place in the solar system has such an atmosphere, except perhaps Neptune's moon Triton.

The researchers also found a strong current of fast-moving, near-surface air along the western boundary of the Sputnik Planitia basin. The airflow is like wind patterns on Earth, such as the Kuroshio along the eastern edge of Asia. Atmospheric nitrogen condensing into ice drives this wind pattern, according to the new findings. Sputnik Planitia's high cliffs trap the cold air inside the basin, where it circulates and becomes stronger as it passes through the western region.

The intense western boundary current's existence excited Candice Hansen-Koharcheck, a planetary scientist with the Planetary Science Institute in Tucson, Arizona, who wasn't involved with the new study.

"It's very much the kind of thing that's due to the topography or specifics of the setting," she said. "I'm impressed that Pluto's models have advanced to the point that you can talk about regional weather."

On the broader scale, Hansen-Koharcheck thought the new study was intriguing. "This whole concept of Pluto's beating heart is a wonderful way of thinking about it," she added.

These wind patterns stemming from Pluto's nitrogen heart may explain why it hosts dark plains and wind streaks to the west of Sputnik Planitia. Winds could transport heat -- which would warm the surface -- or could erode and darken the ice by transporting and depositing haze particles. If winds on the dwarf planet swirled in a different direction, its landscapes might look completely different.

"Sputnik Planitia may be as important for Pluto's climate as the ocean is for Earth's climate," Bertrand said. "If you remove Sputnik Planitia -- if you remove the heart of Pluto -- you won't have the same circulation," he added.

Read more at Science Daily

First childhood flu helps explain why virus hits some people harder than others

Why are some people better able to fight off the flu than others? Part of the answer, according to a new study, is related to the first flu strain we encounter in childhood.

Scientists from UCLA and the University of Arizona have found that people's ability to fight off the flu virus is determined not only by the subtypes of flu they have had throughout their lives, but also by the sequence in which they have been infected. Their study is published in the open-access journal PLoS Pathogens.

The research offers an explanation for why some people fare much worse than others when infected with the same strain of the flu virus, and the findings could help inform strategies for minimizing the effects of the seasonal flu.

In addition, UCLA scientists, including Professor James Lloyd-Smith, who also was a senior author of the PLoS Pathogens research, recently completed a study that analyzes travel-related screening for the new novel coronavirus 2019-nCoV.

The researchers report that screening travelers is not very effective for the 2019 coronavirus -- that it will catch less than half of infected travelers, on average -- and that most infected travelers are undetectable, meaning that they have no symptoms yet, and are unaware that they have been exposed. So stopping the spread of the virus is not a matter of just enhancing screening methods at airports and other travel hubs.

"This puts the onus on government officials and public health officials to follow up with travelers after they arrive, to isolate them and trace their contacts if they get sick later," said Lloyd-Smith, a UCLA professor of ecology and evolutionary biology. Many governments have started to impose quarantines, or even travel bans, as they realize that screening is not sufficient to stop the spread of the coronavirus.

One major concern, Lloyd-Smith said, is that other countries, especially developing nations, lack the infrastructure and resources for those measures, and are therefore vulnerable to importing the disease.

"Much of the public health world is very concerned about the virus being introduced into Africa or India, where large populations do not have access to advanced medical care," he said.

The researchers, including scientists from the University of Chicago and the London School of Hygiene & Tropical Medicine, have developed a free online app where people can calculate the effectiveness of travel screening based on a range of parameters.
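Why screening catches fewer than half of infected travelers can be sketched with a toy Monte Carlo (this is not the authors' model or their app; every parameter below is an illustrative assumption). A traveler is caught only if symptoms have already begun by departure, the infection is not subclinical, and the fever scanner does not miss them:

```python
import random

random.seed(0)

INCUBATION_MEAN = 5.5   # mean days from infection to symptoms (assumed)
P_SUBCLINICAL = 0.25    # travelers who never show detectable symptoms (assumed)
P_SCREEN_MISS = 0.30    # symptomatic travelers missed by fever scanners (assumed)

def fraction_detected(n=100_000):
    """Monte Carlo: each infected traveler departs at a uniformly random
    time within their first week of infection; departure screening only
    catches those already symptomatic, not subclinical, and not missed."""
    caught = 0
    for _ in range(n):
        time_since_infection = random.uniform(0, 7)            # days
        incubation = random.expovariate(1 / INCUBATION_MEAN)   # days
        symptomatic = time_since_infection > incubation
        subclinical = random.random() < P_SUBCLINICAL
        missed = random.random() < P_SCREEN_MISS
        if symptomatic and not subclinical and not missed:
            caught += 1
    return caught / n

print(f"{fraction_detected():.0%} of infected travelers detected at departure")
```

With these illustrative numbers the detection rate lands well under 50%, echoing the study's qualitative conclusion: most infected travelers are simply not yet symptomatic when screened.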

Solving a decades-old question


The PLoS Pathogens study may help solve a problem that had for decades vexed scientists and health care professionals: why the same strain of the flu virus affects people with various degrees of severity.

A team that included some of the same UCLA and Arizona scientists reported in 2016 that exposure to influenza viruses during childhood gives people partial protection for the rest of their lives against distantly related influenza viruses. Biologists call the idea that past exposure to the flu virus determines a person's future response to infections "immunological imprinting."

The 2016 research helped overturn a commonly held belief that previous exposure to a flu virus conferred little or no immunological protection against strains that can jump from animals into humans, such as those causing the strains known as swine flu or bird flu. Those strains, which have caused hundreds of spillover cases of severe illness and death in humans, are of global concern because they could gain mutations that allow them to readily jump not only from animal populations to humans, but also to spread rapidly from person to person.

In the new study, the researchers investigated whether immunological imprinting could explain people's response to flu strains already circulating in the human population and to what extent it could account for observed discrepancies in how severely the seasonal flu affects people in different age groups.

To track how different strains of the flu virus affect people at different ages, the team analyzed health records that the Arizona Department of Health Services obtains from hospitals and private physicians.

Two subtypes of influenza virus, H3N2 and H1N1, have been responsible for seasonal outbreaks of the flu over the past several decades. H3N2 causes the majority of severe cases in high-risk elderly people and the majority of deaths from the flu. H1N1 is more likely to affect young and middle-aged adults, and causes fewer deaths.

The health record data revealed a pattern: People first exposed to the less severe strain, H1N1, during childhood were less likely to end up hospitalized if they encountered H1N1 again later in life than people who were first exposed to H3N2. And people first exposed to H3N2 received extra protection against H3N2 later in life.

The researchers also analyzed the evolutionary relationships between the flu strains. H1N1 and H3N2, they learned, belong to two separate branches on the influenza "family tree," Lloyd-Smith said. While infection with one does leave the immune system better prepared to fight a future infection with the other, protection against future infections is much stronger when one is exposed to strains from the same group one has battled before, he said.

The records also revealed another pattern: People whose first childhood exposure was to H2N2, a close cousin of H1N1, did not have a protective advantage when they later encountered H1N1. That phenomenon was much more difficult to explain, because the two subtypes are in the same group, and the researchers' earlier work showed that exposure to one can, in some cases, grant considerable protection against the other.

"Our immune system often struggles to recognize and defend against closely related strains of seasonal flu, even though these are essentially the genetic sisters and brothers of strains that circulated just a few years ago," said lead author Katelyn Gostic, who was a UCLA doctoral student in Lloyd-Smith's laboratory when the study was conducted and is now a postdoctoral fellow at the University of Chicago. "This is perplexing because our research on bird flu shows that deep in our immune memory, we have some ability to recognize and defend against the distantly related, genetic third cousins of the strains we saw as children.

"We hope that by studying differences in immunity against bird flus -- where our immune system shows a natural ability to deploy broadly effective protection -- and against seasonal flus -- where our immune system seems to have bigger blind spots -- we can uncover clues useful to universal influenza vaccine development."

Around the world, influenza remains a major killer. The past two flu seasons have been more severe than expected, said Michael Worobey, a co-author of the study and head of the University of Arizona's department of ecology and evolutionary biology. In the 2017-18 season, 80,000 people died in the U.S., more than in the swine flu pandemic of 2009, he said.

People who had their first bout of flu as children in 1955 -- when H1N1 was circulating but H3N2 was not -- were much more likely to be hospitalized with an H3N2 infection than an H1N1 infection last year, when both strains were circulating, Worobey said.

"The second subtype you're exposed to is not able to create an immune response that is as protective and durable as the first," he said.
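The imprinting idea can be illustrated with a toy calculation (a simplified sketch, not the authors' statistical model; the 28% annual attack rate and the subtype-by-year timeline below are assumptions drawn from the broad history of circulating strains). Weight each year of early childhood by the chance it hosted a person's first infection, and assign it the subtype dominant that year:

```python
ATTACK_RATE = 0.28  # assumed annual probability a naive child catches flu

def dominant_subtype(year):
    # Highly simplified circulation history (assumed for illustration).
    if year < 1957:
        return "H1N1"
    if year < 1968:
        return "H2N2"
    if year < 1977:
        return "H3N2"
    return "mixed"  # after 1977, both groups co-circulated

def imprinting_weights(birth_year, horizon=12):
    """Probability that a person born in `birth_year` had their first
    flu infection with each subtype during their first `horizon` years."""
    weights = {"H1N1": 0.0, "H2N2": 0.0, "H3N2": 0.0}
    p_naive = 1.0  # probability of still being uninfected
    for k in range(horizon):
        p_first = p_naive * ATTACK_RATE
        sub = dominant_subtype(birth_year + k)
        if sub == "mixed":
            weights["H1N1"] += p_first / 2
            weights["H3N2"] += p_first / 2
        else:
            weights[sub] += p_first
        p_naive *= 1 - ATTACK_RATE
    return weights

w = imprinting_weights(1950)
print(max(w, key=w.get))  # -> H1N1: born 1950, first flu almost surely before 1957
```

Under these assumptions, a 1950 birth cohort is overwhelmingly H1N1-imprinted, which is why the article's 1955-era first infections predict worse outcomes against H3N2 decades later.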

The researchers hope that their findings could help predict which age groups might be severely affected during future flu seasons based on the subtype circulating. That information could also help health officials prepare their response, including decisions about who should receive certain vaccines that are only available in limited quantities.

Read more at Science Daily

Feb 3, 2020

Showing how the tiniest particles in our universe saved us from complete annihilation

Recently discovered ripples of spacetime called gravitational waves could contain evidence supporting the theory that matter survived the Big Bang because of a phase transition that allowed neutrinos to reshuffle matter and anti-matter, an international team of researchers explains in a new study.

How we were saved from complete annihilation is not a question from science fiction or a Hollywood movie. According to the Big Bang theory of modern cosmology, matter was created with an equal amount of anti-matter. If it had stayed that way, matter and anti-matter would eventually have met and annihilated one to one, leading to complete annihilation.

But our existence contradicts this theory. To avoid complete annihilation, the Universe must have turned a small amount of anti-matter into matter, creating an imbalance between them. The imbalance needed is only one part in a billion. But when and how the imbalance was created has remained a complete mystery.
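That one-part-in-a-billion figure is easy to make concrete with back-of-the-envelope arithmetic:

```python
# For every billion anti-particles, one extra matter particle (the
# one-part-in-a-billion excess quoted above).
antimatter = 10**9
matter = 10**9 + 1

annihilated_pairs = min(matter, antimatter)
survivors = matter - annihilated_pairs
photons = 2 * annihilated_pairs  # each pair annihilates to (at least) two photons

print(survivors, photons)  # -> 1 2000000000
```

Everything we see today is built from that single leftover particle per two billion, while the annihilation radiation survives as the sea of photons filling the cosmos.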

"The Universe becomes opaque to light once we look back to around a million years after its birth. This makes the fundamental question of 'why are we here?' difficult to answer," says paper co-author Jeff Dror, postdoctoral fellow at the University of California, Berkeley, and physics researcher at Lawrence Berkeley National Laboratory.

Since matter and anti-matter carry opposite electrical charges, they cannot turn into each other unless they are electrically neutral. Neutrinos are the only electrically neutral matter particles we know of, and they are the strongest contenders for this job. A theory many researchers support is that the Universe went through a phase transition so that neutrinos could reshuffle matter and anti-matter.

"A phase transition is like boiling water to vapor, or cooling water to ice. The behavior of matter changes at specific temperatures called critical temperatures. When a certain metal is cooled to a low temperature, it loses electrical resistance completely by a phase transition, becoming a superconductor. It is the basis of Magnetic Resonance Imaging (MRI) for cancer diagnosis or maglev technology that floats a train so that it can run at 300 miles an hour without causing dizziness. Just like a superconductor, the phase transition in the early Universe may have created very thin tubes of magnetic fields called cosmic strings," explains paper co-author Hitoshi Murayama, MacAdams Professor of Physics at the University of California, Berkeley, Principal Investigator at the Kavli Institute for the Physics and Mathematics of the Universe, University of Tokyo, and senior faculty scientist at Lawrence Berkeley National Laboratory.

Dror and Murayama are part of a team of researchers from Japan, the US and Canada who believe the cosmic strings then try to simplify themselves, leading to tiny wobbles of spacetime called gravitational waves. These could be detected by future space-borne observatories such as LISA and BBO (European Space Agency) or DECIGO (Japan Aerospace Exploration Agency) for nearly all possible critical temperatures.

"The recent discovery of gravitational waves opens up a new opportunity to look back further in time, as the Universe is transparent to gravity all the way back to the beginning. When the Universe might have been a trillion to a quadrillion times hotter than the hottest place in the Universe today, neutrinos are likely to have behaved in just the way we require to ensure our survival. We demonstrated that they probably also left behind a background of detectable gravitational ripples to let us know," says paper co-author Graham White, a postdoctoral fellow at TRIUMF.

"Cosmic strings used to be popular as a way of creating small variations in mass densities that eventually became stars and galaxies, but the idea died out because recent data excluded it. Now with our work, the idea comes back for a different reason. This is exciting!" says Takashi Hiramatsu, a postdoctoral fellow at the Institute for Cosmic Ray Research, University of Tokyo, which runs Japan's gravitational wave detector KAGRA and the Hyper-Kamiokande experiment.

Read more at Science Daily

Early life experiences biologically and functionally mature the brain

Experiences early in life have an impact on the brain's biological and functional development, shows a new study by a team of neuroscientists. Its findings, which centered on changes in mice and rats, reveal how learning and memory abilities may vary, depending on the nature of individual experiences in early life.

"The implications of this are many, including environmental influences on mental health, the role of education, the significance of poverty, and the impact of social settings," says Cristina Alberini, a professor in New York University's Center for Neural Science and the senior author of the paper, which appears in the journal Nature Communications.

"These results also offer promise for potential therapeutic interventions," add Alberini and Benjamin Bessieres, an NYU postdoctoral researcher and the paper's co-lead author. "By identifying critical time periods for brain development, they provide an indicator of when pharmaceutical, behavioral or other type of interventions may be most beneficial."

In general, very little is known about the mechanisms that underlie the development of learning and memory abilities. The Nature Communications study sought to shed new light on this process by studying the biological elements linked to episodic memories -- those of specific events or experiences -- in infant rats and mice.

In their experiments, the scientists tested whether and how different types of experiences mature learning and memory abilities.

In one experience, infant mice and rats were placed in a small compartment -- a procedure paired with a mild foot shock (a commonly used method to test memory for a context). Their memory was tested by placing them back in these compartments; if they hesitated, it indicated that they had formed a memory of previously being in the compartment.

In a different type of experience, the infant mice and rats were exposed to novel objects in a given spatial configuration. Rodents that remember this experience preferentially explore an object moved to a new location when presented with a mix of new and old locations, because they have a natural tendency to explore novelty; this preference reveals a memory of object location. Both types of experience, context and object location, are stored by the same memory system.

The authors then asked two questions.

The first was: Does learning mature memory abilities?

The results showed that it does: both context and object-location experiences matured the brain at both biological and functional levels. Overall, in fact, the researchers found that the episodic experiences of the young mice and rats led to unique biological changes, specifically indicating maturation in the hippocampus -- a region critical for episodic memory formation. However, they did not find the same changes in older mice and rats.

Furthermore, they saw that with each type of learning, context or object location, the infant animal matured its performance and became capable of remembering long-term, more like an older animal does.

The team's second question was: Does the maturation produced by one type of experience develop the entire memory system and all its abilities? Or is the maturation selective for the type of experience that the animal had?

They found that the maturation produced by one type of experience (context) did not transfer to the other learning (object location) and vice versa, leading them to conclude that the maturation of learning and memory abilities is selective for the type of experiences encountered early in life.

"Because the biological maturation changes no longer occurred with episodic learning at later ages, it's clear that the infant brain employs distinct biological mechanisms to form and store episodic memories," write Alberini and Bessieres. "We found that this biological maturation is paralleled by and required for the functional maturation of memory -- that is, the ability to express memory long-term."

"Our results indicate that specific experiences during the infantile developmental period make a major contribution to individual differences in learning and memory abilities," they add. "Although all individuals are exposed to general learning of facts, people, things, time, and spaces, and therefore must develop a wide range of abilities and competences processed by the hippocampal memory system, our data suggest that the individual history shapes the maturation of selective abilities."

Read more at Science Daily

Low-energy solar particles from beyond Earth found near the Sun

Using data from NASA's Parker Solar Probe (PSP), a team led by Southwest Research Institute identified low-energy particles lurking near the Sun that likely originated from solar wind interactions well beyond Earth's orbit. PSP is venturing closer to the Sun than any previous probe, carrying hardware SwRI helped develop. Scientists are probing the enigmatic features of the Sun to answer many questions, including how to protect space travelers and technology from the radiation associated with solar events.

"Our main goal is to determine the acceleration mechanisms that create and transport dangerous high-energy particles from the solar atmosphere into the solar system, including the near-Earth environment," said Dr. Mihir Desai, a mission co-investigator on the Integrated Science Investigation of the Sun (IS☉IS) instrument suite, a multi-institutional project led by Principal Investigator Prof. Dave McComas of Princeton University. IS☉IS consists of two instruments, Energetic Particle Instrument-High (EPI-Hi) and Energetic Particle Instrument-Low (EPI-Lo). "With EPI-Lo, we were able to measure extremely low-energy particles unexpectedly close to the solar environment. We considered many explanations for their presence, but ultimately determined they are the smoking gun pointing to interactions between slow- and fast-moving regions of the solar wind that accelerate high-energy particles from beyond the orbit of Earth. Some of those travel back toward the Sun, slowing against the tide of the outpouring solar wind but still retaining surprisingly high energies."

PSP, which will travel within 4 million miles of the Sun's surface, is collecting new solar data to help scientists understand how solar events, such as coronal mass ejections, impact life on Earth. During the rising portion of the Sun's activity cycle, our star releases huge quantities of energized matter, magnetic fields and electromagnetic radiation in the form of coronal mass ejections (CMEs). This material is integrated into the solar wind, the steady stream of charged particles released from the Sun's upper atmosphere. The high-energy solar energetic particles (SEPs) present a serious radiation threat to human explorers living and working outside low-Earth orbit and to technological assets such as communications and scientific satellites in space. The mission is making the first-ever direct measurements of both the low-energy source populations as well as the more hazardous, higher-energy particles in the near-Sun environment, where the acceleration takes place.

When the Sun's activity reaches a lull, roughly every 11 years, solar equatorial regions emit slower solar wind streams, traveling around 1 million miles per hour, while the poles spew faster streams, traveling twice as fast at 2 million miles per hour. Stream Interaction Regions (SIRs) are created by interactions at boundaries between the fast and slow solar wind. Fast-moving streams tend to overtake slower streams that originate westward of them on the Sun, forming turbulent corotating interaction regions (CIRs) that produce shock waves and accelerated particles, not unlike those produced by CMEs.

Read more at Science Daily