Sep 8, 2018

A study of ants provides information on the evolution of social insects

Ooceraea biroi clonal raider ants with larvae.
One of the great puzzles of evolutionary biology is what induced certain living creatures to abandon solitary existence in favor of living in collaborative societies, as seen in the case of ants and other social, colony-forming insects. A major characteristic of so-called eusocial species is the division of labor between queens that lay eggs and workers that take care of the brood and perform other tasks. But what is it that determines that a queen should lay eggs and that workers shouldn't reproduce? And how did this distinction come about during the course of evolution? Evolutionary biologist Dr. Romain Libbrecht has been considering these problems over the past years and, in cooperation with researchers at Rockefeller University in New York City, has found a completely unexpected answer: one single gene called insulin-like peptide 2 (ILP2), which is probably activated by better nutrition, stimulates the ovaries and triggers reproduction.

"It may seem almost inconceivable that just one single gene should make all the difference," Libbrecht pointed out. The researchers drew their conclusion from a comparison of 5,581 genes in seven ant species from four different subfamilies that differ from each other with regard to numerous characteristics. But in one thing they are all alike: there is always a greater expression of ILP2 in the brain of reproductive insects. Queens thus have higher levels than workers. A further finding indicates that this peptide is found only in the brain, where it is produced in a small cluster of just 12 to 15 cells.

Division of reproduction and brood care as the basis of social colony formation

It is postulated that the origins of social behavior in insects are to be found in wasp-like ancestors that alternated between reproduction and brood care phases. A female wasp would lay an egg and take care of the larva until it pupated. However, these two phases were separated and their associated duties assigned to different individuals, namely queens and workers, during the evolution of eusociality.

Libbrecht and his colleagues in New York City investigated the ant species Ooceraea biroi to determine the molecular mechanisms underlying this division of labor. O. biroi is a small species of 2 to 3 millimeters in length that originated from Asia but has spread throughout the tropics. The insects live in underground passages, attack the nests of other ant species, and feed on their brood. The unusual thing about the O. biroi species is that there are no queens, only female workers. However, every female worker can reproduce through parthenogenesis. This means that a female produces another identical female -- the insects in effect clone themselves. And they always follow a specific cycle: all female workers lay eggs during an 18-day period, after which they spend 16 days gathering food and feeding the larvae. The cycle then begins once again.

This cyclical behavior is comparable to that of the solitary wasp-like ancestors and is controlled by the presence of larvae. When the first larvae hatch at the end of the reproductive phase, their presence suppresses ovarian activity and triggers brood care behavior. When the larvae begin pupation at the end of the brood care phase, ovarian activity is scaled up and foraging scaled back. "What we did was break this cycle," explained Libbrecht. The researchers synthesized the peptide ILP2 and injected it into the ants. This caused the ants to lay eggs in the presence of larvae.

Libbrecht used a brood substitution approach to investigate what happens when larvae are introduced into the colony during the reproductive phase and, conversely, when they are removed during the brood care phase. "What we see is that gene expression in the brain changes in both phases and the ants change their behavior and physiology accordingly. This response, however, happens at a faster rate if we confront egg-laying ants with larvae." The insects then stop laying eggs and start to care for the brood. "This does make sense. It is, after all, important for survival to quickly start feeding the larvae," he added. This experiment also revealed that the expression of ILP2 in the brain changed quickly and significantly in response to the change in social conditions.

From asymmetry in nutrition to asymmetry in reproduction

The researchers also considered the relevance of nutrition, which is known to be of importance when it comes to the differentiation between queens and workers. A large quantity or a good quality of nutritional protein favors the development of female larvae into queens. In colonies of the species O. biroi, a small proportion of the ants are so-called intercastes. These insects are slightly bigger, have eyes, and are more reproductive. Because of this, they can be compared to some extent with normal queens. The probability of a larva becoming an intercaste increases if it receives better nourishment. Fluorescence imaging shows that these intercastes have more ILP2 in their brains than normal workers.

"Something comparable may well have taken place in the case of the ancestors of eusocial insects," Dr. Romain Libbrecht suggested. "Perhaps a minor asymmetry with regard to nutrition of larvae led to asymmetry in the reproductive behavior of adults developing from those larvae." The assumption that the division into queens and workers might therefore have commenced with one single difference is supported by experiments conducted in a total of seven different ant species.

Read more at Science Daily

Single molecule control for a millionth of a billionth of a second

The Scanning Tunnelling Microscope.
Physicists at the University of Bath have discovered how to manipulate and control individual molecules for a millionth of a billionth of a second, after being intrigued by some seemingly odd results.

Their new technique is the most sensitive way of controlling a chemical reaction on some of the smallest scales scientists can work -- at the single molecule level. It will open up research possibilities across the fields of nanoscience and nanophysics.

An experiment at the extreme limit of nanoscience called "STM (scanning tunnelling microscope) molecular manipulation" is often used to observe how individual molecules react when excited by adding a single electron.

A traditional chemist may use a test-tube and a Bunsen burner to drive a reaction; here the researchers used a microscope and its electrical current. The current is so small it is more akin to a series of individual electrons hitting the target molecule. But this whole experiment is a passive process -- once the electron is added to the molecule, researchers can only observe what happens.

But when Dr Kristina Rusimova reviewed her data from the lab while on holiday, she discovered some anomalous results in a standard experiment, which on further investigation couldn't be explained away. When the electric current is turned up, reactions always go faster -- except here they didn't.

Dr Rusimova and colleagues spent months thinking of possible explanations to debunk the effect, and repeating the experiments, but eventually realised they had found a way to control single-molecule experiments to an unprecedented degree, in new research published in Science.

The team discovered that by keeping the tip of their microscope extremely close to the molecule being studied, within 600-800 trillionths of a metre, the duration of how long the electron sticks to the target molecule can be reduced by over two orders of magnitude, and so the resulting reaction, here driving individual toluene molecules to lift off (desorb) from a silicon surface, can be controlled.

The team believes this is because the tip and molecule interact to create a new quantum state, which offers a new channel for the electron to hop to from the molecule, hence reducing the time the electron spends on the molecule and so reducing the chances of that electron causing a reaction.

At its most sensitive, this means the duration of the reaction can be controlled from its natural limit of 10 femtoseconds down to just 0.1 femtoseconds.

Dr Rusimova said: "This was data from an utterly standard experiment we were doing because we thought we had exhausted all the interesting stuff -- this was just a final check. But my data looked 'wrong' -- all the graphs were supposed to go up and mine went down."

Dr Peter Sloan, lead author on the study, added: "If this was correct, we had a completely new effect but we knew if we were going to claim anything so striking we needed to do some work to make sure it's real and not down to false positives."

"I always think our microscope is a bit like the Millennium Falcon, not too elegant, held together by the people who run it, but utterly fantastic at what it does. Between Kristina and PhD student Rebecca Purkiss the level of spatial control they had over the microscope was the key to unlocking this new physics."

Dr Sloan added: "The fundamental aim of this work is to develop the tools to allow us to control matter at this extreme limit. Be it breaking chemical bonds that nature doesn't really want you to break, or producing molecular architectures that are thermodynamically forbidden. Our work offers a new route to control single molecules and their reaction. Essentially we have a new dial we can set when running our experiment. The extreme nature of working on these scales makes it hard to do, but we have extreme resolution and reproducibility with this technique."

Read more at Science Daily

Sep 7, 2018

Unravelling the reasons why mass extinctions occur

Dead forest in Deadvlei, Namibia.
Scientists from the University of Leicester have shed new light on why mass extinctions have occurred through history -- and how this knowledge could help in predicting upcoming ecological catastrophes.

The international team has investigated sudden ecological transitions throughout history, from mass mortality events in the far past to more recent extinctions which have occurred over the last few decades.

In a paper published in the journal Science, co-authored by Professor Sergei Petrovskii and Dr Andrew Morozov from the University of Leicester's Department of Mathematics, and a group of leading scientists from the USA and Canada, the team has explored the long-standing mystery of why these ecological transitions occur.

Ecological systems sometimes experience sudden changes in their properties or function, which often results in species extinction and significant loss of biodiversity.

Understanding why these significant changes occur remains a challenge, in particular because transitions often happen under apparently steady, constant conditions and therefore cannot be directly linked to a specific environmental change.

By bringing together empirical data, insights from ecological theory and mathematical models, the team has revealed that abrupt transitions in an ecosystem can occur as a result of long-term transient dynamics, including 'ghost attractors' and 'crawl-bys'.

An attractor is an 'end-state' of a given ecosystem, that is, where it is expected to be found over an infinitely long period of time and/or where it returns after small perturbations.

A 'ghost attractor' is a special configuration of a dynamical system that exhibits the same behaviour as an attractor within an ecosystem, but only for a finite time. After that time, the system would normally experience a fast evolution or transition to another state, which can have very different properties. Such a transition would therefore correspond to a catastrophe or major ecological shift.

'Crawl-bys', on the other hand, exist when changes to the dynamic of an ecosystem happen slowly over a long period of time.
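The flavour of a 'ghost attractor' can be seen in a textbook toy system (a sketch of ours, not the model used in the study): in dx/dt = r + x², an equilibrium that exists for r < 0 vanishes at r = 0, yet for small r > 0 a trajectory still stalls for a long time near where the equilibrium used to be before a sudden escape.

```python
def time_near_ghost(r, x0=-1.0, dt=1e-3, x_escape=10.0):
    """Integrate dx/dt = r + x**2 with forward Euler and return the time
    taken to escape past x_escape. For small r > 0 the trajectory stalls
    near x = 0, the 'ghost' of the equilibrium that vanished at r = 0."""
    x, t = x0, 0.0
    while x < x_escape:
        x += dt * (r + x * x)
        t += dt
    return t

# The closer r is to the bifurcation, the longer the system looks stable
# before its abrupt transition (the delay scales like 1 / sqrt(r)).
for r in (0.1, 0.01, 0.001):
    print(r, round(time_near_ghost(r), 1))
```

The point of the toy model is that nothing in the environment changes at the moment of the transition: the delay was built in long before.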

Professor Petrovskii explained: "An ecological catastrophe emerging from a 'ghost attractor' or a 'crawl-by' may be a debt that we have to pay for the actions or mistakes -- for example unsustainable use of natural resources -- made many generations ago.

"Our research shows that a healthy ecosystem will not necessarily remain healthy, even in the absence of any significant environmental change. Therefore, better monitoring of the state of an ecosystem is required to mitigate potential disasters.

"We also can predict an approaching catastrophe in the sense that our study advises where to look for its signs and what the relevant time scale is: the environmental change (whether it is natural or human-made) that will finally lead to big changes might have happened a very long time ago."

Read more at Science Daily

Ancient farmers spared us from glaciers but profoundly changed Earth's climate

Plowing with an ox team
Millennia ago, ancient farmers cleared land to plant wheat and maize, potatoes and squash. They flooded fields to grow rice. They began to raise livestock. And unknowingly, they may have been fundamentally altering the climate of Earth.

A study published in the journal Scientific Reports provides new evidence that ancient farming practices led to a rise in the atmospheric emission of the heat-trapping gases carbon dioxide and methane -- a rise that has continued since, unlike the trend at any other time in Earth's geologic history.

It also shows that without this human influence, by the start of the Industrial Revolution, the planet would have likely been headed for another ice age.

"Had it not been for early agriculture, Earth's climate would be significantly cooler today," says lead author, Stephen Vavrus, a senior scientist in the University of Wisconsin-Madison Center for Climatic Research in the Nelson Institute for Environmental Studies. "The ancient roots of farming produced enough carbon dioxide and methane to influence the environment."

The findings are based on a sophisticated climate model that compared our current geologic time period, called the Holocene, to a similar period 800,000 years ago. They show the earlier period, called MIS19, was already 2.3 degrees Fahrenheit (1.3 C) cooler globally than the equivalent time in the Holocene, around the year 1850. This effect would have been more pronounced in the Arctic, where the model shows temperatures were 9-to-11 degrees Fahrenheit colder.

Using climate reconstructions based on ice core data, the model also showed that while MIS19 and the Holocene began with similar carbon dioxide and methane concentrations, MIS19 saw an overall steady drop in both greenhouse gases while the Holocene reversed direction 5,000 years ago, hitting peak concentrations of both gases by 1850. The researchers deliberately cut the model off at the start of the Industrial Revolution, when sources of greenhouse gas emissions became much more numerous.

For most of Earth's 4.5-billion-year history, its climate has largely been determined by a natural phenomenon known as Milankovitch cycles, periodic changes in the shape of Earth's orbit around the sun -- which fluctuates from more circular to more elliptical -- and the way Earth wobbles and tilts on its axis.

Astronomers can calculate these cycles with precision and they can also be observed in the geological and paleoecological records. The cycles influence where sunlight is distributed on the planet, leading to cold glacial periods or ice ages as well as warmer interglacial periods. The last glacial period ended roughly 12,000 years ago and Earth has since been in the Holocene, an interglacial period. The Holocene and MIS19 share similar Milankovitch cycle characteristics.
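As a rough illustration of how the cycles combine (toy numbers of ours, not the study's climate model), the orbital forcing can be pictured as a sum of sinusoids with the approximate Milankovitch periods:

```python
import math

# Approximate cycle periods in thousands of years; the amplitudes are
# arbitrary illustrative weights, not physical values.
PERIODS_KYR = (100.0, 41.0, 23.0)   # eccentricity, obliquity, precession
AMPLITUDES = (1.0, 0.6, 0.4)

def toy_forcing(t_kyr):
    """Relative insolation anomaly at time t (kyr): a quasi-periodic sum."""
    return sum(a * math.cos(2 * math.pi * t_kyr / p)
               for a, p in zip(AMPLITUDES, PERIODS_KYR))

# The three periods are incommensurate, so the combined curve never
# exactly repeats -- glacial and interglacial swings emerge from the
# slow interplay of the cycles.
print(round(toy_forcing(0.0), 3))  # all three cycles peak together at t = 0
```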

All other interglacial periods scientists have studied, including MIS19, begin with higher levels of carbon dioxide and methane, which gradually decline over thousands of years, leading to cooler conditions on Earth. Ultimately, conditions cool to a point where glaciation begins.

Fifteen years ago, study co-author William Ruddiman, emeritus paleoclimatologist at the University of Virginia, was studying methane and carbon dioxide trapped in Antarctic ice going back tens of thousands of years when he observed something unusual.

"I noticed that methane concentrations started decreasing about 10,000 years ago and then reversed direction 5,000 years ago and I also noted that carbon dioxide also started decreasing around 10,000 years ago and then reversed direction about 7,000 years ago," says Ruddiman. "It alerted me that there was something strange about this interglaciation ... the only explanation I could come up with is early agriculture, which put greenhouse gases into the atmosphere and that was the start of it all."

Ruddiman named this the Early Anthropogenic Hypothesis and a number of studies have recently emerged suggesting its plausibility. They document widespread deforestation in Europe beginning around 6,000 years ago, the emergence of large farming settlements in China 7,000 years ago, plus the spread of rice paddies -- robust sources of methane -- throughout northeast Asia by 5,000 years ago.

Ruddiman and others have also been working to test the hypothesis. He has collaborated with Vavrus, an expert in climate modeling, for many years and their newest study used the Community Climate System Model 4 to simulate what would have happened in the Holocene if not for human agriculture. It offers higher resolution than climate models the team has used previously and provides new insights into the physical processes underlying glaciation.

For instance, in a simulation of MIS19, glaciation began with strong cooling in the Arctic and subsequent expansion of sea ice and year-round snow cover. The model showed this beginning in an area known as the Canadian archipelago, which includes Baffin Island, where summer temperatures dropped by more than 5 degrees Fahrenheit.

"This is consistent with geologic evidence," says Vavrus.

Today, the Arctic is warming. But before we laud ancient farmers for staving off a global chill, Vavrus and Ruddiman caution that this fundamental alteration to our global climate cycle is uncharted territory.

"People say (our work) sends the wrong message, but science takes you where it takes you," says Vavrus. "Things are so far out of whack now, the last 2,000 years have been so outside the natural bounds, we are so far beyond what is natural."

The reality is, we don't know what happens next. And glaciers have long served as Earth's predominant source of freshwater.

Read more at Science Daily

New exoplanet found very close to its star

A size comparison of the Earth, Wolf 503b and Neptune. The color blue for Wolf 503b is imaginary; nothing is yet known about the atmosphere or surface of the planet.
Wolf 503b, an exoplanet twice the size of Earth, has been discovered by an international team of Canadian, American and German researchers using data from NASA's Kepler Space Telescope. The find is described in a new study whose lead author is Merrin Peterson, an Institute for Research on Exoplanets (iREx) graduate student who started her master's degree at Université de Montréal (UdeM) in May.

Wolf 503b is about 145 light years from Earth in the Virgo constellation; it orbits its star every six days and is thus very close to it, about 10 times closer than Mercury is to the Sun.

"The discovery and confirmation of this new exoplanet was very rapid, thanks to the collaboration that I and my advisor, Björn Benneke, are a part of," Peterson said. "In May, when the latest release of Kepler K2 data came in, we quickly ran a program that allowed us to find as many interesting candidate exoplanets as possible. Wolf 503b was one of them."

The program the team used identifies distinct, periodic dips that appear in the light curve of a star when a planet passes in front of it. In order to better characterize the system Wolf 503b is part of, the astronomers first obtained a spectrum of the host star at the NASA Infrared Telescope Facility. This confirmed the star is an old 'orange dwarf', slightly less luminous than the Sun but about twice as old, and allowed a precise determination of the radius of both the star and its companion.
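The idea behind such a search can be sketched in a few lines (a minimal phase-folding illustration of ours, not the team's actual pipeline): fold the light curve at a set of trial periods and score how deep the faintest phase bin is; the true orbital period stacks the dips on top of each other.

```python
def best_period(times, fluxes, trial_periods, n_bins=50):
    """Return the trial period whose phase-folded light curve shows the
    deepest dip relative to the mean flux."""
    best, best_depth = None, 0.0
    mean = sum(fluxes) / len(fluxes)
    for p in trial_periods:
        bins = [[] for _ in range(n_bins)]
        for t, f in zip(times, fluxes):
            bins[int((t % p) / p * n_bins) % n_bins].append(f)
        # Depth of the faintest populated phase bin below the mean flux.
        depth = max(mean - sum(b) / len(b) for b in bins if b)
        if depth > best_depth:
            best, best_depth = p, depth
    return best

# Synthetic data: a star of flux 1.0 with 1% transit dips every 6.0 days.
times = [i * 0.02 for i in range(30000)]  # 600 days of observations
fluxes = [0.99 if (t % 6.0) < 0.1 else 1.0 for t in times]
print(best_period(times, fluxes, [4.0, 5.0, 6.0, 7.0]))  # → 6.0
```

At the wrong trial periods the dips smear across many phase bins and the signal washes out; only at the true period do they reinforce each other.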

To confirm the companion was indeed a planet and to avoid making a false positive identification, the team obtained adaptive optics measurements from Palomar Observatory and also examined archival data. With these, they were able to confirm that there were no binary stars in the background and that the star did not have another, more massive companion that could be interpreted as a transiting planet.

Wolf 503b is interesting, firstly, because of its size. Thanks to the Kepler telescope, we know that most of the planets in the Milky Way that orbit close to their stars are about as big as Wolf 503b, somewhere between the size of Earth and that of Neptune (which is four times bigger than Earth). Since there is nothing like them in our solar system, astronomers wonder whether these planets are small and rocky 'super-Earths' or gaseous mini versions of Neptune. One recent discovery also shows that there are significantly fewer planets between 1.5 and 2 times the size of Earth than there are either smaller or larger than that. This drop, called the Fulton gap, could be what distinguishes the two types of planets from each other, researchers say in their study of the discovery, published in 2017.

"Wolf 503b is one of the only planets with a radius near the gap that has a star that is bright enough to be amenable to more detailed study that will better constrain its true nature," explained Björn Benneke, an UdeM professor and member of iREx and CRAQ. "It provides a key opportunity to better understand the origin of this radius gap as well as the nature of the intriguing populations of 'super-Earths' and 'sub-Neptunes' as a whole."

The second reason for interest in the Wolf 503b system is that the star is relatively close to Earth, and thus very bright. One of the possible follow-up studies for bright stars is the measurement of their radial velocity to determine the mass of the planets in orbit around them. A more massive planet has a greater gravitational influence on its star, so the variation in the star's line-of-sight velocity over time is greater. The mass, together with the radius determined from Kepler's observations, gives the bulk density of the planet, which in turn tells us something about its composition. For example, at its radius, if the planet has a composition similar to Earth's, it would have to be about 14 times the mass of Earth. If, like Neptune, it has an atmosphere rich in gas or volatiles, it would be approximately half as massive.
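The density arithmetic behind those two scenarios is straightforward: bulk density is mass divided by the volume of a sphere. A quick sketch, using the article's figures for a planet of twice Earth's radius:

```python
import math

M_EARTH_KG = 5.972e24
R_EARTH_M = 6.371e6

def bulk_density_g_cm3(mass_earths, radius_earths):
    """Bulk density of a planet given its mass and radius in Earth units."""
    mass = mass_earths * M_EARTH_KG
    volume = 4.0 / 3.0 * math.pi * (radius_earths * R_EARTH_M) ** 3
    return mass / volume / 1000.0  # convert kg/m^3 to g/cm^3

# The article's two scenarios for a planet twice Earth's radius:
print(round(bulk_density_g_cm3(14, 2), 1))  # rocky, Earth-like: 9.6
print(round(bulk_density_g_cm3(7, 2), 1))   # gas/volatile-rich: 4.8
```

A rocky composition thus implies a bulk density well above Earth's 5.5 g/cm³ (rock compresses under the extra mass), while the volatile-rich case comes in markedly lower, so a mass measurement cleanly separates the two.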

Because of its brightness, Wolf 503 will also be a prime target for the upcoming James Webb Space Telescope. Using a technique called transit spectroscopy, it will be possible to study the chemical content of the planet's atmosphere and to detect the presence of molecules like hydrogen and water. This is crucial to verify whether the planet's atmosphere is similar to that of Earth or Neptune, or completely different from the atmospheres of planets in our solar system.

Similar observations can't be made of most planets found by Kepler, because their host stars are usually much fainter. As a result, the bulk densities and atmospheric compositions of most exoplanets are still unknown.

Read more at Science Daily

Pluto should be reclassified as a planet, experts say

Should Pluto be reclassified a planet again? UCF scientist Philip Metzger says yes based on his research.
The reason Pluto lost its planet status is not valid, according to new research from the University of Central Florida in Orlando.

In 2006, the International Astronomical Union, a global group of astronomy experts, established a definition of a planet that required it to "clear" its orbit, or in other words, be the largest gravitational force in its orbit.

Since Neptune's gravity influences its neighboring planet Pluto, and Pluto shares its orbit with frozen gases and objects in the Kuiper belt, that meant Pluto was out of planet status. However, in a new study published online Wednesday in the journal Icarus, UCF planetary scientist Philip Metzger, who is with the university's Florida Space Institute, reported that this standard for classifying planets is not supported in the research literature.

Metzger, who is lead author on the study, reviewed scientific literature from the past 200 years and found only one publication -- from 1802 -- that used the clearing-orbit requirement to classify planets, and it was based on since-disproven reasoning.

He said moons such as Saturn's Titan and Jupiter's Europa have been routinely called planets by planetary scientists since the time of Galileo.

"The IAU definition would say that the fundamental object of planetary science, the planet, is supposed to be defined on the basis of a concept that nobody uses in their research," Metzger said. "And it would leave out the second-most complex, interesting planet in our solar system."

"We now have a list of well over 100 recent examples of planetary scientists using the word planet in a way that violates the IAU definition, but they are doing it because it's functionally useful," he said.

"It's a sloppy definition," Metzger said of the IAU's definition. "They didn't say what they meant by clearing their orbit. If you take that literally, then there are no planets, because no planet clears its orbit."

The planetary scientist said that the literature review showed that the real division between planets and other celestial bodies, such as asteroids, occurred in the early 1950s when Gerard Kuiper published a paper that made the distinction based on how they were formed.

However, even this reason is no longer considered a factor that determines if a celestial body is a planet, Metzger said.

Study co-author Kirby Runyon, with Johns Hopkins University Applied Physics Laboratory in Laurel, Maryland, said the IAU's definition was erroneous since the literature review showed that clearing orbit is not a standard that is used for distinguishing asteroids from planets, as the IAU claimed when crafting the 2006 definition of planets.

"We showed that this is a false historical claim," Runyon said. "It is therefore fallacious to apply the same reasoning to Pluto."

Metzger said that the definition of a planet should be based on its intrinsic properties, rather than ones that can change, such as the dynamics of a planet's orbit. "Dynamics are not constant, they are constantly changing," Metzger said. "So, they are not the fundamental description of a body, they are just the occupation of a body at a current era."

Instead, Metzger recommends classifying a planet based on whether it is large enough that its gravity allows it to become spherical in shape.

"And that's not just an arbitrary definition," Metzger said. "It turns out this is an important milestone in the evolution of a planetary body, because apparently when it happens, it initiates active geology in the body."

Pluto, for instance, has an underground ocean, a multilayer atmosphere, organic compounds, evidence of ancient lakes and multiple moons, he said.

Read more at Science Daily

Sep 6, 2018

A new theory for phantom limb pain points the way to more effective treatment

The patient, missing his right arm, can see himself on screen in augmented reality, with a virtual limb. He can control it through the electrodes attached to his skin, which allows the patient to stimulate and reactivate those dormant areas of the brain.
Dr Max Ortiz Catalan of Chalmers University of Technology, Sweden, has developed a new theory for the origin of the mysterious condition 'phantom limb pain'. Published in the journal Frontiers in Neurology, his hypothesis builds upon his previous work on a revolutionary treatment for the condition, which uses machine learning and augmented reality.

Phantom limb pain is a poorly understood phenomenon, in which people who have lost a limb can experience severe pain, seemingly located in that missing part of the body. The condition can be seriously debilitating and can drastically reduce the sufferer's quality of life. But current ideas on its origins cannot explain clinical findings, nor provide a comprehensive theoretical framework for its study and treatment.

Now, Max Ortiz Catalan, Associate Professor at Chalmers University of Technology, has published a paper that offers up a promising new theory -- one that he terms 'stochastic entanglement'.

He proposes that after an amputation, neural circuitry related to the missing limb loses its role and becomes susceptible to entanglement with other neural networks -- in this case, the network responsible for pain perception.

"Imagine you lose your hand. That leaves a big chunk of 'real estate' in your brain, and in your nervous system as a whole, without a job. It stops processing any sensory input, it stops producing any motor output to move the hand. It goes idle -- but not silent," explains Max Ortiz Catalan.

Neurons are never completely silent. When not processing a particular job, they might fire at random. This may result in neurons in that part of the sensorimotor network firing coincidentally with neurons in the pain-perception network. When they fire together, that will create the experience of pain in that part of the body.

"Normally, sporadic synchronised firing wouldn't be a big deal, because it's just part of the background noise, and it won't stand out," continues Max Ortiz Catalan. "But in patients with a missing limb, such an event could stand out when little else is going on at the same time. This can result in a surprising, emotionally charged experience -- to feel pain in a part of the body you don't have. Such a remarkable sensation could reinforce a neural connection, make it stick out, and help establish an undesirable link."

Through a principle known as 'Hebb's Law' -- 'neurons that fire together, wire together' -- neurons in the sensorimotor and pain perception networks become entangled, resulting in phantom limb pain. The new theory also explains why not all amputees suffer from the condition -- the randomness, or stochasticity, means that simultaneous firing may not occur, and become linked, in all patients.
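The coincidence-driven entanglement can be caricatured in a few lines (a toy model of ours, not the paper's formalism): a connection weight grows only when two randomly firing populations happen to fire in the same time step.

```python
import random

random.seed(1)  # fixed seed so the toy run is reproducible

def entangle(p_idle, p_pain, steps=10000, lr=0.01):
    """Hebbian toy: grow a connection weight on coincident firing."""
    w = 0.0
    for _ in range(steps):
        fire_idle = random.random() < p_idle  # idle sensorimotor circuitry
        fire_pain = random.random() < p_pain  # pain-perception network
        if fire_idle and fire_pain:
            w += lr  # 'fire together, wire together'
    return w

# More spontaneous activity in the idle circuitry means more coincidences,
# hence a stronger spurious link to the pain network -- and the randomness
# explains why the link forms in some patients but not in others.
print(entangle(0.05, 0.05) < entangle(0.30, 0.05))
```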

In the new paper, Max Ortiz Catalan goes on to examine how this theory can explain the effectiveness of Phantom Motor Execution (PME), the novel treatment method he previously developed. During PME treatment, electrodes attached to the patient's residual limb pick up electrical signals intended for the missing limb, which are then translated through AI algorithms into movements of a virtual limb in real time. The patients see themselves on a screen, with a digitally rendered limb in place of their missing one, and can then control it just as if it were their own biological limb. This allows the patient to stimulate and reactivate those dormant areas of the brain.

"The patients can start reusing those areas of brain that had gone idle. Making use of that circuitry helps to weaken and disconnect the entanglement to the pain network. It's a kind of 'inverse Hebb's law' -- the more those neurons fire apart, the weaker their connection. Or, it can be used preventatively, to protect against the formation of those links in the first place," he says.

Read more at Science Daily

Galactic 'wind' stifling star formation is most distant yet seen

Artist impression of an outflow of molecular gas from an active star-forming galaxy.
For the first time, a powerful "wind" of molecules has been detected in a galaxy located 12 billion light-years away. Probing a time when the universe was less than 10 percent of its current age, University of Texas at Austin astronomer Justin Spilker's research sheds light on how the earliest galaxies regulated the birth of stars to keep from blowing themselves apart. The research will appear in the Sept. 7 issue of the journal Science.

"Galaxies are complicated, messy beasts, and we think outflows and winds are critical pieces to how they form and evolve, regulating their ability to grow," Spilker said.

Some galaxies such as the Milky Way and Andromeda have relatively slow and measured rates of starbirth, with about one new star igniting each year. Other galaxies, known as starburst galaxies, forge hundreds or even thousands of stars each year. This furious pace, however, cannot be maintained indefinitely.

To avoid burning out in a short-lived blaze of glory, some galaxies throttle back their runaway starbirth by ejecting -- at least temporarily -- vast stores of gas into their expansive halos, where the gas either escapes entirely or slowly rains back in on the galaxy, triggering future bursts of star formation.

Until now, however, astronomers have been unable to directly observe these powerful outflows in the very early universe, where such mechanisms are essential to prevent galaxies from growing too big, too fast.

Spilker's observations with the Atacama Large Millimeter/submillimeter Array (ALMA), show -- for the first time -- a powerful galactic wind of molecules in a galaxy seen when the universe was only 1 billion years old. This result provides insights into how certain galaxies in the early universe were able to self-regulate their growth so they could continue forming stars across cosmic time.

Astronomers have observed winds with the same size, speed and mass in nearby starbursting galaxies, but the new ALMA observation is the most distant unambiguous outflow ever seen in the early universe.

The galaxy, known as SPT2319-55, is more than 12 billion light-years away. It was discovered by the National Science Foundation's South Pole Telescope.

ALMA was able to observe this object at such tremendous distance with the aid of a gravitational lens provided by a different galaxy that sits almost exactly along the line of sight between Earth and SPT2319-55. Gravitational lensing -- the bending of light due to gravity -- magnifies the background galaxy to make it appear brighter, which allows the astronomers to observe it in more detail than they would otherwise be able to. Astronomers use specialized computer programs to unscramble the effects of gravitational lensing to reconstruct an accurate image of the more-distant object.

This lens-aided view revealed a powerful wind of star-forming gas exiting the galaxy at nearly 800 kilometers per second. Rather than a constant, gentle breeze, the wind is hurtling away in discrete clumps, removing the star-forming gas just as quickly as the galaxy can turn that gas into new stars.

The outflow was detected by the millimeter-wavelength signature of a molecule called hydroxyl (OH), which appeared as an absorption line: essentially, the shadow of an OH fingerprint in the galaxy's bright infrared light.

Molecular winds are an efficient way for galaxies to self-regulate their growth, the researchers note. These winds are probably triggered by either the combined effects of all the supernova explosions that go along with rapid, massive star formation, or by a powerful release of energy as some of the gas in the galaxy falls down onto the supermassive black hole at its center.

Read more at Science Daily

Mysterious 'lunar swirls' point to moon's volcanic, magnetic past

This is an image of the Reiner Gamma lunar swirl from NASA's Lunar Reconnaissance Orbiter.
The mystery behind lunar swirls, one of the solar system's most beautiful optical anomalies, may finally be solved thanks to a joint Rutgers University and University of California Berkeley study.

The solution hints at the dynamism of the moon's ancient past as a place with volcanic activity and an internally generated magnetic field. It also challenges our picture of the moon's existing geology.

Lunar swirls resemble bright, snaky clouds painted on the moon's dark surface. The most famous, called Reiner Gamma, is about 40 miles long and popular with backyard astronomers. Most lunar swirls share their locations with powerful, localized magnetic fields. The bright-and-dark patterns may result when those magnetic fields deflect particles from the solar wind and cause some parts of the lunar surface to weather more slowly.

"But the cause of those magnetic fields, and thus of the swirls themselves, had long been a mystery," said Sonia Tikoo, coauthor of the study recently published in the Journal of Geophysical Research -- Planets and an assistant professor in Rutgers University-New Brunswick's Department of Earth and Planetary Sciences. "To solve it, we had to find out what kind of geological feature could produce these magnetic fields -- and why their magnetism is so powerful."

Working with what is known about the intricate geometry of lunar swirls, and the strengths of the magnetic fields associated with them, the researchers developed mathematical models for the geological "magnets." They found that each swirl must stand above a magnetic object that is narrow and buried close to the moon's surface.

The picture is consistent with lava tubes, long, narrow structures formed by flowing lava during volcanic eruptions; or with lava dikes, vertical sheets of magma injected into the lunar crust.

But this raised another question: How could lava tubes and dikes be so strongly magnetic? The answer lies in a reaction that may be unique to the moon's environment at the time of those ancient eruptions, over 3 billion years ago.

Past experiments have found that many moon rocks become highly magnetic when heated more than 600 degrees Celsius in an oxygen-free environment. That's because certain minerals break down at high temperatures and release metallic iron. If there happens to be a strong enough magnetic field nearby, the newly formed iron will become magnetized along the direction of that field.

This doesn't normally happen on Earth, where free-floating oxygen binds with the iron. And it wouldn't happen today on the moon, where there is no global magnetic field to magnetize the iron.

But in a study published last year, Tikoo found that the moon's ancient magnetic field lasted 1 billion to 2.5 billion years longer than had previously been thought -- perhaps concurrent with the creation of lava tubes or dikes whose high iron content would have become strongly magnetic as they cooled.

"No one had thought about this reaction in terms of explaining these unusually strong magnetic features on the moon. This was the final piece in the puzzle of understanding the magnetism that underlies these lunar swirls," Tikoo said.

Read more at Science Daily

DNA of early medieval Alemannic warriors and their entourage decoded

Comb with etui.
In 1962, an Alemannic burial site containing human skeletal remains was discovered in Niederstotzingen (Baden-Württemberg, Germany). Researchers at the Eurac Research Centre in Bozen-Bolzano, Italy, and at the Max Planck Institute for the Science of Human History in Jena, Germany, have now examined the DNA of these skeletal remains.

This has enabled them to determine not only the sex and the degree of kinship of those people but also their ancestral origins, which provides new insights into societal structures in the Early Middle Ages. The results of this study demonstrate that genetic research can complement research conducted by archaeologists and anthropologists through more conventional methods. The research was featured on the front cover of the academic journal Science Advances.

Archaeologists recovered thirteen human skeletons, the remains of three horses and some excellently preserved grave goods of diverse origin. This burial, which was discovered near a Roman road not far from Ulm, is considered one of the most important Alemannic gravesites in Germany. The site consists of individual and multiple graves, from which it was hypothesised that the individuals had not all been buried at the same time. The molecular genetic investigations have now brought new details to light about the individuals and their final resting place in this high-ranking warrior-type burial.

Using DNA analysis, the researchers were able to reconstruct maternal as well as paternal kinship. On the basis of tooth samples, the scientists could ascertain that five of the individuals were either first- or second-degree relatives. In addition, the deceased displayed a variety of patterns of genetic origin, indicating Mediterranean and northern European roots. "These results prove the existence of remarkable transregional contacts. The fact that they were buried together also indicates a link between the families and their entourage which went beyond death," explains Niall O'Sullivan, who did his doctorate at Eurac Research and carried out some of the analyses at the Max Planck Institute for the Science of Human History in Jena.

In this context the grave goods, with which the multiple graves were adorned and which are of Frankish, Lombard and Byzantine origin, are also very interesting. Their diverse origin in combination with the new genetic data indicates a cultural openness and demonstrates how members of the same family were receptive to different cultures.

In addition to the kinship analysis the researchers also determined the sex of the individuals using molecular testing. One of the skeletons had a gracile physique and thus could not be clearly classified as male or female. "Anthropologists determine the sex of skeletal remains by using specific physical sexual characteristics, but if the bones of certain body areas are missing, then this will make gender determination much more difficult. DNA-analyses open new paths in this respect -- and in this specific case we were able to identify the young individual molecularly as a male, and thus exclude the possibility that we were dealing with an early medieval female warrior," explains Frank Maixner, microbiologist at the Institute for Mummies and the Iceman at Eurac Research.

Read more at Science Daily

Sep 5, 2018

Family tree of blood production reveals hundreds of thousands of stem cells

Adult humans have many more blood-creating stem cells in their bone marrow than previously thought, ranging between 50,000 and 200,000 stem cells. Researchers from the Wellcome Sanger Institute and Wellcome -- MRC Cambridge Stem Cell Institute developed a new approach for studying stem cells, based on methods used in ecology.

The results, published today (5 September) in Nature, present a new opportunity for studying, in humans, how stem cells throughout the body change during aging and disease. Using whole genome sequencing to build and analyse a family tree of cells, this work could lead to insights into how cancers develop and why some stem cell therapies are more effective than others.

All of the organs in our body rely on stem cells in order to maintain their function. Adult stem cells found in tissues or organs are a self-sustaining population of cells whose offspring make all of the specialised cell types within a tissue.

Blood stem cells drive the production of blood, and are used in treatments and therapies such as bone marrow transplantations -- a treatment for leukemia that replaces cancerous blood cells with healthy blood stem cells.

However, blood stem cells in humans are not fully understood, with even some of the most basic questions, such as how many cells there are and how they change with age, not yet answered.

For the first time, scientists have been able to determine how many blood stem cells are actively contributing in a healthy human. Researchers adapted a method traditionally used in ecology for tracking population size to estimate that a healthy adult has between 50,000 and 200,000 stem cells contributing to their blood cells at any one time.

Dr Peter Campbell, a joint senior author from the Wellcome Sanger Institute, said: "We discovered that healthy adults have between 50,000 and 200,000 blood stem cells, which is about ten times more than previously thought. Whereas previous estimates of blood stem cell numbers were extrapolated from studies in mice, cats or monkeys, this is the first time stem cell numbers have been directly quantified in humans. This new approach opens up avenues into studying stem cells in other human organs and how they change between health and disease, and as we age."

Scientists found the number of stem cells in the blood increases rapidly through childhood and reaches a plateau by adolescence. The number of stem cells stays relatively constant throughout adulthood.

In the study, researchers conducted whole genome sequencing on 140 blood stem cell colonies from a healthy 59-year-old man. The team adapted a capture-recapture method, traditionally used in ecology to monitor species populations, to 'tag' stem cells and compare them to the population of blood cells.

Henry Lee-Six, the first author from the Wellcome Sanger Institute, said: "We isolated a number of stem cells from the blood and bone marrow and sequenced their genomes to find mutations. The mutations act like barcodes, each of which uniquely tags a stem cell and its descendants. We then looked for these mutations in the rest of the blood to see what fraction of blood cells carry the same barcodes and from this, we could estimate how many stem cells there were in total."

Current methods for measuring stem cell population size typically involve genome engineering, meaning they are limited to model organisms, such as mice. By analysing naturally-occurring mutations in human cells, researchers can use the accumulation of mutations to track stem cells to see how stem cell dynamics change over a person's lifetime.
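The 'barcode' logic described above mirrors the classic Lincoln-Petersen capture-recapture estimator from ecology. A minimal sketch follows, using the study's 140 sequenced colonies but otherwise invented numbers -- the bulk sample size and the barcode overlap below are hypothetical, chosen only to show the arithmetic:

```python
# Sketch of the capture-recapture (Lincoln-Petersen) logic: mutations found
# in sequenced colonies are the "tagged" sample; their recurrence in bulk
# blood is the "recapture." Numbers are illustrative, not the study's data.

def lincoln_petersen(tagged, recapture_sample, recaptured_tagged):
    """Estimate total population size: N ~ tagged * sample / overlap."""
    if recaptured_tagged == 0:
        raise ValueError("no overlap between samples; cannot estimate N")
    return tagged * recapture_sample / recaptured_tagged

# e.g. 140 sequenced colonies ("tagged" stem cells); suppose a bulk blood
# sample of 1,000 cell-equivalents carries barcodes from 2 of the colonies:
estimate = lincoln_petersen(tagged=140, recapture_sample=1000,
                            recaptured_tagged=2)
print(int(estimate))  # 70000 -- within the reported 50,000-200,000 range
```

The fewer tagged barcodes that reappear in the bulk sample, the larger the inferred pool of stem cells contributing to blood production.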

Read more at Science Daily

Clown fish: Whence the white stripes?

The full spectrum of clown fish colors is not limited to orange or red but ranges from yellow to black. Species differ in the number of white stripes they display: zero, one (head), two (head and trunk), or three (head, trunk, and tail). Four species of clown fish (genus Amphiprion), clockwise from top left: A. ephippium, A. frenatus, A. ocellaris, and A. bicinctus.
Coral reef fish are known for the wide range of colors and patterns they display, but the mechanisms governing the acquisition of these characteristics are still poorly understood. The researchers focused on clown fish, a group including some thirty species distinguished by their number of white stripes (zero to three) and by their colors, including yellow, orange, red, and black.

The team first demonstrated that stripes are essential for individual fish to recognize others of their species. Such recognition is critical to the social organization of clown fish living among sea anemones where several species may be simultaneously present and young fish seek to establish permanent homes.

The researchers then deciphered the sequences of stripe appearance and disappearance during the life of a clown fish. Stripes appear one at a time, starting near the head and progressing towards the tail, during the transition from the larval to the juvenile stage. The team further observed that some stripes are occasionally lost between the juvenile and adult stages, this time beginning at the tail end.

In an attempt to understand the origin of these patterns, the scientists delved into the evolutionary history of clown fish. They discovered that their common ancestor sported three stripes. Just like today's clown fish, these ancestral stripes were made up of pigmented cells called iridophores containing reflective crystals. Over the course of evolutionary history, some species of clown fish gradually lost stripes, resulting in today's range of color patterns.

The research team would like to follow up by identifying the genes that control the acquisition of white stripes for a greater understanding of how they evolved. This should clue them in to the processes behind color diversification and the role color plays in the social organization of reef fish.

From Science Daily

Falling stars hold clue for understanding dying stars

We can estimate the age of heavy elements in the primordial Solar System by measuring the traces left in meteorites by specific radioactive nuclei synthesized in certain types of supernovae.
An international team of researchers has proposed a new method to investigate the inner workings of supernovae explosions. This new method uses meteorites and is unique in that it can determine the contribution from electron anti-neutrinos, enigmatic particles which can't be tracked through other means.

Supernovae are important events in the evolution of stars and galaxies, but the details of how the explosions occur are still unknown. This research, led by Takehito Hayakawa, a visiting professor at the National Astronomical Observatory of Japan, found a method to investigate the role of electron anti-neutrinos in supernovae. By measuring the amount of 98Ru (an isotope of ruthenium) in meteorites, it should be possible to estimate how much of its progenitor 98Tc (a short-lived isotope of technetium) was present in the material from which the Solar System formed. The amount of 98Tc in turn is sensitive to the characteristics, such as temperature, of electron anti-neutrinos in the supernova process, as well as to how much time passed between the supernova and the formation of the Solar System. The expected traces of 98Tc are only a little below the smallest currently detectable levels, raising hopes that they will be measured in the near future.
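The link between how much 98Tc survived and how much time elapsed follows the standard radioactive-decay law. A minimal sketch, assuming the commonly cited half-life of roughly 4.2 million years for 98Tc (a literature value, not stated in the article):

```python
import math

# Sketch of the decay relation underlying the method: the surviving
# fraction of a short-lived isotope constrains the time between its
# synthesis in a supernova and the Solar System's formation.
# The 4.2-million-year half-life for 98Tc is an assumed literature value.

HALF_LIFE_TC98_YR = 4.2e6

def surviving_fraction(elapsed_yr, half_life_yr=HALF_LIFE_TC98_YR):
    """N/N0 = exp(-ln(2) * t / t_half)."""
    return math.exp(-math.log(2) * elapsed_yr / half_life_yr)

def elapsed_time(fraction, half_life_yr=HALF_LIFE_TC98_YR):
    """Invert the decay law to recover elapsed time from N/N0."""
    return -half_life_yr * math.log(fraction) / math.log(2)

# After one half-life, half the 98Tc remains:
print(round(surviving_fraction(4.2e6), 3))       # 0.5
# A measured surviving fraction of 1% would imply roughly 28 Myr:
print(round(elapsed_time(0.01) / 1e6, 1))        # 27.9
```

This is why the isotope ratio in meteorites doubles as a clock: the anti-neutrino physics sets how much 98Tc was made, and the decay law sets how much should remain today.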

Hayakawa explains, "There are six neutrino species. Previous studies have shown that neutrino-isotopes are predominantly produced by the five neutrino species other than the electron anti-neutrino. By finding a neutrino-isotope synthesized predominantly by the electron anti-neutrino, we can estimate the temperatures of all six neutrino species, which are important for understanding the supernova explosion mechanism."

Read more at Science Daily

Why we stick to false beliefs: Feedback trumps hard evidence

People's beliefs are more likely to be reinforced by the positive or negative reactions they receive than by logic, reasoning and scientific data.
Ever wonder why flat earthers, birthers, climate change and Holocaust deniers stick to their beliefs in the face of overwhelming evidence to the contrary?

New findings from researchers at the University of California, Berkeley, suggest that feedback, rather than hard evidence, boosts people's sense of certainty when learning new things or trying to tell right from wrong.

Developmental psychologists have found that people's beliefs are more likely to be reinforced by the positive or negative reactions they receive in response to an opinion, task or interaction, than by logic, reasoning and scientific data.

Their findings, published today in the online issue of the journal Open Mind, shed new light on how people handle information that challenges their worldview, and how certain learning habits can limit one's intellectual horizons.

"If you think you know a lot about something, even though you don't, you're less likely to be curious enough to explore the topic further, and will fail to learn how little you know," said study lead author Louis Marti, a Ph.D. student in psychology at UC Berkeley.

This cognitive dynamic can play out in all walks of actual and virtual life, including social media and cable-news echo chambers, and may explain why some people are easily duped by charlatans.

"If you use a crazy theory to make a correct prediction a couple of times, you can get stuck in that belief and may not be as interested in gathering more information," said study senior author Celeste Kidd, an assistant professor of psychology at UC Berkeley.

Specifically, the study examined what influences people's certainty while learning. It found that study participants' confidence was based on their most recent performance rather than long-term cumulative results. The experiments were conducted at the University of Rochester.

For the study, more than 500 adults, recruited online through Amazon's Mechanical Turk crowdsourcing platform, looked at different combinations of colored shapes on their computer screens. They were asked to identify which colored shapes qualified as a "Daxxy," a make-believe object invented by the researchers for the purpose of the experiment.

With no clues about the defining characteristics of a Daxxy, study participants had to guess blindly which items constituted a Daxxy as they viewed 24 different colored shapes and received feedback on whether they had guessed right or wrong. After each guess, they reported on whether or not they were certain of their answer.

The final results showed that participants consistently based their certainty on whether they had correctly identified a Daxxy during the last four or five guesses instead of all the information they had gathered throughout.

"What we found interesting is that they could get the first 19 guesses in a row wrong, but if they got the last five right, they felt very confident," Marti said. "It's not that they weren't paying attention, they were learning what a Daxxy was, but they weren't using most of what they learned to inform their certainty."

An ideal learner's certainty would be based on the observations amassed over time as well as the feedback, Marti said.
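The contrast between the participants' recency-driven certainty and the ideal learner's cumulative certainty can be sketched in a few lines. The 19-wrong-then-5-right sequence echoes Marti's example; the five-guess window and the simple hit-rate scoring are illustrative assumptions, not the study's actual model:

```python
# Toy sketch contrasting the two certainty strategies described above:
# confidence from the last few trials versus from all trials so far.
# Outcomes are encoded as 1 = correct guess, 0 = wrong guess.

def recent_confidence(outcomes, window=5):
    """Certainty driven only by the most recent guesses."""
    recent = outcomes[-window:]
    return sum(recent) / len(recent)

def cumulative_confidence(outcomes):
    """An 'ideal learner': certainty from all feedback amassed so far."""
    return sum(outcomes) / len(outcomes)

# 19 wrong guesses followed by 5 right ones, as in Marti's example:
outcomes = [0] * 19 + [1] * 5

print(recent_confidence(outcomes))                 # 1.0 -- "very confident"
print(round(cumulative_confidence(outcomes), 3))   # 0.208 -- warranted level
```

The gap between the two numbers is the study's core finding in miniature: a learner scoring only the last few trials can feel certain while most of the accumulated evidence points the other way.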

Read more at Science Daily

Evidence of 7,200-year-old cheese-making found on the Dalmatian Coast

The archaeological site of Pokrovnik during excavation with the modern village, Dalmatia, Croatia.
Analysis of fatty residue in pottery from the Dalmatian Coast of Croatia revealed evidence of fermented dairy products -- soft cheeses and yogurts -- from about 7,200 years ago, according to an international team of researchers.

"This pushes back cheese-making by 4,000 years," said Sarah B. McClure, associate professor of anthropology.

The presence of milk in pottery in this area is seen as early as 7,700 years ago, 500 years earlier than fermented products, said the researchers. DNA analysis of the populations in this area indicates that the adults were lactose-intolerant, but the children remained able to consume milk comfortably up to the age of ten.

"First, we have milking, and it was probably geared for kids because milk is a good source of hydration and is relatively pathogen-free," said McClure. "It wouldn't be a surprise for people to give children milk from another mammal."

However, about 500 years later, the researchers see a shift not only from pure milk to fermented products, but also in the style and form of pottery vessels.

"Cheese production is important enough that people are making new types of kitchenware," said McClure. "We are seeing that cultural shift."

During the Early Neolithic, when only meat, fish and some milk residue is found in the pottery, the pottery is of a style called "Impressed Ware" found throughout the area.

Five hundred years later, in the Middle Neolithic, another pottery style made with different technology existed -- Danilo pottery -- which defines the era in this area and includes plates and bowls. There are three subtypes of Danilo pottery.

Figulina, which makes up five percent of this type, is highly fired and buff-colored, often slipped and decorated. All of this pottery contained milk residue. The other Danilo wares contained animal fats and freshwater fish residue.

Rhyta, which are footed vessels with round bodies and are often animal- or human-shaped, have large openings on the sides and distinctive handles. The researchers found that three of the four rhyta in their sample showed evidence of cheese.

The third category of Danilo ware is sieves, which are often used in cheese-making to strain treated milk when it separates into curds and whey. Three of the four sieves in the sample showed evidence of secondary milk processing into either cheese or other fermented dairy products.

"This is the earliest documented lipid residue evidence for fermented dairy in the Mediterranean region, and among the earliest documented anywhere to date," the researchers report today (Sept. 5) in PLOS One.

The researchers looked at pottery from two sites in Croatia in Dalmatia -- Pokrovnik and Danilo Bitinj. When possible, they selected samples from unwashed pottery, but because some pottery forms are rarer, used washed samples for the sieves. They tested the pottery residue for carbon isotopes, which can indicate the type of fat and can distinguish between meat, fish, milk and fermented milk products. They used radiocarbon dating on bone and seeds to determine the pottery's age.

According to the researchers, dairying -- and especially cheese and fermented milk products -- may have opened northern European areas for farming because it reduced infant mortality and allowed for earlier weaning, decreasing the birth interval and potentially increasing population. It also supplied a storable form of nutrition for adults, because the fermentation of cheese and yogurt reduces the lactose content of milk products, making them digestible for adults as well as children.

Read more at Science Daily

Sep 4, 2018

Superradiance: Quantum effect detected in tiny diamonds

In the diamond lattice, there are special kinds of defects, consisting of a nitrogen atom (white) and a missing carbon atom.
The effect was predicted theoretically decades ago -- but it is very hard to provide experimental evidence for it: "superradiance" is the phenomenon of one atom giving off energy in the form of light and causing a large number of other atoms in its immediate vicinity to emit energy as well at the same time. This creates a short, intense flash of light.

Up until now, this phenomenon could only be studied with free atoms (and with the use of special symmetries). Now, at TU Wien (Vienna), it was measured in a solid-state system. The team used nitrogen atoms, built into tiny diamonds that can be coupled with microwave radiation. The results have now been published in the journal Nature Physics.

A bright flash of quantum light

According to the laws of quantum physics, atoms can be in different states. "When the atom absorbs energy, it is shifted into a so-called excited state. When it returns to a lower energy state, the energy is released again in the form of a photon. This usually happens randomly, at completely unpredictable points in time," says Johannes Majer, research group leader at the Institute of Atomic and Subatomic Physics (TU Wien). However, if several atoms are located close to each other, an interesting quantum effect can occur: one of the atoms emits a photon (spontaneously and randomly), thereby affecting all other excited atoms in its neighborhood. Many of them release their excess energy at the same moment, producing an intense flash of quantum light. This phenomenon is called "superradiance."

"Unfortunately, this effect cannot be directly observed with ordinary atoms," says Andreas Angerer, first author of the study. "Superradiance is only possible if you place all the atoms in an area that is significantly smaller than the wavelength of the photons." So you would have to focus the atoms to less than 100 nanometers -- and then, the interactions between the atoms would be so strong that the effect would no longer be possible.

Defects in the diamond lattice

One solution to this problem is using a quantum system that Majer and his team have been researching for years: tiny defects built into diamonds. While ordinary diamonds consist of a regular grid of carbon atoms, lattice defects have been deliberately incorporated into the diamonds in Majer's lab. At certain points, instead of a carbon atom, there is a nitrogen atom, and the adjacent point in the diamond lattice is unoccupied.

These special diamonds with lattice defects were made in Japan by Junichi Isoya and his team at the University of Tsukuba. They have succeeded in producing the world's highest concentration of these desired defects without causing any other damage. The theoretical basis of the effect was developed by Kae Nemoto (National Institute of Informatics) and William Munro (NTT Basic Research Laboratories) in Tokyo, Japan.

Just like ordinary atoms, these diamond defects can also be switched into an excited state -- but this is achieved with photons in the microwave range, with a very large wavelength. "Our system has the decisive advantage that we can work with electromagnetic radiation that has a wavelength of several centimeters -- so it is no problem to concentrate the individual defect sites within the radius of one wavelength," explains Andreas Angerer.

When many diamond defects are switched to an excited state, it can usually take hours for all of them to return to the lower-energy state. Due to the superradiance effect, however, this happens within about 100 nanoseconds. The first photon that is sent out spontaneously causes all other defect sites to emit photons as well.

Similar to lasers

Superradiance is based on the same basic principle as the laser -- in both cases there is a stimulated emission of photons, triggered by a photon hitting energetically excited atoms. Nevertheless, these are two quite different phenomena: In the laser, a permanent background of many photons is needed, constantly stimulating new atoms. In superradiance, a single photon triggers a flash of light all by itself.

Read more at Science Daily

Greenhouse emissions from Siberian rivers peak as permafrost thaws

Western Siberia.
As permafrost degrades, previously frozen carbon can end up in streams and rivers where it will be processed and emitted as greenhouse gases from the water surface directly into the atmosphere. Quantifying these river greenhouse gas emissions is particularly important in Western Siberia -- an area that stores vast amounts of permafrost carbon and is a home to the Arctic's largest watershed, Ob' River.

Now researchers from Umeå University (and collaborators from SLU, Russia, France, and the United Kingdom) have shown that river greenhouse gas emissions peak in areas where Western Siberian permafrost has been actively degrading, and decrease in colder areas where permafrost has not yet started to thaw. The research team also found that greenhouse gas emissions from rivers exceed the amount of carbon that rivers transport to the Arctic Ocean.

"This was an unexpected finding, as it means that Western Siberian rivers actively process and release a large part of the carbon they receive from degrading permafrost, and that the magnitude of these emissions might increase as the climate continues to warm," says Svetlana Serikova, doctoral student in the Department of Ecology and Environmental Sciences, Umeå University, and one of the researchers in the team.

Quantifying river greenhouse gas emissions from permafrost-affected areas in general, and Western Siberia in particular, is important because it improves our understanding of the role such areas play in the global carbon cycle and strengthens our ability to predict the impacts of a changing climate on the Arctic.

"The large-scale changes that take place in the Arctic due to warming exert a strong influence on the climate system and have far-reaching consequences for the rest of the world. That is why it is important we focus on capturing how climate warming affects the Arctic now before these dramatic changes happen" says Svetlana Serikova.

From Science Daily

Veiled supernovae provide clue to stellar evolution

This is an artist's impression of a red supergiant surrounded with thick circumstellar matter.
At the end of its life, a red supergiant star explodes in a hydrogen-rich supernova. By comparing observation results to simulation models, an international research team found that in many cases this explosion takes place inside a thick cloud of circumstellar matter shrouding the star. This result completely changes our understanding of the last stage of stellar evolution.

The research team, led by Francisco Förster at the University of Chile, used the Blanco Telescope to find 26 supernovae coming from red supergiants. Their goal was to study the shock breakout, a brief flash of light preceding the main supernova explosion. They could not find any sign of this phenomenon; instead, 24 of the supernovae brightened faster than expected.

To solve this mystery, Takashi Moriya at the National Astronomical Observatory of Japan (NAOJ) simulated 518 models of supernova brightness variations and compared them with the observational results. The team found that models with a layer of circumstellar matter about 10 percent the mass of the Sun surrounding the supernovae matched the observations well. This circumstellar matter hides the shock breakout, trapping its light. The subsequent collision between the supernova ejecta and the circumstellar matter creates a strong shock wave that produces extra light, causing the supernova to brighten more quickly.

Moriya explains, "Near the end of its life, some mechanism in the star's interior must cause it to shed mass that then forms a layer around the star. We don't yet have a clear idea of the mechanism causing this mass loss. Further study is needed to get a better understanding of the mass loss mechanism. This will also be important in revealing the supernova explosion mechanism and the origin of the diversity in supernovae."

Read more at Science Daily

Quantum weirdness in 'chicken or egg' paradox

What came first, the chicken or the egg? Or both?
The "chicken or egg" paradox was first proposed by philosophers in Ancient Greece to describe the problem of determining cause-and-effect.

Now, a team of physicists from The University of Queensland and the NÉEL Institute has shown that, as far as quantum physics is concerned, the chicken and the egg can both come first.

Dr Jacqui Romero from the ARC Centre of Excellence for Engineered Quantum Systems said that in quantum physics, cause-and-effect is not always as straightforward as one event causing another.

"The weirdness of quantum mechanics means that events can happen without a set order," she said.

"Take the example of your daily trip to work, where you travel partly by bus and partly by train.

"Normally, you would take the bus then the train, or the other way round.

"In our experiment, both of these events can happen first," Dr Romero said.

"This is called 'indefinite causal order' and it isn't something that we can observe in our everyday life."

To observe this effect in the lab, the researchers used a setup called a photonic quantum switch.

UQ's Dr Fabio Costa said that with this device the order of events -- transformations on the shape of light -- depends on polarisation.

"By measuring the polarisation of the photons at the output of the quantum switch, we were able to show the order of transformations on the shape of light was not set."

"This is just a first proof of principle, but on a larger scale indefinite causal order can have real practical applications, like making computers more efficient or improving communication."
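The quantum switch described above can be sketched numerically. The following is a minimal illustration, not the experiment's actual setup: A and B are stand-in single-qubit gates rather than the real transformations on the shape of light, and the control qubit plays the role of polarisation. Measuring the control in the superposition basis reveals a branch whose amplitude depends on the commutator of A and B, which is nonzero only when neither order "came first."

```python
import numpy as np

# Stand-in operations (the experiment transforms the shape of light;
# here we use Pauli-X and Hadamard as illustrative single-qubit gates).
A = np.array([[0, 1], [1, 0]], dtype=complex)
B = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
plus = (ket0 + ket1) / np.sqrt(2)
minus = (ket0 - ket1) / np.sqrt(2)

# Quantum switch: control |0> applies B then A; control |1> applies A then B.
W = (np.kron(np.outer(ket0, ket0), A @ B)
     + np.kron(np.outer(ket1, ket1), B @ A))

psi = ket0                      # target input state
state_in = np.kron(plus, psi)   # control in superposition of both orders
state_out = W @ state_in

# Projecting the control onto |-> isolates the commutator:
# the target amplitude in this branch is (A@B - B@A) @ psi / 2.
proj_minus = np.kron(np.outer(minus, minus), np.eye(2))
branch = proj_minus @ state_out

# Nonzero amplitude here certifies indefinite causal order;
# for commuting A and B this branch would vanish.
print(round(np.linalg.norm(branch), 4))  # -> 0.7071
```

Since X and H do not commute, the "minus" branch survives with amplitude ||(AB - BA)psi||/2; replacing B with another copy of A would drive it to zero, recovering an ordinary, definite causal order.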

Read more at Science Daily

Artificial cells are tiny bacteria fighters

Biomedical engineers at UC Davis have created artificial cells that mimic some of the properties of living cells. The artificial cells do not grow and divide, but could detect, react to and destroy bacteria in a lab dish.
"Lego block" artificial cells that can kill bacteria have been created by researchers at the University of California, Davis Department of Biomedical Engineering. The work is reported Aug. 29 in the journal ACS Applied Materials and Interfaces.

"We engineered artificial cells from the bottom-up -- like Lego blocks -- to destroy bacteria," said Assistant Professor Cheemeng Tan, who led the work. The cells are built from liposomes, or bubbles with a cell-like lipid membrane, and purified cellular components including proteins, DNA and metabolites.

"We demonstrated that artificial cells can sense, react and interact with bacteria, as well as function as systems that both detect and kill bacteria with little dependence on their environment," Tan said.

The team's artificial cells mimic the essential features of live cells, but are short-lived and cannot divide to reproduce themselves. The cells were designed to respond to a unique chemical signature on E. coli bacteria. They were able to detect, attack and destroy the bacteria in laboratory experiments.

Artificial cells previously only had been successful in nutrient-rich environments, Tan said. However, by optimizing the artificial cells' membranes, cytosol and genetic circuits, the team made them work in a wide variety of environments with very limited resources such as water, emphasizing their robustness in less-than-ideal or changing conditions. These improvements significantly broaden the overall potential application of artificial cells.

Antibacterial artificial cells might one day be infused into patients to tackle infections resistant to other treatments. They might also be used to deliver drugs at a specific location and time, or as biosensors.

Read more at Science Daily

Sep 3, 2018

Now we can see brain cells 'talk' and that will shed light on neurological diseases

The new fluorescence sensor lights up neurons when they talk. The sensor was developed by researchers at the University of Virginia School of Medicine and their colleagues in China.
Scientists have developed a way to see brain cells talk -- to actually see neurons communicate in bright, vivid color. The new lab technique is set to provide long-needed answers about the brain and neurological diseases such as Alzheimer's disease, schizophrenia and depression. Those answers will facilitate new and vastly improved treatments for conditions that have largely resisted scientists' efforts to understand them.

"Before we didn't have any way to understand how [such neurotransmissions] work," said researcher J. Julius Zhu, PhD, of the University of Virginia School of Medicine. "In the case of Alzheimer's, in particular, we spent billions of dollars and we have almost no effective treatment. ... Now, for the first time, we can see what is happening."

Understanding Neurological Diseases

To demonstrate the technique's effectiveness, Zhu's team in Charlottesville and colleagues in China have used it to visualize a poorly understood neurotransmitter called acetylcholine. "Acetylcholine has an important role in how we behave because it affects our memory and mood," Zhu explained. "It affects Alzheimer's, schizophrenia, emotions, depression, all kinds of emotion-related diseases and mental problems." (Acetylcholine also plays critical roles elsewhere in the body, such as regulating insulin secretion in the pancreas and controlling stress and blood pressure.)

Drugs designed to combat Alzheimer's disease actually inhibit acetylcholinesterase, an enzyme that degrades acetylcholine, to boost the effect of the diminished amount of acetylcholine released in the brain, Zhu said. But doctors haven't fully understood how the drugs work, and there has been no way to determine just how much inhibition is needed. "These drugs are not very effective," he said. "They only offer a minor improvement, and once you stop the drug [the symptoms] just seem much worse. So probably in trying to treat these patients, you temporarily enhance them but you actually make them even worse."

By being able to see acetylcholine and other neurotransmitters in action in fluorescent color, doctors will be able to establish a baseline for good health and then work to restore that in patients with neurological diseases.

Read more at Science Daily

Tracking marine migrations across geopolitical boundaries aids conservation

Leatherback sea turtles, a critically endangered species, may visit over 30 countries during their migrations.
The leatherback sea turtle is the largest living turtle and a critically endangered species. Saving leatherback turtles from extinction in the Pacific Ocean will require a lot of international cooperation, however, because the massive turtles may visit more than 30 different countries during their migrations.

A new study uses tracking data for 14 species of migratory marine predators, from leatherback turtles to blue whales and white sharks, to show how their movements relate to the geopolitical boundaries of the Pacific Ocean. The results provide critical information for designing international cooperative agreements needed to manage these species.

"If a species spends most of its time in the jurisdiction of one or two countries, conservation and management is a much easier issue than it is for species that migrate through many different countries," said Daniel Costa, professor of ecology and evolutionary biology at UC Santa Cruz and a coauthor of the study, published September 3 in Nature Ecology & Evolution.

"For these highly migratory species, we wanted to know how many jurisdictional regions they go through and how much time they spend in the open ocean beyond the jurisdiction of any one country," Costa said.

Under international law, every coastal nation can establish an exclusive economic zone (EEZ) extending up to 200 nautical miles from shore, giving it exclusive rights to exploit resources and regulate fisheries within that zone. The high seas beyond the EEZs are a global commons and are among the least protected areas on Earth. Discussions have been under way at the United Nations since 2016 to negotiate a global treaty for conservation and management of the high seas.

First author Autumn-Lynn Harrison, now at the Smithsonian Conservation Biology Institute in Washington, D.C., began the study as a graduate student in Costa's lab at UC Santa Cruz. Costa is a cofounder, with coauthor Barbara Block of Stanford University, of the Tagging of Pacific Predators (TOPP) program, which began tracking the movements of top ocean predators throughout the Pacific Ocean in 2000. Harrison wanted to use the TOPP data to address conservation issues, and as she looked at the data she began wondering how many countries the animals migrate through.

"I wanted to see if we could predict when during the year a species would be in the waters of a particular country," Harrison said. "Some of these animals are mostly hidden beneath the sea, so being able to show with tracking data which countries they are in can help us understand who should be cooperating to manage these species."

Harrison also began attending meetings on issues related to the high seas, which focused her attention on the time migratory species spend in these relatively unregulated waters. "Figuring out how much time these animals spend in the high seas was directly motivated by questions I was being asked by policy makers who are interested in high seas conservation," she said.

The TOPP data set, part of the global Census of Marine Life, is one of the most extensive data sets available on the movements of large marine animals. Many of the top predators in the oceans are declining or threatened, partly because their mobility exposes them to a wide array of threats in different parts of the ocean.

Leatherback turtle populations in the Pacific could face a 96 percent decline by 2040, according to the IUCN Red List of Threatened Species, and leatherbacks are a priority species for the National Oceanic and Atmospheric Administration (NOAA). Laysan and black-footed albatrosses, both listed as near threatened on the IUCN Red List, spend most of their time on the high seas, where they are vulnerable to being inadvertently caught on long lines during commercial fishing operations.

White sharks are protected in U.S. and Mexican waters, but the TOPP data show that they spend about 60 percent of their time in the high seas. Pacific bluefin tuna, leatherback turtles, Laysan albatross, and sooty shearwaters all travel across the Pacific Ocean during their migrations.

"Bluefin tuna breed in the western North Pacific, then cross the Pacific Ocean to feed in the California Current off the United States and Mexico," Costa said. "Sooty shearwaters not only cross the open ocean, they use the entire Pacific Ocean from north to south and go through the jurisdictions of more than 30 different countries."

International cooperation has led to agreements for managing some of these migratory species, in some cases through regional fisheries management organizations. The Inter-American Tropical Tuna Commission (IATTC), for example, oversees conservation and management of tunas and other marine resources in the eastern Pacific Ocean.

The first session of a U.N. Intergovernmental Conference to negotiate an international agreement on the conservation of marine biological diversity beyond areas of national jurisdiction will be held in September. Harrison said she has already been asked to provide preprints and figures from the paper for this session.

Read more at Science Daily

Mud from the deep sea reveals clues about ancient monsoon

Monsoon.
Analyzing traces of leaf waxes from land plants that over millennia accumulated in deep sea sediments, a team of researchers led by the University of Arizona reconstructed the history of monsoon activity in northern Mexico. Their results, published online on Sept. 3 in the journal Nature Geoscience, help settle a long-standing debate over whether monsoon activity shut down completely under the influence of cooling brought about by the ice sheets that covered much of North America, or was merely suppressed.

During the Last Glacial Maximum, about 20,000 years ago, when mammoths and other prehistoric beasts roamed what is now northern Mexico and the southwestern United States, summer rains contributed about 35 percent of the annual rainfall, compared with about 70 percent today, according to the new study.

By diverting moisture from the tropics, the summer monsoon brings relief from months-long intense summer heat and drought to the arid lands of the American Southwest and northwestern Mexico. If the region depended on winter rains alone, the Sonoran Desert would not be known as one of the world's most biodiverse deserts.

"The monsoon is such an iconic feature of the desert Southwest, but we know very little about how it has changed over thousands and millions of years," says Tripti Bhattacharya, the study's first author. "Our finding that the Southwestern monsoon was suppressed, but not completely gone under glacial conditions, points to the dramatic variability of the atmospheric circulation at the time, but suggests it has been a persistent feature of our regional climate."

Previous studies had yielded inconclusive results, in part because the records used to infer past monsoon rainfall tend to be snapshots in time rather than continuous climate records. For example, researchers have gained valuable glimpses into long-vanished plant communities based on plant parts preserved in packrat nests called middens, or by analyzing the chemical signatures those plants left behind in soils. Those studies suggested persistent monsoon activity during the last ice age, whereas other studies based on climate modeling indicated it was temporarily absent.

By applying a clever method never before used to study the history of the monsoon, Bhattacharya and her co-authors discovered the equivalent of a forgotten, unopened book of past climate records, as opposed to previously studied climate archives, which in comparison are more like single, scattered pages.

Forming a vast natural vault almost 1,000 meters below the sea surface, the seafloor of oxygen-poor zones in the Gulf of California contains organic material blown into the water over many thousands of years, including debris from land plants growing in the region. Since the deposits remain largely undisturbed by scavengers or microbial activity, the researchers were able to isolate leaf wax compounds from the seafloor mud.

Co-author Jessica Tierney, an associate professor in the UA's Department of Geosciences and Bhattacharya's former postdoctoral adviser, has pioneered the analysis of the waxy coatings of plant leaves to reconstruct rainfall or dry spells in the past based on their chemical fingerprint, specifically different ratios of hydrogen isotopes. The water in monsoon rain, according to Tierney, contains a larger proportion of a hydrogen isotope known as deuterium, or "heavy water," a signature of its origin in the tropics. Winter rains, on the other hand, carry a different signature because they contain water with a smaller ratio of deuterium versus "regular" hydrogen.

"Plants take up whichever water they get, and because the two seasons have different ratios of hydrogen isotopes, we can relate the isotope ratios in the preserved leaf waxes to the amount of monsoon rain across the Gulf of California region," Tierney explains.
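The logic Tierney describes can be reduced to a simple two-endmember mixing calculation: if monsoon and winter rain each carry a characteristic deuterium signature, the isotope ratio locked into the leaf waxes pins down the monsoon's share of annual rainfall. The sketch below uses illustrative delta-D endmember values, not numbers from the study.

```python
# Two-endmember isotope mixing: infer the monsoon's share of annual
# rainfall from a measured leaf-wax deuterium signature.
# NOTE: these delta-D endmembers are illustrative placeholders,
# not values reported in the paper.
dD_monsoon = -40.0   # summer monsoon rain: richer in deuterium
dD_winter = -90.0    # winter rain: depleted in deuterium

def monsoon_fraction(dD_measured: float) -> float:
    """Monsoon fraction of rainfall, assuming linear mixing of the two sources."""
    return (dD_measured - dD_winter) / (dD_monsoon - dD_winter)

# A wax signature near the monsoon endmember implies monsoon-dominated
# rainfall (like today's ~70 percent); one closer to the winter
# endmember implies a suppressed monsoon (like the glacial ~35 percent).
print(monsoon_fraction(-55.0))   # -> 0.7
print(monsoon_fraction(-72.5))   # -> 0.35
```

Real reconstructions must also correct for temperature, evaporation and plant-physiology effects on the wax signature, so this linear mixing is only the conceptual core of the method.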

Piecing together past patterns of the monsoon in the Southwest can help scientists better predict future scenarios under the influence of a climate that's trending toward a warmer world, not another ice age, the researchers say.

"The past is not a perfect analog, but it acts as a natural experiment that helps us test how well we understand the variability of regional climate," says Bhattacharya, who recently accepted a position as assistant professor of earth sciences at Syracuse University. "If we understand how regional climates responded in the past, it gives us a much better shot at predicting how they will respond to climate change in the future."

One way scientists can take advantage of past climate records is by applying climate models to them, using the records to "ground-truth" the models.

"The problem is that right now, our best climate models don't agree with regard to how the monsoon will change in response to global warming," Tierney says. "Some suggest the summer precipitation will become stronger, others say it'll get weaker. By better understanding the mechanics of the phenomenon, our results can help us figure out why the models disagree and provide constraints that can translate into the future."

To test the hypothesis of whether colder times generally weaken the monsoon and warmer periods strengthen it, Tierney's group is planning to investigate how the monsoon responded to warmer periods in the past. Future research will focus on the last interglacial period about 120,000 years ago, and a period marked by greenhouse gas levels similar to those in today's atmosphere: the Pliocene Epoch, which lasted from 5.3 to 2.5 million years ago.

Having better records of the Southwestern monsoon also helps scientists better understand how it compares to monsoons in other parts of the world that are better studied.

Read more at Science Daily

8,000 new antibiotic combinations are surprisingly effective

"We shouldn't limit ourselves to just single drugs or two-drug combinations in our medical toolbox," said Pamela Yeh (left), with Elif Tekin.
Scientists have traditionally believed that combining more than two drugs to fight harmful bacteria would yield diminishing returns. The prevailing theory held that the incremental benefits of combining three or more drugs would be too small to matter, or that the interactions among the drugs would cause their benefits to cancel one another out.

Now, a team of UCLA biologists has discovered thousands of four- and five-drug combinations of antibiotics that are more effective at killing harmful bacteria than the prevailing views suggested. Their findings, reported today in the journal npj Systems Biology and Applications, could be a major step toward protecting public health at a time when pathogens and common infections are increasingly becoming resistant to antibiotics.

"There is a tradition of using just one drug, maybe two," said Pamela Yeh, one of the study's senior authors and a UCLA assistant professor of ecology and evolutionary biology. "We're offering an alternative that looks very promising. We shouldn't limit ourselves to just single drugs or two-drug combinations in our medical toolbox. We expect several of these combinations, or more, will work much better than existing antibiotics."

Working with eight antibiotics, the researchers analyzed how every possible four- and five-drug combination, including many with varying dosages -- a total of 18,278 combinations in all -- worked against E. coli. They expected that some of the combinations would be very effective at killing the bacteria, but they were startled by how many potent combinations they discovered.
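The scale of that screen follows from simple combinatorics: eight drugs yield only 70 four-drug and 56 five-drug sets, so the 18,278 total comes from also varying dosages within each set. The sketch below uses placeholder drug names and an assumed uniform number of dose levels; the paper's actual dosing scheme is more involved.

```python
from itertools import combinations
from math import comb

# Placeholder names standing in for the eight antibiotics in the study.
drugs = [f"drug_{i}" for i in range(1, 9)]

four_drug = list(combinations(drugs, 4))
five_drug = list(combinations(drugs, 5))
print(len(four_drug), len(five_drug))  # -> 70 56

# The 18,278 combinations arise because each drug set is tested at
# multiple dosages. Assuming d dose levels per drug (a simplification),
# a k-drug set expands into d**k dosage combinations:
def dosed_combos(n_drugs: int, k: int, dose_levels: int) -> int:
    """Count k-drug sets from n_drugs, each tested at dose_levels doses per drug."""
    return comb(n_drugs, k) * dose_levels ** k

print(dosed_combos(8, 4, 3))  # -> 70 * 81 = 5670
```

The exponential growth in d**k is exactly why screening four- and five-drug combinations by hand was long considered impractical, and why robotics (see the Molecular Screening Shared Resource credited below) made this study feasible.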

For every combination they tested, the researchers first predicted how effective they thought it would be in stopping the growth of E. coli. Among the four-drug combinations, there were 1,676 groupings that performed better than they expected. Among the five-drug combinations, 6,443 groupings were more effective than expected.

"I was blown away by how many effective combinations there are as we increased the number of drugs," said Van Savage, the study's other senior author and a UCLA professor of ecology and evolutionary biology and of biomathematics. "People may think they know how drug combinations will interact, but they really don't."

On the other hand, 2,331 four-drug combinations and 5,199 five-drug combinations were less effective than the researchers expected they would be, said Elif Tekin, the study's lead author, who was a UCLA postdoctoral scholar during the research.

Some of the four- and five-drug combinations were effective at least partly because the individual medications have different mechanisms for targeting E. coli; the eight tested by the UCLA researchers work in six distinct ways.

"Some drugs attack the cell walls, others attack the DNA inside," Savage said. "It's like attacking a castle or fortress. Combining different methods of attacking may be more effective than just a single approach."

Said Yeh: "A whole can be much more, or much less, than the sum of its parts, as we often see with a baseball or basketball team." (As an example, she cited the decisive upset victory in the 2004 NBA championship of the Detroit Pistons -- a cohesive team with no superstars -- over a Los Angeles Lakers team with future Hall of Famers Kobe Bryant, Shaquille O'Neal, Karl Malone and Gary Payton.)

Yeh added that although the results are very promising, the drug combinations have been tested in only a laboratory setting and likely are at least years away from being evaluated as possible treatments for people.

"With the specter of antibiotic resistance threatening to turn back health care to the pre-antibiotic era, the ability to more judiciously use combinations of existing antibiotics that singly are losing potency is welcome," said Michael Kurilla, director of the Division of Clinical Innovation at the National Institutes of Health/National Center for Advancing Translational Science. "This work will accelerate the testing in humans of promising antibiotic combinations for bacterial infections that we are ill-equipped to deal with today."

The researchers are creating open-access software based on their work that they plan to make available to other scientists next year. The software will enable other researchers to analyze the different combinations of antibiotics studied by the UCLA biologists, and to input data from their own tests of drug combinations.

Using a MAGIC framework

One component of the software is a mathematical formula for analyzing how multiple factors interact, which the UCLA scientists developed as part of their research. They call the framework "mathematical analysis for general interactions of components," or MAGIC.

"We think MAGIC is a generalizable tool that can be applied to other diseases -- including cancers -- and in many other areas with three or more interacting components, to better understand how a complex system works," Tekin said.

Savage said he plans to use concepts from that framework in his ongoing research on how temperature, rain, light and other factors affect the Amazon rainforests.

He, Yeh and Mirta Galesic, a professor of human social dynamics at the Santa Fe Institute, also are using MAGIC in a study of how people's formation of ideas is influenced by their parents, friends, schools, media and other institutions -- and how those factors interact.

"It fits in perfectly with our interest in interacting components," Yeh said.

Other co-authors of the new study are Cynthia White, a UCLA graduate who was a research technician while working on the project; Tina Kang, a UCLA doctoral student; Nina Singh, a student at the University of Southern California; Mauricio Cruz-Loya, a UCLA doctoral student; and Robert Damoiseaux, professor of molecular and medical pharmacology, and director of UCLA's Molecular Screening Shared Resource, a facility with advanced robotics technology where Tekin, White, and Kang conducted much of the research.

Read more at Science Daily