May 11, 2019

Gravitational waves leave a detectable mark, physicists say

Gravitational waves, first directly detected in 2015, offer a new window on the universe, with the potential to tell us about everything from the time following the Big Bang to more recent events in galaxy centers.

And while the billion-dollar Laser Interferometer Gravitational-Wave Observatory (LIGO) detector watches 24/7 for gravitational waves to pass through the Earth, new research shows those waves leave behind plenty of "memories" that could help detect them even after they've passed.

"That gravitational waves can leave permanent changes to a detector after the gravitational waves have passed is one of the rather unusual predictions of general relativity," said doctoral candidate Alexander Grant, lead author of "Persistent Gravitational Wave Observables: General Framework," published April 26 in Physical Review D.

Physicists have long known that gravitational waves leave a memory on the particles along their path, and have identified five such memories. Researchers have now found three more aftereffects of the passing of a gravitational wave, "persistent gravitational wave observables" that could someday help identify waves passing through the universe.

Each new observable, Grant said, provides different ways of confirming the theory of general relativity and offers insight into the intrinsic properties of gravitational waves.

Those properties, the researchers said, could help extract information from the Cosmic Microwave Background -- the radiation left over from the Big Bang.

"We didn't anticipate the richness and diversity of what could be observed," said Éanna Flanagan, the Edward L. Nichols Professor and chair of physics and professor of astronomy.

"What was surprising for me about this research is how different ideas were sometimes unexpectedly related," said Grant. "We considered a large variety of different observables, and found that often to know about one, you needed to have an understanding of the other."

The researchers identified three observables that show the effects of gravitational waves in a flat region of spacetime that experiences a burst of gravitational waves, after which it returns to being flat. The first observable, "curve deviation," is how much two accelerating observers separate from one another, compared with how observers with the same accelerations would separate in a flat space undisturbed by a gravitational wave.

The second observable, "holonomy," is obtained by transporting information about the linear and angular momentum of a particle along two different curves through the gravitational waves, and comparing the two different results.

The third looks at how gravitational waves affect the relative displacement of two particles when one of the particles has an intrinsic spin.

Each of these observables is defined by the researchers in a way that could be measured by a detector. The detection procedures for curve deviation and the spinning particles are "relatively straightforward to perform," wrote the researchers, requiring only "a means of measuring separation and for the observers to keep track of their respective accelerations."

Detecting the holonomy observable would be more difficult, they wrote, "requiring two observers to measure the local curvature of spacetime (potentially by carrying around small gravitational wave detectors themselves)." Given the size needed for LIGO to detect even one gravitational wave, the ability to detect holonomy observables is beyond the reach of current science, researchers say.

"But we've seen a lot of exciting things already with gravitational waves, and we will see a lot more. There are even plans to put a gravitational wave detector in space that would be sensitive to different sources than LIGO," Flanagan said.

Read more at Science Daily

Methane-consuming bacteria could be the future of fuel

Methane molecules illustration.

Known for their ability to remove methane from the environment and convert it into a usable fuel, methanotrophic bacteria have long fascinated researchers. But how, exactly, these bacteria naturally perform such a complex reaction has been a mystery.

Now an interdisciplinary team at Northwestern University has found that the enzyme responsible for the methane-methanol conversion catalyzes this reaction at a site that contains just one copper ion.

This finding could lead to newly designed, human-made catalysts that can convert methane -- a highly potent greenhouse gas -- to readily usable methanol with the same effortless mechanism.

"The identity and structure of the metal ions responsible for catalysis have remained elusive for decades," said Northwestern's Amy C. Rosenzweig, co-senior author of the study. "Our study provides a major leap forward in understanding how bacteria methane-to-methanol conversion."

"By identifying the type of copper center involved, we have laid the foundation for determining how nature carries out one of its most challenging reactions," said Brian M. Hoffman, co-senior author.

The study will publish on Friday, May 10 in the journal Science. Rosenzweig is the Weinberg Family Distinguished Professor of Life Sciences in Northwestern's Weinberg College of Arts and Sciences. Hoffman is the Charles E. and Emma H. Morrison Professor of Chemistry at Weinberg.

By oxidizing methane and converting it to methanol, methanotrophic bacteria (or "methanotrophs") can pack a one-two punch. Not only are they removing a harmful greenhouse gas from the environment, they are also generating a readily usable, sustainable fuel for automobiles, electricity and more.

Current industrial processes to catalyze a methane-to-methanol reaction require tremendous pressure and extreme temperatures, reaching higher than 1,300 degrees Celsius. Methanotrophs, however, perform the reaction at room temperature and "for free."

Read more at Science Daily

May 10, 2019

Explosions of universe's first stars spewed powerful jets

Supernova concept.

Several hundred million years after the Big Bang, the very first stars flared into the universe as massively bright accumulations of hydrogen and helium gas. Within the cores of these first stars, extreme thermonuclear reactions forged the first heavier elements, including carbon, iron, and zinc.

These first stars were likely immense, short-lived fireballs, and scientists have assumed that they exploded as similarly spherical supernovae.

But now astronomers at MIT and elsewhere have found that these first stars may have blown apart in a more powerful, asymmetric fashion, spewing forth jets that were violent enough to eject heavy elements into neighboring galaxies. These elements ultimately served as seeds for the second generation of stars, some of which can still be observed today.

In a paper published today in the Astrophysical Journal, the researchers report a strong abundance of zinc in HE 1327-2326, an ancient, surviving star that is among the universe's second generation of stars. They believe the star could only have acquired such a large amount of zinc after an asymmetric explosion of one of the very first stars had enriched its birth gas cloud.

"When a star explodes, some proportion of that star gets sucked into a black hole like a vacuum cleaner," says Anna Frebel, an associate professor of physics at MIT and a member of MIT's Kavli Institute for Astrophysics and Space Research. "Only when you have some kind of mechanism, like a jet that can yank out material, can you observe that material later in a next-generation star. And we believe that's exactly what could have happened here."

"This is the first observational evidence that such an asymmetric supernova took place in the early universe," adds MIT postdoc Rana Ezzeddine, the study's lead author. "This changes our understanding of how the first stars exploded."

"A sprinkle of elements"

HE 1327-2326 was discovered by Frebel in 2005. At the time, the star was the most metal-poor star ever observed, meaning that it had extremely low concentrations of elements heavier than hydrogen and helium -- an indication that it formed as part of the second generation of stars, at a time when most of the universe's heavy element content had yet to be forged.

"The first stars were so massive that they had to explode almost immediately," Frebel says. "The smaller stars that formed as the second generation are still available today, and they preserve the early material left behind by these first stars. Our star has just a sprinkle of elements heavier than hydrogen and helium, so we know it must have formed as part of the second generation of stars."

In May of 2016, the team was able to observe the star, which lies relatively close to Earth, just 5,000 light years away. The researchers were awarded two weeks of time on NASA's Hubble Space Telescope and recorded the starlight over multiple orbits. They used an instrument aboard the telescope, the Cosmic Origins Spectrograph, to measure the minute abundances of various elements within the star.

The spectrograph is designed with high precision to pick up faint ultraviolet light. Some of those wavelengths are absorbed by certain elements, such as zinc. The researchers made a list of heavy elements that they suspected might be present in such an ancient star and that they planned to look for in the UV data, including silicon, iron, phosphorus, and zinc.

"I remember getting the data, and seeing this zinc line pop out, and we couldn't believe it, so we redid the analysis again and again," Ezzeddine recalls. "We found that, no matter how we measured it, we got this really strong abundance of zinc."

A star channel opens

Frebel and Ezzeddine then contacted their collaborators in Japan, who specialize in developing simulations of supernovae and the secondary stars that form in their aftermath. The researchers ran over 10,000 simulations of supernovae, each with different explosion energies, configurations, and other parameters. They found that while most of the spherical supernova simulations were able to produce a secondary star with the elemental compositions the researchers observed in HE 1327-2326, none of them reproduced the zinc signal.

As it turns out, the only simulation that could explain the star's makeup, including its high abundance of zinc, was one of an aspherical, jet-ejecting supernova of a first star. Such a supernova would have been extremely explosive, with a power equivalent to about a nonillion times (that's a 1 with 30 zeroes after it) that of a hydrogen bomb.

"We found this first supernova was much more energetic than people have thought before, about five to 10 times more," Ezzeddine says. "In fact, the previous idea of the existence of a dimmer supernova to explain the second-generation stars may soon need to be retired."

The team's results may shift scientists' understanding of reionization, a pivotal period during which the gas in the universe morphed from being completely neutral, to ionized -- a state that made it possible for galaxies to take shape.

"People thought from early observations that the first stars were not so bright or energetic, and so when they exploded, they wouldn't participate much in reionizing the universe," Frebel says. "We're in some sense rectifying this picture and showing, maybe the first stars had enough oomph when they exploded, and maybe now they are strong contenders for contributing to reionization, and for wreaking havoc in their own little dwarf galaxies."

These first supernovae could have also been powerful enough to shoot heavy elements into neighboring "virgin galaxies" that had yet to form any stars of their own.

Read more at Science Daily

New type of highly sensitive vision discovered in deep-sea fish

Light shining into dark ocean.

The deep sea is home to fish species that can detect various wavelengths of light in near-total darkness. Unlike other vertebrates, they have several genes for the light-sensitive photopigment rhodopsin, which likely enables these fish to detect bioluminescent signals from light-emitting organs. The findings were published in the journal Science by an international team of researchers led by evolutionary biologists from the University of Basel.

Color vision in vertebrates is usually achieved through the interaction of various photopigments in the cone cells found in the retina. Each of these photopigments reacts to a certain wavelength of light. In humans, for example, these wavelengths are the red, green and blue range of the light spectrum. Color vision is only possible in daylight, however. In darkness, vertebrates detect the few available light particles with their light-sensitive rod cells, which contain only a single type of the photopigment rhodopsin -- explaining why nearly all vertebrates are color-blind at night.

A genetic record for the silver spinyfin

An international team of researchers led by Professor Walter Salzburger from the University of Basel recently analyzed more than 100 fish genomes, including those of fish living in deep-sea habitats. The zoologists discovered that certain deep-sea fish have expanded their repertoire of rhodopsin genes. In the case of the silver spinyfin (Diretmus argenteus), they found no fewer than 38 copies of the rhodopsin gene, in addition to two other opsins of a different type. "This makes the darkness-dwelling silver spinyfin the vertebrate with the most photopigment genes by far," explains Salzburger.

The deep-sea fish's many different rhodopsin gene copies have each adapted to detect a certain wavelength of light, the researchers further reported. They demonstrated this through computer simulations and functional experiments on rhodopsin proteins regenerated in the lab. The genes cover exactly the wavelength range of light "produced" by light-emitting organs of deep-sea organisms. This is known as bioluminescence, which is the ability of an organism to produce light on its own or with the help of other organisms. For example, anglerfish attract prey with their bioluminescent organs.

Detecting signals in the dark

The deep sea is the largest habitat on Earth and yet one of the least explored due to its inaccessibility. Many organisms have adapted to life in the near-total darkness of this inhospitable environment. For example, many fish have developed highly sensitive telescope eyes that allow them to detect the tiny amount of residual light that makes it to the depths of the ocean.

In vertebrates, 27 key spectral tuning sites have been identified in the protein for rhodopsin. These sites directly affect which wavelengths are detected. The researchers discovered that in the various gene copies of the deep-sea silver spinyfin, 24 of these positions exhibited mutations.

"It appears that deep-sea fish have developed this multiple rhodopsin-based vision several times independently of each other, and that this is specifically used to detect bioluminescent signals," says Salzburger. He adds that this may give deep-sea fish an evolutionary advantage by allowing them to much better see potential prey or predators.

Read more at Science Daily

Climate change is giving old trees a growth spurt

Tree rings.

Larch trees in the permafrost forests of northeastern China -- the northernmost tree species on Earth -- are growing faster as a result of climate change, according to new research.

A new study of growth rings from Dahurian larch in China's northern forests finds the hardy trees grew more from 2005 to 2014 than in the preceding 40 years. The findings also show the oldest trees have had the biggest growth spurts: Trees older than 400 years grew more rapidly in those 10 years than in the past 300 years, according to the new study.

The study's authors suspect warmer soil temperatures are fueling the growth spurts by lowering the depth of the permafrost layer, allowing the trees' roots to expand and suck up more nutrients.

The increased growth is good for the trees in the short-term but may be disastrous for the forests in the long-term, according to the authors. As the climate continues to warm, the permafrost underneath the trees may eventually degrade and no longer be able to support the slow-growing trees.

No other tree species can survive the permafrost plains this far north, so if the larch forests of northern Asia disappear, the entire ecosystem would change, according to the study's authors.

"The disappearance of larch would be a disaster to the forest ecosystem in this region," said Xianliang Zhang, an ecologist at Shenyang Agricultural University in Shenhang, China, and lead author of the new study in AGU's Journal of Geophysical Research: Biogeosciences.

Earth's hardiest trees


Dahurian larch is Earth's northernmost tree species and its most cold-hardy: These larches are the only trees that can tolerate the frigid permafrost plains of Russia, Mongolia and northern China. Chinese locals refer to Dahurian larch as "thin-old-trees," because they grow slowly in the thin active layer of soil above the permafrost and can live for more than 400 years.

Permafrost regions around the world have been thawing in recent decades due to rising temperatures, sometimes degrading into swamps and wetlands. In the new study, Zhang and his colleagues analyzed growth rings from more than 400 Dahurian larch in old-growth forests of northeastern China, the southernmost portion of the tree's range, to see how the trees are faring in a warming climate.

Tree rings allow scientists to measure how much trees grow from year to year. Much like people, trees do most of their growing while young. Dahurian larch generally grow rapidly until they become around 150 years old, at which point their growth slows. When the trees hit 300 years old, their growth basically stalls.

The researchers used the width of each tree's growth rings to calculate how much area each tree gained in cross-section each year over the course of its lifetime.
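That ring-width-to-area conversion is simple geometry: each year the tree adds an annulus whose area is pi times the difference of the squared outer and inner radii. A minimal sketch of the calculation (with made-up ring widths, not the study's data):

```python
import math

def annual_area_gains(ring_widths_cm):
    """Convert successive ring widths (innermost first) to yearly
    cross-sectional area gains, i.e. the annulus added each year."""
    gains = []
    radius = 0.0
    for width in ring_widths_cm:
        outer = radius + width
        # Area of the ring added this year: pi * (R_outer^2 - R_inner^2)
        gains.append(math.pi * (outer**2 - radius**2))
        radius = outer
    return gains

# Equal ring widths still yield growing area gains: a wide old trunk
# adds far more wood area per ring than a narrow young one.
gains = annual_area_gains([0.5, 0.5, 0.5])
```

This is why area gain, rather than raw ring width, is the fairer measure of growth when comparing old, thick trees with young, thin ones.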

The results show Dahurian larch trees grew more from 2005 to 2014 than from 1964 to 2004. Interestingly, the effect was most pronounced in the oldest trees: Trees older than 300 years grew 80 percent more from 2005 to 2014 than in the preceding 40 years. Trees between 250 and 300 years old grew 35 percent more during that time period, while trees younger than 250 years grew between 11 and 13 percent more.

The old trees' growth is unusual -- it's akin to a 100-year-old person suddenly getting taller, according to Zhang. The authors suspect older trees are growing more than younger trees because they have more developed root systems that can harvest resources from the soil more efficiently.

The researchers compared the trees' growth rates to climate factors like soil temperatures and precipitation data over the past 50 years to see what was causing the unusual growth. They found increased soil temperatures, especially in winter, are likely powering the growth spurts. They suspect the warmer temperatures lower the depth of the permafrost layer, providing the trees' roots more room to expand and access to more nutrients.

While this initial soil warming has benefitted Dahurian larch, further permafrost thaw could likely decrease tree growth and even cause the forest to decay, according to the authors. Dahurian larch can't survive in wet conditions, so permafrost changing to wetlands or peatlands would be detrimental to the forest as a whole, they said.

"If the larch forest retreat in this region in the future, it is also not a good sign for the whole boreal forest," Zhang said.

While other research has examined the effects of a warming climate on temperature-sensitive trees in North America, the new study examined temperature-sensitive trees in permafrost areas, which have been less widely studied but are a vast component of the boreal forest, said Erika Wise, an associate professor of geography at the University of North Carolina -- Chapel Hill, who was not involved in the new study. Additionally, previous studies on these larch trees have focused on the effects of air temperature and precipitation, but the new study looked at the influence of ground surface temperatures, which has also not been studied widely, she added.

Read more at Science Daily

Gravitational forces in protoplanetary disks may push super-Earths close to their stars

Protoplanetary disc formation.

The galaxy is littered with planetary systems vastly different from ours. In the solar system, the planet closest to the Sun -- Mercury, with an orbit of 88 days -- is also the smallest. But NASA's Kepler spacecraft has discovered thousands of systems full of very large planets -- called super-Earths -- in very small orbits that zip around their host star several times every 10 days.

Now, researchers may have a better understanding of how such planets formed.

A team of Penn State-led astronomers found that as planets form out of the chaotic churn of gravitational, hydrodynamic (drag) and magnetic forces and collisions within the dusty, gaseous protoplanetary disk that surrounds a young star, their orbits eventually get in sync, causing them to slide, follow-the-leader-style, toward the star. The team's computer simulations produce planetary systems with properties that match those of actual planetary systems observed by the Kepler space telescope. Both simulations and observations show large, rocky super-Earths orbiting very close to their host stars, according to Daniel Carrera, assistant research professor of astronomy at Penn State's Eberly College of Science.

He said the simulation is a step toward understanding why super-Earths gather so close to their host stars. The simulations may also shed light on why super-Earths are often located so close to their host star where there doesn't seem to be enough solid material in the protoplanetary disk to form a planet, let alone a big planet, according to the researchers, who report their findings in the Monthly Notices of the Royal Astronomical Society.

"When stars are very young, they are surrounded by a disc that is mostly gas with some dust -- and that dust grows into the planets, like the Earth and these super-Earths," said Carrera. "But the particular puzzle for us is that this disc doesn't go the all way to the star -- there's a cavity there. And yet we see these planets closer to the star than the edge of that disc."

The astronomers' computer simulation shows that, over time, the planets' and disk's gravitational forces lock the planets into synchronized orbits (resonance) with each other. The planets then begin to migrate in unison, with some moving closer to the edge of the disk. The combination of the gas disk acting on the outer planets and the gravitational interactions among the outer and inner planets can continue to push the inner planets very close to the star, even interior to the edge of the disk.

"With the first discoveries of Jupiter-size exoplanets orbiting close to their host star, astronomers were inspired to develop multiple models for how such planets could form, including chaotic interactions in multiple planet systems, tidal effects and migration through the gas disk," said Eric Ford, professor of astronomy and astrophysics, director of Penn State's Center for Exoplanets and Habitable Worlds and Institute for CyberScience (ICS) faculty co-hire. "However, these models did not predict the more recent discoveries of super-Earth-size planets orbiting so close to their host star. Some astronomers had suggested that such planets must have formed very near their current locations. Our work is important because it demonstrates how short-period super-Earth-size planets could have formed and migrated to their current locations thanks to the complex interactions of multiple planet systems."

Carrera said more work remains to confirm that the theory is correct.

"We've shown that it's possible for planets to get that close to a star in this simulation, but it doesn't mean that it's the only way that the universe chose to make them," said Carrera. "Someone might come up with a different idea of a way to get the planets that close to a star. And, so, the next step is to test the idea, revise it, make predictions that you can test against observations."

Future research may also explore why our super-Earthless solar system is different from most other solar systems, Carrera added.

"Super-Earths in very close orbits are by far the most common type of exoplanet that we observe, and yet they don't exist in our own solar system and that makes us wonder why," said Carrera.

According to the researchers, the best published estimates suggest that about 30 percent of Sun-like stars have planets closer to their host star than the Earth is to the Sun. However, they note that additional planets could go undetected, especially small planets far from their star.

Read more at Science Daily

May 9, 2019

A new filter to better map the dark universe

Stars in the night sky.

The earliest known light in our universe, known as the cosmic microwave background, was emitted about 380,000 years after the Big Bang. The patterning of this relic light holds many important clues to the development and distribution of large-scale structures such as galaxies and galaxy clusters.

Distortions in the cosmic microwave background (CMB), caused by a phenomenon known as lensing, can further illuminate the structure of the universe and can even tell us things about the mysterious, unseen universe -- including dark energy, which makes up about 68 percent of the universe and accounts for its accelerating expansion, and dark matter, which accounts for about 27 percent of the universe.

Set a stemmed wine glass on a surface, and you can see how lensing effects can simultaneously magnify, squeeze, and stretch the view of the surface beneath it. In lensing of the CMB, gravity effects from large objects like galaxies and galaxy clusters bend the CMB light in different ways. These lensing effects can be subtle (known as weak lensing) for distant and small galaxies, and computer programs can identify them because they disrupt the regular CMB patterning.

There are some known issues with the accuracy of lensing measurements, though, and particularly with temperature-based measurements of the CMB and associated lensing effects.

While lensing can be a powerful tool for studying the invisible universe, and could even potentially help us sort out the properties of ghostly subatomic particles like neutrinos, the universe is an inherently messy place.

And like bugs on a car's windshield during a long drive, the gas and dust swirling in other galaxies, among other factors, can obscure our view and lead to faulty readings of the CMB lensing.

There are some filtering tools that help researchers to limit or mask some of these effects, but these known obstructions continue to be a major problem in the many studies that rely on temperature-based measurements.

The effects of this interference with temperature-based CMB studies can lead to erroneous lensing measurements, said Emmanuel Schaan, a postdoctoral researcher and Owen Chamberlain Postdoctoral Fellow in the Physics Division at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab).

"You can be wrong and not know it," Schaan said. "The existing methods don't work perfectly -- they are really limiting."

To address this problem, Schaan teamed up with Simone Ferraro, a Divisional Fellow in Berkeley Lab's Physics Division, to develop a way to improve the clarity and accuracy of CMB lensing measurements by separately accounting for different types of lensing effects.

"Lensing can magnify or demagnify things. It also distorts them along a certain axis so they are stretched in one direction," Schaan said.

The researchers found that a certain lensing signature called shearing, which causes this stretching in one direction, seems largely immune to the foreground "noise" effects that otherwise interfere with the CMB lensing data. The lensing effect known as magnification, meanwhile, is prone to errors introduced by foreground noise. Their study, published May 8 in the journal Physical Review Letters, notes a "dramatic reduction" in this error margin when focusing solely on shearing effects.

The sources of the lensing, which are large objects that stand between us and the CMB light, are typically galaxy groups and clusters that have a roughly spherical profile in temperature maps, Ferraro noted. The latest study found that the emission of various forms of light from these "foreground" objects mimics the magnification effects in lensing but not the shear effects.

"So we said, 'Let's rely only on the shear and we'll be immune to foreground effects,'" Ferraro said. "When you have many of these galaxies that are mostly spherical, and you average them, they only contaminate the magnification part of the measurement. For shear, all of the errors are basically gone."

He added, "It reduces the noise, allowing us to get better maps. And we're more certain that these maps are correct," even when the measurements involve very distant galaxies as foreground lensing objects.

The new method could benefit a range of sky-surveying experiments, the study notes, including the POLARBEAR-2 and Simons Array experiments, which have Berkeley Lab and UC Berkeley participants; the Advanced Atacama Cosmology Telescope (AdvACT) project; and the South Pole Telescope -- 3G camera (SPT-3G). It could also aid the Simons Observatory and the proposed next-generation, multilocation CMB experiment known as CMB-S4 -- Berkeley Lab scientists are involved in the planning for both of these efforts.

The method could also enhance the science yield from future galaxy surveys like the Berkeley Lab-led Dark Energy Spectroscopic Instrument (DESI) project under construction near Tucson, Arizona, and the Large Synoptic Survey Telescope (LSST) project under construction in Chile, through joint analyses of data from these sky surveys and the CMB lensing data.

Increasingly large datasets from astrophysics experiments have led to more coordination in comparing data across experiments to provide more meaningful results. "These days, the synergies between CMB and galaxy surveys are a big deal," Ferraro said.

In this study, researchers relied on simulated full-sky CMB data. They used resources at Berkeley Lab's National Energy Research Scientific Computing Center (NERSC) to test their method on each of the four different foreground sources of noise, which include infrared, radiofrequency, thermal, and electron-interaction effects that can contaminate CMB lensing measurements.

The study notes that cosmic infrared background noise and noise from the interaction of CMB light particles (photons) with high-energy electrons have been the most problematic sources to address using standard filtering tools in CMB measurements. Some existing and future CMB experiments seek to lessen these effects by taking precise measurements of the polarization, or orientation, of the CMB light signature rather than its temperature.

"We couldn't have done this project without a computing cluster like NERSC," Schaan said. NERSC has also proved useful in serving up other universe simulations to help prepare for upcoming experiments like DESI.

Read more at Science Daily

Paper wasps capable of behavior that resembles logical reasoning

Paper wasps.

A new University of Michigan study provides the first evidence of transitive inference, the ability to use known relationships to infer unknown relationships, in a nonvertebrate animal: the lowly paper wasp.

For millennia, transitive inference was considered a hallmark of human deductive powers, a form of logical reasoning used to make inferences: If A is greater than B, and B is greater than C, then A is greater than C.
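The inference itself is mechanical once the premise relations are known. A toy sketch of that logic, computing the transitive closure of a set of "greater-than" pairs (illustrative only, not the study's analysis code):

```python
def transitive_closure(premises):
    """Given known 'greater-than' pairs, infer every relation that
    follows by transitivity: if A > B and B > C, then A > C."""
    known = set(premises)
    changed = True
    while changed:
        changed = False
        for a, b in list(known):
            for c, d in list(known):
                # Chain (a > b) and (b > d) into the new fact (a > d)
                if b == c and (a, d) not in known:
                    known.add((a, d))
                    changed = True
    return known

# Adjacent premise pairs, as in typical transitive-inference training:
premises = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E")]
inferred = transitive_closure(premises)
# The never-trained pair ("B", "D") follows purely by inference.
```

In the behavioral version of this test, animals are trained only on adjacent premise pairs and then probed with a novel, non-adjacent pair; choosing as if the inferred relation holds is the signature of transitive inference.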

But in recent decades, vertebrate animals including monkeys, birds and fish have demonstrated the ability to use transitive inference.

The only published study that assessed TI in invertebrates found that honeybees weren't up to the task. One possible explanation for that result is that the small nervous system of honeybees imposes cognitive constraints that prevent those insects from conducting transitive inference.

Paper wasps have a nervous system roughly the same size -- about one million neurons -- as honeybees, but they exhibit a type of complex social behavior not seen in honeybee colonies. University of Michigan evolutionary biologist Elizabeth Tibbetts wondered if paper wasps' social skills could enable them to succeed where honeybees had failed.

To find out, Tibbetts and her colleagues tested whether two common species of paper wasp, Polistes dominula and Polistes metricus, could solve a transitive inference problem. The team's findings are scheduled for online publication May 8 in the journal Biology Letters.

"This study adds to a growing body of evidence that the miniature nervous systems of insects do not limit sophisticated behaviors," said Tibbetts, a professor in the Department of Ecology and Evolutionary Biology.

"We're not saying that wasps used logical deduction to solve this problem, but they seem to use known relationships to make inferences about unknown relationships," Tibbetts said. "Our findings suggest that the capacity for complex behavior may be shaped by the social environment in which behaviors are beneficial, rather than being strictly limited by brain size."

To test for TI, Tibbetts and her colleagues first collected paper wasp queens from several locations around Ann Arbor, Michigan.

In the laboratory, individual wasps were trained to discriminate between pairs of colors called premise pairs. One color in each pair was associated with a mild electric shock, and the other was not.

"I was really surprised how quickly and accurately wasps learned the premise pairs," said Tibbetts, who has studied the behavior of paper wasps for 20 years.

Later, the wasps were presented with paired colors that were unfamiliar to them, and they had to choose between the colors. The wasps were able to organize information into an implicit hierarchy and used transitive inference to choose between novel pairs, Tibbetts said.
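The training-and-test logic described above can be sketched in a few lines of code. This is a toy illustration of the transitive-inference paradigm, not the study's stimuli or analysis: the colour labels, the exact premise pairs, and the ranking procedure are all assumptions made for the example.

```python
from graphlib import TopologicalSorter

# Hypothetical colour labels standing in for the trained stimuli.
# In each premise pair the first colour was safe and the second was
# paired with a mild electric shock: A+B-, B+C-, C+D-, D+E-.
premise_pairs = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E")]

# Chain the pairwise relations into one implicit linear hierarchy.
ts = TopologicalSorter()
for safe, shocked in premise_pairs:
    ts.add(shocked, safe)            # "safe" outranks "shocked"
hierarchy = list(ts.static_order())  # ['A', 'B', 'C', 'D', 'E']
rank = {colour: i for i, colour in enumerate(hierarchy)}

def choose(a, b):
    """Prefer the higher-ranked (safer) colour of a novel pair."""
    return a if rank[a] < rank[b] else b

# The critical novel test: B and D were never shown together, and during
# training each was safe once and shocked once, so only an inferred
# hierarchy -- not raw reinforcement counts -- predicts preferring B.
print(choose("B", "D"))  # -> B
```

The B-versus-D test is the standard control in this paradigm: because both stimuli carry identical reinforcement histories, a correct choice cannot be explained by simple reward tallying.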

"I thought wasps might get confused, just like bees," she said. "But they had no trouble figuring out that a particular color was safe in some situations and not safe in other situations."

So, why do wasps and honeybees -- which both possess brains smaller than a grain of rice -- perform so differently on transitive inference tests? One possibility is that different types of cognitive abilities are favored in bees and wasps because they display different social behaviors.

A honeybee colony has a single queen and multiple equally ranked female workers. In contrast, paper wasp colonies have several reproductive females known as foundresses. The foundresses compete with their rivals and form linear dominance hierarchies.

A wasp's rank in the hierarchy determines shares of reproduction, work and food. Transitive inference could allow wasps to rapidly make deductions about novel social relationships.

That same skill set may enable female paper wasps to spontaneously organize information during transitive inference tests, the researchers hypothesize.

For millennia, transitive inference was regarded as a hallmark of human cognition and was thought to be based on logical deduction. More recently, some researchers have questioned whether TI requires higher-order reasoning or can be solved with simpler rules.

The study by Tibbetts and her colleagues illustrates that paper wasps can build and manipulate an implicit hierarchy. But it makes no claims about the precise mechanisms that underlie this ability.

In previous studies, Tibbetts and her colleagues showed that paper wasps recognize individuals of their species by variations in their facial markings and that they behave more aggressively toward wasps with unfamiliar faces.

The researchers have also demonstrated that paper wasps have surprisingly long memories and base their behavior on what they remember of previous social interactions with other wasps.

Read more at Science Daily

Statistical study finds it unlikely South African fossil species is ancestral to humans

Australopithecus written on chalkboard.
Statistical analysis of fossil data shows that it is unlikely that Australopithecus sediba, a nearly two-million-year-old, apelike fossil from South Africa, is the direct ancestor of Homo, the genus to which modern-day humans belong.

The research by paleontologists from the University of Chicago, published this week in Science Advances, concludes by suggesting that Australopithecus afarensis, of the famous "Lucy" skeleton, is still the most likely ancestor to the genus Homo.

The first A. sediba fossils were unearthed near Johannesburg in 2008. Hundreds of fragments of the species have since been discovered, all dating to roughly two million years ago. The oldest known Homo fossil, the jawbone of an as yet unnamed species found in Ethiopia, is 2.8 million years old, predating A. sediba by 800,000 years.

Despite this timeline, the researchers who discovered A. sediba have claimed that it is an ancestral species to Homo. While it is possible for A. sediba (the hypothesized ancestor) to have postdated earliest Homo (the hypothesized descendant) by 800,000 years, the new analysis indicates that the probability of observing this chronological pattern in the fossil record is next to zero.

"It is definitely possible for an ancestor's fossil to postdate a descendant's by a large amount of time," said the study's lead author Andrew Du, PhD, who will join the faculty at Colorado State University after concluding his postdoctoral research in the lab of Zeray Alemseged, PhD, the Donald M. Pritzker Professor of Organismal Biology and Anatomy at UChicago.

"We thought we would take it one step further to ask how likely it is to happen, and our models show that the probability is next to zero," Du said.

Du and Alemseged also reviewed the scientific literature for other hypothesized ancestor-descendant relationships between two hominin species. Of the 28 instances they found, only one first-discovered fossil of a descendant was older than its proposed ancestor, a pair of Homo species separated by 100,000 years, far less than the 800,000 years separating A. sediba and earliest Homo. For context, the average lifespan of any hominin species is about one million years.
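To see why the probability collapses so quickly, consider a deliberately simplified back-of-the-envelope calculation. This is our illustration of the style of argument, not the authors' actual statistical model: it assumes a roughly 1-million-year species lifespan (the average quoted above) and that each recovered fossil samples a uniformly random moment in that range.

```python
# If A. sediba's true range began before the 2.8-million-year-old Homo
# jaw, then every recovered A. sediba fossil -- all roughly 2.0 million
# years old -- must have been sampled from only the youngest slice
# (here taken as ~0.2 of a 1-million-year range).
slice_fraction = 0.2

for n_fossils in (1, 5, 10, 50):
    # Chance that ALL n independent fossils land in the youngest slice.
    p = slice_fraction ** n_fossils
    print(n_fossils, p)
```

Even with a handful of fossils the probability is already tiny, and with the hundreds of known A. sediba fragments it becomes vanishingly small under this toy model, consistent with the paper's "next to zero" conclusion.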

"Again, we see that it's possible for an ancestor's fossil to postdate its descendant's," Du said. "But 800,000 years is quite a long time."

Alemseged and Du maintain that Australopithecus afarensis is a better candidate for the direct ancestor of Homo for a number of reasons. A. afarensis fossils have been dated up to three million years old, nearing the age of the first Homo jaw. Lucy and her counterparts, including Selam, the fossil of an A. afarensis child that Alemseged discovered in 2000, were found in Ethiopia, just miles from where the Homo jaw was discovered. The jaw's features also resemble those of A. afarensis closely enough that one could make the case it was a direct descendant.

Read more at Science Daily

Star formation burst in the Milky Way 2-3 billion years ago

Milky Way galaxy.
A team led by researchers of the Institute of Cosmos Sciences of the University of Barcelona (ICCUB, UB-IEEC) and the Besançon Astronomical Observatory has found, by analysing data from the Gaia satellite, that a severe star formation burst occurred in the Milky Way about 2 to 3 billion years ago. In this burst, more than 50 percent of the stars that now make up the galactic disc may have been born. The results come from combining the distances, colours and magnitudes of stars measured by Gaia with models that predict their distribution in our Galaxy. The study has been published in the journal Astronomy & Astrophysics.

Just as a flame fades when a gas cylinder runs empty, the rhythm of star formation in the Milky Way, fuelled by infalling gas, should decrease slowly and continuously as the existing gas is used up. The results of the study show that, although this was indeed the process during the first 4 billion years of the disc's formation, a severe star formation burst -- or "stellar baby boom," as the Nature Research Highlights article put it -- inverted this trend. A merger with a gas-rich satellite galaxy of the Milky Way could have added fresh fuel and reactivated star formation, much as when a gas cylinder is replaced. This mechanism would explain the distribution of distances, ages and masses estimated from the data taken by the European Space Agency's Gaia satellite.

"The time scale of this star formation burst, together with the great amount of stellar mass involved in the process -- thousands of millions of solar masses -- suggests the disc of our Galaxy did not have a steady, quiescent evolution; it may have suffered an external perturbation that began about five billion years ago," said Roger Mor, ICCUB researcher and first author of the article.

"We have been able to find this out thanks to having -- for the first time -- precise distances for more than three million stars in the solar neighbourhood," says Roger Mor. "Thanks to these data, we could discover the mechanisms that controlled the evolution of the disc of our Galaxy more than 8-10 billion years ago -- the disc that is none other than the bright band we see in the sky on a dark night free of light pollution." As in many research fields, these findings were made possible by the combination of a great amount of data of unprecedented precision and many hours of computing time at the facilities of the Centre for Scientific and Academic Services of Catalonia (CSUC), funded by the FP7 GENIUS European project (Gaia European Network for Improved data User Services).

Cosmological models predict that our galaxy has grown through mergers with other galaxies, a scenario supported by other studies using Gaia data. One of these mergers could be the cause of the severe star formation burst detected in this study.

"Actually, the peak of star formation is so clear, unlike what we predicted before having data from Gaia, that we thought it necessary to treat its interpretation together with experts on the cosmological evolution of external galaxies," notes Francesca Figueras, lecturer at the Department of Quantum Physics and Astrophysics of the UB, ICCUB member and author of the article.

According to Santi Roca-Fàbrega, an expert on simulations of Milky Way-like galaxies from the Complutense University of Madrid and also an author of the article, "the obtained results match what current cosmological models predict, and what is more, our Galaxy seen through Gaia's eyes is an excellent cosmological laboratory where we can test and confront models at larger scales in the universe."

Gaia mission until 2020

This study was conducted using the second data release of the Gaia mission, published a year ago, on April 25, 2018. Xavier Luri, director of the ICCUB and also an author of the article, states: "The role of scientists and engineers of the UB has been essential in ensuring that the scientific community enjoys the excellent quality of data from the Gaia release."

More than 400 scientists and engineers from around Europe are part of the consortium in charge of preparing and validating these data. "Their collective work has given the international scientific community a release that is making us rethink many existing scenarios on the origin and evolution of our galaxy," notes Luri.

Read more at Science Daily

New clues about how ancient galaxies lit up the Universe

Spiral galaxy.
NASA's Spitzer Space Telescope has revealed that some of the Universe's earliest galaxies were brighter than expected. The excess light is a by-product of the galaxies releasing incredibly high amounts of ionising radiation. The finding offers clues to the cause of the Epoch of Reionisation, a major cosmic event that transformed the universe from being mostly opaque to the brilliant starscape seen today. The new work appears in a paper in Monthly Notices of the Royal Astronomical Society.

Researchers report on observations of some of the first galaxies to form in the universe, less than 1 billion years after the big bang (or a little more than 13 billion years ago). The data show that in a few specific wavelengths of infrared light, the galaxies are considerably brighter than scientists anticipated. The study is the first to confirm this phenomenon for a large sampling of galaxies from this period, showing that these were not special cases of excessive brightness, but that even average galaxies present at that time were much brighter in these wavelengths than galaxies we see today.

No one knows for sure when the first stars in our universe burst to life. But evidence suggests that between about 100 million and 200 million years after the Big Bang, the Universe was filled mostly with neutral hydrogen gas that had perhaps just begun to coalesce into stars, which then began to form the first galaxies. By about 1 billion years after the big bang, the Universe had become a sparkling firmament. Something else had changed, too: Electrons of the omnipresent neutral hydrogen gas had been stripped away in a process known as ionisation. The Epoch of Reionisation -- the changeover from a universe full of neutral hydrogen to one filled with ionised hydrogen -- is well documented.

Before this Universe-wide transformation, long-wavelength forms of light, such as radio waves and visible light, traversed the universe more or less unencumbered. But shorter wavelengths of light -- including ultraviolet light, X-rays and gamma rays -- were stopped short by neutral hydrogen atoms. These collisions would strip the neutral hydrogen atoms of their electrons, ionising them.

But what could have possibly produced enough ionizing radiation to affect all the hydrogen in the Universe? Was it individual stars? Giant galaxies? If either were the culprit, those early cosmic colonisers would have been different than most modern stars and galaxies, which typically don't release high amounts of ionising radiation. Then again, perhaps something else entirely caused the event, such as quasars -- galaxies with incredibly bright centres powered by huge amounts of material orbiting supermassive black holes.

"It's one of the biggest open questions in observational cosmology," said Stephane De Barros, lead author of the study and a postdoctoral researcher at the University of Geneva in Switzerland. "We know it happened, but what caused it? These new findings could be a big clue."

To peer back in time to the era just before the Epoch of Reionisation ended, Spitzer stared at two regions of the sky for more than 200 hours each, allowing the space telescope to collect light that had travelled for more than 13 billion years to reach us.

As some of the longest science observations ever carried out by Spitzer, they were part of an observing campaign called GREATS, short for GOODS Re-ionization Era wide-Area Treasury from Spitzer. GOODS (itself an acronym: Great Observatories Origins Deep Survey) is another campaign that performed the first observations of some GREATS targets. The study also used archival data from the NASA / ESA Hubble Space Telescope.

Using these ultra-deep observations by Spitzer, the team of astronomers observed 135 distant galaxies and found that they were all particularly bright in two specific wavelengths of infrared light produced by ionising radiation interacting with hydrogen and oxygen gases within the galaxies. This implies that these galaxies were dominated by young, massive stars composed mostly of hydrogen and helium. They contain very small amounts of "heavy" elements (like nitrogen, carbon and oxygen) compared to stars found in average modern galaxies.

These stars were not the first stars to form in the Universe (those would have been composed of hydrogen and helium only) but were still members of a very early generation of stars. The Epoch of Reionisation wasn't an instantaneous event, so while the new results are not enough to close the book on this cosmic event, they do provide new details about how the Universe evolved at this time and how the transition played out.

"We did not expect that Spitzer, with a mirror no larger than a Hula-Hoop, would be capable of seeing galaxies so close to the dawn of time," said Michael Werner, Spitzer's project scientist at NASA's Jet Propulsion Laboratory in Pasadena, California. "But nature is full of surprises, and the unexpected brightness of these early galaxies, together with Spitzer's superb performance, puts them within range of our small but powerful observatory."

The NASA / CSA / ESA James Webb Space Telescope, set to launch in 2021, will study the Universe in many of the same wavelengths observed by Spitzer. But where Spitzer's primary mirror is only 85 centimetres in diameter, Webb's is 6.5 metres -- about 7.5 times larger -- enabling Webb to study these galaxies in far greater detail. In fact, Webb will try to detect light from the first stars and galaxies in the Universe. The new study shows that due to their brightness in those infrared wavelengths, the galaxies observed by Spitzer will be easier for Webb to study than previously thought.
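The mirror comparison above can be checked with simple arithmetic, treating each primary mirror as a filled circle (Webb's is actually a segmented hexagonal mirror, so this is only a rough sketch of the gain in light-gathering power):

```python
# Primary mirror diameters quoted in the article, in metres.
spitzer_diameter_m = 0.85
webb_diameter_m = 6.5

diameter_ratio = webb_diameter_m / spitzer_diameter_m
# Collecting area scales with the square of the diameter.
area_ratio = diameter_ratio ** 2

print(f"diameter ratio: {diameter_ratio:.1f}x")  # ~7.6x, i.e. "about 7.5 times larger"
print(f"area ratio: {area_ratio:.0f}x")          # ~58x more light collected per unit time
```

The roughly 58-fold increase in collecting area, rather than the 7.5-fold increase in diameter, is what drives Webb's far greater sensitivity to these faint early galaxies.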

Read more at Science Daily

May 8, 2019

Could this rare supernova resolve a longstanding origin debate?

Detection of a supernova with an unusual chemical signature by a team of astronomers led by Carnegie's Juna Kollmeier -- and including Carnegie's Nidia Morrell, Anthony Piro, Mark Phillips, and Josh Simon -- may hold the key to solving the longstanding mystery that is the source of these violent explosions. Observations taken by the Magellan telescopes at Carnegie's Las Campanas Observatory in Chile were crucial to detecting the emission of hydrogen that makes this supernova, called ASASSN-18tb, so distinctive.

Their work is published in Monthly Notices of the Royal Astronomical Society.

Type Ia supernovae play a crucial role in helping astronomers understand the universe. Their brilliance allows them to be seen across great distances and to be used as cosmic mile-markers, which garnered the 2011 Nobel Prize in Physics. Furthermore, their violent explosions synthesize many of the elements that make up the world around us, which are ejected into the galaxy to generate future stars and stellar systems.

Although hydrogen is the most-abundant element in the universe, it is almost never seen in Type Ia supernova explosions. In fact, the lack of hydrogen is one of the defining features of this category of supernovae and is thought to be a key clue to understanding what came before their explosions. This is why seeing hydrogen emissions coming from this supernova was so surprising.

Type Ia supernovae originate from the thermonuclear explosion of a white dwarf that is part of a binary system. But what exactly triggers the explosion of the white dwarf -- the dead core left after a Sun-like star exhausts its nuclear fuel -- is a great puzzle. A prevailing idea is that the white dwarf gains matter from its companion star, a process that may eventually trigger the explosion, but whether or not this is the correct theory explaining these supernovae has been hotly debated for decades.

This led the research team behind this paper to begin a major survey of Type Ia supernovae, called 100IAS, launched when Kollmeier was discussing the origin of these supernovae with study co-authors Subo Dong of Peking University and Doron Kushnir of the Weizmann Institute of Science. Kushnir, along with Weizmann colleague Boaz Katz, had put forward a new theory for Type Ia explosions that involves the violent collision of two white dwarfs.

Astronomers eagerly study the chemical signatures of the material ejected during these explosions in order to understand the mechanism and players involved in creating Type Ia supernovae.

In recent years, astronomers have discovered a small number of rare Type Ia supernovae that are cloaked in large amounts of hydrogen -- maybe as much as the mass of our Sun. But in several respects, ASASSN-18tb is different from these previous events.

"It's possible that the hydrogen we see when studying ASASSN-18tb is like these previous supernovae, but there are some striking differences that aren't so easy to explain," said Kollmeier.

First, in all previous cases these hydrogen-cloaked Type Ia supernovae were found in young, star-forming galaxies where plenty of hydrogen-rich gas may be present. But ASASSN-18tb occurred in a galaxy consisting of old stars. Second, the amount of hydrogen ejected by ASASSN-18tb is significantly less than that seen surrounding those other Type Ia supernovae. It probably amounts to about one-hundredth the mass of our Sun.

"One exciting possibility is that we are seeing material being stripped from the exploding white dwarf's companion star as the supernova collides with it," said Anthony Piro. "If this is the case, it would be the first-ever observation of such an occurrence."

"I have been looking for this signature for a decade!" said co-author Josh Simon. "We finally found it, but it's so rare, which is an important piece of the puzzle for solving the mystery of how Type Ia supernovae originate."

Nidia Morrell was observing that night, and she immediately reduced the data coming off the telescope and circulated them to the team, including PhD student Ping Chen, who works on 100IAS for his thesis, and Jose Luis Prieto of Universidad Diego Portales, a veteran supernova observer. Chen was the first to notice that this was not a typical spectrum. All were completely surprised by what they saw in ASASSN-18tb's spectrum. "I was shocked, and I thought to myself 'could this really be hydrogen?'" recalled Morrell.

Morrell met with team member Mark Phillips -- a pioneer in establishing the relationship, informally named after him, that allows Type Ia supernovae to be used as standard candles -- to discuss the observation. Phillips was convinced: "It is hydrogen you've found; there is no other possible explanation."

Read more at Science Daily

Mystery of texture of Guinness beer: inclination angle of a pint glass is key to solution

Stout beer.
Guinness, a dark stout beer, is pressurized with nitrogen gas. When Guinness is poured into a pint glass, small bubbles -- only one-tenth the size of those in carbonated drinks such as soda and sparkling water -- disperse throughout the entire glass, and the bubble swarm moves downwards in a textured pattern.

Although models have been proposed to explain how the downward movement of the bubble swarm produces waves in Guinness beer, the mechanism underlying the texture formation remained an open problem.

Because Guinness is opaque and dark, flow inside the glass is hard to observe directly, and numerically simulating flows containing a vast number of small bubbles would require huge supercomputer computations. The team of researchers led by Tomoaki Watamura therefore produced a transparent "pseudo-Guinness fluid" using light particles and tap water. They filmed the movement of the liquid with a high-speed video camera, applying a laser-induced-fluorescence method to measure the fluid motion accurately, and used molecular tagging to visualize the irregular movement of the fluid.

With these methods, the team poured the pseudo-Guinness fluid into an inclined container to observe how the texture formed. The texture appeared only within about 1 mm of the inclined wall and did not appear near a vertical wall.

They also observed a clear (bubble-free) film flowing down along the inclined wall, and measured its velocity and thickness. The texture appeared when the glass inclination angle was small but not when it was large, demonstrating that the texture formation in a glass of Guinness beer is caused by a roll-wave instability of the gravity current.

Lead author Watamura says, "There are a large number of small objects in nature, such as fine rock particles transported from rivers to the sea and microorganisms living in lakes and ponds. Comprehending and regulating the movement of small objects is important in various industrial processes as well. Our research results will be useful in understanding and controlling flows of bubbles and particles used in industrial processes as well as protein crystallization and cell cultivation used in the field of life science."

From Science Daily

Learning language: New insights into how brain functions

For most native English-speakers, learning the Mandarin Chinese language from scratch is no easy task.

Learning it in a class that essentially compresses a one-semester college course into a single month of intensive instruction -- and agreeing to have your brain scanned before and after -- might seem even more daunting.

But the 24 Americans who did just that have enabled University of Delaware cognitive neuroscientist Zhenghan Qi and her colleagues to make new discoveries about how adults learn a foreign language.

The study, published in May in the journal NeuroImage, focused on the roles of the brain's left and right hemispheres in language acquisition. The findings could lead to instructional methods that potentially improve students' success in learning a new language.

"The left hemisphere is known as the language-learning part of the brain, but we found that it was the right hemisphere that determined the eventual success" in learning Mandarin, said Qi, assistant professor of linguistics and cognitive science.

"This was new," she said. "For decades, everyone has focused on the left hemisphere, and the right hemisphere has been largely overlooked."

The left hemisphere is undoubtedly important in language learning, Qi said, noting that clinical research on individuals with speech disorders has indicated that the left side of the brain is in many ways the hub of language processing.

But, she said, before any individuals -- infants learning their native language or adults learning a second language -- begin processing such aspects of the new language as vocabulary and grammar, they must first learn to identify its basic sounds or phonological elements.

It's during that process of distinguishing "acoustic details" of sounds where the right side of the brain is key, according to the new findings.

Researchers began by exposing the 24 participants in the study to pairs of sounds that were similar but began with different consonants, such as "bah" and "nah," and having them describe the tones, Qi said.

"We asked: Were the tones of those two sounds similar or different?" she said. "We used the brain activation patterns during this task to predict who would be the most successful learners" of the new language.

The study continued by teaching the participants in a setting designed to replicate a college language class, although the usual semester was condensed into four weeks of instruction. Students attended class for three and a half hours a day, five days a week, completed homework assignments and took tests.

"Our research is the first to look at attainment and long-term retention of real-world language learned in a classroom setting, which is how most people learn a new language," Qi said.

By scanning each participant's brain with functional MRI (magnetic resonance imaging) at the beginning and end of the project, the scientists were able to see which part of the brain was most engaged while processing basic sound elements in Mandarin. To their surprise, they found that -- although, as expected, the left hemisphere showed a substantial increase of activation later in the learning process -- the right hemisphere in the most successful learners was most active in the early, sound-recognition stage.

"It turns out that the right hemisphere is very important in processing foreign speech sounds at the beginning of learning," Qi said. She added that the right hemisphere's role then seems to diminish in those successful learners as they continue learning the language.

Additional research will investigate whether the findings apply to those learning other languages, not just Mandarin. The eventual goal is to explore whether someone can practice sound recognition early in the process of learning a new language to potentially improve their success.

"We found that the more active the right hemisphere is, the more sensitive the listener is to acoustic differences in sound," Qi said. "Everyone has different levels of activation, but even if you don't have that sensitivity to begin with, you can still learn successfully if your brain is plastic enough."

Read more at Science Daily

Freshwater mussel shells were material of choice for prehistoric craftsmen

These are prehistoric shell ornaments made with freshwater mother-of-pearl.
A new study suggests that 6,000 years ago, people across Europe shared a cultural tradition of using freshwater mussel shells to craft ornaments.

An international team of researchers, including academics from the University of York, extracted ancient proteins from prehistoric shell ornaments -- which look remarkably similar despite being found at distant locations in Denmark, Romania and Germany -- and discovered they were all made using the mother-of-pearl of freshwater mussels.

The ornaments were made between 4200 and 3800 BC and were even found in areas on the coast where plenty of other shells would have been available.

Archaeological evidence suggests the ornaments, known as "double-buttons," may have been pressed into leather to decorate armbands or belts.

Senior author of the study, Dr Beatrice Demarchi, from the Department of Archaeology at the University of York and the University of Turin (Italy), said: "We were surprised to discover that the ornaments were all made from freshwater mussels because it implies that this material was highly regarded by prehistoric craftsmen, wherever they were in Europe and whatever cultural group they belonged to. Our study suggests the existence of a European-wide cross-cultural tradition for the manufacture of these double-buttons."

Freshwater molluscs have often been overlooked as a source of raw material in prehistory (despite the strength and resilience of mother-of-pearl) because many archaeologists believed that their local origin made them less "prestigious" than exotic marine shells.

Co-author of the paper, Dr André Colonese, from the Department of Archaeology at the University of York, said: "The ornaments are associated with the Late Mesolithic, Late Neolithic and Copper Age cultures. Some of these groups were still living as hunter-gatherers, but in the south they were farmers who had switched to a more settled lifestyle.

"The fact that these ornaments look consistently similar and are made from the same material suggests there may have been some kind of interaction between these distinct groups of people at this time.

"They may have had a shared knowledge or tradition for how to manufacture these ornaments and clearly had a sophisticated understanding of the natural environment and which resources to use."

Mollusc shells contain a very small proportion of proteins compared to other bio-mineralised tissues, such as bone, making them difficult to analyse.

The researchers are now working on extracting proteins from fossilised molluscs, a method which they have dubbed "palaeoshellomics." These new techniques could offer fresh insights into some of the earliest forms of life on earth, enhancing our knowledge of evolution.

Dr Demarchi added: "This is the first time researchers have been able to retrieve ancient protein sequences from prehistoric shell ornaments in order to identify the type of mollusc they are made from."

Read more at Science Daily

Plastic gets a do-over: Breakthrough discovery recycles plastic from the inside out

Light yet sturdy, plastic is great -- until you no longer need it. Because plastics contain various additives, like dyes, fillers, or flame retardants, very few plastics can be recycled without loss in performance or aesthetics. Even the most recyclable plastic, PET -- or poly(ethylene terephthalate) -- is only recycled at a rate of 20-30%, with the rest typically going to incinerators or landfills, where the carbon-rich material takes centuries to decompose.

Now a team of researchers at the U.S. Department of Energy's (DOE) Lawrence Berkeley National Laboratory (Berkeley Lab) has designed a recyclable plastic that, like a Lego playset, can be disassembled into its constituent parts at the molecular level, and then reassembled into a different shape, texture, and color again and again without loss of performance or quality. The new material, called poly(diketoenamine), or PDK, was reported in the journal Nature Chemistry.

"Most plastics were never made to be recycled," said lead author Peter Christensen, a postdoctoral researcher at Berkeley Lab's Molecular Foundry. "But we have discovered a new way to assemble plastics that takes recycling into consideration from a molecular perspective."

Christensen was part of a multidisciplinary team led by Brett Helms, a staff scientist in Berkeley Lab's Molecular Foundry. The other co-authors are undergraduate researchers Angelique Scheuermann (then of UC Berkeley) and Kathryn Loeffler (then of the University of Texas at Austin) who were funded by DOE's Science Undergraduate Laboratory Internship (SULI) program at the time of the study. The overall project was funded through Berkeley Lab's Laboratory Directed Research and Development program.

All plastics, from water bottles to automobile parts, are made up of large molecules called polymers, which are composed of repeating units of shorter carbon-containing compounds called monomers.

According to the researchers, the problem with many plastics is that the chemicals added to make them useful -- such as fillers that make a plastic tough, or plasticizers that make a plastic flexible -- are tightly bound to the monomers and stay in the plastic even after it's been processed at a recycling plant.

During processing at such plants, plastics with different chemical compositions -- hard plastics, stretchy plastics, clear plastics, candy-colored plastics -- are mixed together and ground into bits. When that hodgepodge of chopped-up plastics is melted to make a new material, it's hard to predict which properties it will inherit from the original plastics.

This inheritance of unknown and therefore unpredictable properties has prevented plastic from becoming what many consider the Holy Grail of recycling: a "circular" material whose original monomers can be recovered for reuse for as long as possible, or "upcycled" to make a new, higher quality product.

So, when a reusable shopping bag made with recycled plastic gets threadbare with wear and tear, it can't be upcycled or even recycled to make a new product. And once the bag has reached its end of life, it's either incinerated to make heat, electricity, or fuel, or ends up in a landfill, Helms said.

"Circular plastics and plastics upcycling are grand challenges," he said. "We've already seen the impact of plastic waste leaking into our aquatic ecosystems, and this trend is likely to be exacerbated by the increasing amounts of plastics being manufactured and the downstream pressure it places on our municipal recycling infrastructure."

Recycling plastic one monomer at a time

The researchers want to divert plastics from landfills and the oceans by incentivizing the recovery and reuse of plastics, which could be possible with polymers formed from PDKs. "With PDKs, the immutable bonds of conventional plastics are replaced with reversible bonds that allow the plastic to be recycled more effectively," Helms said.

Unlike conventional plastics, the monomers of PDK plastic could be recovered and freed from any compounded additives simply by dunking the material in a highly acidic solution. The acid helps to break the bonds between the monomers and separate them from the chemical additives that give plastic its look and feel.

"We're interested in the chemistry that redirects plastic lifecycles from linear to circular," said Helms. "We see an opportunity to make a difference for where there are no recycling options." That includes adhesives, phone cases, watch bands, shoes, computer cables, and hard thermosets that are created by molding hot plastic material.

The researchers first discovered the exciting circular property of PDK-based plastics when Christensen was applying various acids to glassware used to make PDK adhesives, and noticed that the adhesive's composition had changed. Curious as to how the adhesive might have been transformed, Christensen analyzed the sample's molecular structure with an NMR (nuclear magnetic resonance) spectroscopy instrument. "To our surprise, they were the original monomers," Helms said.

After testing various formulations at the Molecular Foundry, they demonstrated that not only does acid break down PDK polymers into monomers, but the process also allows the monomers to be separated from entwined additives.

Next, they proved that the recovered PDK monomers can be remade into polymers, and those recycled polymers can form new plastic materials without inheriting the color or other features of the original material -- so that broken black watchband you tossed in the trash could find new life as a computer keyboard if it's made with PDK plastic. They could also upcycle the plastic by adding additional features, such as flexibility.

Moving toward a circular plastic future

The researchers believe that their new recyclable plastic could be a good alternative to many nonrecyclable plastics in use today.

"We're at a critical point where we need to think about the infrastructure needed to modernize recycling facilities for future waste sorting and processing," said Helms. "If these facilities were designed to recycle or upcycle PDK and related plastics, then we would be able to more effectively divert plastic from landfills and the oceans. This is an exciting time to start thinking about how to design both materials and recycling facilities to enable circular plastics."

Read more at Science Daily

May 7, 2019

Telescopes in space for even sharper images of black holes

In space, the EHI has a resolution more than five times that of the EHT on Earth, and images can be reconstructed with higher fidelity. Top left: Model of Sagittarius A* at an observation frequency of 230 GHz. Top right: Simulation of an image of this model with the EHT. Bottom left: Model of Sagittarius A* at an observation frequency of 690 GHz. Bottom right: Simulation of an image of this model with the EHI.
Astronomers have just managed to take the first image of a black hole, and now the next challenge facing them is how to take even sharper images, so that Einstein's Theory of General Relativity can be tested. Radboud University astronomers, along with the European Space Agency (ESA) and others, are putting forward a concept for achieving this by launching radio telescopes into space. They publish their plans in the scientific journal Astronomy & Astrophysics.

The idea is to place two or three satellites in circular orbit around the Earth to observe black holes. The concept goes by the name Event Horizon Imager (EHI). In their new study, the scientists present simulations of what images of the black hole Sagittarius A* would look like if they were taken by satellites like these.

More than five times as sharp

"There are lots of advantages to using satellites instead of permanent radio telescopes on Earth, as with the Event Horizon Telescope (EHT)," says Freek Roelofs, a PhD candidate at Radboud University and the lead author of the article. "In space, you can make observations at higher radio frequencies, because the frequencies from Earth are filtered out by the atmosphere. The distances between the telescopes in space are also larger. This allows us to take a big step forward. We would be able to take images with a resolution more than five times what is possible with the EHT."
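The gain Roelofs describes follows from the standard diffraction limit: an interferometer's angular resolution scales as the observing wavelength divided by the longest baseline (θ ≈ λ/B), so higher frequencies and longer baselines both sharpen the image. A minimal sketch of that scaling is below; the 230 GHz and 690 GHz frequencies come from the article, but the baseline lengths are illustrative assumptions, not figures from the study:

```python
# Diffraction-limited angular resolution of an interferometer: theta ~ lambda / B.
import math

C = 299_792_458.0  # speed of light, m/s

def resolution_microarcsec(freq_hz: float, baseline_m: float) -> float:
    """Return theta = lambda/B converted from radians to microarcseconds."""
    wavelength_m = C / freq_hz
    theta_rad = wavelength_m / baseline_m
    return theta_rad * (180.0 / math.pi) * 3600.0 * 1e6

# Illustrative comparison (baselines are assumptions for the sketch):
# EHT: 230 GHz with Earth-bound baselines of roughly 10,000 km
# EHI: 690 GHz with space baselines of roughly 20,000 km
eht = resolution_microarcsec(230e9, 1.0e7)
ehi = resolution_microarcsec(690e9, 2.0e7)
print(f"EHT ~{eht:.0f} uas, EHI ~{ehi:.1f} uas, improvement ~{eht / ehi:.0f}x")
```

With these assumed baselines the improvement factor comes out to about six, consistent with the "more than five times" figure quoted in the article.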

Sharper images of a black hole will lead to better information that could be used to test Einstein's Theory of General Relativity in greater detail. "The fact that the satellites are moving round the Earth makes for considerable advantages," Radio Astronomy Professor Heino Falcke says. "With them, you can take near perfect images to see the real details of black holes. If small deviations from Einstein's theory occur, we should be able to see them."

The EHI will also be able to image about five additional black holes that are smaller than the black holes that the EHT is currently focussing on. The latter are Sagittarius A* at the centre of our Milky Way and M87* at the centre of Messier 87, a massive galaxy in the Virgo Cluster.

Technological challenges

The researchers have simulated what they would be able to see with different versions of the technology under different circumstances. For this they made use of models of plasma behaviour around the black hole and the resulting radiation. "The simulations look promising from a scientific aspect, but there are difficulties to overcome at a technical level," Roelofs says.

The astronomers collaborated with scientists from ESA/ESTEC to investigate the technical feasibility of the project. "The concept demands that you must be able to ascertain the position and speed of the satellites very accurately," according to Volodymyr Kudriashov, a researcher at the Radboud Radio Lab who also works at ESA/ESTEC. "But we really believe that the project is feasible."

Consideration also has to be given to how the satellites exchange data. "With the EHT, hard drives with data are transported to the processing centre by airplane. That's of course not possible in space." In this concept, the satellites will exchange data via a laser link, with the data being partially processed on board before being sent back to Earth for further analysis. "There are already laser links in space," Kudriashov notes.

Hybrid system

The idea is that the satellites will initially function independently of the EHT telescopes. But consideration is also being given to a hybrid system, with the orbiting telescopes combined with the ones on Earth. Falcke: "Using a hybrid like this could provide the possibility of creating moving images of a black hole, and you might be able to observe even more and also weaker sources."

Read more at Science Daily

New 3-foot-tall relative of Tyrannosaurus rex

Sterling Nesbitt and fossil remains of Suskityrannus hazelae, which he found at age 16 in 1998.
A new relative of the Tyrannosaurus rex -- much smaller than the huge, ferocious dinosaur made famous in countless books and films, including, yes, "Jurassic Park" -- has been discovered and named by a Virginia Tech paleontologist and an international team of scientists.

The newly named tyrannosauroid dinosaur -- Suskityrannus hazelae -- stood roughly 3 feet tall at the hip and was about 9 feet in length, the entire animal only marginally longer than just the skull of a fully grown Tyrannosaurus rex, according to Sterling Nesbitt, an assistant professor with the Department of Geosciences in the Virginia Tech College of Science. In a wild twist to this discovery, Nesbitt found the fossil at age 16 while a high school student participating in a dig expedition in New Mexico in 1998, led by Doug Wolfe, an author on the paper.

In all, Suskityrannus hazelae is believed to have weighed between 45 and 90 pounds; a full-grown Tyrannosaurus rex typically weighed roughly 9 tons. Its diet was likely similar to that of its larger meat-eating counterpart, with Suskityrannus hazelae probably hunting small animals, although exactly what it hunted is unknown. Based on an analysis of growth from its bones, the dinosaur was at least 3 years old at death.

The fossil dates back 92 million years to the Cretaceous Period, a time when some of the largest dinosaurs ever found lived.

"Suskityrannus gives us a glimpse into the evolution of tyrannosaurs just before they take over the planet," Nesbitt said. "It also belongs to a dinosaurian fauna that just precedes the iconic dinosaurian faunas in the latest Cretaceous that include some of the most famous dinosaurs, such as the Triceratops, predators like Tyrannosaurus rex, and duckbill dinosaurs like Edmontosaurus."

The findings are published in the latest online issue of Nature Ecology & Evolution. In describing the new find, Nesbitt said, "Suskityrannus has a much more slender skull and foot than its later and larger cousins, the Tyrannosaurus rex. The find also links the older and smaller tyrannosauroids from North America and China with the much larger tyrannosaurids that lasted until the final extinction of non-avian dinosaurs."

(Tyrannosaurus rex small arm jokes abound. So, if you're wondering how small the arms of Suskityrannus were, Nesbitt and his team are not exactly sure. No arm fossils of either specimen were found, though partial hand claws were, and they are quite small. Also not known: whether Suskityrannus had two or three fingers.)

Two partial skeletons were found. The first included a partial skull that was found in 1997 by Robert Denton, now a senior geologist with Terracon Consultants, and others in the Zuni Basin of western New Mexico during an expedition organized by Zuni Paleontological Project leader Doug Wolfe.

The second, more complete specimen was found in 1998 by Nesbitt, then a high school junior with a burgeoning interest in paleontology, and Wolfe, with assistance in collection by James Kirkland, now of the Utah Geological Survey. "Following Sterling out to see his dinosaur, I was amazed at how complete a skeleton was lying exposed at the site," Kirkland said.

For much of the 20 years since the fossils were uncovered, the science team did not know what they had.

"Essentially, we didn't know we had a cousin of Tyrannosaurus rex for many years," Nesbitt said. He added the team first thought they had the remains of a dromaeosaur, such as Velociraptor. During the late 1990s, close relatives of Tyrannosaurus rex simply were not known or not recognized. Since then, more distant cousins of Tyrannosaurus rex, such as Dilong paradoxus, have been found across Asia.

The fossil remains were found near other dinosaurs, along with the remains of fish, turtles, mammals, lizards, and crocodilians. From 1998 until 2006, the fossils remained stored at the Arizona Museum of Natural History in Mesa, Arizona. After 2006, Nesbitt brought the fossils with him through various postings as student and researcher in New York, Texas, Illinois, and now Blacksburg. He credits the find, and his interactions with the team members on the expedition, as the start of his career.

"My discovery of a partial skeleton of Suskityrannus put me onto a scientific journey that has framed my career," said Nesbitt, also a member of the Virginia Tech Global Change Center. "I am now an assistant professor that gets to teach about Earth history."

Read more at Science Daily

Ayahuasca fixings found in 1,000-year-old bundle in the Andes

Ritual bundle with leather bag, carved wooden snuff tablets and snuff tube with human hair braids, pouch made of three fox snouts, camelid bone spatulas, colorful textile headband and wool and fiber strings.
Today's hipster creatives and entrepreneurs are hardly the first generation to partake of ayahuasca, according to archaeologists who have discovered traces of the powerfully hallucinogenic potion in a 1,000-year-old leather bundle buried in a cave in the Bolivian Andes.

A chemical analysis led by University of California, Berkeley, archaeologist Melanie Miller found that a pouch made from three fox snouts sewn together tested positive for at least five plant-based psychoactive substances. They included dimethyltryptamine (DMT) and harmine, key active compounds in ayahuasca, a mind-blowing brew commonly associated with the Amazon jungle.

"This is the first evidence of ancient South Americans potentially combining different medicinal plants to produce a powerful substance like ayahuasca," said Miller, a researcher with UC Berkeley's Archaeological Research Facility who uses chemistry and various technologies to study how ancient humans lived.

She is lead author of the study, published today (Monday, May 6) in the journal Proceedings of the National Academy of Sciences.

Miller's analysis of a scraping from the fox-snout pouch and a plant sample found in the ritual bundle -- via liquid chromatography tandem mass spectrometry -- turned up trace amounts of bufotenine, DMT, harmine, cocaine and benzoylecgonine. Various combinations of these substances produce powerful, mind-altering hallucinations.

The discovery adds to a growing body of evidence of ritualistic psychotropic plant use going back millennia, said Miller, a postdoctoral fellow at the University of Otago in New Zealand who conducted the research during her doctoral studies at UC Berkeley.

"Our findings support the idea that people have been using these powerful plants for at least 1,000 years, combining them to go on a psychedelic journey, and that ayahuasca use may have roots in antiquity," said Miller.

The remarkably well-preserved ritual bundle was found by archaeologists at 13,000-foot elevations in the Lipez Altiplano region of southwestern Bolivia, where llamas and alpacas roam. The leather kit dates back to the pre-Inca Tiwanaku civilization, which dominated the southern Andean highlands from about 550 to 950 A.D.

In addition to the fox-snout pouch, the leather bundle contained intricately carved wooden "snuffing tablets" and a "snuffing tube" with human hair braids attached, for snorting intoxicants; llama bone spatulas; a colorful woven textile strip and dried plant material. All the objects were in good shape, due to the arid conditions of the Andean highlands.

Though the cave where the artifacts were found appeared to be a burial site, an excavation did not turn up human remains. Moreover, the plants found in the bundle do not grow at those altitudes, suggesting the bundle's owner may have been a traveling shaman or another expert in the rituals of psychotropic plant use, or someone who was part of an extensive medicinal plant trading network.

"A lot of these plants, if consumed in the wrong dosage, could be very poisonous," Miller said. "So, whoever owned this bundle would need to have had great knowledge and skills about how to use these plants, and how and where to procure them."

Of particular fascination to Miller is the pouch made of three fox snouts. She describes it as "the most amazing artifact I've had the privilege to work with."

"There are civilizations who believe that, by consuming certain psychotropic plants, you can embody a specific animal to help you reach supernatural realms, and perhaps a fox may be among those animals," Miller said.

Ayahuasca is made by brewing the vines of Banisteriopsis caapi with the leaves of the chacruna (Psychotria viridis) shrub. The leaves release DMT, and the vines release harmine -- and therein lies the secret of the ayahuasca effect.

"The tryptamine DMT produces strong, vivid hallucinations that can last from minutes to an hour, but combined with harmine, you can have prolonged out-of-body altered states of consciousness with altered perceptions of time and of the self," Miller said.

Once the drugs take effect, ayahuasca users typically enter a purgative state, which means they vomit a lot.

Though its use is currently fashionable among Silicon Valley techies, Hollywood celebrities and spiritual awakening-seekers worldwide, Miller says these latest archaeological findings pay homage to ayahuasca's ancient history.

Miller joined the Cueva del Chileno excavation project when archaeologists Juan Albarracín-Jordán of the Universidad Mayor de San Andrés in Bolivia and José Capriles of Pennsylvania State University sought her expertise to identify the plant matter they had found in the bundle.

She traveled for two days to reach the cave site near the remote south Bolivian village of Lipez and helped with the final phases of the excavation. The bundle was transported to a laboratory in La Paz and, once permits were in place, samples were exported to the lab of Christine Moore, chief toxicologist with the Immunalysis Corp. in Pomona, California.

Moore's lab provided the liquid chromatography tandem mass spectrometry technology needed to conduct toxicology tests on the samples. Once the contents of the Andean bundle tested positive for five kinds of psychotropic substances, Miller's research team was over the moon.

Read more at Science Daily

Blue supergiant stars open doors to concert in space

A snapshot from a hydrodynamical simulation of the interior of a star three times as heavy as our Sun, which shows waves generated by turbulent core convection and propagating throughout the star's interior. Darker and lighter colors represent fluctuations due to waves.
Blue supergiants are rock-and-roll: they live fast and die young. This makes them rare and difficult to study. Before space telescopes were invented, few blue supergiants had been observed, so our knowledge of these stars was limited. Using recent NASA space telescope data, an international team led by KU Leuven studied the sounds originating inside these stars and discovered that almost all blue supergiants shimmer in brightness because of waves on their surface.

Since the dawn of humanity, the stars in the night sky have captured our imagination. We even sing nursery rhymes to children pondering the nature of stars: "Twinkle, twinkle little star, how I wonder what you are". Telescopes are able to probe far into the universe, but astronomers have struggled to 'see' inside the stars. New space telescopes allow astronomers to 'see' the waves that originate in the deep interior of the stars. This makes it possible to study these stars using asteroseismology, a technique similar to the way seismologists use earthquakes to study the Earth's interior.

Stars come in different shapes, sizes and colours. Some stars are similar to our Sun and live calmly for billions of years. More massive stars, those born with ten times or more the mass of the Sun, live significantly shorter, more active lives before they explode and expel their material into space in what is called a supernova. Blue supergiants belong to this group. Before they explode, they are the metal factories of the universe, as these stars produce most of the chemical elements beyond helium in the periodic table.

For the first time, researchers have been able to 'see' beneath the opaque surface of blue supergiants. "The discovery of waves in so many blue supergiant stars was a eureka moment," says postdoctoral researcher Dominic Bowman who is the corresponding author for this study: "The flicker in these stars had been there all along, we only had to wait for modern space telescopes to be able to observe them. It is as if the rock-and-roll stars had been performing the whole time, but only now NASA space missions were able to open the doors of their concert hall. From the frequencies of the waves at the surface, we can derive the physics and chemistry of their deep interior, including the stellar core. These frequencies probe how efficiently metal is produced and how it moves around in the factory."

"Before the NASA Kepler/K2 and TESS space telescopes, few blue supergiants that vary in brightness were known," says Bowman (KU Leuven). "Until now, we had not seen these waves causing shimmering and twinkling on the surface of blue supergiants. You need to be able to look at the brightness of an individual star for long enough with a very sensitive detector before you can map out how it changes over time."

Read more at Science Daily