Jul 14, 2018

Could gravitational waves reveal how fast our universe is expanding?

Illustration of a neutron star and a black hole about to collide.
Since it first exploded into existence 13.8 billion years ago, the universe has been expanding, dragging along with it hundreds of billions of galaxies and stars, much like raisins in a rapidly rising dough.

Astronomers have pointed telescopes to certain stars and other cosmic sources to measure their distance from Earth and how fast they are moving away from us -- two parameters that are essential to estimating the Hubble constant, the quantity that describes the rate at which the universe is expanding.

But to date, the most precise efforts have landed on very different values of the Hubble constant, offering no definitive resolution to exactly how fast the universe is growing. This information, scientists believe, could shed light on the universe's origins, as well as its fate, and whether the cosmos will expand indefinitely or ultimately collapse.

Now scientists from MIT and Harvard University have proposed a more accurate and independent way to measure the Hubble constant, using gravitational waves emitted by a relatively rare system: a black hole-neutron star binary, a hugely energetic pairing of a spiraling black hole and a neutron star. As these objects circle in toward each other, they should produce space-shaking gravitational waves and a flash of light when they ultimately collide.

In a paper to be published July 12th in Physical Review Letters, the researchers report that the flash of light would give scientists an estimate of the system's velocity, or how fast it is moving away from the Earth. The emitted gravitational waves, if detected on Earth, should provide an independent and precise measurement of the system's distance. Even though black hole-neutron star binaries are incredibly rare, the researchers calculate that detecting even a few should yield the most accurate value yet for the Hubble constant and the rate of the expanding universe.
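
The arithmetic behind combining the two measurements is straightforward: for relatively nearby sources, the Hubble constant is simply the recession velocity divided by the distance. Here is a minimal sketch of that calculation; the numbers are illustrative stand-ins, not values from the paper.

```python
# Minimal "standard siren" sketch: the gravitational-wave signal yields
# a distance, the flash of light yields a recession velocity, and their
# ratio estimates the Hubble constant. Numbers are illustrative only.

def hubble_constant(recession_velocity_km_s: float, distance_mpc: float) -> float:
    """H0 = v / d, in km/s per megaparsec (a good approximation for nearby sources)."""
    return recession_velocity_km_s / distance_mpc

# Hypothetical merger: receding at 3,000 km/s, 43 megaparsecs away.
v = 3000.0  # km/s, from the redshift of the electromagnetic flash
d = 43.0    # Mpc, from the loudness of the gravitational-wave signal
print(f"H0 ~ {hubble_constant(v, d):.1f} km/s/Mpc")  # ~69.8
```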

"Black hole-neutron star binaries are very complicated systems, which we know very little about," says Salvatore Vitale, assistant professor of physics at MIT and lead author of the paper. "If we detect one, the prize is that they can potentially give a dramatic contribution to our understanding of the universe."

Vitale's co-author is Hsin-Yu Chen of Harvard.

Competing constants

Two independent measurements of the Hubble constant were made recently, one using NASA's Hubble Space Telescope and another using the European Space Agency's Planck satellite. The Hubble Space Telescope's measurement is based on observations of a type of star known as a Cepheid variable, as well as on observations of supernovae. Both of these objects are considered "standard candles" for their predictable patterns of brightness, which scientists can use to estimate their distance and velocity.

The other type of estimate is based on observations of the fluctuations in the cosmic microwave background -- the electromagnetic radiation that was left over in the immediate aftermath of the Big Bang, when the universe was still in its infancy. While the observations by both probes are extremely precise, their estimates of the Hubble constant disagree significantly.

"That's where LIGO comes into the game," Vitale says.

LIGO, or the Laser Interferometer Gravitational-Wave Observatory, detects gravitational waves -- ripples in the Jell-O of space-time, produced by cataclysmic astrophysical phenomena.

"Gravitational waves provide a very direct and easy way of measuring the distances of their sources," Vitale says. "What we detect with LIGO is a direct imprint of the distance to the source, without any extra analysis."

In 2017, scientists got their first chance at estimating the Hubble constant from a gravitational-wave source, when LIGO and its Italian counterpart Virgo detected a pair of colliding neutron stars for the first time. The collision released a huge amount of gravitational waves, which researchers measured to determine the distance of the system from Earth. The merger also released a flash of light, which astronomers focused on with ground and space telescopes to determine the system's velocity.

With both measurements, scientists calculated a new value for the Hubble constant. However, the estimate came with a relatively large uncertainty of 14 percent, much more uncertain than the values calculated using the Hubble Space Telescope and the Planck satellite.

Vitale says much of the uncertainty stems from the fact that it can be challenging to infer a neutron star binary's distance from Earth using the gravitational waves that this particular system gives off.

"We measure distance by looking at how 'loud' the gravitational wave is, meaning how clear it is in our data," Vitale says. "If it's very clear, you can see how loud it is, and that gives the distance. But that's only partially true for neutron star binaries."

That's because these systems, which create a whirling disc of energy as two neutron stars spiral in toward each other, emit gravitational waves in an uneven fashion. The majority of gravitational waves shoot straight out from the center of the disc, while a much smaller fraction escapes out the edges. If scientists detect a "loud" gravitational wave signal, it could indicate one of two scenarios: the detected waves stemmed from the edge of a system that is very close to Earth, or the waves emanated from the center of a much more distant system.

"With neutron star binaries, it's very hard to distinguish between these two situations," Vitale says.

A new wave

In 2014, before LIGO made the first detection of gravitational waves, Vitale and his colleagues observed that a binary system composed of a black hole and a neutron star could give a more accurate distance measurement, compared with neutron star binaries. The team was investigating how accurately one could measure a black hole's spin, given that the objects are known to spin on their axes, similarly to Earth but much more quickly.

The researchers simulated a variety of systems with black holes, including black hole-neutron star binaries and neutron star binaries. As a byproduct of this effort, the team noticed that they were able to more accurately determine the distance of black hole-neutron star binaries, compared to neutron star binaries. Vitale says this is due to the spin of the black hole around the neutron star, which can help scientists better pinpoint from where in the system the gravitational waves are emanating.

"Because of this better distance measurement, I thought that black hole-neutron star binaries could be a competitive probe for measuring the Hubble constant," Vitale says. "Since then, a lot has happened with LIGO and the discovery of gravitational waves, and all this was put on the back burner."

Vitale recently circled back to his original observation, and in this new paper, he set out to answer a theoretical question:

"Is the fact that every black hole-neutron star binary will give me a better distance going to compensate for the fact that potentially, there are far fewer of them in the universe than neutron star binaries?" Vitale says.

To answer this question, the team ran simulations to predict the occurrence of both types of binary systems in the universe, as well as the accuracy of their distance measurements. From their calculations, they concluded that, even if neutron star binaries outnumbered black hole-neutron star systems by 50 to 1, the latter would yield a Hubble constant similar in accuracy to the former.

More optimistically, if black hole-neutron star binaries were slightly more common, but still rarer than neutron star binaries, the former would produce a Hubble constant that is four times as accurate.
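
The statistical trade-off here follows from the usual 1/sqrt(N) scaling: averaging N independent events shrinks the combined uncertainty roughly as the per-event uncertainty divided by sqrt(N). A rough sketch, with made-up per-event uncertainties chosen only to illustrate the balance:

```python
# Rough scaling: combining N independent measurements shrinks the
# uncertainty roughly as sigma_per_event / sqrt(N). The per-event
# fractional uncertainties below are hypothetical.
import math

def combined_uncertainty(sigma_per_event: float, n_events: int) -> float:
    return sigma_per_event / math.sqrt(n_events)

# A ~7x sharper distance per event can offset being ~50x rarer,
# since sqrt(50) is about 7.1.
print(combined_uncertainty(0.14, 50))  # 50 neutron star binaries -> ~0.020
print(combined_uncertainty(0.02, 1))   # 1 black hole-neutron star binary -> 0.020
```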

Read more at Science Daily

How might dark matter interact with ordinary matter?

Photo shows PandaX, a xenon-based detector in China.
An international team of scientists that includes University of California, Riverside, physicist Hai-Bo Yu has imposed conditions on how dark matter may interact with ordinary matter -- constraints that can help identify the elusive dark matter particle and detect it on Earth.

Dark matter -- nonluminous material in space -- is understood to constitute 85 percent of the matter in the universe. Unlike normal matter, it does not absorb, reflect, or emit light, making it difficult to detect.

Physicists are certain dark matter exists, having inferred this existence from the gravitational effect dark matter has on visible matter. What they are less certain of is how dark matter interacts with ordinary matter -- or even if it does.

In the search for direct detection of dark matter, the experimental focus has been on WIMPs, or weakly interacting massive particles, the hypothetical particles thought to make up dark matter.

But Yu's international research team invokes a different theory to challenge the WIMP paradigm: the self-interacting dark matter model, or SIDM, a well-motivated framework that can explain the full range of diversity observed in the galactic rotation curves. First proposed in 2000 by a pair of eminent astrophysicists, SIDM has regained popularity in both the particle physics and the astrophysics communities since around 2009, aided, in part, by work Yu and his collaborators did.

Yu, a theorist in the Department of Physics and Astronomy at UCR, and Yong Yang, an experimentalist at Shanghai Jiaotong University in China, co-led the team analyzing and interpreting the latest data collected in 2016 and 2017 at PandaX-II, a xenon-based dark matter direct detection experiment in China (PandaX refers to Particle and Astrophysical Xenon Detector; PandaX-II refers to the experiment). Should a dark matter particle collide with PandaX-II's liquefied xenon, the result would be two simultaneous signals: one of photons and the other of electrons.

Yu explained that PandaX-II assumes dark matter "talks to" normal matter -- that is, interacts with protons and neutrons -- by means other than gravitational interaction (gravitational interaction alone is not enough). The researchers then search for a signal that identifies this interaction. In addition, the PandaX-II collaboration assumes the "mediator particle," which mediates interactions between dark matter and normal matter, has far less mass than the mediator particle in the WIMP paradigm.

"The WIMP paradigm assumes this mediator particle is very heavy -- 100 to 1000 times the mass of a proton -- or about the mass of the dark matter particle," Yu said. "This paradigm has dominated the field for more than 30 years. In astrophysical observations, we don't, however, see all its predictions. The SIDM model, on the other hand, assumes the mediator particle is about 0.001 times the mass of the dark matter particle, inferred from astrophysical observations from dwarf galaxies to galaxy clusters. The presence of such a light mediator could lead to smoking-gun signatures of SIDM in dark matter direct detection, as we suggested in an earlier theory paper. Now, we believe PandaX-II, one of the world's most sensitive direct detection experiments, is poised to validate the SIDM model when a dark matter particle is detected."

The international team of researchers reports July 12 in Physical Review Letters the strongest limit on the interaction strength between dark matter and visible matter with a light mediator. The journal has selected the research paper as a highlight, a significant honor.

"This is a particle physics constraint on a theory that has been used to understand astrophysical properties of dark matter," said Flip Tanedo, a dark matter expert at UCR, who was not involved in the research. "The study highlights the complementary ways in which very different experiments are needed to search for dark matter. It also shows why theoretical physics plays a critical role to translate between these different kinds of searches. The study by Hai-Bo Yu and his colleagues interprets new experimental data in terms of a framework that makes it easy to connect to other types of experiments, especially astrophysical observations, and a much broader range of theories."

PandaX-II is located at the China Jinping Underground Laboratory, Sichuan Province, where pandas are abundant. The laboratory is the deepest underground laboratory in the world. PandaX-II had generated the largest dataset for dark matter detection at the time the analysis was performed. One of only three xenon-based dark matter direct detection experiments in the world, PandaX-II is a frontier facility in the search for extremely rare events in which scientists hope to observe a dark matter particle interacting with ordinary matter and thus better understand the fundamental particle properties of dark matter.

Particle physicists' attempts to understand dark matter have yet to yield definitive evidence for dark matter in the lab.

"The discovery of a dark matter particle interacting with ordinary matter is one of the holy grails of modern physics and represents the best hope to understand the fundamental, particle properties of dark matter," Tanedo said.

For the past decade, Yu, a world expert on SIDM, has led an effort to bridge particle physics and cosmology by looking for ways to understand dark matter's particle properties from astrophysical data. He and his collaborators have discovered a class of dark matter theories with a new dark force that may explain unexpected features seen in systems across a wide range, from dwarf galaxies to galaxy clusters. More importantly, this new SIDM framework gives particle physicists a way to convert astronomical data into the particle physics parameters of dark matter models. In this way, the SIDM framework is a translator that lets two different scientific communities understand each other's results.

Now with the PandaX-II experimental collaboration, Yu has shown how self-interacting dark matter theories may be distinguished at the PandaX-II experiment.

"Prior to this line of work, these types of laboratory-based dark matter experiments primarily focused on dark matter candidates that did not have self-interactions," Tanedo said. "This work has shown how dark forces affect the laboratory signals of dark matter."

Read more at Science Daily

Jul 12, 2018

Invasive plants adapt to new environments

Invasive monkeyflowers growing among native forget-me-nots illustrate the assimilation of a non-native species 200 years after its introduction into the British Isles.
Invasive plants have the ability to adapt to new environments -- and even behave like a native species, according to University of Stirling research.

A study has found that the behaviour of invasive plants changes over time -- meaning plants of the same species act differently if they arrive in their new environment at separate times.

Scientists studied the characteristics of monkeyflowers (Mimulus guttatus), which first arrived in the UK from North America 200 years ago. They compared the behaviour of monkeyflowers long-established in Scotland with those introduced recently for the purposes of the experiment.

Significantly, they found that the long-established plants were bigger and produced more flowers and more clones than those recently introduced. In comparison, the study showed that the genes of plants recently introduced are not well-adapted to deal with the UK environment.

Dr Mario Vallejo-Marin, Associate Professor in the Faculty of Natural Sciences, led the work alongside PhD student Pauline Pantoja.

"Our study shows that invasive plants -- in this case, the monkeyflower -- become increasingly adapted to new environments thanks to natural selection," he explained.

"If we compare monkeyflowers that have been here for the last 200 years with those from North America today, they are completely different plants. It appears that, over time, the plants seem to become natives of their new home.

"In other words, these results suggest that invasive populations of plants are better suited to live in their new home than new arrivals from the native range."

The team created two types of hybrid plants -- a UK/North American hybrid and a UK/UK hybrid -- and grew them, under identical circumstances, to estimate the impact of natural selection. Over two years, around 1,200 plants were grown in a field plot at Stirling.

"The main differences seem to be that the UK plants can produce both high numbers of flowers and high numbers of clones while the hybrids created from the North American samples can only do either many flowers or many clones but not both," Dr Vallejo-Marin said.

Reflecting on the findings, he added: "The last 500 years have seen a rapidly increasing spread of non-native organisms around the world.

"Our study shows that, in older invasions -- more than 200 years old -- the newcomers are becoming adapted to their new surroundings through evolution by natural selection.

"As more non-native species come of age, the role of natural selection in the success of non-native species will continue to increase."

Dr Vallejo-Marin is now planning further studies to discover how common the process is -- and he believes others may also benefit from the work.

"Understanding how organisms adapt to new environments is key in an era of rapid environmental change," he said. "Our study can also be of relevance to environmental managers dealing with biological invasions."

Read more at Science Daily

Why the left hemisphere of the brain understands language better than the right

Nerve cells in the brain region planum temporale have more synapses in the left hemisphere than in the right hemisphere -- which is vital for rapid processing of auditory speech, according to the report published by researchers from Ruhr-Universität Bochum and Technische Universität Dresden in the journal Science Advances. There has already been ample evidence of left hemisphere language dominance; however, the underlying processes on the neuroanatomical level had not yet been fully understood.

A new form of magnetic resonance imaging (MRI), in combination with electroencephalography (EEG) measurements, has made it possible to link the microstructure of the planum temporale with the speed of auditory speech processing. The team headed by Dr Sebastian Ocklenburg, Patrick Friedrich, Christoph Fraenz, Prof Dr Dr h. c. Onur Güntürkün and Dr Erhan Genç outlines its findings in an article published in Science Advances on July 11, 2018.

Left hemisphere language dominance

Using a simple experiment, researchers can demonstrate just how superior the left hemisphere is when it comes to the processing of auditory speech: when playing two different syllables -- for example "Da" and "Ba" -- to a person's left and right ear via headphones, most people will state that they only heard the syllable in the right ear. The reason: language that is perceived via the right ear is processed in the left hemisphere. When brainwaves are measured using EEG, it emerges that the left hemisphere processes auditory speech information more rapidly.

"Researchers have long determined that a brain region that is crucial for the processing of auditory speech, i.e. planum temporale, is frequently larger in the left hemisphere than in the right one," says Sebastian Ocklenburg from the biopsychology research unit in Bochum. In the brains of deceased individuals who had donated their bodies to science, Frankfurt-based researchers later discovered that the nerve cells in the left planum temporale have a larger number of neuronal synapses than those in the right hemisphere.

New measurement method facilitates hitherto impossible insights

"However, it had previously not been understood if that asymmetrical microstructure is the decisive factor for the superiority of the left hemisphere when it comes to the processing of auditory speech," explains Erhan Genç, likewise a member of the biopsychology research unit. Since a method for counting the number of neural synapses in living humans had not existed until very recently, that number could not be conclusively linked to the performance of auditory speech processing. The researchers have now closed this gap with the aid of so-called neurite orientation dispersion and density imaging.

By deploying this highly specific MRI technology, the bio-psychologists measured the density and spatial arrangement of planum temporale neurites in almost one hundred test participants. At the same time, they used EEG measurements to analyse the processing speed of auditory speech information in both the left and the right hemispheres in the same individuals.

Higher speed thanks to more neurites

The result: test participants who were capable of processing auditory speech in the left hemisphere at a high speed possessed an extraordinarily high number of densely packed neurites in the left planum temporale. "It is because of this microstructure that processing of auditory speech is faster in the left hemisphere; those individuals are presumably also able to decode what they hear at higher temporal precision," concludes Ocklenburg. "Higher connectivity density thus appears to be a crucial component for the linguistic superiority of our left hemisphere," adds Genç.

From Science Daily

Scientists discover Earth's youngest banded iron formation in western China

Earth's youngest banded iron formation in western China.
The banded iron formation, located in western China, has been conclusively dated as Cambrian in age. Approximately 527 million years old, this formation is young by comparison to the majority of discoveries to date. The deposition of banded iron formations, which began approximately 3.8 billion years ago, had long been thought to terminate before the beginning of the Cambrian Period at 540 million years ago.

"This is critical, as it is the first observation of a Precambrian-like banded iron formation that is Early Cambrian in age. This offers the most conclusive evidence for the presence of widespread iron-rich conditions at a time, confirming what has recently been suggested from geochemical proxies," said Kurt Konhauser, professor in the Department of Earth and Atmospheric Sciences and co-author. Konhauser supervised the research that was led by Zhiquan Li, a PhD candidate from Beijing while on exchange at UAlberta.

The Early Cambrian is known for the rise of animals, so the level of oxygen in seawater should have been near modern levels. "This is important as the availability of oxygen has long been thought to be a handbrake on the evolution of complex life, and one that should have been alleviated by the Early Cambrian," says Leslie Robbins, a PhD candidate in Konhauser's lab and a co-author on the paper.

The researchers compared the geological characteristics and geochemistry to ancient and modern samples to find an analogue for their deposition. The team relied on the use of rare earth element patterns to demonstrate that the deposit formed in, or near, a chemocline in a stratified iron-rich basin.

"Future studies will aim to quantify the full extent of these Cambrian banded iron formations in China and whether similar deposits can be found elsewhere," says Kurt Konhauser.

From Science Daily

Brain function partly replicated by nanomaterials

Spontaneous spikes similar to the nerve impulses of neurons were generated from a POM/CNT complexed network.
The brain adapts to its environment, learns, recognizes ambiguous inputs, and performs complex information processing -- all while requiring surprisingly little energy.

The two key features of neural circuits are "learning ability of synapses" and "nerve impulses or spikes." As brain science progresses, brain structure has been gradually clarified, but it is too complicated to completely emulate. Scientists have tried to replicate brain function by using simplified neuromorphic circuits and devices that emulate a part of the brain's mechanisms.

In developing neuromorphic chips that artificially replicate circuits mimicking brain structure and function, the generation and transmission of spontaneous spikes that mimic nerve impulses have not yet been fully utilized.

A joint group of researchers from Kyushu Institute of Technology and Osaka University studied current rectification control in junctions of various molecules and particles adsorbed on single-walled carbon nanotubes (SWNTs), using conductive atomic force microscopy (C-AFM), and discovered that a negative differential resistance was produced in polyoxometalate (POM) molecules adsorbed on SWNTs. This suggests that an unstable dynamic non-equilibrium state occurs in molecular junctions.

In addition, the researchers created extremely dense, random SWNT/POM network molecular neuromorphic devices, generating spontaneous spikes similar to the nerve impulses of neurons.

POM consists of metal atoms and oxygen atoms that form a three-dimensional framework. Unlike ordinary organic molecules, POM can store charges in a single molecule. In this study, it was thought that the negative differential resistance and spike generation from the network were caused by nonequilibrium charge dynamics in the molecular junctions of the network.

Thus, the joint research group, led by Megumi Akai-Kasaya, conducted simulation calculations of a random molecular network model complexed with POM molecules, which can store electric charges, and replicated the spikes generated by the real network. They also demonstrated that this molecular model would very likely become a component of reservoir computing devices. Reservoir computing is anticipated as next-generation artificial intelligence (AI). Their research results were published in Nature Communications.
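
Reservoir computing is easy to sketch in software: a fixed, random, recurrent network (the "reservoir" -- which the authors envision being the physical molecular network itself) transforms an input signal into a rich internal state, and only a simple linear readout is trained. The toy echo-state network below illustrates the idea; it is not the group's actual model.

```python
# Toy echo-state network: fixed random reservoir, trained linear readout.
import numpy as np

rng = np.random.default_rng(1)
n_reservoir, n_steps = 100, 500

W_in = rng.uniform(-0.5, 0.5, (n_reservoir, 1))  # input weights (fixed)
W = rng.normal(size=(n_reservoir, n_reservoir))  # recurrent weights (fixed)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale for stable dynamics

u = np.sin(np.linspace(0, 20 * np.pi, n_steps))  # toy input signal
x = np.zeros(n_reservoir)
states = []
for t in range(n_steps):
    x = np.tanh(W @ x + W_in[:, 0] * u[t])       # reservoir state update
    states.append(x.copy())
states = np.array(states)

# Train only the linear readout, here to predict the next input value.
target = u[1:]
readout, *_ = np.linalg.lstsq(states[:-1], target, rcond=None)
prediction = states[:-1] @ readout
print("readout mean squared error:", np.mean((prediction - target) ** 2))
```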

"The significance of our study is that a portion of brain function was replicated by nano-molecular materials. We demonstrated the possibility that the random molecular network itself can become neuromorphic AI," says lead author Hirofumi Tanaka.

Read more at Science Daily

Breakthrough in the search for cosmic particle accelerators

Artist's impression of the active galactic nucleus. The supermassive black hole at the center of the accretion disk sends a narrow high-energy jet of matter into space, perpendicular to the disk.
Using an internationally organised astronomical dragnet, scientists have for the first time located a source of high-energy cosmic neutrinos, ghostly elementary particles that travel billions of light years through the universe, flying unaffected through stars, planets and entire galaxies. The joint observation campaign was triggered by a single neutrino that had been recorded by the IceCube neutrino telescope at the South Pole on 22 September 2017. Telescopes on Earth and in space were able to determine that the exotic particle had originated in a galaxy over three billion light years away, in the constellation of Orion, where a gigantic black hole serves as a natural particle accelerator. Scientists from the 18 different observatories involved are presenting their findings in the journal Science. Furthermore, a second analysis, also published in Science, shows that other neutrinos previously recorded by IceCube came from the same source.

The observation campaign, in which research scientists from Germany played a key role, is a decisive step towards solving a riddle that has been puzzling scientists for over 100 years, namely that of the precise origins of so-called cosmic rays, high-energy subatomic particles that are constantly bombarding Earth's atmosphere. "This is a milestone for the budding field of neutrino astronomy. We are opening a new window into the high-energy universe," says Marek Kowalski, the head of Neutrino Astronomy at DESY, a research centre of the Helmholtz Association, and a researcher at the Humboldt University in Berlin. "The concerted observational campaign using instruments located all over the globe is also a significant achievement for the field of multi-messenger astronomy, that is the investigation of cosmic objects using different messengers, such as electromagnetic radiation, gravitational waves and neutrinos."

Messengers from the high-energy universe

One way in which scientists expect energetic neutrinos to be created is as a sort of by-product of cosmic rays, which are expected to be produced in cosmic particle accelerators, such as the vortex of matter created by supermassive black holes or exploding stars. However, unlike the electrically charged particles of cosmic rays, neutrinos are electrically neutral and therefore not deflected by cosmic magnetic fields as they travel through space, meaning that the direction from which they arrive points straight back at their actual source. Also, neutrinos are scarcely absorbed. "Observing cosmic neutrinos gives us a glimpse of processes that are opaque to electromagnetic radiation," says Klaus Helbing from the Bergische University of Wuppertal, spokesperson for the German IceCube network. "Cosmic neutrinos are messengers from the high-energy universe."

Demonstrating the presence of neutrinos is extremely complicated, however, because most of the ghostly particles travel right through the entire Earth without leaving a trace. Only on very rare occasions does a neutrino interact with its surroundings. It therefore takes huge detectors in order to capture at least a few of these rare reactions. For the IceCube detector, an international consortium of scientists headed by the University of Wisconsin in Madison (USA) drilled 86 holes into the Antarctic ice, each 2500 metres deep. Into these holes they lowered 5160 light sensors, spread out over a total volume of one cubic kilometre. The sensors register the tiny flashes of light that are produced during the rare neutrino interactions in the transparent ice.

Five years ago, IceCube furnished the first evidence of high-energy neutrinos from the depths of outer space. However, these neutrinos appeared to be arriving from random directions across the sky. "Up to this day, we didn't know where they originated," says Elisa Resconi from the Technical University of Munich, whose group contributed crucially to the findings. "Through the neutrino recorded on 22 September, we have now managed to identify a first source."

From radio waves to gamma radiation

The energy of the neutrino in question was around 300 tera-electronvolts, more than 40 times that of the protons produced in the world's largest particle accelerator, the Large Hadron Collider at the European accelerator facility CERN outside Geneva. Within minutes of recording the neutrino, the IceCube detector automatically alerted numerous other astronomical observatories. A large number of these then scrutinised the region in which the high-energy neutrino had originated, scanning the entire electromagnetic spectrum: from high-energy gamma- and X-rays, through visible light, to radio waves. Sure enough, they were able for the first time to assign a celestial object to the direction from which a high-energy cosmic neutrino had arrived.
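
That comparison is simple arithmetic, assuming the LHC's 6.5 TeV-per-proton beam energy of the time:

```python
# Quick check of "more than 40 times": 300 TeV neutrino vs 6.5 TeV LHC proton.
neutrino_energy_tev = 300.0
lhc_proton_energy_tev = 6.5  # per-beam proton energy during 2017 running
print(neutrino_energy_tev / lhc_proton_energy_tev)  # ~46
```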

"In our case, we saw an active galaxy, which is a large galaxy containing a gigantic black hole at its centre," explains Kowalski. Huge "jets" shoot out into space at right angles to the massive vortex that sucks matter into the black hole. Astrophysicists have long suspected that these jets generate a substantial proportion of cosmic particle radiation. "Now we have found key evidence supporting this assumption," Resconi emphasises.

The active galaxy that has now been identified is a so-called blazar, an active galaxy whose jet points precisely in our direction. Using software developed by DESY researchers, the gamma-ray satellite Fermi, operated by the US space agency NASA, had already registered a dramatic increase in the activity of this blazar, whose catalogue number is TXS 0506+056, around 22 September. Now, an earthbound gamma-ray telescope also recorded a signal from it. "In the follow-up observation of the neutrino, we were able to observe the blazar in the range of very high-energy gamma radiation too, using the MAGIC telescope system on the Canary Island La Palma," says DESY's Elisa Bernardini, who coordinates the MAGIC observations. "The gamma-rays are closest in energy to neutrinos and therefore play a crucial role in determining the mechanism by which the neutrinos are created." The programme for the efficient follow-up observation of neutrinos using gamma-ray telescopes was developed by Bernardini's group.

The NASA X-ray satellites Swift and NuSTAR also registered the eruption of the blazar, and the gamma-ray telescopes H.E.S.S., HAWC and VERITAS as well as the gamma-ray and X-ray satellites AGILE, belonging to the Italian Space Agency ASI, and Integral, belonging to the European Space Agency ESA, all took part in the follow-up observations. All in all, seven optical observatories (the ASAS-SN, Liverpool, Kanata, Kiso Schmidt, SALT and Subaru telescopes, as well as the Very Large Telescope VLT of the European Southern Observatory, ESO) observed the active galaxy, and the Karl G. Jansky Very Large Array (VLA) studied its activity in the radio spectrum. This led to a comprehensive picture of the radiation emitted by this blazar, all the way from radio waves to gamma-rays carrying up to 100 billion times as much energy.

Search in archives reveals further neutrinos

A worldwide team of scientists from all the groups involved worked flat out, conducting a complicated statistical analysis to determine whether the correlation between the neutrino and the gamma-ray observations was perhaps just a coincidence. "We calculated that the probability of it being a mere coincidence was around 1 in 1000," explains DESY's Anna Franckowiak, who was in charge of the statistical analysis of the various different data sets. This may sound small, but it is not small enough to quell the professional scepticism of physicists.

A second line of investigation rectified this. The IceCube researchers searched through their data from the past years for possible previous measurements of neutrinos coming from the direction of the blazar that had now been identified. And they did indeed find a distinct surplus of more than a dozen of the ghost particles arriving from the direction of TXS 0506+056 during the time between September 2014 and March 2015, as they are reporting in a second paper published in the same edition of Science. The likelihood of this excess being a mere statistical outlier is estimated at 1 in 5000, "a number that makes you prick up your ears," says Christopher Wiebusch from RWTH Aachen, whose group had already noted the hint of excess neutrinos from the direction of TXS 0506+056 in an earlier analysis. "The data also allows us to make a first estimate of the neutrino flux from this source." Together with the single event of September 2017, the IceCube data now provides the best experimental evidence to date that active galaxies are in fact sources of high-energy cosmic neutrinos.

Read more at Science Daily

Jul 11, 2018

Ancient bones reveal 2 whale species lost from the Mediterranean Sea

Aerial view of some of the fish-salting tanks (cetaria) in the ancient Roman city of Baelo Claudia, near today's Tarifa in Spain. The largest circular tank is 3 meters wide, with an 18 m³ capacity. These tanks were used to process large fish, particularly tuna. This study supports the possibility that they could have also been used to process whales.
Two thousand years ago the Mediterranean Sea was a haven for two species of whale which have since virtually disappeared from the North Atlantic, a new study analysing ancient bones suggests.

The discovery of the whale bones in the ruins of a Roman fish processing factory located at the strait of Gibraltar also hints at the possibility that the Romans may have hunted the whales.

Prior to the study, by an international team of ecologists, archaeologists and geneticists, it was assumed that the Mediterranean Sea was outside of the historical range of the right and gray whale.

Academics from the Archaeology Department at the University of York used ancient DNA analysis and collagen fingerprinting to identify the bones as belonging to the North Atlantic right whale (Eubalaena glacialis) and the Atlantic gray whale (Eschrichtius robustus).

After centuries of whaling, the right whale currently occurs as a very threatened population off eastern North America and the gray whale has completely disappeared from the North Atlantic and is now restricted to the North Pacific.

Co-author of the study Dr Camilla Speller, from the University of York, said: "These new molecular methods are opening whole new windows into past ecosystems. Whales are often neglected in archaeological studies, because their bones are frequently too fragmented to be identifiable by their shape.

"Our study shows that these two species were once part of the Mediterranean marine ecosystem and probably used the sheltered basin as a calving ground.

"The findings contribute to the debate on whether, alongside catching large fish such as tuna, the Romans had a form of whaling industry or if perhaps the bones are evidence of opportunistic scavenging from beached whales along the coast line."

Both species of whale are migratory, and their presence east of Gibraltar is a strong indication that they previously entered the Mediterranean Sea to give birth.

The Gibraltar region was at the centre of a massive fish-processing industry during Roman times, with products exported across the entire Roman Empire. The ruins of hundreds of factories with large salting tanks can still be seen today in the region.

Lead author of the study Dr Ana Rodrigues, from the French National Centre for Scientific Research, said: "Romans did not have the necessary technology to capture the types of large whales currently found in the Mediterranean, which are high-seas species. But right and gray whales and their calves would have come very close to shore, making them tempting targets for local fishermen."

It is possible that both species could have been captured using small rowing boats and hand harpoons, methods used by medieval Basque whalers centuries later.

The knowledge that coastal whales were once present in the Mediterranean also sheds new light on ancient historical sources.

Anne Charpentier, lecturer at the University of Montpellier and co-author of the study, said: "We can finally understand a first-century description by the famous Roman naturalist Pliny the Elder of killer whales attacking whales and their newborn calves in the Bay of Cadiz.

"It doesn't match anything that can be seen there today, but it fits perfectly with the ecology if right and gray whales used to be present."

The study authors are now calling for historians and archaeologists to re-examine their material in the light of the knowledge that coastal whales were once part of the Mediterranean marine ecosystem.

Dr Rodrigues added: "It seems incredible that we could have lost and then forgotten two large whale species in a region as well-studied as the Mediterranean. It makes you wonder what else we have forgotten."

Forgotten Mediterranean calving grounds of gray and North Atlantic right whales: evidence from Roman archaeological records is published in the journal Proceedings of the Royal Society of London B.

Read more at Science Daily

Eating bone marrow played a key role in the evolution of the human hand

Student using stone tool.
The strength required to access the high calorie content of bone marrow may have played a key role in the evolution of the human hand and may explain why primates' hands are not like ours, research at the University of Kent has found.

In an article in The Journal of Human Evolution, a team led by Professor Tracy Kivell of Kent's School of Anthropology and Conservation concludes that although stone tool making has always been considered a key influence on the evolution of the human hand, accessing bone marrow generally has not.

It is widely accepted that the unique dexterity of the human hand evolved, at least in part, in response to stone tool use during our evolutionary history.

Archaeological evidence suggests that early hominins participated in a variety of tool-related activities, such as nut-cracking, cutting flesh, smashing bone to access marrow, as well as making stone tools. However, it is unlikely that all these behaviours equally influenced modern human hand anatomy.

To understand the impact these different actions may have had on the evolution of human hands, researchers measured the force experienced by the hands of 39 individuals during different stone tool behaviours -- nut-cracking, marrow acquisition with a hammerstone, flake production with a hammerstone, and the use of a handaxe and a stone tool (i.e. a flake) -- to see which digits were most important for manipulating the tool.

They found that the pressures varied across the different behaviours, with nut-cracking generally requiring the lowest pressure while making the flake and accessing marrow required the greatest pressures. Across all of the different behaviours, the thumb, index finger and middle finger were always most important.

Professor Kivell says this suggests that nut-cracking force may not be high enough to elicit changes in the formation of the human hand, which may be why other primates are adept nut-crackers without having a human-like hand.

In contrast, making stone flakes and accessing marrow may have been key influences on our hand anatomy due to the high stress they cause on our hands. The researchers concluded that eating marrow, given its additional benefit of high calorific value, may have also played a key role in the evolution of human dexterity.

The manual pressures of stone tool behaviors and their implications for the evolution of the human hand, by Erin Marie Williams-Hatala, Kevin G. Hatala, McKenzie Gordon and Margaret Kasper, all of Chatham University, Pittsburgh, USA, and Alastair Key and Tracy Kivell, University of Kent, is published in the Journal of Human Evolution.

From Science Daily

Colorful celestial landscape

New observations with ESO's Very Large Telescope show the star cluster RCW 38 in all its glory. This image was taken during testing of the HAWK-I camera with the GRAAL adaptive optics system. It shows the cluster and its surrounding clouds of brightly glowing gas in exquisite detail, with dark tendrils of dust threading through the bright core of this young gathering of stars.
This image shows the star cluster RCW 38, as captured by the HAWK-I infrared imager mounted on ESO's Very Large Telescope (VLT) in Chile. By gazing into infrared wavelengths, HAWK-I can examine dust-shrouded star clusters like RCW 38, providing an unparalleled view of the stars forming within. This cluster contains hundreds of young, hot, massive stars, and lies some 5500 light-years away in the constellation of Vela (The Sails).

The central area of RCW 38 is visible here as a bright, blue-tinted region, an area inhabited by numerous very young stars and protostars that are still in the process of forming. The intense radiation pouring out from these newly born stars causes the surrounding gas to glow brightly. This is in stark contrast to the streams of cooler cosmic dust winding through the region, which glow gently in dark shades of red and orange. The contrast creates this spectacular scene -- a piece of celestial artwork.

Previous images of this region taken in optical wavelengths are strikingly different -- optical images appear emptier of stars due to dust and gas blocking our view of the cluster. Observations in the infrared, however, allow us to peer through the dust that obscures the view in the optical and delve into the heart of this star cluster.

HAWK-I is installed on Unit Telescope 4 (Yepun) of the VLT, and operates at near-infrared wavelengths. It has many scientific roles, including obtaining images of nearby galaxies or large nebulae as well as individual stars and exoplanets. GRAAL is an adaptive optics module which helps HAWK-I to produce these spectacular images. It makes use of four laser beams projected into the night sky, which act as artificial reference stars, used to correct for the effects of atmospheric turbulence -- providing a sharper image.

Read more at Science Daily

Did humans leave Africa earlier than previously thought?

Picture taken at the site of the discovery of ancient tools in China.
Ancient tools and bones discovered in China by archaeologists suggest early humans left Africa and arrived in Asia earlier than previously thought.

The artefacts show that our earliest human ancestors colonised East Asia over two million years ago. They were found by a Chinese team led by Professor Zhaoyu Zhu of the Chinese Academy of Sciences, which included Professor Robin Dennell of Exeter University. The tools were discovered at a locality called Shangchen in the southern Chinese Loess Plateau. The oldest are ca. 2.12 million years old, and are ca. 270,000 years older than the 1.85-million-year-old skeletal remains and stone tools from Dmanisi, Georgia, which were previously the earliest evidence of humanity outside Africa.

The artefacts include a notch, scrapers, cobbles, hammerstones and pointed pieces. All show signs of use -- the stone had been intentionally flaked. Most were made of quartzite and quartz that probably came from the foothills of the Qinling Mountains, 5 to 10 km to the south of the site, and from the streams flowing from them. Fragments of animal bones 2.12 million years old were also found.

The Chinese Loess Plateau covers about 270,000 square kilometres, and during the past 2.6 million years between 100 and 300 metres of wind-blown dust -- known as loess -- has been deposited in the area.

The 80 stone artefacts were found predominantly in 11 different layers of fossil soils which developed in a warm and wet climate. A further 16 items were found in six layers of loess that developed under colder and drier conditions. These 17 different layers of loess and fossil soils were formed during a period spanning almost a million years. This shows that early types of humans occupied the Chinese Loess Plateau under different climatic conditions between 1.2 and 2.12 million years ago.

The layers containing these stone tools were dated by linking the magnetic properties of the layers to known and dated changes in Earth's magnetic field.

Read more at Science Daily

Humans did not stem from a single ancestral population in one region of Africa

Middle Stone Age cultural artefacts from northern and southern Africa.
A scientific consortium led by Dr. Eleanor Scerri, British Academy Postdoctoral Fellow at the University of Oxford and researcher at the Max Planck Institute for the Science of Human History, has found that human ancestors were scattered across Africa, and largely kept apart by a combination of diverse habitats and shifting environmental boundaries, such as forests and deserts. Millennia of separation gave rise to a staggering diversity of human forms, whose mixing ultimately shaped our species.

While it is widely accepted that our species originated in Africa, less attention has been paid to how we evolved within the continent. Many had assumed that early human ancestors originated as a single, relatively large ancestral population, and exchanged genes and technologies like stone tools in a more or less random fashion.

In a paper published in Trends in Ecology and Evolution this week, this view is challenged, not only by the usual study of bones (anthropology), stones (archaeology) and genes (population genomics), but also by new and more detailed reconstructions of Africa's climates and habitats over the last 300,000 years.

One species, many origins

"Stone tools and other artifacts -- usually referred to as material culture -- have remarkably clustered distributions in space and through time," said Dr. Eleanor Scerri, researcher at the Max Planck Institute for the Science of Human History and the University of Oxford, and lead author of the study. "While there is a continental-wide trend towards more sophisticated material culture, this 'modernization' clearly doesn't originate in one region or occur at one time period."

Human fossils tell a similar story. "When we look at the morphology of human bones over the last 300,000 years, we see a complex mix of archaic and modern features in different places and at different times," said Prof. Chris Stringer, researcher at the London Natural History Museum and co-author on the study. "As with the material culture, we do see a continental-wide trend towards the modern human form, but different modern features appear in different places at different times, and some archaic features are present until remarkably recently."

The genes concur. "It is difficult to reconcile the genetic patterns we see in living Africans, and in the DNA extracted from the bones of Africans who lived over the last 10,000 years, with there being one ancestral human population," said Prof. Mark Thomas, geneticist at University College London and co-author on the study. "We see indications of reduced connectivity very deep in the past, some very old genetic lineages, and levels of overall diversity that a single population would struggle to maintain."

An ecological, biological and cultural patchwork

To understand why human populations were so subdivided, and how these divisions changed through time, the researchers looked at the past climates and environments of Africa, which give a picture of shifting and often isolated habitable zones. Many of the most inhospitable regions in Africa today, such as the Sahara, were once wet and green, with interwoven networks of lakes and rivers, and abundant wildlife. Similarly, some tropical regions that are humid and green today were once arid. These shifting environments drove subdivisions within animal communities and numerous sub-Saharan species exhibit similar phylogenetic patterns in their distribution.

The shifting nature of these habitable zones means that human populations would have gone through many cycles of isolation -- leading to local adaptation and the development of unique material culture and biological makeup -- followed by genetic and cultural mixing.

"Convergent evidence from these different fields stresses the importance of considering population structure in our models of human evolution," says co-author Dr. Lounes Chikhi of the CNRS in Toulouse and Instituto Gulbenkian de Ciência in Lisbon."This complex history of population subdivision should thus lead us to question current models of ancient population size changes, and perhaps re-interpret some of the old bottlenecks as changes in connectivity," he added.

Read more at Science Daily

Jul 10, 2018

Scientists discover the world's oldest colors

Biogeochemistry Lab Manager Janet Hope from the ANU Research School of Earth Sciences holds a vial of pink colored porphyrins representing the oldest intact pigments in the world.
Scientists from The Australian National University (ANU) and overseas have discovered the oldest colours in the geological record, 1.1 billion-year-old bright pink pigments extracted from rocks deep beneath the Sahara desert in Africa.

Dr Nur Gueneli from ANU said the pigments taken from marine black shales of the Taoudeni Basin in Mauritania, West Africa, were more than half a billion years older than previous pigment discoveries. Dr Gueneli discovered the molecules as part of her PhD studies.

"The bright pink pigments are the molecular fossils of chlorophyll that were produced by ancient photosynthetic organisms inhabiting an ancient ocean that has long since vanished," said Dr Gueneli from the ANU Research School of Earth Sciences.

The fossils range from blood red to deep purple in their concentrated form, and bright pink when diluted.

ANU led the research with support from Geoscience Australia and researchers in the United States and Japan.

The researchers crushed the billion-year-old rocks to powder, before extracting and analysing molecules of ancient organisms from them.

"The precise analysis of the ancient pigments confirmed that tiny cyanobacteria dominated the base of the food chain in the oceans a billion years ago, which helps to explain why animals did not exist at the time," Dr Gueneli said.

Senior lead researcher Associate Professor Jochen Brocks from ANU said that the emergence of large, active organisms was likely to have been restrained by a limited supply of larger food particles, such as algae.

"Algae, although still microscopic, are a thousand times larger in volume than cyanobacteria, and are a much richer food source," said Dr Brocks from the ANU Research School of Earth Sciences.

"The cyanobacterial oceans started to vanish about 650 million years ago, when algae began to rapidly spread to provide the burst of energy needed for the evolution of complex ecosystems, where large animals, including humans, could thrive on Earth."

From Science Daily

Every person has a unique brain anatomy

Three brain scans (from the front, side and above) of two different brains (pictured on the left and on the right) belonging to twins. The furrows and ridges are different in each person.
As with fingerprints, no two people have the same brain anatomy, a study by researchers of the University of Zurich has shown. This uniqueness is the result of a combination of genetic factors and individual life experiences.

The fingerprint is unique in every individual: As no two fingerprints are the same, they have become the go-to method of identity verification for police, immigration authorities and smartphone producers alike. But what about the central switchboard inside our heads? Is it possible to find out who a brain belongs to from certain anatomical features? This is the question posed by the group working with Lutz Jäncke, UZH professor of neuropsychology. In earlier studies, Jäncke had already been able to demonstrate that individual experiences and life circumstances influence the anatomy of the brain.

Experiences make their mark on the brain

Professional musicians, golfers or chess players, for example, have particular characteristics in the regions of the brain which they use the most for their skilled activity. However, events of shorter duration can also leave behind traces in the brain: If, for example, the right arm is kept still for two weeks, the thickness of the brain's cortex in the areas responsible for controlling the immobilized arm is reduced. "We suspected that those experiences having an effect on the brain interact with the genetic make-up so that over the course of years every person develops a completely individual brain anatomy," explains Jäncke.

Magnetic resonance imaging provides basis for calculations

To investigate their hypothesis, Jäncke and his research team examined the brains of nearly 200 healthy older people using magnetic resonance imaging three times over a period of two years. Over 450 brain anatomical features were assessed, including very general ones such as total volume of the brain, thickness of the cortex, and volumes of grey and white matter. For each of the 191 people, the researchers were able to identify an individual combination of specific brain anatomical characteristics, whereby the identification accuracy, even for the very general brain anatomical characteristics, was over 90 percent.
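
In spirit, this kind of identification works like nearest-neighbour matching on feature vectors: a person's later scan should lie closer to their own earlier scan than to anyone else's. The toy sketch below uses simulated data with the study's rough dimensions (191 people, 450 features); it illustrates the idea, not the researchers' actual pipeline.

```python
# Toy anatomical "fingerprinting": match each follow-up scan to the
# closest baseline feature vector. Simulated data, not the study's.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_features = 191, 450

baseline = rng.normal(size=(n_people, n_features))  # first scan per person
# Follow-up scans: same underlying anatomy plus measurement noise.
followup = baseline + 0.3 * rng.normal(size=(n_people, n_features))

# Distance from every follow-up scan to every baseline scan.
dists = np.linalg.norm(followup[:, None, :] - baseline[None, :, :], axis=2)
predicted = dists.argmin(axis=1)  # identify by nearest baseline
accuracy = (predicted == np.arange(n_people)).mean()
print(f"identification accuracy: {accuracy:.1%}")
```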

Combination of circumstances and genetics

"With our study we were able to confirm that the structure of people's brains is very individual," says Lutz Jäncke on the findings. "The combination of genetic and non-genetic influences clearly affects not only the functioning of the brain, but also its anatomy." The replacement of fingerprint sensors with MRI scans in the future is unlikely, however. MRIs are too expensive and time-consuming in comparison to the proven and simple method of taking fingerprints.

Read more at Science Daily

Rocky planet neighbor looks familiar, but is not Earth's twin

This artist's impression shows the temperate planet Ross 128 b, with its red dwarf parent star in the background.
Last autumn, the world was excited by the discovery of an exoplanet called Ross 128 b, which is just 11 light years away from Earth. New work from a team led by Diogo Souto of Brazil's Observatório Nacional and including Carnegie's Johanna Teske has for the first time determined detailed chemical abundances of the planet's host star, Ross 128.

Understanding which elements are present in a star and in what abundances can help researchers estimate the makeup of the exoplanets that orbit it, which can help predict how similar the planets are to Earth.

"Until recently, it was difficult to obtain detailed chemical abundances for this kind of star," said lead author Souto, who developed a technique to make these measurements last year.

Like the exoplanet's host star Ross 128, about 70 percent of all stars in the Milky Way are red dwarfs, which are much cooler and smaller than our Sun. Based on the results from large planet-search surveys, astronomers estimate that many of these red dwarf stars host at least one exoplanet. Several planetary systems around red dwarfs have been newsmakers in recent years, including Proxima b, a planet that orbits Proxima Centauri, the star nearest our own Sun, and the seven planets of TRAPPIST-1, a star not much larger than our Solar System's Jupiter.

Using the Sloan Digital Sky Survey's APOGEE spectroscopic instrument, the team measured the star's near-infrared light to derive abundances of carbon, oxygen, magnesium, aluminum, potassium, calcium, titanium, and iron.

"The ability of APOGEE to measure near-infrared light, where Ross 128 is brightest, was key for this study," Teske said. "It allowed us to address some fundamental questions about Ross 128 b's `Earth-like-ness'," Teske said.

When stars are young, they are surrounded by a disk of rotating gas and dust from which rocky planets accrete. The star's chemistry can influence the contents of the disk, as well as the resulting planet's mineralogy and interior structure. For example, the amount of magnesium, iron, and silicon in a planet will control the mass ratio of its internal core and mantle layers.

The team determined that Ross 128 has iron levels similar to our Sun. Although they were not able to measure its abundance of silicon, the ratio of iron to magnesium in the star indicates that the core of its planet, Ross 128 b, should be larger than Earth's.

Because they knew Ross 128 b's minimum mass and the star's chemical abundances, the team was also able to estimate a range for the planet's radius, which cannot be measured directly because of the way the planet's orbit is oriented around its star.

Knowing a planet's mass and radius is important to understanding what it's made of, because these two measurements can be used to calculate its bulk density. What's more, when quantifying planets in this way, astronomers have realized that planets with radii greater than about 1.7 times Earth's are likely surrounded by a gassy envelope, like Neptune, and those with smaller radii are likely to be rockier, like our own home planet.
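
As a back-of-the-envelope sketch of that bookkeeping (the values below are placeholders: a minimum mass of about 1.35 Earth masses has been published for Ross 128 b, while the radius here is simply an illustrative number in the rocky regime):

```python
# Bulk density from mass and radius, plus the ~1.7 Earth-radius rule of
# thumb mentioned above. Values are illustrative placeholders.
import math

M_EARTH = 5.972e24   # kg
R_EARTH = 6.371e6    # m

mass = 1.35 * M_EARTH     # published minimum mass of Ross 128 b (assumed here)
radius = 1.1 * R_EARTH    # placeholder radius within the rocky regime

density = mass / ((4.0 / 3.0) * math.pi * radius**3)
print(f"bulk density: {density:.0f} kg/m^3 (Earth: ~5514 kg/m^3)")
print("likely rocky" if radius < 1.7 * R_EARTH else "likely gassy envelope")
```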

The estimated radius of Ross 128 b indicates that it should be rocky.

Lastly, by measuring the temperature of Ross 128 and estimating the radius of the planet, the team was able to determine how much of the host star's light should reflect off the surface of Ross 128 b, revealing that our second-closest rocky neighbor likely has a temperate climate.

Read more at Science Daily

Oxygen levels on early Earth rose, fell several times before the Great Oxidation Event

The Jeerinah Formation in Western Australia, where a UW-led team found a sudden shift in nitrogen isotopes. "Nitrogen isotopes tell a story about oxygenation of the surface ocean, and this oxygenation spans hundreds of kilometers across a marine basin and lasts for somewhere less than 50 million years," said lead author Matt Koehler.
Earth's oxygen levels rose and fell more than once hundreds of millions of years before the planetwide success of the Great Oxidation Event about 2.4 billion years ago, new research from the University of Washington shows.

The evidence comes from a new study that indicates a second and much earlier "whiff" of oxygen in Earth's distant past -- in the atmosphere and on the surface of a large stretch of ocean -- showing that the oxygenation of the Earth was a complex process of repeated trying and failing over a vast stretch of time.

The finding also may have implications in the search for life beyond Earth. Coming years will bring powerful new ground- and space-based telescopes able to analyze the atmospheres of distant planets. This work could help keep astronomers from unduly ruling out "false negatives," or inhabited planets that may not at first appear to be so due to undetectable oxygen levels.

"The production and destruction of oxygen in the ocean and atmosphere over time was a war with no evidence of a clear winner, until the Great Oxidation Event," said Matt Koehler, a UW doctoral student in Earth and space sciences and lead author of a new paper published the week of July 9 in the Proceedings of the National Academy of Sciences.

"These transient oxygenation events were battles in the war, when the balance tipped more in favor of oxygenation."

In 2007, co-author Roger Buick, UW professor of Earth and space sciences, was part of an international team of scientists that found evidence of an episode -- a "whiff" -- of oxygen some 50 million to 100 million years before the Great Oxidation Event. This they learned by drilling deep into sedimentary rock of the Mount McRae Shale in Western Australia and analyzing the samples for the trace metals molybdenum and rhenium, accumulation of which is dependent on oxygen in the environment.

Now, a team led by Koehler has confirmed a second such appearance of oxygen in Earth's past, this time roughly 150 million years earlier -- or about 2.66 billion years ago -- and lasting for less than 50 million years. For this work they used two different proxies for oxygen -- nitrogen isotopes and the element selenium -- substances that, each in its way, also tell of the presence of oxygen.

"What we have in this paper is another detection, at high resolution, of a transient whiff of oxygen," said Koehler. "Nitrogen isotopes tell a story about oxygenation of the surface ocean, and this oxygenation spans hundreds of kilometers across a marine basin and lasts for somewhere less than 50 million years."

The team analyzed drill samples taken by Buick in 2012 at another site in the northwestern part of Western Australia called the Jeerinah Formation.

The researchers drilled two cores about 300 kilometers apart but through the same sedimentary rocks -- one core samples sediments deposited in shallower waters, and the other samples sediments from deeper waters. Analyzing successive layers in the rocks shows, Buick said, a "stepwise" change in nitrogen isotopes "and then back again to zero. This can only be interpreted as meaning that there is oxygen in the environment. It's really cool -- and it's sudden."

The nitrogen isotopes reveal the activity of certain marine microorganisms that use oxygen to form nitrate, and other microorganisms that use this nitrate for energy. The data collected from nitrogen isotopes sample the surface of the ocean, while selenium suggests oxygen in the air of ancient Earth. Koehler said the deep ocean was likely anoxic, or without oxygen, at the time.

The team found plentiful selenium in the shallow hole only, meaning that it came from the nearby land, not making it to deeper water. Selenium is held in sulfur minerals on land; higher atmospheric oxygen would cause more selenium to be leached from the land through oxidative weathering -- "the rusting of rocks," Buick said -- and transported to sea.

"That selenium then accumulates in ocean sediments," Koehler said. "So when we measure a spike in selenium abundances in ocean sediments, it could mean there was a temporary increase in atmospheric oxygen."

The finding, Buick and Koehler said, also has relevance for detecting life on exoplanets, or those beyond the solar system.

"One of the strongest atmospheric biosignatures is thought to be oxygen, but this study confirms that during a planet's transition to becoming permanently oxygenated, its surface environments may be oxic for intervals of only a few million years and then slip back into anoxia," Buick said.

"So, if you fail to detect oxygen in a planet's atmosphere, that doesn't mean that the planet is uninhabited or even that it lacks photosynthetic life. Merely that it hasn't built up enough sources of oxygen to overwhelm the 'sinks' for any longer than a short interval.

Read more at Science Daily

Jul 9, 2018

Generating electrical power from waste heat

This tiny silicon-based device developed at Sandia National Laboratories can catch and convert waste heat into electrical power. The rectenna, short for rectifying antenna, is made of common aluminum, silicon and silicon dioxide using standard processes from the integrated circuit industry.
Directly converting electrical power to heat is easy. It regularly happens in your toaster, that is, if you make toast regularly. The opposite, converting heat into electrical power, isn't so easy.

Researchers from Sandia National Laboratories have developed a tiny silicon-based device that can harness what was previously called waste heat and turn it into DC power. Their advance was recently published in Physical Review Applied.

"We have developed a new method for essentially recovering energy from waste heat. Car engines produce a lot of heat and that heat is just waste, right? So imagine if you could convert that engine heat into electrical power for a hybrid car. This is the first step in that direction, but much more work needs to be done," said Paul Davids, a physicist and the principal investigator for the study.

"In the short term we're looking to make a compact infrared power supply, perhaps to replace radioisotope thermoelectric generators." Called RTGs, the generators are used for such tasks as powering sensors for space missions that don't get enough direct sunlight to power solar panels.

Davids' device is made of common and abundant materials, such as aluminum, silicon and silicon dioxide -- or glass -- combined in very uncommon ways.

Silicon device catches, channels and converts heat into power

Smaller than a pinkie nail, the device is about 1/8 inch by 1/8 inch, half as thick as a dime and metallically shiny. The top is aluminum that is etched with stripes roughly 20 times smaller than the width of a human hair. This pattern, though far too small to be seen by eye, serves as an antenna to catch the infrared radiation.

Between the aluminum top and the silicon bottom is a very thin layer of silicon dioxide. This layer is about 20 silicon atoms thick, or 16,000 times thinner than a human hair. The patterned and etched aluminum antenna channels the infrared radiation into this thin layer.

The infrared radiation trapped in the silicon dioxide creates very fast electrical oscillations, about 50 trillion times a second. This pushes electrons back and forth between the aluminum and the silicon in an asymmetric manner. This process, called rectification, generates net DC electrical current.
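
Those figures hang together, as a quick sanity check shows (the ~100-micrometer hair width and ~0.27-nanometer silicon atomic spacing are our assumptions, not values from the article):

```python
# Sanity-checking the quoted scales of the rectenna.
C = 3.0e8                  # speed of light, m/s

hair = 100e-6              # assumed human hair width, m
stripe = hair / 20         # "20 times smaller than a human hair"
oxide = hair / 16_000      # "16,000 times thinner than a human hair"
si_spacing = 0.27e-9       # rough Si interatomic spacing, m

freq = 50e12               # "50 trillion times a second" = 50 THz
wavelength = C / freq

print(f"antenna stripes: ~{stripe * 1e6:.0f} micrometers")
print(f"oxide layer: ~{oxide * 1e9:.1f} nm (~{oxide / si_spacing:.0f} Si atoms)")
print(f"50 THz corresponds to ~{wavelength * 1e6:.0f} micrometer (mid-infrared) light")
```

Note that the stripe pitch comes out comparable to the ~6-micrometer wavelength, which is what one would expect of a structure acting as an antenna for that radiation.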

The team calls its device an infrared rectenna, a portmanteau of rectifying antenna. It is a solid-state device with no moving parts to jam, bend or break, and doesn't have to directly touch the heat source, which can cause thermal stress.

Infrared rectenna production uses common, scalable processes

Because the team makes the infrared rectenna with the same processes used by the integrated circuit industry, it's readily scalable, said Joshua Shank, electrical engineer and the paper's first author, who tested the devices and modeled the underlying physics while he was a Sandia postdoctoral fellow.

He added, "We've deliberately focused on common materials and processes that are scalable. In theory, any commercial integrated circuit fabrication facility could make these rectennas."

That isn't to say creating the current device was easy. Rob Jarecki, the fabrication engineer who led process development, said, "There's immense complexity under the hood and the devices require all kinds of processing tricks to build them."

One of the biggest fabrication challenges was inserting small amounts of other elements into the silicon, or doping it, so that it would reflect infrared light like a metal, said Jarecki. "Typically you don't dope silicon to death, you don't try to turn it into a metal, because you have metals for that. In this case we needed it doped as much as possible without wrecking the material."

The devices were made at Sandia's Microsystems Engineering, Science and Applications Complex. The team has been issued a patent for the infrared rectenna and has filed several additional patents.

The version of the infrared rectenna the team reported in Physical Review Applied produces 8 nanowatts of power per square centimeter from a specialized heat lamp at 840 degrees. For context, a typical solar-powered calculator uses about 5 microwatts, so a sheet of infrared rectennas slightly larger than a standard piece of paper would be needed to power a calculator. The team accordingly has many ideas for future improvements to make the infrared rectenna more efficient.
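
The paper-sheet comparison follows directly from the quoted figures:

```python
# Rectenna area needed to run a ~5-microwatt calculator at 8 nW/cm^2.
power_needed = 5e-6      # W, typical solar-powered calculator
power_density = 8e-9     # W per cm^2, reported output
area = power_needed / power_density
print(f"area needed: {area:.0f} cm^2")
# ~625 cm^2; a US letter sheet is ~603 cm^2 and A4 is ~624 cm^2,
# hence "slightly larger than a standard piece of paper."
```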

Future work to improve infrared rectenna efficiency

These ideas include making the rectenna's top pattern two-dimensional X shapes instead of one-dimensional stripes, in order to absorb infrared light over all polarizations; redesigning the rectifying layer to be a full-wave rectifier instead of the current half-wave rectifier; and making the infrared rectenna on a thinner silicon wafer to minimize power loss due to resistance.

Through improved design and greater conversion efficiency, the power output per unit area will increase. Davids thinks that within five years, the infrared rectenna may be a good alternative to RTGs for compact power supplies.

Shank said, "We need to continue to improve in order to be comparable to RTGs, but the rectennas will be useful for any application where you need something to work reliably for a long time and where you can't go in and just change the battery. However, we're not going to be an alternative for solar panels as a source of grid-scale power, at least not in the near term."

Davids added, "We've been whittling away at the problem and now we're beginning to get to the point where we're seeing relatively large gains in power conversion, and I think that there's a path forward as an alternative to thermoelectrics. It feels good to get to this point. It would be great if we could scale it up and change the world."

Read more at Science Daily

Senolytic drugs reverse damage caused by senescent cells in mice

Mouse.
Injecting senescent cells into young mice results in a loss of health and function but treating the mice with a combination of two existing drugs cleared the senescent cells from tissues and restored physical function. The drugs also extended both life span and health span in naturally aging mice, according to a new study in Nature Medicine, published on July 9, 2018.

A research team led by James L. Kirkland, M.D., Ph.D., of the Mayo Clinic in Rochester, Minnesota, found that injecting even a small number of senescent cells into young, healthy mice causes damage that can result in physical dysfunction. The researchers also found that treatment with a combination of dasatinib and quercetin could prevent cell damage, delay physical dysfunction, and, when used in naturally aging mice, extend their life span.

"This study provides compelling evidence that targeting a fundamental aging process -- in this case, cell senescence in mice -- can delay age-related conditions, resulting in better health and longer life," said NIA Director Richard J. Hodes, M.D. "This study also shows the value of investigating biological mechanisms which may lead to better understanding of the aging process."

Many normal cells continuously grow, die, and replicate. Cell senescence is a process in which cells lose function, including the ability to divide and replicate, but are resistant to cell death. Such cells have been shown to affect neighboring ones because they secrete several pro-inflammatory and tissue remodeling molecules. Senescent cells increase in many tissues with aging; they also occur in organs associated with many chronic diseases and after radiation or chemotherapy.

Senolytics are a class of drugs that selectively eliminate senescent cells. In this study, Kirkland's team used a combination of dasatinib and quercetin (D+Q) to test whether this senolytic combination could slow physical dysfunction caused by senescent cells. Dasatinib is used to treat some forms of leukemia; quercetin is a plant flavonol found in some fruits and vegetables.

To determine whether senescent cells caused physical dysfunction, the researchers first injected young (four-month-old) mice with either senescent (SEN) cells or non-senescent control (CON) cells. As early as two weeks after transplantation, the SEN mice showed impaired physical function as determined by maximum walking speed, muscle strength, physical endurance, daily activity, food intake, and body weight. In addition, the researchers saw increased numbers of senescent cells, beyond what was injected, suggesting a propagation of the senescence effect into neighboring cells.

To then analyze whether a senolytic compound could stop or delay physical dysfunction, researchers treated both SEN and CON mice for three days with the D+Q compound mix. They found that D+Q selectively killed senescent cells and slowed the deterioration in walking speed, endurance, and grip strength in the SEN mice.

In addition to young mice injected with senescent cells, the researchers also tested older (20-month-old), non-transplanted mice with D+Q intermittently for 4 months. D+Q alleviated normal age-related physical dysfunction, resulting in higher walking speed, treadmill endurance, grip strength, and daily activity.

Finally, the researchers found that treating very old (24- to 27-month-old) mice with D+Q biweekly led to a 36 percent higher average post-treatment life span and a lower mortality hazard than in control mice. This indicates that senolytics can reduce the risk of death in old mice.

"This is exciting research," said Felipe Sierra, Ph.D., director of NIA's Division of Aging Biology. "This study clearly demonstrates that senolytics can relieve physical dysfunction in mice. Additional research will be necessary to determine if compounds, like the one used in this study, are safe and effective in clinical trials with people."

The researchers noted that current and future preclinical studies may show that senolytics could be used to enhance life span not only in older people, but also in cancer survivors treated with senescence-inducing radiation or chemotherapy and people with a range of senescence-associated chronic diseases.

Read more at Science Daily

Jumping genes: Cross species transfer of genes has driven evolution

A graphic representation of the BovB element, showing how it has appeared in species that are far apart on the evolutionary tree -- for example sea urchins and elephants, cows and snakes.
Genomes are far from being just the product of our parents: University of Adelaide scientists have shown that widespread transfer of genes between species has radically changed the genomes of today's mammals and has been an important driver of evolution.

In the world's largest study of so-called "jumping genes," the researchers have traced two particular jumping genes across 759 species of plants, animals and fungi. These jumping genes are actually small pieces of DNA that can copy themselves throughout a genome and are known as transposable elements.

They have found that cross-species transfers, even between plants and animals, have occurred frequently throughout evolution.

Both of the transposable elements they traced -- L1 and BovB -- entered mammals as foreign DNA. This is the first time anyone has shown that the L1 element, important in humans, has jumped between species.

"Jumping genes, properly called retrotransposons, copy and paste themselves around genomes, and in genomes of other species. How they do this is not yet known although insects like ticks or mosquitoes or possibly viruses may be involved -- it's still a big puzzle," says project leader Professor David Adelson, Director of the University of Adelaide's Bioinformatics Hub.

"This process is called horizontal transfer, differing from the normal parent-offspring transfer, and it's had an enormous impact on mammalian evolution."

For example, Professor Adelson says, 25% of the genome of cows and sheep is derived from jumping genes.

"Think of a jumping gene as a parasite," says Professor Adelson. "What's in the DNA is not so important -- it's the fact that they introduce themselves into other genomes and cause disruption of genes and how they are regulated."

In the study, published today in the journal Genome Biology in collaboration with the South Australian Museum, the researchers found that horizontal gene transfer has been much more widespread than previously thought.

"L1 elements were thought to be inherited only from parent to offspring," says lead author Dr Atma Ivancevic, postdoctoral researcher in the University of Adelaide's Medical School. "Most studies have only looked at a handful of species and found no evidence of transfer. We looked at as many species as we could."

L1 elements in humans have been associated with cancer and neurological disorders. The researchers say that understanding the inheritance of this element is important for understanding the evolution of diseases.

The researchers found L1s are abundant in plants and animals, though they appear only sporadically in fungi. But the most surprising result was the lack of L1s in two key mammal species -- the Australian monotremes (platypus and echidna) -- showing that the gene entered the mammalian evolutionary pathway after the divergence from monotremes.

"We think the entry of L1s into the mammalian genome was a key driver of the rapid evolution of mammals over the past 100 million years," says Professor Adelson.

The team also looked at the transfer of BovB elements between species. BovB is a much younger jumping gene: it was first discovered in cows, but has since been shown to jump between a bizarre array of animals including reptiles, elephants and marsupials. Earlier research, led by Professor Adelson, found that ticks were the most likely facilitators of cross-species BovB transfer.

The new research extended the analysis to find that BovB has jumped even more widely than previously anticipated. BovB has transferred at least twice between frogs and bats, and new potential vector species include bed bugs, leeches and locusts.

The team believes that studying insect species will help find more evidence of cross-species transfer. They also aim to study other jumping genes and explore the possibility of aquatic vectors, such as sea worms and nematodes.

Read more at Science Daily

Plasma-spewing quasar shines light on universe's youth, early galaxy formation

This is an artist's conception of a radio jet spewing out fast-moving material from the newly discovered quasar, which formed within the first billion years of the universe's existence.
Carnegie's Eduardo Bañados led a team that found a quasar with the brightest radio emission ever observed in the early universe, due to it spewing out a jet of extremely fast-moving material.

Bañados' discovery was followed up by Emmanuel Momjian of the National Radio Astronomy Observatory, whose observations allowed the team to see in unprecedented detail the jet shooting out of a quasar that formed within the universe's first billion years of existence.

The findings, published in two papers in The Astrophysical Journal, will allow astronomers to better probe the universe's youth during an important period of transition to its current state.

Quasars consist of enormous black holes accreting matter at the centers of massive galaxies. This newly discovered quasar, called PSO J352.4034-15.3373 (P352-15 for short), is one of a rare breed that doesn't just swallow matter into the black hole but also emits a jet of plasma traveling at speeds approaching that of light. This jet makes it extremely bright in the frequencies detected by radio telescopes. Although quasars were identified more than 50 years ago by their strong radio emissions, we now know that only about 10 percent of them are strong radio emitters.

What's more, this newly discovered quasar's light has been traveling for nearly 13 billion of the universe's 13.7 billion years to reach us here on Earth. P352-15 is the first quasar with clear evidence of radio jets seen within the first billion years of the universe's history.
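
For a sense of scale, a travel time of nearly 13 billion years can be inverted into a redshift with the astropy library; Planck15 here is a stand-in for whatever cosmological parameters the authors adopted, and the 12.9-billion-year input is just our reading of "nearly 13 billion":

```python
# Invert a lookback time into a redshift (illustrative, not the paper's fit).
import astropy.units as u
from astropy.cosmology import Planck15, z_at_value

z = z_at_value(Planck15.lookback_time, 12.9 * u.Gyr)
print(f"z ~ {z:.1f}")   # roughly z ~ 6: within the universe's first billion years
```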

"There is a dearth of known strong radio emitters from the universe's youth and this is the brightest radio quasar at that epoch by an order of magnitude," Bañados said.

"This is the most-detailed image yet of such a bright galaxy at this great distance," Momjian added.

The Big Bang started the universe as a hot soup of extremely energetic particles that were rapidly expanding. As it expanded, it cooled and coalesced into neutral hydrogen gas, which left the universe dark, without any luminous sources, until gravity condensed matter into the first stars and galaxies. About 800 million years after the Big Bang, the energy released by these first galaxies caused the neutral hydrogen that was scattered throughout the universe to get excited and lose an electron, or ionize, a state that the gas has remained in since that time.

It's highly unusual to find radio jet-emitting quasars such as this one from the period just after the universe's lights came back on.

Read more at Science Daily

Jul 8, 2018

Milky Way type dust particles discovered in a galaxy 11 billion light years from Earth

The discovery of the afterglow. To the left is an image from the so-called Pan-STARRS telescope in Hawaii taken before the explosion. To the right is an image of the same part of the sky taken with the Nordic Optical Telescope a few minutes after the explosion was registered by the Swift satellite.
An international research team, with participation from the Niels Bohr Institute at the University of Copenhagen, has found the same type of interstellar dust that we know from the Milky Way in a distant galaxy 11 billion light years from Earth. This type of dust has been found to be rare in other galaxies and the new discovery plays an important role in understanding what it takes for this particular type of interstellar dust to be formed.

Dust in galaxies

Galaxies are complex structures comprised of many individual parts, such as stars, gas, dust and dark matter. Even though the dust only represents a small part of the total amount of matter in a galaxy, it plays a major role in how stars are formed and how the light from the stars escapes the galaxies. Dust grains can both absorb and scatter light. Dust particles also play a decisive role in the formation of planets and thus also for the understanding of our own existence on Earth.

How do you measure dust 11 billion light years away?

The dust in galaxies consists of small grains of carbon, silicon, iron, aluminium and other heavier elements. The Milky Way has a very high content of carbonaceous dust, which has been shown to be very rare in other galaxies. But now a similar type of dust has been found in a few very distant galaxies that researchers have been able to investigate using light from gamma-ray bursts. Gamma-ray bursts come from massive stars that explode when the fuel in their cores is exhausted. The explosion causes the dying star to emit a powerful burst of light that astronomers can use to analyse what its host galaxy is comprised of. Specifically, they can measure the elemental content and work their way forward to the properties of the dust by examining the light that escapes from the galaxy.

The carbonaceous dust registers in the measurements as a "dust bump" -- an excess of absorption at ultraviolet wavelengths that is characteristic of dust with this composition. This ultraviolet dust bump has now been detected in a gamma-ray burst named GRB180325A, and the result has just been accepted for publication in the journal Astrophysical Journal Letters. The lead author is Tayyaba Zafar, who completed her PhD studies at the Niels Bohr Institute in Copenhagen and is now working at the Anglo-Australian Observatory in Australia. Several other researchers from NBI are co-authors of the article.

Collaboration between observatories

GRB180325A was detected by the Neil Gehrels Swift Observatory (NASA) on 28 March 2018. Swift is a satellite mission that detects gamma rays from dying stars. When the satellite registers such a detection, a hectic period begins: the astronomers try to observe that part of the sky as quickly as possible in order to secure the crucial information that allows them to study the interior of the galaxy the explosion originated from. In this case Kasper Heintz, who did his master's thesis at the Niels Bohr Institute and is now a PhD student at the University of Iceland, was on duty. He activated the Nordic Optical Telescope (NOT) at La Palma, where Professor Johan Fynbo from the Niels Bohr Institute was observing for another project. The first observations of the light from the gamma-ray burst were secured only a few minutes after the discovery by Swift.

The observations from NOT showed that the star had exploded in a galaxy with a redshift of 2.25, which means that the light had travelled for approximately 11 billion years to reach us. The observations immediately showed that the dust bump known from the Milky Way was present in this galaxy. The team then observed the gamma-ray burst with the X-shooter spectrograph on ESO's Very Large Telescope (European Southern Observatory) at Cerro Paranal in Chile. All in all, four spectra of the afterglow from the gamma-ray burst were secured -- all with a clear detection of the dust bump.
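
The quoted travel time follows from the redshift; it can be reproduced with astropy's stock Planck15 cosmology (a stand-in for the authors' exact parameters):

```python
# Redshift -> light-travel (lookback) time for GRB180325A's host galaxy.
from astropy.cosmology import Planck15

z = 2.25
t = Planck15.lookback_time(z)
print(f"lookback time at z = {z}: {t:.1f}")   # ~10.9 Gyr, i.e. ~11 billion years
```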

"It is a beautiful example of how observations in space and around the world can work together and create breakthroughs in research. The work also gives cause to express great thanks to the Carlsberg Foundation, without which Danish astronomy would neither have access to the Very Large Telescope nor NOT," says Professor Johan Fynbo.

"Our spectra show that the presence of atomic carbon seems to be a requirement for the dust that causes the dust bump to be formed," says Kasper Heintz.

The dust bump has previously been seen in observations of four other gamma-ray bursts, the last of which was detected 10 years ago.

Read more at Science Daily