Jul 27, 2019

Top tools for pinpointing genetic drivers of disease

Published in Nature Communications, the study is the largest of its kind and was led by Walter and Eliza Hall Institute computational biologists Professor Tony Papenfuss, Dr Daniel Cameron and Mr Leon Di Stefano.

The new study reveals the world's top genomic rearrangement detection tools, providing summaries of their performance and recommendations for their use. Dr Cameron said the study could ultimately help clinicians determine the best treatments for their patients.

"Basically, you have to understand what is going wrong before you can work out how to fix the problem. In the context of cancer for instance, an understanding of the genetic mutations driving tumour growth could help oncologists determine the most appropriate treatment for their patients," he said.

To determine the best genomic rearrangement detection methods, the researchers comprehensively tested 12 of the most widely used tools to see which ones could accurately identify the differences between a patient's genetic information and the standard human reference genome. The findings revealed that a tool called GRIDSS, developed by Professor Papenfuss and Dr Cameron, was one of the best performing options, most accurately able to detect DNA rearrangements.
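
To give a feel for how such a benchmark can be scored, here is a hypothetical, minimal sketch (not the study's actual pipeline; the coordinates, tool output and 100-base-pair tolerance are illustrative assumptions): a tool's predicted rearrangement breakpoints are compared against a truth set, and precision and recall are computed by matching calls within a positional tolerance.

```python
# Hypothetical sketch (not the study's actual pipeline): scoring a structural-variant
# caller against a truth set by matching breakpoints within a positional tolerance.
# The breakpoint coordinates and the 100 bp tolerance are illustrative assumptions.

def match_breakpoints(truth, calls, tolerance=100):
    """Count true positives: truth breakpoints with a call on the same chromosome
    within `tolerance` base pairs. Each call may match at most one truth event."""
    unused_calls = list(calls)
    true_positives = 0
    for chrom, pos in truth:
        for i, (call_chrom, call_pos) in enumerate(unused_calls):
            if call_chrom == chrom and abs(call_pos - pos) <= tolerance:
                true_positives += 1
                unused_calls.pop(i)
                break
    return true_positives

# Toy data: (chromosome, breakpoint position) pairs.
truth_set  = [("chr1", 150_000), ("chr2", 2_300_500), ("chr7", 88_120_000)]
tool_calls = [("chr1", 150_030), ("chr2", 2_300_450), ("chr9", 4_000_000)]

tp = match_breakpoints(truth_set, tool_calls)
precision = tp / len(tool_calls)   # fraction of calls that are real
recall    = tp / len(truth_set)    # fraction of real events that were found
print(f"precision={precision:.2f} recall={recall:.2f}")   # precision=0.67 recall=0.67
```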

Dr Cameron said the study would not have been possible without the Institute's high-performance computing resource.

"Over the course of two years, we tested 12 of the most popular genomic rearrangement detection tools, generating more than 50 terabytes of data, to determine which tools perform well, and when they perform badly. Without these computing resources, we estimate the study would have taken us more than ten years," he said.

The Institute's Theme Leader for Computational Biology, Professor Papenfuss, said computational methods were required more than ever before to make sense of the vast and complex datasets being generated by research.

"Computational studies like this one keep the field up to date with best practice approaches for data analysis. This particular study provides a comprehensive resource for users of genomic rearrangement detection methods, as well as developers in the field. It will also help to direct the next iteration of genomic rearrangement tool development at the Institute," he said.

As new experimental techniques and DNA sequencing machines become available, the very nature of the data they generate is changing. Professor Papenfuss said that older analysis tools, while heavily cited and widely used, could lead to erroneous interpretations if used on data produced by the latest DNA sequencing machines. "This is why it is so important for researchers to find the right match between the analysis tool and dataset at hand," he said.

From Science Daily

Production sites of stars are rare

Astronomers using the Nobeyama Radio Observatory (NRO) 45-m telescope found that high-density gas, the material for stars, accounts for only 3% of the total mass of gas distributed in the Milky Way. This result provides key information for understanding the unexpectedly low production rate of stars.

Stars are born in gas clouds. High-density gas pockets form within extended, low-density gas clouds, and stars form in the very dense gas cores that evolve within the high-density gas. However, observations of distant galaxies have detected roughly 1,000 times fewer stars than would be expected from the total amount of low-density gas. To interpret the discrepancy, observations that detect both the high-density and the low-density gas, with high spatial resolution and wide area coverage, were needed. Such observations are difficult, however, because the high-density gas structures are dozens of times smaller than the low-density gas structures.

The Milky Way survey project "FUGIN," conducted using the NRO 45-m telescope and the multi-beam receiver FOREST, overcame these difficulties. Kazufumi Torii, a project assistant professor at NAOJ, and his team analyzed the big data obtained in the FUGIN project and accurately measured the masses of the low-density and high-density gas across a large span of 20,000 light-years along the Milky Way. They revealed for the first time that the high-density gas accounts for only 3% of the total gas.

These results imply that the production rate of high-density gas in the low-density gas clouds is small, creating only a small number of opportunities to form stars. The research team will continue working on the FUGIN data to investigate the cause of the inefficient formation of the high-density gas.

These results were published as K. Torii et al., "FOREST Unbiased Galactic plane Imaging survey with the Nobeyama 45 m telescope (FUGIN). V. Dense gas mass fraction of molecular gas in the Galactic plane," online in Publications of the Astronomical Society of Japan in April 2019, and will appear in a special issue.

From Science Daily

Jul 26, 2019

Underwater glacial melting is occurring at higher rates than modeling predicts

LeConte Glacier, Alaska.
Researchers have developed a new method to allow for the first direct measurement of the submarine melt rate of a tidewater glacier, and, in doing so, they concluded that current theoretical models may be underestimating glacial melt by up to two orders of magnitude.

In a National Science Foundation-funded project, a team of scientists, led by University of Oregon oceanographer Dave Sutherland, studied the subsurface melting of the LeConte Glacier, which flows into LeConte Bay south of Juneau, Alaska.

The team's findings, which could lead to improved forecasting of climate-driven sea level rise, were published in the July 26 issue of the journal Science.

Direct melting measurements previously have been made on ice shelves in Antarctica by boring through to the ice-ocean interface beneath. In the case of vertical-face glaciers terminating at the ocean, however, those techniques are not available.

"We don't have that platform to be able to access the ice in this way," said Sutherland, a professor in the UO's Department of Earth Sciences. "Tidewater glaciers are always calving and moving very rapidly, and you don't want to take a boat up there too closely."

Most previous research on the underwater melting of glaciers relied on theoretical modeling, measuring conditions near the glaciers and then applying theory to predict melt rates. But this theory had never been directly tested.

"This theory is used widely in our field," said study co-author Rebecca H. Jackson, an oceanographer at Rutgers University who was a postdoctoral researcher at Oregon State University during the project. "It's used in glacier models to study questions like: how will the glacier respond if the ocean warms by one or two degrees?"

To test these models in the field, the research team of oceanographers and glaciologists deployed a multibeam sonar to scan the glacier's ocean-ice interface from a fishing vessel six times in August 2016 and five times in May 2017.

The sonar allowed the team to image and profile large swaths of the underwater ice, where the glacier drains from the Stikine Icefield. Also gathered were data on the temperature, salinity and velocity of the water downstream from the glacier, which allowed the researchers to estimate the meltwater flow.

They then looked for changes in melt patterns that occurred between the August and May measurements.

"We measured both the ocean properties in front of the glacier and the melt rates, and we found that they are not related in the way we expected," Jackson said. "These two sets of measurements show that melt rates are significantly, sometimes up to a factor of 100, higher than existing theory would predict."

There are two main categories of glacial melt: discharge-driven and ambient melt. Subglacial discharge occurs when large volumes, or plumes, of buoyant meltwater are released below the glacier. The plume combines with surrounding water to pick up speed and volume as it rises swiftly against the glacial face. The current steadily eats away at the glacier face, undercutting the glacier before eventually diffusing into the surrounding waters.

Most previous studies of ice-ocean interactions have focused on these discharge plumes. The plumes, however, typically affect only a narrow area of the glacier face, while ambient melt instead covers the rest of the glacier face.

Predictions have estimated ambient melt to be 10-100 times less than the discharge melt, and, as such, it is often disregarded as insignificant, said Sutherland, who heads the UO's Oceans and Ice Lab.

The research team found that submarine melt rates were high across the glacier's face over both of the seasons surveyed, and that the melt rate increases from spring to summer.

While the study focused on one marine-terminating glacier, Jackson said, the new approach should be useful to any researchers who are studying melt rates at other glaciers. That would help to improve projections of global sea level rise, she added.

Read more at Science Daily

Scientists find clue to 'maternal instinct'

Lab mouse and pups
Oxytocin is widely referred to as the love hormone and plays an important role in the regulation of social and maternal behavior. In recent years, the oxytocin system in the brain has received tremendous attention as key to new treatments for many mental health disorders, such as anxiety, autism spectrum disorders and postpartum depression. New research led by a biologist and his students at LSU has discovered a group of cells that are activated by oxytocin in one area of female mouse brains but are not present in the same area of male mouse brains.

"Many researchers have attempted to investigate the difference between the oxytocin system in females versus males, but no one has successfully found conclusive evidence until now. Our discovery was a big surprise," said Ryoichi Teruyama, LSU Department of Biological Sciences associate professor, who led this study published in PLOS ONE.

The oxytocin receptor cells are present in the brain area thought to be involved in the regulation of maternal behavior. Moreover, these cells express oxytocin receptors only when estrogen is also present. These findings imply that the cells are involved in inducing maternal behavior. In addition, they confirm what many recent human studies have shown: there is a connection between altered expression of oxytocin receptors and postpartum depression.

Postpartum depression contributes to poor maternal health and has negative effects on a child's development. A number of studies have found that children of depressed mothers are at risk for a wide range of cognitive, emotional, behavioral and medical problems. Therefore, postpartum depression is a major public health concern that has significant adverse effects on both mother and child. About 10 to 20 percent of women experience postpartum depression after childbirth.

This new discovery that occurred at LSU opens doors to potential new treatments and drugs for postpartum depression targeting oxytocin receptor cells.

"I think our discovery could be universal to all mammals that exhibit maternal behavior, including humans," Teruyama said.

Student researchers

Study co-author Ryan LeBlanc from Denham Springs was an undergraduate student researcher at LSU whose work was instrumental to this discovery. However, he had little previous research experience before joining Teruyama's lab.

Teruyama recalled that when LeBlanc first approached him to be his mentor, he asked him about his hobbies. LeBlanc said he liked to build plastic models of battleships.

"I certainly don't know much about battleship plastic models, but anyone who can assemble 500 to 2,000 plastic parts into models must be persistent, focused and exceedingly careful. I accepted him gladly thinking he is going to find something extraordinary, and I was right," Teruyama said.

LeBlanc took on the tedious task of finding and marking the exact location of thousands of oxytocin receptor cells with a red pen. He spent more than a month identifying the cells, which was instrumental to this discovery.

Read more at Science Daily

Einstein's general relativity theory is questioned but still stands for now

Black hole illustration
More than 100 years after Albert Einstein published his iconic theory of general relativity, it is beginning to fray at the edges, said Andrea Ghez, UCLA professor of physics and astronomy. Now, in the most comprehensive test of general relativity near the monstrous black hole at the center of our galaxy, Ghez and her research team report July 25 in the journal Science that Einstein's theory of general relativity holds up.

"Einstein's right, at least for now," said Ghez, a co-lead author of the research. "We can absolutely rule out Newton's law of gravity. Our observations are consistent with Einstein's theory of general relativity. However, his theory is definitely showing vulnerability. It cannot fully explain gravity inside a black hole, and at some point we will need to move beyond Einstein's theory to a more comprehensive theory of gravity that explains what a black hole is."

Einstein's 1915 theory of general relativity holds that what we perceive as the force of gravity arises from the curvature of space and time. The scientist proposed that objects such as the sun and the Earth change this geometry. Einstein's theory is the best description of how gravity works, said Ghez, whose UCLA-led team of astronomers has made direct measurements of the phenomenon near a supermassive black hole -- research Ghez describes as "extreme astrophysics."

The laws of physics, including gravity, should be valid everywhere in the universe, said Ghez, who added that her research team is one of only two groups in the world to watch a star known as S0-2 make a complete orbit in three dimensions around the supermassive black hole at the center of the Milky Way. The full orbit takes 16 years, and the black hole's mass is about four million times that of the sun.

The researchers say their work is the most detailed study ever conducted into the supermassive black hole and Einstein's theory of general relativity.

The key data in the research were spectra that Ghez's team analyzed this April, May and September as her "favorite star" made its closest approach to the enormous black hole. Spectra, which Ghez described as the "rainbow of light" from stars, show the intensity of light and offer important information about the star from which the light travels. Spectra also show the composition of the star. These data were combined with measurements Ghez and her team have made over the last 24 years.

Spectra -- collected at the W.M. Keck Observatory in Hawaii using a spectrograph built at UCLA by a team led by colleague James Larkin -- provide the third dimension, revealing the star's motion at a level of precision not previously attained. (Images of the star the researchers took at the Keck Observatory provide the two other dimensions.) Larkin's instrument takes light from a star and disperses it, similar to the way raindrops disperse light from the sun to create a rainbow, Ghez said.

"What's so special about S0-2 is we have its complete orbit in three dimensions," said Ghez, who holds the Lauren B. Leichtman and Arthur E. Levine Chair in Astrophysics. "That's what gives us the entry ticket into the tests of general relativity. We asked how gravity behaves near a supermassive black hole and whether Einstein's theory is telling us the full story. Seeing stars go through their complete orbit provides the first opportunity to test fundamental physics using the motions of these stars."

Ghez's research team was able to see the co-mingling of space and time near the supermassive black hole. "In Newton's version of gravity, space and time are separate, and do not co-mingle; under Einstein, they get completely co-mingled near a black hole," she said.

"Making a measurement of such fundamental importance has required years of patient observing, enabled by state-of-the-art technology," said Richard Green, director of the National Science Foundation's division of astronomical sciences. For more than two decades, the division has supported Ghez, along with several of the technical elements critical to the research team's discovery. "Through their rigorous efforts, Ghez and her collaborators have produced a high-significance validation of Einstein's idea about strong gravity."

Keck Observatory Director Hilton Lewis called Ghez "one of our most passionate and tenacious Keck users." "Her latest groundbreaking research," he said, "is the culmination of unwavering commitment over the past two decades to unlock the mysteries of the supermassive black hole at the center of our Milky Way galaxy."

The researchers studied photons -- particles of light -- as they traveled from S0-2 to Earth. S0-2 moves around the black hole at blistering speeds of more than 16 million miles per hour at its closest approach. Einstein had reported that in this region close to the black hole, photons have to do extra work. Their wavelength as they leave the star depends not only on how fast the star is moving, but also on how much energy the photons expend to escape the black hole's powerful gravitational field. Near a black hole, gravity is much stronger than on Earth.
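
As a rough illustration of the effect being measured, the sketch below combines the gravitational redshift term with the special-relativistic time-dilation term for S0-2 at closest approach. The black hole mass (about four million suns) and the star's speed (more than 16 million miles per hour) come from the article; the closest-approach distance of roughly 120 astronomical units is an assumption taken from published descriptions of the orbit, so treat the result as an order-of-magnitude estimate only.

```python
# Back-of-envelope sketch of the combined gravitational + special-relativistic
# redshift of S0-2's light at closest approach. The mass (~4 million suns) and
# speed (~16 million mph) come from the article; the closest-approach distance
# (~120 AU) is an assumption, not a figure from this article.

G     = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
c     = 2.998e8             # speed of light, m/s
M_sun = 1.989e30            # solar mass, kg

M = 4.0e6 * M_sun           # black hole mass
r = 120 * 1.496e11          # assumed closest-approach distance, m (~120 AU)
v = 16e6 * 1609.34 / 3600   # 16 million mph converted to m/s (~7,150 km/s)

grav = G * M / (r * c**2)   # gravitational redshift term
sr   = v**2 / (2 * c**2)    # special-relativistic (time-dilation) term

print(f"gravitational term ~ {grav:.1e}")
print(f"time-dilation term ~ {sr:.1e}")
print(f"total z ~ {grav + sr:.1e}")   # of order a few parts in 10,000
```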

Ghez was given the opportunity to present partial data last summer, but chose not to so that her team could thoroughly analyze the data first. "We're learning how gravity works. It's one of four fundamental forces and the one we have tested the least," she said. "There are many regions where we just haven't asked, how does gravity work here? It's easy to be overconfident and there are many ways to misinterpret the data, many ways that small errors can accumulate into significant mistakes, which is why we did not rush our analysis."

Ghez, a 2008 recipient of the MacArthur "Genius" Fellowship, studies more than 3,000 stars that orbit the supermassive black hole. Hundreds of them are young, she said, in a region where astronomers did not expect to see them.

It takes 26,000 years for the photons from S0-2 to reach Earth. "We're so excited, and have been preparing for years to make these measurements," said Ghez, who directs the UCLA Galactic Center Group. "For us, it's visceral, it's now -- but it actually happened 26,000 years ago!"

This is the first of many tests of general relativity Ghez's research team will conduct on stars near the supermassive black hole. Among the stars that most interest her is S0-102, which has the shortest orbit, taking 11 1/2 years to complete a full orbit around the black hole. Most of the stars Ghez studies have orbits of much longer than a human lifespan.

Ghez's team took measurements about every four nights during crucial periods in 2018 using the Keck Observatory -- which sits atop Hawaii's dormant Mauna Kea volcano and houses one of the world's largest and premier optical and infrared telescopes. Measurements are also taken with an optical-infrared telescope at Gemini Observatory and Subaru Telescope, also in Hawaii. She and her team have used these telescopes both on site in Hawaii and remotely from an observation room in UCLA's department of physics and astronomy.

Black holes have such high density that nothing can escape their gravitational pull, not even light. (They cannot be seen directly, but their influence on nearby stars is visible and provides a signature. Once something crosses the "event horizon" of a black hole, it will not be able to escape. However, the star S0-2 is still rather far from the event horizon, even at its closest approach, so its photons do not get pulled in.)

Ghez's co-authors include Tuan Do, lead author of the Science paper, a UCLA research scientist and deputy director of the UCLA Galactic Center Group; Aurelien Hees, a former UCLA postdoctoral scholar, now a researcher at the Paris Observatory; Mark Morris, UCLA professor of physics and astronomy; Eric Becklin, UCLA professor emeritus of physics and astronomy; Smadar Naoz, UCLA assistant professor of physics and astronomy; Jessica Lu, a former UCLA graduate student who is now a UC Berkeley assistant professor of astronomy; UCLA graduate student Devin Chu; Greg Martinez, UCLA project scientist; Shoko Sakai, a UCLA research scientist; Shogo Nishiyama, associate professor with Japan's Miyagi University of Education; and Rainer Schoedel, a researcher with Spain's Instituto de Astrofísica de Andalucía.

Read more at Science Daily

Strange bacteria hint at ancient origin of photosynthesis

Leaf in sunlight
Structures inside rare bacteria are similar to those that power photosynthesis in plants today, suggesting the process is older than assumed.

The finding could mean the evolution of photosynthesis needs a rethink, turning traditional ideas on their head.

Photosynthesis is the ability to use the Sun's energy to produce sugars via chemical reactions. Plants, algae, and some bacteria today perform 'oxygenic' photosynthesis, which splits water into oxygen and hydrogen to power the process, releasing oxygen as a waste product.

Some bacteria instead perform 'anoxygenic' photosynthesis, a version that uses molecules other than water to power the process and does not release oxygen.

Scientists have always assumed that anoxygenic photosynthesis is more 'primitive', and that oxygenic photosynthesis evolved from it. Under this view, anoxygenic photosynthesis emerged about 3.5 billion years ago and oxygenic photosynthesis evolved a billion years later.

However, by analysing structures inside an ancient type of bacteria, Imperial College London researchers have suggested that a key step in oxygenic photosynthesis may have already been possible a billion years before commonly thought.

The new research is published in the journal Trends in Plant Science.

Lead author of the study, Dr Tanai Cardona from the Department of Life Sciences at Imperial, said: "We're beginning to see that much of the established story about the evolution of photosynthesis is not supported by the real data we obtain about the structure and functioning of early bacterial photosynthesis systems."

The bacteria they studied, Heliobacterium modesticaldum, is found around hot springs, soils and waterlogged fields, where it performs anoxygenic photosynthesis. It is very distantly related to cyanobacteria, the main bacteria that performs oxygenic photosynthesis today.

It is so distantly related that it last had a 'common ancestor' with cyanobacteria billions of years ago. This means that any traits the two bacteria share are likely to also have been present in the ancient bacteria that gave rise to them both.

By analysing the structures that both H. modesticaldum and modern cyanobacteria use to perform their different types of photosynthesis, Dr Cardona found striking similarities.

Both structures contain a site that cyanobacteria and plants exclusively use to split water -- the first crucial step in oxygenic photosynthesis.

The evolution of cyanobacteria is usually assumed to also be the first appearance of oxygenic photosynthesis, but the fact that H. modesticaldum contains a similar site means that the building blocks for oxygenic photosynthesis are likely much more ancient than thought, as old as photosynthesis itself, and therefore could have arisen much earlier in Earth's history.

Dr Cardona also suggests that this might mean oxygenic photosynthesis was not the product of a billion years of evolution from anoxygenic photosynthesis, but could have been a trait that evolved much sooner, if not first.

Dr Cardona said: "This result helps explain in fantastic detail why the systems responsible for photosynthesis and oxygen production are the way they are today -- but for it to make sense it requires a change of perspective in the way we view the evolution of photosynthesis.

"Under the traditional view -- that anoxygenic photosynthesis evolved first and was the only type for about a billion years or more before oxygenic photosynthesis evolved -- these structures should not exist at all in this type of bacteria."

Read more at Science Daily

Jul 25, 2019

New telescope gives peek at the birth of the universe

The Square Kilometre Array (SKA) is set to become the largest radio telescope on Earth. Bielefeld University researchers together with the Max Planck Institute for Radio Astronomy (MPIfR) and international partners have now examined the SKA-MPG telescope -- a prototype for the part of the SKA that receives signals in the mid-frequency range. The study, published today (24 July) in the journal Monthly Notices of the Royal Astronomical Society, shows that the telescope is not only a prototype to test the SKA design, but can also be used on its own to provide insights into the origin of the Universe. The German Federal Ministry of Education and Research (BMBF) is funding the work on the SKA-MPG through a joint research project coordinated by Bielefeld University.

'The SKA-MPG telescope in South Africa will help us to understand the cosmic background radiation,' says Dr Aritra Basu, lead author of the study and physicist in Bielefeld University's Astroparticle Physics and Cosmology Working Group. The cosmic background radiation is light in the microwave range that was produced shortly after the Big Bang, and exploring it provides information about the origin of the Universe. 'However, measurements of the cosmic background radiation are distorted by other effects in the foreground, such as ultrafast electrons in the magnetic field of the Milky Way. In order to measure cosmic background radiation, we need to know more about these effects. Our study shows that the new telescope is excellent for investigating foreground radiation with ultra-high precision,' says Basu.

The SKA-MPG telescope was jointly developed by the Max Planck Institute for Radio Astronomy (MPIfR) in Bonn and MT-Mechatronics GmbH. The abbreviation 'MPG' stands for the Max Planck Society, which is funding the telescope. The radio telescope has a diameter of 15 metres and can receive signals between 1.7 and 3.5 GHz. It is currently being assembled in South Africa's Karoo desert. Gundolf Wieching of the MPIfR, project leader of the telescope, expects the first regular scientific use in autumn 2019.

The radio telescope is primarily designed as a prototype for the part of the SKA that receives signals in a medium radio frequency range. If the prototype performs well in a series of tests, around 200 such telescopes will be built for the SKA in South Africa. The SKA will observe medium as well as low radio frequencies. The low-frequency instrument will consist of thousands of small radio antennae that can be combined to simulate a huge radio telescope. The two parts of the SKA will then collect signals over one square kilometre in Australia and South Africa -- hence the name 'Square Kilometre Array'. 'Even with our prototype, we are able to look deep into the Universe thanks to a clever design for the telescope and new developments in receiver and backend technology,' says Dr Hans-Rainer Klöckner, astrophysicist at the MPIfR. 'I am curious to see what we will discover once 200 of these telescopes are synchronised for the SKA.' The SKA will be used, for example, to explore gravitational waves and dark energy, or to test Einstein's theory of relativity under extreme conditions.

The SKA will be the first global science organisation with locations on three continents: Australia, Africa, and Europe. In addition, data centres are being set up around the world. A special challenge lies in dealing with the enormous volume of data: the SKA will collect over 600 petabytes of observation data per year -- equivalent to the storage capacity of more than half a million laptops.
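
The laptop comparison is straightforward arithmetic; the quick check below makes the implied per-laptop storage explicit (it is whatever the comparison assumes, not a figure from the article).

```python
# Quick arithmetic check of the data-volume comparison. The per-laptop storage
# is implied by the comparison itself, not stated in the article.
data_per_year_pb = 600          # petabytes of observation data per year
laptops = 500_000               # "more than half a million laptops"

bytes_per_year = data_per_year_pb * 1e15
per_laptop_tb = bytes_per_year / laptops / 1e12
print(f"implied storage per laptop: ~{per_laptop_tb:.1f} TB")   # ~1.2 TB
```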

Read more at Science Daily

Cold, dry planets could have a lot of hurricanes

Nearly every atmospheric science textbook ever written will say that hurricanes are an inherently wet phenomenon -- they use warm, moist air for fuel. But according to new simulations, the storms can also form in very cold, dry climates.

A climate as cold and dry as the one in the study is unlikely to ever become the norm on Earth, especially as climate change is making the world warmer and wetter. But the findings could have implications for storms on other planets and for the intrinsic properties of hurricanes that most scientists and educators currently believe to be true.

"We have theories for how hurricanes work at temperatures that we're used to experiencing on Earth, and theoretically, they should still apply if we move to a colder and drier climate," said Dan Chavas, an assistant professor of earth, atmospheric and planetary sciences at Purdue University. "We wanted to know if hurricanes really need water. And we've shown that they don't -- but in a very different world."

In the world we live in now, hurricanes need water. When they reach land, they die because they run out of the water they evaporate for energy -- but that doesn't have to be the case. The findings were published in the Journal of the Atmospheric Sciences.

"Just because there isn't something changing phase between liquid and vapor doesn't mean a hurricane can't form," Chavas said.

In collaboration with Timothy Cronin, an assistant professor of atmospheric science at MIT, he used a computer model that mimics a very basic atmosphere and constantly generates hurricanes. In a general hurricane scenario, this looks like a box with ocean at the bottom, but Chavas tweaked it to dry out the surface or cool it below temperatures that usually generate hurricanes.

The coldest simulations were run at 240 degrees Kelvin (-28 F) and produced a shocking number of cyclones. These cold, dry storms were generally smaller and weaker than the hurricanes on Earth, but they formed at a higher frequency.

As the temperature drops, the air can hold less water, which explains why cold temperatures and dry surfaces yield similar results in experiments. At 240 degrees K, air can hold roughly 100 times less water vapor than at temperatures typical of the modern tropics.
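
That factor of roughly 100 follows from the Clausius-Clapeyron relation, which governs how saturation vapor pressure falls with temperature. The sketch below is a back-of-envelope check, assuming a constant latent heat of vaporization and taking about 300 K as a representative tropical temperature (the reference temperature is an assumption, not a figure from the study).

```python
# Rough check of the "~100 times less water vapor at 240 K" figure using the
# Clausius-Clapeyron relation with a constant latent heat of vaporization.
# The tropical reference temperature (~300 K) is an assumption.
import math

L_v = 2.5e6      # latent heat of vaporization of water, J/kg
R_v = 461.5      # specific gas constant for water vapor, J/(kg K)

def saturation_ratio(t_warm, t_cold):
    """Ratio of saturation vapor pressures e_s(t_warm) / e_s(t_cold),
    from integrating Clausius-Clapeyron with constant L_v."""
    return math.exp((L_v / R_v) * (1.0 / t_cold - 1.0 / t_warm))

print(f"e_s(300 K) / e_s(240 K) ~ {saturation_ratio(300.0, 240.0):.0f}")
# ~90, i.e. roughly two orders of magnitude, consistent with the article
```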

Interestingly, there is a range of moderate temperatures and moisture levels in which no cyclones formed at all. From 250 to 270 degrees Kelvin (-10 F to 26 F), no hurricanes formed, although the researchers aren't sure why. At 280 Kelvin (44 F), the atmosphere filled again with cyclones.

"Maybe that means there are ideal regimes for hurricanes to exist and the current world we live in is one," Chavas said. "Or you could be in another world where there's no water, but it's still capable of producing many hurricanes. When people are considering whether we could live on a dry, rocky planet like Mars, this could be something to consider."

Read more at Science Daily

Fastest eclipsing binary, a valuable target for gravitational wave studies

Observations made with a new instrument developed for use at the 2.1-meter (84-inch) telescope at the National Science Foundation's Kitt Peak National Observatory have led to the discovery of the fastest eclipsing white dwarf binary yet known. Clocking in with an orbital period of only 6.91 minutes, the rapidly orbiting stars are expected to be one of the strongest sources of gravitational waves detectable with LISA, the future space-based gravitational wave detector.

The Dense "Afterlives" of Stars

After expanding into a red giant at the end of its life, a star like the Sun will eventually evolve into a dense white dwarf, an object with a mass like that of the Sun squashed down to a size comparable to Earth. Similarly, as binary stars evolve, they can engulf their companion in the red giant phase and spiral close together, eventually leaving behind a close white dwarf binary. White dwarf binaries with very tight orbits are expected to be strong sources of gravitational wave radiation. Although anticipated to be relatively common, such systems have proven elusive, with only a few identified to date.

Record-setting White Dwarf Binary

A new survey of the night sky, currently underway at Palomar Observatory and Kitt Peak National Observatory, is changing this situation.

Each night, Caltech's Zwicky Transient Facility (ZTF), a survey that uses the 48-inch telescope at Palomar Observatory, scans the sky for objects that move, blink, or otherwise vary in brightness. Promising candidates are followed up with a new instrument, the Kitt Peak 84-inch Electron Multiplying Demonstrator (KPED), at the Kitt Peak 2.1-meter telescope to identify short period eclipsing binaries. KPED is designed to measure with speed and sensitivity the changing brightness of celestial sources.

This approach has led to the discovery of ZTF J1539+5027 (or J1539 for short), a white dwarf eclipsing binary with the shortest period known to date, a mere 6.91 minutes. The stars orbit so close together that the entire system could fit within the diameter of the planet Saturn.

"As the dimmer star passes in front of the brighter one, it blocks most of the light, resulting in the seven-minute blinking pattern we see in the ZTF data," explains Caltech graduate student Kevin Burdge, lead author of the paper reporting the discovery, which appears in the today's issue of the journal Nature.

A Strong Source of Gravitational Waves

Closely orbiting white dwarfs are predicted to spiral together closer and faster, as the system loses energy by emitting gravitational waves. J1539's orbit is so tight that its orbital period is predicted to become measurably shorter after only a few years. Burdge's team was able to confirm the prediction from general relativity of a shrinking orbit, by comparing their new results with archival data acquired over the past ten years.

J1539 is a rare gem. It is one of only a few known sources of gravitational waves -- ripples in space and time -- that will be detected by the future European space mission LISA (Laser Interferometer Space Antenna), which is expected to launch in 2034. LISA, in which NASA plays a role, will be similar to the National Science Foundation's ground-based LIGO (Laser Interferometer Gravitational-wave Observatory), which made history in 2015 by making the first direct detection of gravitational waves from a pair of colliding black holes. LISA will detect gravitational waves from space at lower frequencies. J1539 is well matched to LISA; the 4.8 mHz gravitational wave frequency of J1539 is close to the peak of LISA's sensitivity.
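
The quoted 4.8 mHz follows directly from the 6.91-minute orbital period: for a near-circular binary, the dominant gravitational-wave frequency is twice the orbital frequency. A minimal check:

```python
# Quick check that the quoted 4.8 mHz gravitational-wave frequency follows from
# the 6.91-minute orbital period: for a near-circular binary, f_GW = 2 / P_orb.

P_orb = 6.91 * 60          # orbital period in seconds
f_gw  = 2.0 / P_orb        # dominant gravitational-wave frequency, Hz

print(f"f_GW ~ {f_gw * 1e3:.1f} mHz")   # ~4.8 mHz, near the peak of LISA's sensitivity
```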

Discoveries Continue for Historic Telescope

Kitt Peak's 2.1-meter telescope, the second major telescope to be constructed at the site, has been in continuous operation since 1964. Its history includes many important discoveries in astrophysics, such as the Lyman-alpha forest in quasar spectra, the first gravitational lens by a galaxy, the first pulsating white dwarf, and the first comprehensive study of the binary frequency of stars like the Sun. The latest result continues its venerable track record.

Lori Allen, Director of Kitt Peak National Observatory and Acting Director of NOAO says, "We're thrilled to see that our 2.1-meter telescope, now more than 50 years old, remains a powerful platform for discovery."

"These wonderful observations are further proof that cutting-edge science can be done on modest-sized telescopes like the 2.1-meter in the modern era," adds Chris Davis, NSF Program Officer for NOAO.

More Thrills Ahead!

As remarkable as it is, J1539 was discovered with only a small portion of the data expected from ZTF. It was found in the ZTF team's initial analysis of 10 million sources, whereas the project will eventually study more than a billion stars.

Read more at Science Daily

Supercomputers use graphics processors to solve longstanding turbulence question

Advanced simulations have solved a problem in turbulent fluid flow that could lead to more efficient turbines and engines.

When a fluid, such as water or air, flows fast enough, it will experience turbulence -- seemingly random changes in velocity and pressure within the fluid.

Turbulence is extremely difficult to study but is important for many fields of engineering, such as air flow past wind turbines or jet engines. Understanding turbulence better would allow engineers to design more efficient turbine blades, for example, or make more aerodynamic shapes for Formula 1 cars.

However, current engineering models of turbulence often rely upon 'empirical' relationships based on previous observations of turbulence to predict what will happen, rather than a full understanding of the underlying physics.

This is because the underlying physics is immensely complicated, leaving many questions that seem simple unsolved.

Now, researchers at Imperial College London have used supercomputers, running simulations on graphics processors originally developed for gaming, to solve a longstanding question in turbulence.

Their result, published today in the Journal of Fluid Mechanics, means empirical models can be tested and new models can be created, leading to more optimal designs in engineering.

Co-author Dr Peter Vincent, from the Department of Aeronautics at Imperial, said: "We now have a solution for an important fundamental flow problem. This means we can check empirical models of turbulence against the 'correct' answer, to see how well they are describing what actually happens, or if they need adjusting."

The question is quite simple: if a turbulent fluid is flowing in a channel and it is disturbed, how does that disturbance dissipate in the fluid? For example, if water was suddenly released from a dam into a river and then shut off, what effect would that pulse of dam water have on the flow of the river?

To determine the overall 'average' behaviour of the fluid response, the team needed to simulate the myriad smaller responses within the fluid. They used supercomputers to run thousands of turbulent flow simulations, each requiring billions of calculations to complete.

Using these simulations, they were able to determine the exact parameters that describe how the disturbance dissipates in the flow and determined various requirements that empirical turbulence models must satisfy.

Read more at Science Daily

Preventing people from abandoning exotic pets that threaten biodiversity

Abandoning exotic pets is an ethical problem that can lead to biological invasions that threaten the conservation of biodiversity in the environment. An article published in the journal Biological Invasions, whose first author is the researcher Alberto Maceda Veiga, from the Biodiversity Research Institute of the University of Barcelona (IRBio), reveals that the release of invasive species into the environment has not been reduced despite the regulation that has prohibited the possession of these species since 2011.

Other participants in the study, which goes over the regulation of the national catalogue of exotic invasive species, are Josep Escribano Alacid, from the Natural Science Museum of Barcelona, Albert Martínez Silvestre and Isabel Verdaguer, from the Amphibians and Reptiles Recovery Centre of Catalonia (CRARC), and Ralph Mac Nally, from the University of Canberra (Australia).

From buying impulsively to abandoning exotic pets

The study shows that, from 2009 to 2011, more than 60,000 exotic animals causing trouble for their owners were recorded in the north-eastern area of Spain, but these figures do not correspond completely to the animals that were abandoned. "The main reason people abandon their pets is because they buy impulsively, and some of these species can easily reproduce once they are released," says Alberto Maceda, member of the Department of Evolutionary Biology, Ecology and Environmental Sciences of the University of Barcelona.

"In general, people get an animal very easily when it is young and cute, but when it grows up and causes trouble, they abandon it. Not all people do what it takes regarding the responsibilities of having a pet, it's a responsibility that lasts many years for those animals who live long, such as turtles."

Laws that do not stop this crime: abandoning exotic animals

Since 2011, a law prohibits the trade, possession and transport of exotic species in Spain. The regulation has been effective to stop shops from selling species of crabs, fish, reptiles and amphibians that are listed in the regulation, but it has not prevented people from abandoning invasive species, the study warns.

"Apart from the species listed in the regulation, there are many others that are abandoned in the natural environment or are left in animal centers," notes Maceda. "The worst problem, however, lies in the exotic species that were commercialized massively years ago. This is the case of the known Florida turtles, which are really small at first, but when they grow, they are usually abandoned. The study shows that the legislative response takes a long time to make any effect regarding the release of invasive species which were sold years ago in this country. These measures are not very effective once the invasive species is distributed around the territory."

Although the law served to stop those species from being sold, "abandoning animals is another crime, and there is no current legislation to solve this problem." "We cannot ignore -- he insists -- that releasing any pet into the environment is a risk, apart from being unethical, and therefore it has to stop," stresses the researcher.

Fighting for animal wellbeing and promoting responsible pet ownership

Improving biosafety measures in centers where animals are housed, changing the commercial criteria for which species are sold, and training buyers to promote responsible ownership are measures that could help reduce abandonment rates. "It is necessary to create a register of owners, apart from running more educational campaigns. One option could be to require a certificate of owner training, as well as the use of microchips and special licenses to keep certain species at home, and to avoid free access to species we know could bring trouble to their owners."

According to the authors, the importation of exotic species should be regulated, since nowadays it is only debated when there is scientific evidence of a risk of biological invasion, or when a species is at risk of extinction. In short, it would be necessary to establish a list of the species owners can keep at home.

Not all exotic pets in the environment have been abandoned

Experts warn that prohibiting certain species can generate a market response that promotes trade in other animals with the same problems. A revealing example is the ban on trade in the red-eared slider (Trachemys scripta elegans), which was followed by the introduction to the market of other freshwater turtles that brought similar problems for buyers.

Also, not all exotic pets reach the environment through abandonment. In some cases, the cause is a lack of biosafety measures -- disinfection of waste water, for example -- in companies that keep exotic species. In addition, some past legislative measures have proven controversial from a conservation standpoint: for instance, releasing mosquitofish -- one of the most dangerous invasive species worldwide -- to control local mosquito populations.

Read more at Science Daily

Jul 24, 2019

Cosmic pearls: Fossil clams in Florida contain evidence of ancient meteorite

Researchers picking through the contents of fossil clams from a Sarasota County quarry found dozens of tiny glass beads, likely the calling cards of an ancient meteorite.

Analysis of the beads suggests they are microtektites, particles that form when the explosive impact of an extraterrestrial object sends molten debris hurtling into the atmosphere where it cools and recrystallizes before falling back to Earth.

They are the first documented microtektites in Florida and possibly the first to be recovered from fossil shells.

Mike Meyer was a University of South Florida undergraduate when he discovered the microtektites during a 2006 summer fieldwork project led by Roger Portell, invertebrate paleontology collections director at the Florida Museum of Natural History.

As part of the project, students systematically collected fossils from the shell-packed walls of a quarry that offered a cross-section of the last few million years of Florida's geological history. They pried open fossil clams, washing the sediment trapped inside through very fine sieves. Meyer was looking for other tiny objects -- the shells of single-celled organisms known as benthic foraminifera -- when he noticed the translucent glassy balls, smaller than grains of salt.

"They really stood out," said Meyer, now an assistant professor of Earth systems science at Harrisburg University in Pennsylvania. "Sand grains are kind of lumpy, potato-shaped things. But I kept finding these tiny, perfect spheres."

After the fieldwork ended, his curiosity about the spheres persisted. But his emails to various researchers came up short: No one knew what they were. Meyer kept the spheres -- 83 in total -- in a small box for more than a decade.

"It wasn't until a couple years ago that I had some free time," he said. "I was like, 'Let me just start from scratch.'"

Meyer analyzed the elemental makeup and physical features of the spheres and compared them to microtektites, volcanic rock and byproducts of industrial processes, such as coal ash. His findings pointed to an extraterrestrial origin.

"It did blow my mind," he said.

He thinks the microtektites are the products of one or more small, previously unknown meteorite impacts, potentially on or near the Florida Platform, the plateau that undergirds the Florida Peninsula.

Initial results from an unpublished test suggest the spheres have traces of exotic metals, further evidence they are microtektites, Meyer said.

Most of them had been sealed inside fossil Mercenaria campechiensis or southern quahogs. Portell said that as clams die, fine sediment and particles wash inside. As more sediment settles on top of the clams over time, they close, becoming excellent long-term storage containers.

"Inside clams like these we can find whole crabs, sometimes fish skeletons," Portell said. "It's a nice way of preserving specimens."

During the 2006 fieldwork, the students recovered microtektites from four different depths in the quarry, which is "a little weird," Meyer said, since each layer represents a distinct period of time.

"It could be that they're from a single tektite bed that got washed out over millennia or it could be evidence for numerous impacts out on the Florida Platform that we just don't know about," he said.

The researchers plan to date the microtektites, but Portell's working guess is that they are "somewhere around 2 to 3 million years old."

One oddity is that they contain high amounts of sodium, a feature that sets them apart from other impact debris. Salt is highly volatile and generally boils off if thrust into the atmosphere at high speed, Meyer said.

"This high sodium content is intriguing because it suggests a very close location for the impact," Meyer said. "Or at the very least, whatever impact created it likely hit a very large reserve of rock salt or the ocean. A lot of those indicators point to something close to Florida."

Meyer and Portell suspect there are far more microtektites awaiting discovery in Florida and have asked amateur fossil collectors to keep an eye out for the tiny spheres.

Read more at Science Daily

Scientists document late Pleistocene/early Holocene Mesoamerican stone tool tradition

Belize highlighted on map
From the perspective of Central and South America, the peopling of the New World was a complex process lasting thousands of years and involving multiple waves of Pleistocene and early Holocene period immigrants entering into the Neotropics.

Paleoindian colonists arrived in waves of immigrants entering the Neotropics -- a region beginning in the humid rainforests of southern Mexico -- before 13,000 years ago, and brought with them technologies developed for adaptation to the environments and resources found in North America.

As the ice age ended across the New World, people adapted more generalized stone tools to exploit changing environments and resources. In the Neotropics these changes would have been pronounced as patchy forests and grasslands gave way to broadleaf tropical forests.

In new research published recently in PLOS ONE, titled "Linking late Paleoindian stone tool technologies and populations in North, Central and South America," scientists from The University of New Mexico led a study in Belize to document the very earliest indigenous stone tool tradition in southern Mesoamerica.

"This is an area of research for which we have very poor data regarding early humans, though this UNM-led project is expanding our knowledge of human behavior and relationships between people in North, Central and South America," said lead author Keith Prufer, professor from The University of New Mexico's Department of Anthropology.

This research, funded by grants from the National Science Foundation and the Alphawood Foundation, focuses on understanding the Late Pleistocene human colonization of tropics in the broad context of global changes occurring at the end of the last ice age (ca. 12,000-10,000 years ago). The research suggests the tools are part of a human adaptation story in response to emerging tropical conditions in what is today called the Neotropics, a broad region south of the Isthmus of Tehuantepec (in S Mexico).

As part of the research, the team conducted extensive excavations at two rock shelter sites from 2014-2018. The excavation sites, located in the Bladen Nature Reserve, are almost 30 miles from the nearest road or modern human settlement in a large undisturbed rainforest that is one of the best-protected wildlife refuges in Central America.

"We have identified and established an absolute chronology for the earliest stone tool types that are indigenous to Central America," said Prufer. "These have clear antecedents with the earliest known humans in both South America and North America, but appear to show more affinity with slightly younger Late Paleoindian toolkits in the Amazon and Northern Peru than with North America."

The research represents the first endogenous Paleoindian stone tool technocomplex recovered from well-dated stratigraphic contexts for Mesoamerica. Previously designated, these artifacts share multiple features with contemporary North and South American Paleoindian tool types. Once hafted, these bifaces appear to have served multiple functions for cutting, hooking, thrusting, or throwing.

"The tools were developed at a time of technological regionalization reflecting the diverse demands of a period of pronounced environmental change and population movement," said Prufer. "Combined stratigraphic, technological, and population paleogenetic data suggests that there were strong ties between lowland neotropic regions at the onset of the Holocene."

These findings support previous UNM research suggesting strong genetic relationships between early colonists in Central and South America, following the initial dispersal of humans from Asia into the Americas via the arctic prior to 14,000 years ago.

"We are partnering with Belizean conservation NGO Ya'axche Conservation Trust in our fieldwork to promote the importance of ancient cultural resources in biodiversity and protected areas management," said Prufer. "We spend a month every year camped out with no access to electricity, internet, phone or resupplies while we conduct excavations."

This field research involves several UNM graduate students in Archaeology and Evolutionary Anthropology, as well as collaborators at Exeter University (UK) and Arizona State University. The analysis for this study was done in part at UNM's Center for Stable Isotopes, as well as with co-authors at Penn State and UC Santa Barbara. At UNM this involved the new radiocarbon preparation laboratories, which are part of the Center for Stable Isotopes, one of the anchors of UNM's interdisciplinary PAIS research and teaching facility.

The senior co-authors are world leaders in the study of early humans in the tropics and are committed to conservation efforts of cultural resources and regional biodiversity. Additionally, Prufer's long-term collaboration in indigenous Maya communities in the region was critical to the success of this project.

Read more at Science Daily

Climate is warming faster than it has in the last 2,000 years

High temperature on thermometer
Many people have a clear picture of the "Little Ice Age" (from approx. 1300 to 1850). It's characterized by paintings showing people skating on Dutch canals and glaciers advancing far into the alpine valleys. That it was extraordinarily cool in Europe for several centuries is proven by a large number of temperature reconstructions using tree rings, for example, not just by historical paintings. As there are also similar reconstructions for North America, it was assumed that the "Little Ice Age" and the similarly famous "Medieval Warm Period" (approx. 700 -- 1400) were global phenomena. But now an international group led by Raphael Neukom of the Oeschger Center for Climate Change Research at the University of Bern is painting a very different picture of these alleged global climate fluctuations. In a study which has just appeared in the well-known scientific journal Nature, and in a supplementary publication in Nature Geoscience, the team shows that there is no evidence that there were uniform warm and cold periods across the globe over the last 2,000 years.

Climate fluctuations in the past varied from region to region

"It's true that during the Little Ice Age it was generally colder across the whole world," explains Raphael Neukom, "but not everywhere at the same time. The peak periods of pre-industrial warm and cold periods occurred at different times in different places." According to the climate scientist from Bern, the now-debunked hypothesis of climate phases occurring at the same time across the globe came about because of an impression that is defined by the climate history of Europe and North America. In the absence of data from other parts of the earth, this notion was applied to the whole planet, raising expectations that relatively cold or warm periods throughout the last 2,000 years were globally synchronous phenomena. But it has now been shown that this was not the case.

The authors of the study in Nature see the explanation for that as being that regional climates in pre-industrial times were primarily influenced by random fluctuations within the climate systems themselves. External factors such as volcanic eruptions or solar activity were not intense enough to cause markedly warm or cold temperatures across the whole world for decades, or even centuries.

The researchers relied on a database from the international research consortium PAGES, which provides a comprehensive overview of climate data from the last 2,000 years, for their investigation of five pre-industrial climate epochs. In addition to tree rings, it also includes data from ice cores, lake sediments and corals. To really put the results to the test, the team led by Raphael Neukom analyzed these data sets using six different statistical models -- more than ever before. This allowed for the calculation of the probability of extremely warm or cold decades and centuries, and not just the calculation of absolute temperatures. The result was that no globally coherent picture emerged during the periods being investigated. "The minimum and maximum temperatures were different in different areas," says Raphael Neukom. So thermal extremes across the world cannot be inferred from regional temperature phenomena like the oft-mentioned "Medieval Warm Period" in Europe and North America.

Read more at Science Daily

Chimpanzees' working memory similar to ours

Chimpanzee
Working memory is central to our mental lives; we use it to add up the cost of our shopping or to remember the beginning of this sentence at its end. Some scientists argue it is particularly developed in humans, but how do chimpanzees, one of our closest relatives, compare? Researchers set out to answer this question.

Previous studies showed that chimpanzees have excellent long-term memory abilities. However, little is known so far about their working memory abilities. To shed light on these abilities, researchers presented chimpanzees with a task in which they could search for food in a number of small, opaque boxes. The chimpanzees first watched how pieces of food were hidden in these boxes. Then the apes could start to search for the food items by pointing at the boxes one by one. If a chosen box contained food, the chimpanzees received this food reward. After each choice, the boxes were covered for 15 seconds.

To retrieve all of the food items, the chimpanzees needed to keep in mind which boxes they had already searched. The researchers increased the difficulty of the task depending on the ability of each chimpanzee, by increasing the number of boxes and by shuffling the boxes between searches. The study revealed key similarities between chimpanzee and human working memory: the best-performing chimpanzees remembered at least four items, and one young chimpanzee more than seven. They used both the appearance of the boxes and their position to remember their previous choices.
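
As a rough illustration of how a limited memory span plays out in this kind of self-ordered search (a toy simulation in Python, not the researchers' task or analysis; the forager model, box count and function name are assumptions), one can simulate a searcher that avoids revisiting only the boxes it can still hold in mind:

import random

def average_redundant_searches(n_boxes, memory_span, shuffle=False, trials=1000):
    # Average number of times the searcher reopens an already-emptied box
    # before collecting every food item.
    total = 0
    for _ in range(trials):
        baited = set(range(n_boxes))          # boxes that still contain food
        history = []                          # all boxes chosen so far, in order
        redundant = 0
        while baited:
            # the searcher avoids only the choices it can still hold in mind;
            # shuffling box positions between choices wipes out position memory
            if shuffle or memory_span == 0:
                remembered = set()
            else:
                remembered = set(history[-memory_span:])
            options = [b for b in range(n_boxes) if b not in remembered]
            choice = random.choice(options or list(range(n_boxes)))
            if choice in baited:
                baited.discard(choice)        # found food in this box
            else:
                redundant += 1                # reopened an empty box
            history.append(choice)
        total += redundant
    return total / trials

# e.g. compare a span of 4 with a span of 7 on an 8-box version of the task:
# print(average_redundant_searches(8, 4), average_redundant_searches(8, 7))

A larger span yields fewer redundant searches, and shuffling removes the benefit of purely position-based memory.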

Humans typically perform worse in working memory tests if they need to do something in parallel. Likewise, when the chimpanzees had to perform a second, similar task in parallel, their performance declined. Differences in working memory ability between chimpanzees were stable over months. The most obvious difference between chimpanzees and humans was not working memory capacity but the search strategies humans typically employ to facilitate this task: the chimpanzees did not come up with the idea of searching the boxes in a line from one side to the other.

Read more at Science Daily

Jul 23, 2019

Astronomers make first calculations of magnetic activity in 'hot Jupiter' exoplanets

Gas-giant planets orbiting close to other stars have powerful magnetic fields, many times stronger than our own Jupiter, according to a new study by a team of astrophysicists. It is the first time the strength of these fields has been calculated from observations.

The team, led by Wilson Cauley of the University of Colorado, also includes associate professor Evgenya Shkolnik of Arizona State University's School of Earth and Space Exploration. The other researchers are Joe Llama of Northern Arizona University and Antonino Lanza of the Astrophysical Observatory of Catania in Italy. Their report was published July 22 in Nature Astronomy.

"Our study is the first to use observed signals to derive exoplanet magnetic field strengths," says Shkolnik. "These signals appear to come from interactions between the magnetic fields of the star and the tightly orbiting planet."

Many worlds


More than 3,000 exoplanet systems containing over 4,000 planets have been discovered since 1988. Many of these star systems include what astronomers call "hot Jupiters." These are massive gaseous planets presumed to be like the Sun's Jupiter but orbiting their stars at close distances, typically about five times the star's diameter, or roughly 20 times the Moon's distance from Earth.

Such planets travel well inside their star's magnetic field, where interactions between the planetary field and the stellar one can be continual and strong.

Previous studies, the team says, have placed upper limits on exoplanet magnetic fields, for example from radio observations or derived purely from theory.

"We combined measurements of increased stellar emission from the magnetic star-planet interactions together with physics theory to calculate the magnetic field strengths for four hot Jupiters," says lead author Cauley.

The magnetic field strengths the team found range from 20 to 120 gauss. For comparison, Jupiter's magnetic field is 4.3 gauss and Earth's field strength is only half a gauss, although that is strong enough to orient compasses worldwide.
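
A quick back-of-the-envelope comparison (illustrative arithmetic only, not part of the study) puts those numbers in context:

# Reported hot-Jupiter field strengths (20-120 gauss) versus Jupiter (4.3 G) and Earth (~0.5 G)
for field_gauss in (20, 120):
    print(f"{field_gauss} G is roughly {field_gauss / 4.3:.0f}x Jupiter's field "
          f"and {field_gauss / 0.5:.0f}x Earth's")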

Triggering activity

The astrophysicists used telescopes in Hawaii and France to acquire high-resolution observations of emission from ionized calcium (Ca II) in the parent stars of the four hot Jupiters. The emission comes from a star's hot, magnetically heated chromosphere, a thin layer of gas above the cooler stellar surface. The observations let the team calculate how much energy was being released in the stars' calcium emission.

Says Shkolnik, "We used the power estimates to calculate magnetic field strengths for the planets using a theory for how the planets' magnetic fields interact with the stellar magnetic fields."

Cauley explains, "Magnetic fields like to be in a state of low energy. If you twist or stretch the field like a rubber band, this increases the energy stored in the magnetic field." Hot Jupiters orbit very close to their parent stars and so the planet's magnetic field can twist and stretch the star's magnetic field.

"When this happens," Cauley says,"energy can be released as the two fields reconnect, and this heats the star's atmosphere, increasing the calcium emission."

Probing deep

Astrophysicists have suspected that hot Jupiters would, like our own Jupiter, have magnetic fields produced deep inside them. The new observations provide the first probe of the internal dynamics of these massive planets.

"This is the first estimate of the magnetic field strengths for these planets based on observations, so it's a huge jump in our knowledge," Shkolnik notes. "It's giving us a better understanding of what is happening inside these planets."

She adds that it should also help researchers who model the internal dynamos of hot Jupiters. "We knew nothing about their magnetic fields -- or any other exoplanet magnetic fields -- and now we have estimates for four actual systems."

Surprisingly powerful

The field strengths, the team says, are larger than one would expect considering only the rotation and age of the planet. The standard dynamo theory of planetary magnetic fields predicts field strengths for the sampled planets that are much smaller than what the team found.

Instead, the observations support the idea that planetary magnetic fields depend on the amount of heat moving through the planet's interior. Because they are absorbing a lot of extra energy from their host stars, hot Jupiters should have larger magnetic fields than planets of similar mass and rotation rate.

Read more at Science Daily

Astronomers map vast void in our cosmic neighborhood

An astronomer from the University of Hawaii Institute for Astronomy and an international team published a new study that reveals more of the vast cosmic structure surrounding our Milky Way galaxy.

The universe is a tapestry of galaxy congregations and vast voids. In a new study being reported in The Astrophysical Journal, Brent Tully's team applies the same tools from an earlier study to map the size and shape of an extensive empty region they called the Local Void that borders the Milky Way galaxy. Using the observations of galaxy motions, they infer the distribution of mass responsible for that motion, and construct three-dimensional maps of our local Universe.

Galaxies not only move with the overall expansion of the universe, they also respond to the gravitational tug of their neighbors and regions with a lot of mass. As a consequence, relative to the overall expansion they are moving towards the densest areas and away from regions with little mass -- the voids.

Although we live in a cosmic metropolis, back in 1987 Tully and Richard Fisher noted that our Milky Way galaxy is also at the edge of an extensive empty region that they called the Local Void. The existence of the Local Void has been widely accepted, but it remained poorly studied because it lies behind the center of our galaxy and is therefore heavily obscured from our view.

Now, Tully and his team have measured the motions of 18,000 galaxies in the Cosmicflows-3 compendium of galaxy distances, constructing a cosmographic map that highlights the boundary between the collection of matter and the absence of matter that defines the edge of the Local Void. They used the same technique in 2014 to identify the full extent of our home supercluster of over one hundred thousand galaxies, giving it the name Laniakea, meaning "immense heaven" in Hawaiian.

For 30 years, astronomers have been trying to identify why the motions of the Milky Way, our nearest large galaxy neighbor Andromeda, and their smaller neighbors deviate from the overall expansion of the Universe by over 600 km/s (1.3 million mph). The new study shows that roughly half of this motion is generated "locally" from the combination of a pull from the massive nearby Virgo Cluster and our participation in the expansion of the Local Void as it becomes ever emptier.

An 11-minute video demonstrating the shape and extent of these cosmic structures is available online at:

https://vimeo.com/326346346

Interactive visualizations that allow the user to rotate, pan, and zoom maps of the mass distribution can be found at:

https://sketchfab.com/models/f0a44df256aa4faf93391887d66010e2

https://sketchfab.com/models/78885b3d303d4b6e99cfe099b43929fb

From Science Daily

Using antibiotics without a prescription is a prevalent public health problem

Using antibiotics without a prescription appears to be a prevalent public health problem. Antibiotics were obtained through various means, including saving leftover prescriptions for later use, getting them from friends and family, or obtaining them from local markets "under the counter." Findings from a scoping review are published in Annals of Internal Medicine.

When people take antibiotics without a prescription, they often take unnecessary medication or choose an inappropriate drug or dose. This practice is associated with avoidable adverse events and may also increase the risk for inducing antibiotic resistance. It is important to understand how prevalent nonprescription antibiotic use is and the factors that contribute to the issue.

Researchers from Baylor College of Medicine and the Center for Innovations in Quality, Effectiveness, and Safety reviewed 31 published studies to determine the prevalence of nonprescription antibiotic use in the U.S. and to examine the factors that influence that use. The prevalence of nonprescription antibiotic use varied from 1 percent among people visiting a clinic to 66 percent among Latino migrant workers. Storage of antibiotics for future use varied from 14 percent to 48 percent and a quarter of the people in one study reported intention to use antibiotics without a prescription.

Factors that contribute to nonprescription use include lack of insurance or health care access, cost of a physician visit or prescription, embarrassment about seeking care for a sexually transmitted infection, not being able to get time off of work to visit a clinic or physician's office, and several other reasons. According to the researchers, more studies are needed to quantitate nonprescription antibiotic use and explore potentially modifiable factors that contribute to unsafe practices.

From Science Daily

More sensitive climates are more variable climates

A decade without any global warming is more likely to happen if the climate is more sensitive to carbon dioxide emissions, new research has revealed.

A team of scientists from the University of Exeter and the Centre for Ecology & Hydrology in the UK has conducted pioneering new research into why both surges and slowdowns of warming take place.

Using sophisticated climate models, the team, led by PhD student Femke Nijsse, discovered that models in which the climate is more sensitive to CO2 concentration also display larger variations in warming over a decade.

When combined with information from simulations without any carbon dioxide increases, the authors were able to assess the natural variability of each climate model.

The research is published this week in Nature Climate Change.

Femke Nijsse, from the University of Exeter, said: "We were surprised to see that even when we took into account that sensitive climate models warm more over the last decades of the 20th century, these sensitive models were still more likely to have short periods of cooling."

Climate sensitivity, which sits at the very heart of climate science, is the amount of global warming that takes place as atmospheric CO2 concentrations rise.

For many years, estimates have put climate sensitivity somewhere between 1.5-4.5°C of warming for a doubling of pre-industrial CO2 levels.

The study found that cooling -- or "hiatus" -- decades were more than twice as likely around the turn of the century in high sensitivity models (models that warm 4.5 ºC after doubling CO2), compared to low sensitivity models (models that warm 1.5 ºC after doubling CO2).
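
As a rough illustration of what counting "hiatus" decades involves (a toy diagnostic in Python, not the authors' analysis; the function name and inputs are assumptions), one can slide a ten-year window along a global-mean temperature series and record how often the fitted decadal trend is flat or negative:

import numpy as np

def hiatus_decade_fraction(years, temps):
    # Fraction of overlapping 10-year windows whose linear trend is <= 0.
    years = np.asarray(years, dtype=float)
    temps = np.asarray(temps, dtype=float)
    slopes = []
    for i in range(len(years) - 9):
        window_years = years[i:i + 10]
        window_temps = temps[i:i + 10]
        slopes.append(np.polyfit(window_years, window_temps, 1)[0])  # degrees per year
    return float((np.asarray(slopes) <= 0).mean())

Comparing that fraction between runs of high- and low-sensitivity models is the kind of contrast the study quantifies in far more detail.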

Co-author Dr Mark Williamson, a Research Fellow at Exeter, said: "This does not mean that the presence of a global warming slowdown at the beginning of the 21st century implies we live in a highly sensitive world.

"By looking at all decades together, we get a better picture and find observations are broadly consistent with a central estimate of climate sensitivity"

Ms Nijsse added: "We still don't exactly know how much the climate system will heat up, nor do we know exactly what the range of natural variability in trends will be over the coming decades. But our study shows that these risks should not be considered as separate."

The paper also studied the chance that a decade in the 21st century could warm by as much as the entire 20th century -- a scenario that the research team call "hyperwarming."

Under a scenario where carbon dioxide emissions from fossil fuels continue to increase, the chance of hyperwarming is even more dependent on climate sensitivity than the long-term global warming trend.

Read more at Science Daily

The early days of the Milky Way revealed

The universe 13,000 million years ago was very different from the universe we know today. It is understood that stars were forming at a very rapid rate, forming the first dwarf galaxies, whose mergers gave rise to the more massive present-day galaxies, including our own. However, the exact chain of events that produced the Milky Way was not known until now.

Exact measurements of position, brightness and distance for around a million stars of our galaxy within 6,500 light years of the sun, obtained with the Gaia space telescope, have allowed a team from the IAC to reveal some of its early stages. "We have analyzed, and compared with theoretical models, the distribution of colours and magnitudes (brightnesses) of the stars in the Milky Way, splitting them into several components; the so-called stellar halo (a spherical structure which surrounds spiral galaxies) and the thick disc (stars forming the disc of our Galaxy, but occupying a certain height range)" explains Carme Gallart, a researcher at the IAC and the first author of this article, which is published today in the journal Nature Astronomy.

Previous studies had discovered that the Galactic halo showed clear signs of being made up of two distinct stellar components, one dominated by bluer stars than the other. The movement of the stars in the blue component quickly allowed researchers to identify it as the remains of a dwarf galaxy (Gaia-Enceladus) which impacted the early Milky Way. However, the nature of the red population, and the epoch of the merger between Gaia-Enceladus and our Galaxy, were unknown until now.

"Analyzing the data from Gaia has allowed us to obtain the distribution of the ages of the stars in both components and has shown that the two are formed by equally old stars, which are older than those of the thick disc" says IAC researcher and co-author Chris Brook. But if both components were formed at the same time, what differentiates one from the other? "The final piece of the puzzle was given by the quantity of "metals" (elements which are not hydrogen or helium) in the stars of one component or the other" explains Tomás Ruiz Lara, an IAC researcher and another of the authors of the article. "The stars in the blue component have a smaller quantity of metals than those of the red component." These findings, with the addition of the predictions of simulations which are also analyzed in the article, have allowed the researchers to complete the history of the formation of the Milky Way.

Thirteen thousand million years ago, stars began to form in two different stellar systems which then merged: one was a dwarf galaxy which we call Gaia-Enceladus, and the other was the main progenitor of our Galaxy, some four times more massive and with a larger proportion of metals. Some ten thousand million years ago there was a violent collision between the more massive system and Gaia-Enceladus. As a result, some of its stars, and those of Gaia-Enceladus, were set into chaotic motion and eventually formed the halo of the present Milky Way. After that there were violent bursts of star formation until 6,000 million years ago, when the gas settled into the disc of the Galaxy and produced what we know as the "thin disc."

Read more at Science Daily

Jul 22, 2019

New species of pocket shark identified

A team of researchers, including two from Tulane University, has identified a new species of pocket shark, following careful study of a pocket shark that made international headlines in 2015 after it was brought to the Royal D. Suttkus Fish Collection at the Tulane University Biodiversity Research Institute.

The 5½-inch male kitefin shark has been identified as the American Pocket Shark, or Mollisquama mississippiensis, based on five features not seen in the only other known specimen of this kind. That specimen was captured in the Eastern Pacific Ocean in 1979 and is now housed at the Zoological Museum in St. Petersburg, Russia.

The details of the new species, which was caught in the Gulf of Mexico in February 2010, are described in an article published in the animal taxonomy journal Zootaxa. The authors include Mark Grace of the NMFS Mississippi Laboratories of the National Oceanic and Atmospheric Administration (NOAA) and Henry Bart and Michael Doosey of the Tulane University Biodiversity Research Institute. Other researchers involved in the study are John S. Denton and Gavin Taylor of the Florida Program for Shark Research at the University of Florida, and John Maisey of the Department of Vertebrate Paleontology at the American Museum of Natural History in New York.

"In the history of fisheries science, only two pocket sharks have ever been captured or reported," Grace said. "Both are separate species, each from separate oceans. Both are exceedingly rare."

Bart added, "The fact that only one pocket shark has ever been reported from the Gulf of Mexico, and that it is a new species, underscores how little we know about the Gulf -- especially its deeper waters -- and how many additional new species from these waters await discovery."

Researchers said there were notable differences between the original Pacific Ocean specimen and the newer specimen from the Gulf of Mexico. Those differences include fewer vertebrae and numerous light-producing photophores that cover much of the body. The two species both have two small pockets that produce luminous fluid (one on each side near the gills).

The pocket shark was collected in February 2010 in the eastern Gulf of Mexico by the NOAA ship Pisces, during a mission to study sperm whale feeding. Much to his surprise, Grace discovered the shark in 2013 while examining specimens that were collected during the NOAA survey. Grace asked Tulane to archive the specimen in its Fish Collection. Soon after, Grace and Tulane postdoctoral researcher Doosey undertook a study to determine what species it was.

Identifying the shark involved examining and photographing external features of the specimen with a dissecting microscope, studying radiographic (x-ray) images and high resolution CT scans. The most sophisticated images of internal features of the shark were produced at the European Synchrotron Radiation Facility (ESRF) in Grenoble, France, which uses the most intense source of synchrotron-generated light in the world to produce x-rays 100 billion times brighter than the x-rays used in hospitals.

Read more at Science Daily

Airborne lidar system poised to improve accuracy of climate change models

Researchers have developed a laser-based system that can be used for airborne measurement of important atmospheric gases with unprecedented accuracy and resolution. The ability to collect this data will help scientists better understand how these atmospheric gases affect the climate and could help improve climate change predictions.

In the Optical Society journal Applied Optics, researchers from Deutsches Zentrum für Luft- und Raumfahrt e.V. (DLR) -- Germany's national center for aerospace, energy and transportation research -- describe how their lidar instrument was used aboard an aircraft to acquire the first simultaneous measurements of the vertical structure of water vapor and ozone in the tropopause region of the atmosphere. The researchers say that the new system might even be useful for monitoring atmospheric gases from space.

The tropopause separates the surface-based troposphere layer where weather takes place from the overlying stratosphere that contains the ozone layer that protects life on Earth from harmful radiation. Scientists want to study water vapor and ozone in the tropopause because the distribution of these atmospheric gases in this layer plays a crucial role in the Earth's climate.

"The ability to detect the vertical structure of water vapor and ozone is critical for understanding the exchange of these atmospheric gases between the troposphere and the stratosphere," said Andreas Fix, who led the research team. "These measurements could help us identify errors and uncertainties in climate models that would help improve predictions of the future climate, which is one of the central challenges for our society and economy."

Gaining a 3D perspective

Atmospheric gases can be assessed with instruments flown into the atmosphere or with data acquired from satellites. However, these methods haven't been able to provide a full picture of atmospheric gas distribution because they either lack the vertical component or don't provide high enough resolution. Although instruments carried by balloons -- known as balloon sondes -- can provide highly resolved vertical profiles, they don't offer detailed temporal resolution and can only be used at selected sites.

To solve these problems, the researchers developed a lidar system that uses laser light to measure both ozone and water vapor at the same time. Their approach, called differential absorption lidar (DIAL), uses two slightly different UV wavelengths to measure each gas. The UV radiation at one wavelength is mostly absorbed by the gas molecules while most of the other wavelength is reflected. Measuring the ratio of the UV signals returning from the atmosphere allows calculation of a detailed gas profile.
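
As a sketch of the general differential-absorption principle (an illustration only, not the DLR instrument's processing chain; the function name, units and inputs are assumptions), the gas concentration in each range gate follows from how quickly the logarithm of the off-line/on-line signal ratio grows with range:

import numpy as np

def dial_number_density(p_online, p_offline, range_m, delta_sigma_m2):
    # p_online / p_offline: backscatter signals at the strongly / weakly
    # absorbed wavelengths, one value per range gate; range_m: range to each
    # gate in metres; delta_sigma_m2: differential absorption cross-section.
    p_online = np.asarray(p_online, dtype=float)
    p_offline = np.asarray(p_offline, dtype=float)
    r = np.asarray(range_m, dtype=float)
    log_ratio = np.log(p_offline / p_online)   # grows with range as the gas absorbs
    # factor of 2: the light crosses each gate on the way out and on the way back
    return np.gradient(log_ratio, r) / (2.0 * delta_sigma_m2)

The result is a profile of molecular number density along the line of sight below the aircraft.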

The gas profiles created using the new lidar system exhibit a vertical resolution of around 250 meters and a horizontal resolution of about 10 kilometers below the aircraft's flight track.

"This vertical capability is a significant advancement in studying exchange processes at the tropopause," said Fix. "It helps overcome significant shortcomings in resolving the fine-scale distribution that have made it difficult to understand processes responsible for exchange at the tropopause."

Achieving energy efficiency

To perform this method aboard a plane, the researchers used a highly efficient optical parametric oscillator (OPO) they previously developed to convert the laser output to the UV wavelengths needed to measure water vapor and ozone. "The conversion needs to be very energy efficient to generate UV radiation with adequate pulse energies and high average power from the limited energy available on board an aircraft," explained Fix.

Tests of the new lidar system showed that its accuracy matched well with that of balloon sondes. In 2017, the researchers flew the new system aboard the wave-driven isentropic exchange (WISE) mission, which involved multiple long-range flights over the North Atlantic and Northern Europe. They found that the instrument worked remarkably well, remained stable during use and could measure characteristic ozone and water vapor distributions at the tropopause.

Read more at Science Daily

Heart disease biomarker linked to paleo diet

People who follow the paleo diet have twice the amount of a key blood biomarker linked closely to heart disease, the world's first major study examining the impact of the diet on gut bacteria has found.

Researchers from Edith Cowan University (ECU) compared 44 people on the diet with 47 following a traditional Australian diet.

The research, published in the European Journal of Nutrition, measured the amount of trimethylamine-n-oxide (TMAO) in participants' blood.

High levels of TMAO, an organic compound produced in the gut, are associated with an increased risk of heart disease, which kills one Australian every 12 minutes.

Impact on gut health

The controversial Paleo (or 'caveman') diet advocates eating meat, vegetables, nuts and limited fruit, and excludes grains, legumes, dairy, salt, refined sugar and processed oils.

Lead researcher Dr Angela Genoni said that with the diet's growing popularity, it was important to understand the impact it could have on overall health.

"Many Paleo diet proponents claim the diet is beneficial to gut health, but this research suggests that when it comes to the production of TMAO in the gut, the Paleo diet could be having an adverse impact in terms of heart health," she said.

"We also found that populations of beneficial bacterial species were lower in the Paleolithic groups, associated with the reduced carbohydrate intake, which may have consequences for other chronic diseases over the long term."

Reduced intake of whole grains to blame

She said the reason TMAO was so elevated in people on the Paleo diet appeared to be the lack of whole grains in their diet.

"We found the lack of whole grains were associated with TMAO levels, which may provide a link between the reduced risks of cardiovascular disease we see in populations with high intakes of whole grains," she said.

The researchers also found higher concentrations of the bacteria that produce TMAO in the Paleo group.

"The Paleo diet excludes all grains and we know that whole grains are a fantastic source of resistant starch and many other fermentable fibres that are vital to the health of your gut microbiome," Dr Genoni said.

"Because TMAO is produced in the gut, a lack of whole grains might change the populations of bacteria enough to enable higher production of this compound.

"Additionally, the Paleo diet includes greater servings per day of red meat, which provides the precursor compounds to produce TMAO, and Paleo followers consumed twice the recommended level of saturated fats, which is cause for concern.

Read more at Science Daily

Warning to those wanting to spice up their lives

Think twice before adding that extra kick of chili sauce or chopped jalapeno to your meal. New research involving the University of South Australia shows a spicy diet could be linked to dementia.

A 15-year study of 4582 Chinese adults aged over 55 found evidence of faster cognitive decline in those who consistently ate more than 50 grams of chili a day. Memory decline was even more significant if the chili lovers were slim.

The study, led by Dr Zumin Shi from Qatar University, showed that those who consumed in excess of 50 grams of chili a day had almost double the risk of memory decline and poor cognition.

"Chili consumption was found to be beneficial for body weight and blood pressure in our previous studies. However, in this study, we found adverse effects on cognition among older adults," Dr Zumin says.

UniSA epidemiologist Dr Ming Li, one of five researchers involved in the study, says chili intake included both fresh and dried chili peppers but not sweet capsicum or black pepper.

"Chili is one of the most commonly used spices in the world and particularly popular in Asia compared to European countries," Dr Li says. "In certain regions of China, such as Sichuan and Hunan, almost one in three adults consume spicy food every day."

Capsaicin, the active component in chili, reportedly speeds up metabolism, promotes fat loss and inhibits vascular disorders, but this is the first longitudinal study to investigate the association between chili intake and cognitive function.

Those who ate a lot of chili had a lower income and body mass index (BMI) and were more physically active compared to non-consumers. Researchers say people of normal body weight may be more sensitive to chili intake than overweight people, hence the impact on memory and weight. Education levels may also play a role in cognitive decline and this link requires further research.

From Science Daily

Jul 21, 2019

Flying the final approach to Tranquility Base, the moon

As the Apollo 11 Lunar Module approached the Moon's surface for the first manned landing, commander Neil Armstrong switched off the autopilot and flew the spacecraft manually to a landing.

A new video, created at Arizona State University's School of Earth and Space Exploration, shows what Armstrong saw out his window as the lander descended -- and you'll see for yourself why he took over control.

A team led by Mark Robinson, principal investigator for the Lunar Reconnaissance Orbiter Camera (LROC) and professor in the School, recreated the view in a striking video. They used the crew's voice recording and its timings, the 16mm film footage shot during the descent, and images taken from lunar orbit by the LRO Camera over the last 10 years.

Said Robinson, "The only visual record of the actual Apollo 11 landing is from a 16mm time-lapse movie camera, running at 6 frames a second and mounted in Buzz Aldrin's window." Edwin "Buzz" Aldrin was designated as the LM pilot, although for the actual landing his role was to announce the LM's altitude and rates of descent and forward motion. He stood on the right side of the cabin.

"Due to the small size of the lander windows and the angle at which the movie camera was mounted, what mission commander Neil Armstrong saw as he flew the LM to the landing was not recorded," Robinson explained. Armstrong's place was on the cabin's left side.

The LROC team reconstructed the last three minutes of the landing trajectory (latitude, longitude, orientation, velocity, altitude) using lunar landmark navigation and altitude callouts from the crew's voice recording.

Robinson said, "From this trajectory information, and high resolution LROC Narrow-Angle Camera images and topography, we simulated what Armstrong saw in those final minutes as he guided the LM down to the surface of the Moon."

Video: https://youtu.be/ScFJBcLfasQ

As the video begins, Armstrong could see the autopilot was aiming to land on the rocky flank of West Crater (625 feet wide). This caused him to take over manual control and fly horizontally, searching for a safe landing spot. At the time, only Armstrong saw the hazard, and he was too busy flying the LM to discuss the situation with mission control.

After flying over the hazards presented by the bouldery flank of West Crater, Armstrong spotted a safe landing site about 1,600 feet ahead where he carefully descended to the surface. Just before landing, the LM flew over what was later called Little West Crater (135 feet wide). After landing Armstrong visited and photographed this crater during his extravehicular activity.

"Of course, during the landing he was able to lean forward and back and turn his head to gain a view that was better than the simple, fixed viewpoint presented here," said Robinson. "However, our simulated movie lets you relive those dramatic moments."

Robinson points out that because LROC's images were taken almost 50 years after the actual landing, the video shows the lander's descent stage on the surface. (It was used as a launch pad when the astronauts blasted off for their return to Earth.) And the video shows where they disturbed the lunar soil as they walked: look for dark thread-like paths.

Read more at Science Daily