Apr 17, 2020

Arctic stratospheric ozone levels hit record low in March

Ozone levels above the Arctic reached a record low for March, NASA researchers report. An analysis of satellite observations shows that ozone levels reached their lowest point on March 12 at 205 Dobson units.

While such low levels are rare, they are not unprecedented. Similar low ozone levels occurred in the upper atmosphere, or stratosphere, in 1997 and 2011. In comparison, the lowest March ozone value observed in the Arctic is usually around 240 Dobson units.

"This year's low Arctic ozone happens about once per decade," said Paul Newman, chief scientist for Earth Sciences at NASA's Goddard Space Flight Center in Greenbelt, Maryland. "For the overall health of the ozone layer, this is concerning since Arctic ozone levels are typically high during March and April."

Ozone is a highly reactive molecule composed of three oxygen atoms that occurs naturally in small amounts. The stratospheric ozone layer, roughly 7 to 25 miles above Earth's surface, acts as a sunscreen, absorbing harmful ultraviolet radiation that can damage plants and animals and affect people by causing cataracts, skin cancer and suppressed immune systems.

The March Arctic ozone depletion was caused by a combination of factors that arose due to unusually weak upper atmospheric "wave" events from December through March. These waves drive movements of air through the upper atmosphere akin to weather systems that we experience in the lower atmosphere, but much bigger in scale.

In a typical year, these waves travel upward from the mid-latitude lower atmosphere to disrupt the circumpolar winds that swirl around the Arctic. When they disrupt the polar winds, they do two things. First, they bring with them ozone from other parts of the stratosphere, replenishing the reservoir over the Arctic.

"Think of it like having a red-paint dollop, low ozone over the North Pole, in a white bucket of paint," Newman said. "The waves stir the white paint, higher amounts of ozone in the mid-latitudes, with the red paint or low ozone contained by the strong jet stream circling around the pole."

The mixing has a second effect, which is to warm the Arctic air. The warmer temperatures then make conditions unfavorable for the formation of polar stratospheric clouds. These clouds enable the release of chlorine for ozone-depleting reactions. The ozone-depleting chlorine and bromine are chemically active forms derived from human-made compounds, chlorofluorocarbons and halons, which are now banned by the Montreal Protocol. The mixing shuts down this chlorine- and bromine-driven ozone depletion.

In December 2019 and January through March of 2020, the stratospheric wave events were weak and did not disrupt the polar winds. The winds thus acted like a barrier, preventing ozone from other parts of the atmosphere from replenishing the low ozone levels over the Arctic. In addition, the stratosphere remained cold, leading to the formation of polar stratospheric clouds which allowed chemical reactions to release reactive forms of chlorine and cause ozone depletion.

"We don't know what caused the wave dynamics to be weak this year," Newman said. "But we do know that if we hadn't stopped putting chlorofluorocarbons into the atmosphere because of the Montreal Protocol, the Arctic depletion this year would have been much worse."

Since 2000, levels of chlorofluorocarbons and other human-made ozone-depleting substances have measurably decreased in the atmosphere and continue to do so. Chlorofluorocarbons are long-lived compounds that take decades to break down, and scientists expect stratospheric ozone levels to recover to 1980 levels by mid-century.

NASA researchers prefer the term "depletion" rather than "hole" for the Arctic, since despite the ozone layer's record low this year, the ozone loss is still much less than the annual ozone "hole" that occurs over Antarctica in September and October during Southern Hemisphere spring. For comparison, ozone levels over Antarctica typically drop to about 120 Dobson units.

Read more at Science Daily

New clues to predict the risks astronauts will face from space radiation on long missions

The National Aeronautics and Space Administration (NASA) aims to send human missions to Mars in the 2030s. But scientists are still trying to learn more about the potential cancer risks for astronauts due to radiation exposure. Cancer risk from galactic cosmic radiation exposure is considered a potential "showstopper" for a manned mission to Mars.

A team led by researchers at Colorado State University used a novel approach to test assumptions in a model used by NASA to predict these health risks. The NASA model predicts that astronauts will have more than a three percent risk of dying of cancer from the radiation exposures they will receive on a Mars mission. That level of risk exceeds what is considered acceptable.

The study, "Genomic mapping in outbred mice reveals overlap in genetic susceptibility for HZE ion- and gamma-ray-induced tumors," was published April 15 in Science Advances.

Radiation exposure in space is 'exotic'

When astronauts are sent into space, they are exposed to a type of radiation that is "pretty exotic, compared to radiation on earth," said Michael Weil, senior author of the study and a professor in the Department of Environmental and Radiological Health Sciences at CSU. The radiation comes from two sources: the sun and from supernovas.

Scientists know very little about these types of radiation and their effects on humans, because exposure on Earth is very limited, said Weil.

"The radiation type we're most concerned about are HZE ions or heavy ions," he added. "When you're in space, there is nothing to deflect this type of radiation. Some of these heavy ions will punch through a spacecraft hull, so when you send astronauts into space, you're exposing them to these types of radiation."

This radiation can damage molecules, cells and tissues, with the potential for cancer, cardiovascular disease and neurodegenerative disorders.

Previous studies of radiation risks have used health data from survivors of the atomic bombings of Hiroshima and Nagasaki. While those studies have provided some insight, Weil said the data poses a number of problems for real-world application, including comparing a wartime Japanese population to a peacetime U.S. population.

The type of radiation is also different, and has different effects. During the atomic bombings, people received radiation exposures instantaneously. But astronauts bound for Mars would be exposed to radiation continuously over three years.

Models mimicked genetically diverse humans

For the study, Weil and first author Dr. Elijah Edmondson, a veterinary pathologist and researcher based at the Frederick National Laboratory for Cancer Research in Maryland, used a unique stock of genetically diverse mice, mimicking a human population.

Mice were divided into three groups with the first group receiving no radiation exposure and the other two receiving varying levels of exposure.

Edmondson, who conducted the research while completing a veterinary residency in pathology at CSU, said that for this type of research project, genetic variability is crucial.

"Humans are very genetically diverse," he explained. "You want to model that when it's appropriate and feasible to do so."

Weil said that although the research team saw different tumor types, similar to humans, the heavy ions did not cause any unique types of cancer. They also saw differences by sex.

In humans, women are more susceptible to radiation-induced cancers than men; one of the main reasons is that women live longer, allowing sufficient time for cancer to develop. In assessing the cancer risk between male and female mice in the study, scientists said the findings parallel human data.

Edmondson said the study validates the NASA model to measure cancer risks for humans from space radiation.

NASA continuously updates its risk assessment model, said Weil, and has done so based on work that was previously done at CSU.

Read more at Science Daily

Genomics used to estimate Samoan population dynamics over 3,000 years

Reconstructing how many individuals first settled the many small islands of the Pacific, and when they arrived, remains an important scientific question, as well as an intriguing one for understanding human history. Human migrations into the islands of Remote Oceania -- from circa 3,000 to 1,200 years ago -- mark the last major movement into locations previously uninhabited by humans.

These questions are also crucial to scientific efforts to understand the role of the early history of Pacific Islanders in contemporary public health problems, including obesity and associated non-communicable diseases such as hypertension and Type 2 diabetes.

A new study in Proceedings of the National Academy of Sciences analyzed the genomes of 1,197 individuals in Samoa and found that the effective population size of the first Samoans was small -- ranging from 700 to 3,400 people during the time period from approximately 3,000 to about 1,000 years ago. Starting about 1,000 years ago, population size rapidly increased to about 10,000 individuals, coinciding with increasing agricultural and socio-political complexity, but also with previously hypothesized contacts with other Oceanic peoples.

This population history scenario for Samoa is consistent with the existing archaeological evidence of few, widely scattered and small-sized settlements in the first 2,000 years after Samoa's initial settlement. But it contrasts with archaeological population reconstructions of much larger population sizes for adjacent Pacific peoples in Tonga and Fiji during that first 1,500 to 2,000 years after initial discoveries around 3,000 years ago.

The research team's conclusions could help in understanding health conditions of particular importance to people in Samoa, home to some of the highest rates of obesity, heart disease and diabetes in the world.

"These findings are relevant for our ongoing public health research in Samoan populations because they highlight the importance of population history and size in influencing our ability to identify the effect of novel genetic variations, and their interactions with 21st century environments on population health," said Stephen McGarvey, study co-author and a professor of epidemiology and of anthropology at Brown University.

McGarvey has extensively studied obesity and the diseases that stem from it -- including diabetes, cardiovascular disease, kidney disease and cancer -- in Samoa; these conditions threaten not only individual health but also the nation's economic and social development.

"Smaller populations and the evolutionary mechanisms resulting from them, including genetic drift from bottlenecks and natural selection from novel challenging environments such as experienced by the first settlers of Samoa, make it easier to detect new gene variants and different frequencies of known variants that affect cardiometabolic disease risk factors now in the 21th century," he said.

The new study also found that modern Samoans derive largely from the Austronesian lineage, including the aboriginal peoples of Taiwan, Island Southeast Asia, coastal New Guinea and other island groups of Oceania -- but share 24% of their ancestry with Papuans, the descendants of the people who settled Papua/New Guinea, an estimate markedly lower than found in neighboring Polynesian groups.

The researchers also found strong evidence of population reduction coincident with outside contact from European-derived groups, presumably from infectious diseases new to Samoan immune systems and societal shocks from such epidemics. The whole genome sequence data from participants' DNA also enabled findings about some genetic diversification within Samoa that may be reflective of regional and local social processes. The genomic data also showed an increase in population size about 150 years ago.

Read more at Science Daily

Climate-driven megadrought is emerging in western US, says study

Cracked earth, drought concept
With the western United States and northern Mexico suffering an ever-lengthening string of dry years starting in 2000, scientists have been warning for some time that climate change may be pushing the region toward an extreme long-term drought worse than any in recorded history. A new study says the time has arrived: a megadrought as bad as or worse than anything known even from prehistory is very likely in progress, and a warming climate is playing a key role. The study, based on modern weather observations, 1,200 years of tree-ring data and dozens of climate models, appears this week in the leading journal Science.

"Earlier studies were largely model projections of the future," said lead author Park Williams, a bioclimatologist at Columbia University's Lamont-Doherty Earth Observatory. "We're no longer looking at projections, but at where we are now. We now have enough observations of current drought and tree-ring records of past drought to say that we're on the same trajectory as the worst prehistoric droughts."

Reliable modern observations date only to about 1900, but tree rings have allowed scientists to infer yearly soil moisture for centuries before humans began influencing climate. Among other things, previous research has tied catastrophic naturally driven droughts recorded in tree rings to upheavals among indigenous Medieval-era civilizations in the Southwest. The new study is the most up-to-date and comprehensive long-term analysis. It covers an area stretching across nine U.S. states from Oregon and Montana down through California and New Mexico, and part of northern Mexico.

Using rings from many thousands of trees, the researchers charted dozens of droughts across the region, starting in 800 AD. Four stand out as so-called megadroughts, with extreme aridity lasting decades: the late 800s, mid-1100s, the 1200s, and the late 1500s. After 1600, there were other droughts, but none on this scale.

The team then compared the ancient megadroughts to soil moisture records calculated from observed weather in the 19 years from 2000 to 2018. Their conclusion: as measured against the worst 19-year increments within the previous episodes, the current drought is already outdoing the three earliest ones. The fourth, which spanned 1575 to 1603, may have been the worst of all -- but the difference is slight enough to be within the range of uncertainty. Furthermore, the current drought is affecting wider areas more consistently than any of the earlier ones -- a fingerprint of global warming, say the researchers. All of the ancient droughts lasted longer than 19 years -- the one that started in the 1200s ran nearly a century -- but all began on a similar path to what is showing up now, they say.
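
As a rough illustration of the windowed comparison described above (not the study's actual code or data), the sketch below ranks a 19-year running mean of a soil-moisture anomaly series. The series here is a random placeholder; the real analysis combined tree-ring reconstructions with modern observations.

```python
# Minimal sketch, assuming a yearly soil-moisture anomaly series where lower
# values mean drier conditions. The anomalies below are synthetic placeholders.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
years = np.arange(800, 2019)                      # 800 AD through 2018
anomaly = rng.normal(0.0, 1.0, years.size)        # placeholder anomalies

series = pd.Series(anomaly, index=years)
rolling = series.rolling(19).mean()               # mean anomaly of each 19-year window

current = rolling.loc[2018]                       # the 2000-2018 window
frac_drier = (rolling.dropna() < current).mean()  # share of past windows drier than now
print(f"2000-2018 mean anomaly: {current:+.2f}; "
      f"{frac_drier:.1%} of all 19-year windows were drier")
```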

Nature drove the ancient droughts, and still plays a strong role today. A study last year led by Lamont's Nathan Steiger showed that among other things, unusually cool periodic conditions over the tropical Pacific Ocean (commonly called La Niña) during the previous megadroughts pushed storm tracks further north, and starved the region of precipitation. Such conditions, and possibly other natural factors, appear to have also cut precipitation in recent years. However, with global warming proceeding, the authors say that average temperatures since 2000 have been pushed 1.2 degrees C (2.2 F) above what they would have been otherwise. Because hotter air tends to hold more moisture, that moisture is being pulled from the ground. This has intensified drying of soils already starved of precipitation.

All told, the researchers say that rising temperatures are responsible for about half the pace and severity of the current drought. If this overall warming were subtracted from the equation, the current drought would rank as the 11th worst detected -- bad, but nowhere near what it has developed into.

"It doesn't matter if this is exactly the worst drought ever," said coauthor Benjamin Cook, who is affiliated with Lamont and the Goddard Institute for Space Studies. "What matters is that it has been made much worse than it would have been because of climate change." Since temperatures are projected to keep rising, it is likely the drought will continue for the foreseeable future; or fade briefly only to return, say the researchers.

"Because the background is getting warmer, the dice are increasingly loaded toward longer and more severe droughts," said Williams. "We may get lucky, and natural variability will bring more precipitation for a while. But going forward, we'll need more and more good luck to break out of drought, and less and less bad luck to go back into drought." Williams said it is conceivable the region could stay arid for centuries. "That's not my prediction right now, but it's possible," he said.

Lamont climatologist Richard Seager was one of the first to predict, in a 2007 paper, that climate change might eventually push the region into a more arid climate during the 21st century; he speculated at the time that the process might already be underway. By 2015, when 11 of the past 14 years had seen drought, Benjamin Cook led a follow-up study projecting that warming climate would cause the catastrophic natural droughts of prehistory to be repeated by the latter 21st century. A 2016 study coauthored by several Lamont scientists reinforced those findings. Now, says Cook, it looks like they may have underestimated. "It's already happening," he said.

The effects are palpable. The mighty reservoirs of Lake Mead and Lake Powell along the Colorado River, which supply agriculture around the region, have shrunk dramatically. Insect outbreaks are ravaging dried-out forests. Wildfires in California and across wider areas of the U.S. West are growing in area. While 2019 was a relatively wet year, leading to hope that things might be easing up, early indications show that 2020 is already on a track for resumed aridity.

"There is no reason to believe that the sort of natural variability documented in the paleoclimatic record will not continue into the future, but the difference is that droughts will occur under warmer temperatures," said Connie Woodhouse, a climate scientist at the University of Arizona who was not involved in the study. "These warmer conditions will exacerbate droughts, making them more severe, longer, and more widespread than they would have been otherwise."

Angeline Pendergrass, a staff scientist at the U.S. National Center for Atmospheric Research, said that she thinks it is too early to say whether the region is at the cusp of a true megadrought, because the study confirms that natural weather swings are still playing a strong role. That said, "even though natural variability will always play a large role in drought, climate change makes it worse," she said.

Read more at Science Daily

Apr 16, 2020

ESO telescope sees star dance around supermassive black hole, proves Einstein right

Observations made with ESO's Very Large Telescope (VLT) have revealed for the first time that a star orbiting the supermassive black hole at the centre of the Milky Way moves just as predicted by Einstein's general theory of relativity. Its orbit is shaped like a rosette and not like an ellipse as predicted by Newton's theory of gravity. This long-sought-after result was made possible by increasingly precise measurements over nearly 30 years, which have enabled scientists to unlock the mysteries of the behemoth lurking at the heart of our galaxy.

"Einstein's General Relativity predicts that bound orbits of one object around another are not closed, as in Newtonian Gravity, but precess forwards in the plane of motion. This famous effect -- first seen in the orbit of the planet Mercury around the Sun -- was the first evidence in favour of General Relativity. One hundred years later we have now detected the same effect in the motion of a star orbiting the compact radio source Sagittarius A* at the centre of the Milky Way. This observational breakthrough strengthens the evidence that Sagittarius A* must be a supermassive black hole of 4 million times the mass of the Sun," says Reinhard Genzel, Director at the Max Planck Institute for Extraterrestrial Physics (MPE) in Garching, Germany and the architect of the 30-year-long programme that led to this result.

Located 26,000 light-years from the Sun, Sagittarius A* and the dense cluster of stars around it provide a unique laboratory for testing physics in an otherwise unexplored and extreme regime of gravity. One of these stars, S2, sweeps in towards the supermassive black hole to a closest distance less than 20 billion kilometres (one hundred and twenty times the distance between the Sun and Earth), making it one of the closest stars ever found in orbit around the massive giant. At its closest approach to the black hole, S2 is hurtling through space at almost three percent of the speed of light, completing an orbit once every 16 years. "After following the star in its orbit for over two and a half decades, our exquisite measurements robustly detect S2's Schwarzschild precession in its path around Sagittarius A*," says Stefan Gillessen of the MPE, who led the analysis of the measurements published today in the journal Astronomy & Astrophysics.

Most stars and planets have a non-circular orbit and therefore move closer to and further away from the object they are rotating around. S2's orbit precesses, meaning that the location of its closest point to the supermassive black hole changes with each turn, such that the next orbit is rotated with regard to the previous one, creating a rosette shape. General Relativity provides a precise prediction of how much its orbit changes and the latest measurements from this research exactly match the theory. This effect, known as Schwarzschild precession, had never before been measured for a star around a supermassive black hole.
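
For readers who want the number behind this effect, the short sketch below evaluates the standard Schwarzschild precession formula, delta_phi = 6*pi*G*M / (c^2 * a * (1 - e^2)) per orbit, using approximate published values for S2's orbit. The exact orbital elements used by the GRAVITY team differ slightly, so treat the result as an order-of-magnitude check.

```python
# Back-of-the-envelope estimate of S2's relativistic precession per orbit.
# The mass, semi-major axis and eccentricity are approximate values, used
# here only for illustration.
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
M_sun = 1.989e30       # solar mass, kg
AU = 1.496e11          # astronomical unit, m

M = 4.0e6 * M_sun      # ~4 million solar masses for Sagittarius A*
a = 970 * AU           # approximate semi-major axis of S2's orbit
e = 0.88               # approximate eccentricity of S2's orbit

delta_phi = 6 * math.pi * G * M / (c**2 * a * (1 - e**2))   # radians per orbit
arcmin = math.degrees(delta_phi) * 60
print(f"Predicted precession: ~{arcmin:.0f} arcminutes per 16-year orbit")
```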

The study with ESO's VLT also helps scientists learn more about the vicinity of the supermassive black hole at the centre of our galaxy. "Because the S2 measurements follow General Relativity so well, we can set stringent limits on how much invisible material, such as distributed dark matter or possible smaller black holes, is present around Sagittarius A*. This is of great interest for understanding the formation and evolution of supermassive black holes," say Guy Perrin and Karine Perraut, the French lead scientists of the project.

This result is the culmination of 27 years of observations of the S2 star using, for the best part of this time, a fleet of instruments at ESO's VLT, located in the Atacama Desert in Chile. The number of data points marking the star's position and velocity attests to the thoroughness and accuracy of the new research: the team made over 330 measurements in total, using the GRAVITY, SINFONI and NACO instruments. Because S2 takes years to orbit the supermassive black hole, it was crucial to follow the star for close to three decades, to unravel the intricacies of its orbital movement.

The research was conducted by an international team led by Frank Eisenhauer of the MPE with collaborators from France, Portugal, Germany and ESO. The team make up the GRAVITY collaboration, named after the instrument they developed for the VLT Interferometer, which combines the light of all four 8-metre VLT telescopes into a super-telescope (with a resolution equivalent to that of a telescope 130 metres in diameter). The same team reported in 2018 another effect predicted by General Relativity: they saw the light received from S2 being stretched to longer wavelengths as the star passed close to Sagittarius A*. "Our previous result has shown that the light emitted from the star experiences General Relativity. Now we have shown that the star itself senses the effects of General Relativity," says Paulo Garcia, a researcher at Portugal's Centre for Astrophysics and Gravitation and one of the lead scientists of the GRAVITY project.

Read more at Science Daily

Influenza: researchers show that new treatment reduces spread of virus

The antiviral drug baloxavir (trade name Xofluza) is the first influenza treatment with a new mode of action to be licensed in nearly 20 years. It was approved in Australia in February 2020 by the Therapeutic Goods Administration (TGA) and has been used to treat influenza in Japan, the USA, and several other countries since 2018.

Researchers at the WHO Collaborating Centre for Reference and Research on Influenza at the Peter Doherty Institute for Infection and Immunity (Doherty Institute -- a joint venture between the University of Melbourne and Royal Melbourne Hospital) and Imperial College London tested whether baloxavir could prevent the spread of influenza virus in an animal model in conditions that mimicked household settings, including direct and indirect contact. They also compared the treatment to oseltamivir (tradename Tamiflu), a widely prescribed influenza antiviral.

Published today in PLoS Pathogens, the study -- which was conducted in ferrets, considered the gold-standard animal model for evaluating influenza -- details how baloxavir reduced the transmission of influenza across all settings, and did so immediately. Conversely, oseltamivir did not reduce the transmission of influenza to other ferrets.

First author Leo Yi Yang Lee, a medical scientist at the WHO Collaborating Centre for Reference and Research on Influenza, believes the results are an important breakthrough in our understanding of managing the influenza virus.

"Our research provides evidence that baloxavir can have a dramatic dual effect: a single dose reduces the length of influenza illness, while simultaneously reducing the chance of passing it on to others," Mr Lee said.

"This is very important, because current antiviral drugs only treat influenza illness in the infected patient. If you want to reduce the spread of influenza to others, people in close contact need to take antiviral drugs themselves to stave off infection."

Senior author Professor Wendy Barclay, head of the Department of Infectious Disease at Imperial College London, said if the results of the study were replicated in humans, the discovery could be a game changer in stemming outbreaks of influenza, particularly amongst vulnerable groups.

"We know that influenza can have serious and devastating outcomes for people with compromised immune systems, such as those in care facilities and hospitals, where finding more ways to reduce transmission is essential," Professor Barclay said.

A first-of-its-kind clinical trial is currently underway to test the effectiveness of baloxavir in reducing transmission amongst human household contacts by treating individuals infected with influenza and monitoring for infection in household members.

Read more at Science Daily

New geochemical tool reveals origin of Earth's nitrogen

Yellowstone National Park
Researchers at Woods Hole Oceanographic Institution (WHOI), the University of California Los Angeles (UCLA) and their colleagues used a new geochemical tool to shed light on the origin of nitrogen and other volatile elements on Earth, which may also prove useful as a way to monitor the activity of volcanoes. Their findings were published April 16, 2020, in the journal Nature.

Nitrogen is the most abundant gas in the atmosphere, and is the primary component of the air we breathe. Nitrogen is also found in rocks, including those tucked deep within the planet's interior. Until now, it was difficult to distinguish between nitrogen sources coming from air and those coming from inside the Earth's mantle when measuring gases from volcanoes.

"We found that air contamination was masking the pristine 'source signature' of many volcanic gas samples," says WHOI geochemist Peter Barry, a coauthor of the study.

Without that distinction, scientists weren't able to answer basic questions like: Is nitrogen left over from Earth's formation or was it delivered to the planet later on? How is nitrogen from the atmosphere related to nitrogen coming out of volcanoes?

Barry and lead author Jabrane Labidi of UCLA, now a researcher at Institut de Physique du Globe de Paris, worked in partnership with international geochemists to analyze volcanic gas samples from around the globe -- including gases from Iceland and Yellowstone National Park -- using a new method of analyzing "clumped" nitrogen isotopes. This method provided a unique way to identify molecules of nitrogen that come from air, which allowed the researchers to see the true gas compositions deep within Earth's mantle. This ultimately revealed evidence that nitrogen in the mantle has most likely been there since our planet initially formed.

"Once air contamination is accounted for, we gained new and valuable insights into the origin of nitrogen and the evolution of our planet," Barry says.

While this new method helps scientists understand the origins of volatile elements on Earth, it may also prove useful as a way of monitoring the activity of volcanoes. This is because the composition of gases billowing from volcanic centers changes prior to eruptions. The mix of mantle and air nitrogen could one day be used as a signal of impending eruptions.

This study was supported by the Deep Carbon Observatory and the Alfred P. Sloan Foundation. The research team also included colleagues David Bekaert and Mark Kurz from WHOI, scientists from several other U.S.-based universities, and from France, Canada, Italy, the United Kingdom and Iceland.

From Science Daily

Earth-size, habitable-zone planet found hidden in early NASA Kepler data

Exoplanet illustration
A team of transatlantic scientists, using reanalyzed data from NASA's Kepler space telescope, has discovered an Earth-size exoplanet orbiting in its star's habitable zone, the area around a star where a rocky planet could support liquid water.

Scientists discovered this planet, called Kepler-1649c, when looking through old observations from Kepler, which the agency retired in 2018. While previous searches with a computer algorithm misidentified it, researchers reviewing Kepler data took a second look at the signature and recognized it as a planet. Out of all the exoplanets found by Kepler, this distant world -- located 300 light-years from Earth -- is most similar to Earth in size and estimated temperature.

This newly revealed world is only 1.06 times the size of our own planet. Also, the amount of starlight it receives from its host star is 75% of the amount of light Earth receives from our Sun -- meaning the exoplanet's temperature may be similar to our planet's as well. But unlike Earth, it orbits a red dwarf. Though none have been observed in this system, this type of star is known for stellar flare-ups that may make a planet's environment challenging for any potential life.
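
The link between 75% of Earth's insolation and a "similar" temperature follows from the standard equilibrium-temperature scaling: for a fixed albedo, T_eq is proportional to the fourth root of the incident stellar flux. The snippet below works that out under the simplifying assumption of an Earth-like albedo, which is not something the Kepler data constrain.

```python
# Quick illustration: equilibrium temperature scales as flux**(1/4).
# The Earth-like albedo assumption is ours, not a measured property of the planet.
T_earth_eq = 255.0        # Earth's equilibrium temperature in kelvin (no greenhouse)
flux_ratio = 0.75         # Kepler-1649c receives ~75% of Earth's insolation

T_planet_eq = T_earth_eq * flux_ratio ** 0.25
print(f"Estimated equilibrium temperature: {T_planet_eq:.0f} K "
      f"({T_planet_eq - T_earth_eq:+.0f} K relative to Earth)")
```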

"This intriguing, distant world gives us even greater hope that a second Earth lies among the stars, waiting to be found," said Thomas Zurbuchen, associate administrator of NASA's Science Mission Directorate in Washington. "The data gathered by missions like Kepler and our Transiting Exoplanet Survey Satellite [TESS] will continue to yield amazing discoveries as the science community refines its abilities to look for promising planets year after year."

There is still much that is unknown about Kepler-1649c, including its atmosphere, which could affect the planet's temperature. Current calculations of the planet's size have significant margins of error, as do all values in astronomy when studying objects so far away. But based on what is known, Kepler-1649c is especially intriguing for scientists looking for worlds with potentially habitable conditions.

There are other exoplanets estimated to be closer to Earth in size, such as TRAPPIST-1f and, by some calculations, Teegarden c. Others may be closer to Earth in temperature, such as TRAPPIST-1d and TOI 700d. But there is no other exoplanet that is considered to be closer to Earth in both of these values that also lies in the habitable zone of its system.

"Out of all the mislabeled planets we've recovered, this one's particularly exciting -- not just because it's in the habitable zone and Earth-size, but because of how it might interact with this neighboring planet," said Andrew Vanderburg, a researcher at the University of Texas at Austin and first author on the paper released today in The Astrophysical Journal Letters. "If we hadn't looked over the algorithm's work by hand, we would have missed it."

Kepler-1649c orbits its small red dwarf star so closely that a year on Kepler-1649c is equivalent to only 19.5 Earth days. The system has another rocky planet of about the same size, but it orbits the star at about half the distance of Kepler-1649c, similar to how Venus orbits our Sun at about half the distance that Earth does. Red dwarf stars are among the most common in the galaxy, meaning planets like this one could be more common than we previously thought.

Looking for False Positives


Previously, scientists on the Kepler mission developed an algorithm called Robovetter to help sort through the massive amounts of data produced by the Kepler spacecraft, managed by NASA's Ames Research Center in California's Silicon Valley. Kepler searched for planets using the transit method, staring at stars, looking for dips in brightness as planets passed in front of their host stars.

Most of the time, those dips come from phenomena other than planets -- ranging from natural changes in a star's brightness to other cosmic objects passing by -- making it look like a planet is there when it's not. Robovetter's job was to distinguish the 12% of dips that were real planets from the rest. Those signatures Robovetter determined to be from other sources were labeled "false positives," the term for a test result mistakenly classified as positive.
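
As a toy illustration of the transit signal Robovetter sifts through (not the mission's actual pipeline), the sketch below injects an exaggerated brightness dip into a noisy light curve and flags points that fall well below the star's baseline.

```python
# Minimal transit-style dip detection on a synthetic light curve.
# The dip depth is exaggerated for visibility; real Earth-size transits are
# far shallower and require much more careful vetting.
import numpy as np

rng = np.random.default_rng(1)
sigma = 5e-4                                  # photometric noise level
flux = 1.0 + rng.normal(0.0, sigma, 2000)     # normalized brightness of the star
flux[500:520] -= 0.01                         # injected transit-like dip
flux[1500:1520] -= 0.01                       # the "planet" transits again one orbit later

dips = np.flatnonzero(flux < 1.0 - 5 * sigma) # samples more than 5 sigma below baseline
print("Candidate in-transit samples:", dips)
```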

With an enormous number of tricky signals, astronomers knew the algorithm would make mistakes and would need to be double-checked -- a perfect job for the Kepler False Positive Working Group. That team reviews Robovetter's work, going through each false positive to ensure they are truly errors and not exoplanets, ensuring fewer potential discoveries are overlooked. As it turns out, Robovetter had mislabeled Kepler-1649c.

Even as scientists work to further automate analysis processes to get as much science as possible out of any given dataset, this discovery shows the value of double-checking automated work. Even six years after Kepler stopped collecting data from the original Kepler field -- a patch of sky it stared at from 2009 to 2013, before going on to study many more regions -- this rigorous analysis uncovered one of the most unique Earth analogs discovered yet.

A Possible Third Planet

Kepler-1649c is not only one of the best matches to Earth in terms of size and energy received from its star, but it provides an entirely new look at its home system. For every four orbits of the outer planet around the host star, the inner planet orbits almost exactly nine times. The fact that their orbits match up in such a stable ratio indicates the system itself is extremely stable and likely to survive for a long time.

Nearly perfect period ratios are often caused by a phenomenon called orbital resonance, but a nine-to-four ratio is relatively unique among planetary systems. Usually resonances take the form of ratios such as two-to-one or three-to-two. Though unconfirmed, the rarity of this ratio could hint at the presence of a middle planet with which both the inner and outer planets revolve in synchronicity, creating a pair of three-to-two resonances.
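
The arithmetic behind that hypothesis is simple: two adjacent three-to-two period ratios compound to nine-to-four. The sketch below shows this, anchored to Kepler-1649c's published ~19.5-day orbit; the roughly 8.7-day period assumed for the inner planet is an approximate published value not quoted in this article, and the implied middle-planet period is purely illustrative.

```python
# Two stacked 3:2 resonances reproduce the observed ~9:4 outer/inner ratio.
inner_period = 8.7                    # days, Kepler-1649b (approximate value)
middle_period = inner_period * 3 / 2  # hypothetical unseen middle planet
outer_period = middle_period * 3 / 2  # Kepler-1649c

print(f"outer/inner period ratio: {outer_period / inner_period:.2f} (9/4 = {9 / 4})")
print(f"implied middle-planet period: {middle_period:.1f} days")
```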

The team looked for evidence of such a mystery third planet, with no results. However, that could be because the planet is too small to see or at an orbital tilt that makes it impossible to find using Kepler's transit method.

Either way, this system provides yet another example of an Earth-size planet in the habitable zone of a red dwarf star. These small and dim stars require planets to orbit extremely close to be within that zone -- not too warm and not too cold -- for life as we know it to potentially exist. Though this single example is only one among many, there is increasing evidence that such planets are common around red dwarfs.

Read more at Science Daily

Apr 15, 2020

When damaged, the adult brain repairs itself by going back to the beginning

When adult brain cells are injured, they revert to an embryonic state, according to new findings published in the April 15, 2020 issue of Nature by researchers at University of California San Diego School of Medicine, with colleagues elsewhere. The scientists report that in their newly adopted immature state, the cells become capable of re-growing new connections that, under the right conditions, can help to restore lost function.

Repairing damage to the brain and spinal cord may be medical science's most daunting challenge. Until relatively recently, it seemed an impossible task. The new study lays out a "transcriptional roadmap of regeneration in the adult brain."

"Using the incredible tools of modern neuroscience, molecular genetics, virology and computational power, we were able for the first time to identify how the entire set of genes in an adult brain cell resets itself in order to regenerate. This gives us fundamental insight into how, at a transcriptional level, regeneration happens," said senior author Mark Tuszynski, MD, PhD, professor of neuroscience and director of the Translational Neuroscience Institute at UC San Diego School of Medicine.

Using a mouse model, Tuszynski and colleagues discovered that after injury, mature neurons in adult brains revert back to an embryonic state. "Who would have thought," said Tuszynski. "Only 20 years ago, we were thinking of the adult brain as static, terminally differentiated, fully established and immutable."

But work by Fred "Rusty" Gage, PhD, president and a professor at the Salk Institute for Biological Studies and an adjunct professor at UC San Diego, and others found that new brain cells are continually produced in the hippocampus and subventricular zone, replenishing these brain regions throughout life.

"Our work further radicalizes this concept," Tuszynski said. "The brain's ability to repair or replace itself is not limited to just two areas. Instead, when an adult brain cell of the cortex is injured, it reverts (at a transcriptional level) to an embryonic cortical neuron. And in this reverted, far less mature state, it can now regrow axons if it is provided an environment to grow into. In my view, this is the most notable feature of the study and is downright shocking."

To provide an "encouraging environment for regrowth," Tuszynski and colleagues investigated how damaged neurons respond after a spinal cord injury. In recent years, researchers have significantly advanced the possibility of using grafted neural stem cells to spur spinal cord injury repairs and restore lost function, essentially by inducing neurons to extend axons through and across an injury site, reconnecting severed nerves.

Last year, for example, a multi-disciplinary team led by Kobi Koffler, PhD, assistant professor of neuroscience, Tuszynski, and Shaochen Chen, PhD, professor of nanoengineering and a faculty member in the Institute of Engineering in Medicine at UC San Diego, described using 3D printed implants to promote nerve cell growth in spinal cord injuries in rats, restoring connections and lost functions.

The latest study produced a second surprise: In promoting neuronal growth and repair, one of the essential genetic pathways involves the gene Huntingtin (HTT), which, when mutated, causes Huntington's disease, a devastating disorder characterized by the progressive breakdown of nerve cells in the brain.

Tuszynski's team found that the "regenerative transcriptome" -- the collection of messenger RNA molecules used by corticospinal neurons -- is sustained by the HTT gene. In mice genetically engineered to lack the HTT gene, spinal cord injuries showed significantly less neuronal sprouting and regeneration.

Read more at Science Daily

COPD as a lung stem cell disease

Two stem cell experts have found an abundance of abnormal stem cells in the lungs of patients who suffer from Chronic Obstructive Pulmonary Disease (COPD), a leading cause of death worldwide. Frank McKeon, professor of biology and biochemistry and director of the Stem Cell Center, and Wa Xian, research associate professor at the center, used single cell cloning of lung stem cells to make their discovery. Now they are targeting the cells for new therapeutics.

"We actually found that three variant cells in all COPD patients drive all the key features of the disease. One produces tremendous amounts of mucins which block the small airways, while the other two drive fibrosis and inflammation which together degrade the function of the lung," Xian reports in the May 14 issue of the journal Cell. "These patients have normal stem cells, though not many of them, but they are dominated by the three variant cells that together make up the disease," she said.

COPD is a progressive inflammatory disease of the lungs marked by chronic bronchitis, small airway occlusion, inflammation, fibrosis and destruction of alveoli, tiny air sacs in the lungs which exchange oxygen and carbon dioxide molecules in the blood. The Global Burden of Disease Study reports 251 million cases of COPD globally in 2016.

"It's a frustrating disease to care for. We can try and improve the symptoms, but we don't have anything that can cure the disease or prevent death," said UConn Health pulmonologist and critical care doctor Mark Metersky, who gathered the stem cells from lung fluid while performing bronchoscopies.

Despite COPD's ranking among the leading causes of death worldwide, relatively little has been written or understood about the root cause of the disease.

Over the past decade, Xian and McKeon developed technology for cloning stem cells of the lungs and airways, and they have been applying it ever since, noting that different parts of the airways give rise to different stem cells, related but distinguishable.

"It's quite remarkable," said McKeon. "In the deep lung, the distal airway stem cells gave rise to both the distal tubes and the alveoli and our research indicates those are the stem cells that make it possible for lungs to regenerate on their own." Xian and McKeon discovered lung regeneration in 2011 in their studies of subjects recovering from infections by an H1N1 influenza virus that was nearly identical to that which sparked the 1918 pandemic.

Xian and McKeon found that, in contrast to normal lungs, COPD lungs were inundated by three unusual variant lung stem cells that are committed to form metaplastic lesions known to inhabit COPD lungs, but seen by many as a secondary effect without a causal link to the pathology of COPD. After the team's postdoctoral fellow, Wei Rao, transplanted each of the COPD clones into immunodeficient subjects, the team found they not only gave rise to the distinct metaplastic lesions of COPD, but they separately triggered the triad of pathologies of COPD including mucus hypersecretion, fibrosis and chronic inflammation.

"The long-overlooked metaplastic lesions in COPD were, in fact, driving the disease rather than merely secondary consequences of the condition," said McKeon.

Now that the team knows the identity of the cells that cause inflammation, fibrosis and small airway obstruction, they are hard at work screening them against libraries of drug-like molecules to discover new therapeutics.

Read more at Science Daily

'Directing' evolution to identify potential drugs earlier in discovery

Illustration of molecular models of antibodies.
Scientists have developed a technique that could significantly reduce the time of discovering potential new antibody-based drugs to treat disease.

Antibodies are produced by the body in response to the presence of a disease-causing agent. They can also be synthesised in the laboratory to mimic natural antibodies and are used to treat a number of diseases.

Antibody therapies can be highly effective, but challenges can arise when promising candidate antibodies are produced at a large scale. Stresses encountered during manufacturing can disrupt the structure of these fragile proteins leading to aggregation and loss of activity. This in turn prevents them from being made into a therapeutic.

New research from an eight-year collaboration between scientists at the University of Leeds and the biopharmaceutical company AstraZeneca has resulted in a technique that allows fragments of antibodies to be screened for susceptibility to aggregation caused by structure disruption much earlier in the drug discovery process.

The approach is described in the journal Nature Communications, published today.

Dr David Brockwell, Associate Professor in the Astbury Centre for Structural Molecular Biology at the University of Leeds, led the research. He said: "Antibody therapeutics have revolutionised medicine. They can be designed to bind to almost any target and are highly specific.

"But a significant problem has been the failure rate of candidates upon manufacturing at industrial scale. This often only emerges at a very late stage in the development process -- these drugs are failing at the last hurdle.

"But our research is turning that problem on its head."

When it comes to developing an antibody drug, scientists are not restricted to a single protein sequence. Fortunately, there is often an array of similar antibodies with the same ability to lock or bind tightly onto a disease-causing agent. That gives researchers a range of proteins to screen, to determine which are more likely to progress through the development process.

Professor Sheena Radford, FRS, Director of the Astbury Centre for Structural Molecular Biology, said: "The collaboration that has existed between the team of scientists within the University of Leeds and AstraZeneca demonstrates the power of industry and academia working together to tackle what has been one of the major roadblocks preventing the efficient and rapid development of these powerful therapeutic molecules."

How the target proteins are screened

The target proteins are cloned into the centre of an enzyme that breaks down antibiotics in the bacterium E.coli. This enables the scientists to directly link antibiotic resistance of the bacteria to how aggregation-prone the antibody fragment is. A simple readout -- bacterial growth on an agar plate containing antibiotic -- gives an indication of whether the target protein will survive the manufacturing process. If the antibody proteins are susceptible to stress, they will unfold or clump together, become inactive, and the antibiotic will kill the bacteria. But if the protein chain is more stable, the bacteria thrives and will display antimicrobial resistance and will grow in the presence of the antibiotic.

The scientists harvest the bacteria that have survived and identify the cloned protein sequence. That indicates which protein sequences to take forward in the development pipeline. The whole cycle takes about a month and increases the chances of success.

Directed evolution

But the process can go a step further, using the idea of directed evolution.

Scientists use the idea of natural selection, in which mutations or changes take place in the proteins, sometimes making them more stable. Directed evolution could generate new, better-performing sequences that, at the current time, cannot even be imagined, let alone designed and manufactured. How does this method work? As in Darwin's natural selection, evolutionary pressure -- here applied by the antibiotic -- selects for the survival of bacteria that produce protein variants that do not aggregate.

The protein sequences hosted in the bacterial cells that have shown resistance are harvested and their genes sequenced and scored, to select the best performing sequences. After a quick check to ensure that the new antibody sequences still retain their excellent binding capability for the original disease-causing target, they can be taken forward for further development.
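
To make the loop concrete, here is a deliberately simplified sketch of a directed-evolution cycle in the spirit described above: mutate, select survivors, repeat. The sequence, the mutation scheme and the survival test are all made-up placeholders, not the Leeds/AstraZeneca assay or its scoring.

```python
# Toy directed-evolution loop. survives_selection() stands in for the
# antibiotic-growth readout; it is NOT the real aggregation assay.
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def mutate(seq: str, n_mut: int = 1) -> str:
    """Return a copy of seq with n_mut random single-residue substitutions."""
    s = list(seq)
    for _ in range(n_mut):
        pos = random.randrange(len(s))
        s[pos] = random.choice(AMINO_ACIDS)
    return "".join(s)

def survives_selection(seq: str) -> bool:
    """Placeholder fitness test: few tryptophans stands in for 'less aggregation-prone'."""
    return seq.count("W") <= 1

def directed_evolution(parent: str, rounds: int = 5, library_size: int = 200) -> list[str]:
    pool = [parent]
    for _ in range(rounds):
        library = [mutate(random.choice(pool)) for _ in range(library_size)]
        survivors = [s for s in library if survives_selection(s)]
        pool = survivors or pool          # keep the old pool if nothing survives
    return pool

if __name__ == "__main__":
    random.seed(0)
    parent = "MKYLLPTAAAGLLLLAAQPAMAWWEVQ"  # toy sequence, not a real antibody fragment
    winners = directed_evolution(parent)
    print(f"{len(winners)} sequences passed selection after 5 rounds")
```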

Professor Radford said: "There is tremendous excitement about this approach. We are letting evolutionary selection change the sequence of the proteins for us and that might make some of them more useful as drug therapies. Importantly for industry, nature does the hard work -- obviating the need for so-called rational engineering, which is time- and resource-intensive.

"As we do this, we will be putting the sequence information we gather into a database. As the database gets bigger, it may well be possible with artificial intelligence and machine learning to be able to identify the patterns in protein sequences that tell us that a protein can be scaled up for pharmaceutical production without needing any experiments. That is our next challenge and one we are tackling right now."

Dr David Lowe, who led the work at AstraZeneca, said: "The screening system that we have developed here is a great example of industry and academia working together to solve important challenges in the development of potential new medicines."

Read more at Science Daily

Volcanic carbon dioxide emissions helped trigger Triassic climate change

Volcanic eruption
A new study finds volcanic activity played a direct role in triggering extreme climate change at the end of the Triassic period 201 million years ago, wiping out almost half of all existing species. The amount of carbon dioxide released into the atmosphere from these volcanic eruptions is comparable to the amount of CO2 expected to be produced by all human activity in the 21st century.

The end-Triassic extinction has long been thought to have been caused by dramatic climate change and rising sea levels. While there was large-scale volcanic activity at the time, known as the Central Atlantic Magmatic Province eruptions, the role it played in directly contributing to the extinction event is debated. In a study for Nature Communications, an international team of researchers, including McGill professor Don Baker, found evidence of bubbles of carbon dioxide trapped in volcanic rocks dating to the end of the Triassic, supporting the theory that volcanic activity contributed to the devastating climate change believed to cause the mass extinction.

The researchers suggest that the end-Triassic environmental changes driven by volcanic carbon dioxide emissions may have been similar to those predicted for the near future. By analysing tiny gas exsolution bubbles preserved within the rocks, the team estimates that the amount of carbon emissions released in a single eruption -- comparable to 100,000 km3 of lava spewed over 500 years -- is likely equivalent to the total produced by all human activity during the 21st century, assuming a 2 degrees Celsius rise in global temperature above pre-industrial levels.

"Although we cannot precisely determine the total amount of carbon dioxide released into the atmosphere when these volcanoes erupted, the correlation between this natural injection of carbon dioxide and the end-Triassic extinction should be a warning to us. Even a slight possibility that the carbon dioxide we are now putting into the atmosphere could cause a major extinction event is enough to make me worried," says professor of earth and planetary sciences Don Baker.

From Science Daily

Apr 14, 2020

Geneticists zeroing in on genes affecting life span

Scientists believe about 25 percent of the differences in human life span are determined by genetics -- with the rest determined by environmental and lifestyle factors. But they don't yet know all the genes that contribute to a long life.

A study published March 5, 2020, in PLOS Biology quantified variation in life span in the fruit fly genome, providing valuable insights for preserving health in elderly humans -- an ever-increasing segment of the population. The paper titled "Context-dependent genetic architecture of Drosophila life span" is the culmination of a decade of research by Clemson University geneticists Trudy Mackay and Robert Anholt.

It remains difficult to address the genetic basis for life span in humans, so researchers conduct their experiments with model systems. Mackay, the Self Family Endowed Chair of Human Genetics, is one of the world's leading experts on the Drosophila melanogaster model (aka the common fruit fly), which is an excellent model for comparative analysis of human disease and aging. About 70 percent of the fruit fly genome has a human counterpart.

"The fly lines are representative of a natural population and they are very diverse, with more than two million variants captured in these lines," said Mackay, who conducted the study using sequenced, inbred lines of the Drosophila melanogaster Genetic Reference Panel (DGRP).

In their experiment, Mackay and her team used the DGRP lines and an outbred population derived from these fly lines to examine variation in life span among male and female flies raised in three different temperature environments (18, 25 and 28 degrees Celsius).

In the process, they addressed which genes and variants are associated with increased life span and whether these genes and variants are the same in males and females and in different environments.

A variant -- a change in a single base of the DNA code -- is introduced into a population as a mutation. The laws of natural selection indicate that variants with favorable characteristics will survive and be passed down to subsequent generations, while those with deleterious effects will not.

After conducting quantitative genetic analysis of life span and the micro-environmental variance of life span in the DGRP line, the researchers discovered that the genetic architecture of life span is context dependent. The same genes and variants had different effects in males and females and also different effects based on the temperature in which they were grown.

According to Mackay, understanding variants is much more complex than the scientific community previously believed. She emphasized that the male and female differences were particularly surprising.

"If average life span of a variant was increased in females, it was decreased in males," said Mackay, director of Clemson's Center for Human Genetics in Greenwood, South Carolina. "This is an example of what is called antagonistic pleiotropy, meaning the same variant has opposite effects on different traits. In this case it's the same trait, but its effects are opposite in males and females."

This is significant, Mackay said, because it has been predicted in theory that variants with opposite effects in males and females would be maintained in natural populations and cause variation in life span. However, experimental examples of such variants have not been observed previously.

Mackay and her team had another major finding in their study. Of all the genes they identified as being associated with increased life span, 1,008 of them overlapped with genes previously identified as such by other researchers who had selected fruit flies for increased life span.

"We were very pleased to find out that even though life span is a very complicated trait caused by variation on a large number of loci, which is true for most complex traits, the number of loci that are in common is a totally finite number. So, we can imagine going on to the next stage and investigating one gene at a time and in combination," Mackay said.

Much of this research was conducted by Mackay and her team when she was a faculty member at North Carolina State University. She joined Clemson University in July 2018.

Read more at Science Daily

NASA missions help reveal the power of shock waves in a nova explosion

Nova illustration
Unprecedented observations of a nova outburst in 2018 by a trio of satellites, including two NASA missions, have captured the first direct evidence that most of the explosion's visible light arose from shock waves -- abrupt changes of pressure and temperature formed in the explosion debris.

A nova is a sudden, short-lived brightening of an otherwise inconspicuous star. It occurs when a stream of hydrogen from a companion star flows onto the surface of a white dwarf, a compact stellar cinder not much larger than Earth. NASA's Fermi and NuSTAR space telescopes, together with the Canadian BRITE-Toronto satellite and several ground-based facilities, studied the nova.

"Thanks to an especially bright nova and a lucky break, we were able to gather the best-ever visible and gamma-ray observations of a nova to date," said Elias Aydi, an astronomer at Michigan State University in East Lansing who led an international team from 40 institutions. "The exceptional quality of our data allowed us to distinguish simultaneous flares in both optical and gamma-ray light, which provides smoking-gun evidence that shock waves play a major role in powering some stellar explosions."

The 2018 outburst originated from a star system later dubbed V906 Carinae, which lies about 13,000 light-years away in the constellation Carina. Over time -- perhaps tens of thousands of years for a so-called classical nova like V906 Carinae -- the white dwarf's deepening hydrogen layer reaches critical temperatures and pressures. It then erupts in a runaway reaction that blows off all of the accumulated material.

Each nova explosion releases a total of 10,000 to 100,000 times the annual energy output of our Sun. Astronomers discover about 10 novae each year in our galaxy.
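For a rough sense of scale, here is a minimal back-of-the-envelope sketch in Python. It assumes a solar luminosity of about 3.8 x 10^26 watts, a standard figure that is not stated in the article:

```python
# Rough scale of a nova's energy release, based on the article's figure of
# 10,000 to 100,000 times the Sun's annual energy output.
# Assumes a solar luminosity of ~3.828e26 W (a textbook value, not from the article).

SOLAR_LUMINOSITY_W = 3.828e26          # watts
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~3.16e7 seconds

annual_solar_output_j = SOLAR_LUMINOSITY_W * SECONDS_PER_YEAR  # ~1.2e34 joules

for factor in (1e4, 1e5):
    print(f"{factor:.0e} x annual solar output ~ {factor * annual_solar_output_j:.1e} J")
```

On those assumptions, a single nova releases somewhere around 10^38 to 10^39 joules.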

Fermi detected its first nova in 2010 and has observed 14 to date. Although X-ray and radio studies had shown the presence of shock waves in nova debris in the weeks after the explosions reached peak brightness, the Fermi discovery came as a surprise.

Gamma rays -- the highest-energy form of light -- require processes that accelerate subatomic particles to extreme energies. When these particles interact with each other and with other matter, they produce gamma rays. But astronomers hadn't expected novae to be powerful enough to produce the required degree of acceleration.

Because the gamma rays appear at about the same time as the peak in visible light, astronomers concluded that shock waves play a more fundamental role in the explosion and its aftermath.

In 2015, a paper led by Brian Metzger at Columbia University in New York showed how comparing Fermi gamma-ray data with optical observations would allow scientists to learn more about nova shock waves. In 2017, a study led by Kwon-Lok Li at Michigan State found that the overall gamma-ray and visible emissions rose and fell in step in a nova known as V5856 Sagittarii. This implied shock waves produced more of the eruption's light than the white dwarf itself.

The new observations from V906 Carinae, presented in a paper led by Aydi and published on Monday, April 13, in Nature Astronomy, spectacularly confirm this conclusion.

On March 20, 2018, the All-Sky Automated Survey for Supernovae, a set of two dozen robotic telescopes distributed around the globe and operated by Ohio State University, discovered the nova. By month's end, V906 Carinae was dimly visible to the naked eye.

Fortuitously, a satellite called BRITE-Toronto was already studying the nova's patch of sky. This miniature spacecraft is one of five 7.9-inch (20 centimeter) cubic nanosatellites comprising the Bright Target Explorer (BRITE) Constellation. Operated by a consortium of universities from Canada, Austria and Poland, the BRITE satellites study the structure and evolution of bright stars and observe how they interact with their environments.

BRITE-Toronto was monitoring a red giant star called HD 92063, whose image overlapped the nova's location. The satellite observed the star for 16 minutes out of every 98-minute orbit, returning about 600 measurements each day and capturing the nova's changing brightness in unparalleled detail.
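Taken at face value, those numbers imply a brisk observing cadence. The short Python sketch below simply turns the article's round figures into an approximate measurement rate, treating them as exact even though they are not:

```python
# Implied BRITE-Toronto observing cadence, derived only from the figures in the
# article: 16 minutes on target per 98-minute orbit and ~600 measurements per day.

MINUTES_PER_DAY = 24 * 60
orbit_min = 98
on_target_min_per_orbit = 16
measurements_per_day = 600

orbits_per_day = MINUTES_PER_DAY / orbit_min                      # ~14.7
on_target_min_per_day = orbits_per_day * on_target_min_per_orbit  # ~235 minutes

print(f"Orbits per day:              {orbits_per_day:.1f}")
print(f"Minutes on target per day:   {on_target_min_per_day:.0f}")
print(f"Measurements per orbit:      {measurements_per_day / orbits_per_day:.0f}")
print(f"Seconds between data points: {on_target_min_per_day * 60 / measurements_per_day:.0f}")
```

That works out to roughly one brightness measurement every half-minute while on target, which helps explain how such brief flares could be resolved.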

"BRITE-Toronto revealed eight brief flares that fired up around the time the nova reached its peak, each one nearly doubling the nova's brightness," said Kirill Sokolovsky at Michigan State. "We've seen hints of this behavior in ground-based measurements, but never so clearly. Usually we monitor novae from the ground with many fewer observations and often with large gaps, which has the effect of hiding short-term changes."

Fermi, on the other hand, almost missed the show. Normally its Large Area Telescope maps gamma rays across the entire sky every three hours. But when the nova appeared, the Fermi team was busy troubleshooting the spacecraft's first hardware problem in nearly 10 years of orbital operations -- a drive on one of its solar panels stopped moving in one direction. Fermi returned to work just in time to catch the nova's last three flares.

In fact, V906 Carinae was at least twice as bright at billion-electron-volt, or GeV, energies as any other nova Fermi has observed. For comparison, the energy of visible light ranges from about 2 to 3 electron volts.
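To make the comparison concrete, a one-line calculation using only the energies quoted in the text shows how far apart those photon energies are:

```python
# Ratio of a ~1 GeV gamma-ray photon's energy to that of a visible-light photon,
# using the 2-3 electron-volt range quoted in the article.

GEV_IN_EV = 1e9  # 1 GeV = one billion electron volts

for visible_ev in (2.0, 3.0):
    ratio = GEV_IN_EV / visible_ev
    print(f"A 1 GeV photon carries ~{ratio:.1e} times the energy of a {visible_ev:.0f} eV photon")
```

Each gamma-ray photon Fermi catches therefore carries several hundred million times the energy of a photon of visible light.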

"When we compare the Fermi and BRITE data, we see flares in both at about the same time, so they must share the same source -- shock waves in the fast-moving debris," said Koji Mukai, an astrophysicist at the University of Maryland Baltimore County and NASA's Goddard Space Flight Center in Greenbelt, Maryland. "When we look more closely, there is an indication that the flares in gamma rays may lead the flares in the visible. The natural interpretation is that the gamma-ray flares drove the optical changes."

The team also observed the eruption's final flare using NASA's NuSTAR space telescope, which is only the second time the spacecraft has detected X-rays during a nova's optical and gamma-ray emission. The nova's GeV gamma-ray output far exceeded the NuSTAR X-ray emission, likely because the nova ejecta absorbed most of the X-rays. High-energy light from the shock waves was repeatedly absorbed and reradiated at lower energies within the nova debris, ultimately only escaping at visible wavelengths.

Putting all of the observations together, Aydi and his colleagues describe what they think happened when V906 Carinae erupted. During the outburst's first few days, the orbital motion of the stars swept a thick debris cloud made of multiple shells of gas into a doughnut shape that appeared roughly edge-on from our perspective. The cloud expanded outward at less than about 1.3 million mph (2.2 million kph), comparable to the average speed of the solar wind flowing out from the Sun.

Next, an outflow moving about twice as fast slammed into denser structures within the doughnut, creating shock waves that emitted gamma rays and visible light, including the first four optical flares.

Finally, about 20 days after the explosion, an even faster outflow crashed into all of the slower debris at around 5.6 million mph (9 million kph). This collision created new shock waves and another round of gamma-ray and optical flares. The nova outflows likely arose from residual nuclear fusion reactions on the white dwarf's surface.
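For readers who prefer metric units, the sketch below simply converts the article's outflow speeds; the "twice as fast" figure for the second outflow is taken literally, which is only an approximation of the wording:

```python
# Convert the article's nova-outflow speeds from miles per hour to km/s.

MPH_TO_KM_S = 1.609344 / 3600  # miles/hour -> kilometers/second

outflows_mph = {
    "initial debris cloud (upper limit)": 1.3e6,
    "second outflow (taken as ~2x faster)": 2 * 1.3e6,
    "final outflow (~20 days after explosion)": 5.6e6,
}

for name, speed_mph in outflows_mph.items():
    print(f"{name}: ~{speed_mph * MPH_TO_KM_S:,.0f} km/s")
```

That comes to roughly 580, 1,200 and 2,500 kilometers per second.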

Astronomers have proposed shock waves as a way to explain the power radiated by various kinds of short-lived events, such as stellar mergers, supernovae -- the much bigger blasts associated with the destruction of stars -- and tidal disruption events, where black holes shred passing stars. The BRITE, Fermi and NuSTAR observations of V906 Carinae provide a dramatic record of such a process. Further studies of nearby novae will serve as laboratories for better understanding the roles shock waves play in other more powerful and more distant events.

The Fermi Gamma-ray Space Telescope is an astrophysics and particle physics partnership managed by NASA's Goddard Space Flight Center in Greenbelt, Maryland. Fermi was developed in collaboration with the U.S. Department of Energy, with important contributions from academic institutions and partners in France, Germany, Italy, Japan, Sweden and the United States.

Read more at Science Daily

New formation theory explains the mysterious interstellar object 'Oumuamua

'Oumuamua illustration
Since its discovery in 2017, an air of mystery has surrounded the first known interstellar object to visit our solar system, an elongated, cigar-shaped body named 'Oumuamua (Hawaiian for "a messenger from afar arriving first").

How was it formed, and where did it come from? A new study published April 13 in Nature Astronomy offers a first comprehensive answer to these questions.

First author Yun Zhang at the National Astronomical Observatories of the Chinese Academy of Sciences and coauthor Douglas N. C. Lin at the University of California, Santa Cruz, used computer simulations to show how objects like 'Oumuamua can form under the influence of tidal forces like those felt by Earth's oceans. Their formation theory explains all of 'Oumuamua's unusual characteristics.

"We showed that 'Oumuamua-like interstellar objects can be produced through extensive tidal fragmentation during close encounters of their parent bodies with their host stars, and then ejected into interstellar space," said Lin, professor emeritus of astronomy and astrophysics at UC Santa Cruz.

Discovered on October 19, 2017, by the Panoramic Survey Telescope and Rapid Response System 1 (Pan-STARRS1) in Hawaii, 'Oumuamua is unlike anything else in our solar system, according to Zhang. Its dry surface, unusually elongated shape, and puzzling motion even drove some scientists to wonder if it was an alien probe.

"It is really a mysterious object, but some signs, like its colors and the absence of radio emission, point to 'Oumuamua being a natural object," Zhang said.

"Our objective is to come up with a comprehensive scenario, based on well understood physical principles, to piece together all the tantalizing clues," Lin said.

Astronomers had expected that the first interstellar object they detected would be an icy body like a comet. Icy objects like those populating the Oort cloud, a reservoir of comets in the outermost reaches of our solar system, evolve at very large distances from their host stars, are rich in volatiles, and are often tossed out of their host systems by gravitational interactions. They are also highly visible due to the sublimation of volatile compounds, which creates a comet's coma and tail when it is warmed by the sun. 'Oumuamua's dry appearance, however, is similar to rocky bodies like the solar system's asteroids, indicating a different ejection scenario.

Other researchers have calculated that there must be an extremely large population of interstellar objects like 'Oumuamua. "The discovery of 'Oumuamua implies that the population of rocky interstellar objects is much larger than we previously thought," Zhang said. "On average, each planetary system should eject in total about a hundred trillion objects like 'Oumuamua. We need to construct a very common scenario to produce this kind of object."

When a smaller body passes very close to a much bigger one, tidal forces of the larger body can tear the smaller one apart, as happened to comet Shoemaker-Levy 9 when it came close to Jupiter. The tidal disruption processes can eject some debris into interstellar space, which has been suggested as a possible origin for 'Oumuamua. But whether such a process could explain 'Oumuamua's puzzling characteristics remained highly uncertain.

Zhang and Lin ran high-resolution computer simulations to model the structural dynamics of an object flying close by a star. They found that if the object comes close enough to the star, the star can tear it into extremely elongated fragments that are then ejected into interstellar space.

"The elongated shape is more compelling when we considered the variation of material strength during the stellar encounter. The ratio of long axis to short axis can be even larger than ten to one," Zhang said.

The researchers' thermal modeling showed that the surface of fragments resulting from the disruption of the initial body would melt at a very short distance from the star and recondense at greater distances, thereby forming a cohesive crust that would ensure the structural stability of the elongated shape.

"Heat diffusion during the stellar tidal disruption process also consumes large amounts of volatiles, which not only explains 'Oumuamua's surface colors and the absence of visible coma, but also elucidates the inferred dryness of the interstellar population," Zhang said. "Nevertheless, some high-sublimation-temperature volatiles buried under the surface, like water ice, can remain in a condensed form."

Observations of 'Oumuamua showed no cometary activity, and only water ice is a plausible outgassing source that could account for its non-gravitational motion. If 'Oumuamua was produced and ejected by the scenario Zhang and Lin describe, plenty of residual water ice could have been activated during its passage through the solar system. The resulting outgassing would cause accelerations that match 'Oumuamua's comet-like trajectory.

"The tidal fragmentation scenario not only provides a way to form one single 'Oumuamua, but also accounts for the vast population of asteroid-like interstellar objects," Zhang said.

The researchers' calculations demonstrate the efficiency of tidal forces in producing this kind of object. Possible progenitors, including long-period comets, debris disks, and even super-Earths, could be transformed into 'Oumuamua-size pieces during stellar encounters.

This work supports estimates of a large population of 'Oumuamua-like interstellar objects. Since these objects may pass through the domains of habitable zones, the possibility that they could transport matter capable of generating life (called panspermia) cannot be ruled out. "This is a very new field. These interstellar objects could provide critical clues about how planetary systems form and evolve," Zhang said.

According to Lin, "'Oumuamua is just the tip of the iceberg. We anticipate many more interstellar visitors with similar traits will be discovered by future observation with the forthcoming Vera C. Rubin Observatory."

U.S. Naval Academy astronomer Matthew Knight, who is co-leader of the 'Oumuamua International Space Science Institute team and was not involved in the new study, said this work "does a remarkable job of explaining a variety of unusual properties of 'Oumuamua with a single, coherent model."

Read more at Science Daily

Scientists discover supernova that outshines all others

Supernova illustration
An international team of astronomers, led by the University of Birmingham, has identified a supernova at least twice as bright and energetic, and likely much more massive, than any yet recorded.

The team, which included experts from Harvard, Northwestern University and Ohio University, believe the supernova, dubbed SN2016aps, could be an example of an extremely rare 'pulsational pair-instability' supernova, possibly formed from two massive stars that merged before the explosion. Their findings are published today in Nature Astronomy.

Such an event has so far existed only in theory and has never been confirmed through astronomical observations.

Dr Matt Nicholl, of the School of Physics and Astronomy and the Institute of Gravitational Wave Astronomy at the University of Birmingham, is lead author of the study. He explains: "We can measure supernovae using two scales -- the total energy of the explosion, and the amount of that energy that is emitted as observable light, or radiation.

"In a typical supernova, the radiation is less than 1 per cent of the total energy. But in SN2016aps, we found the radiation was five times the explosion energy of a normal-sized supernova. This is the most light we have ever seen emitted by a supernova."

In order to become this bright, the explosion must have been much more energetic than usual. By examining the light spectrum, the team were able to show that the explosion was powered by a collision between the supernova and a massive shell of gas, shed by the star in the years before it exploded.

"While many supernovae are discovered every night, most are in massive galaxies," said Dr Peter Blanchard, from Northwestern University and a coauthor on the study. "This one immediately stood out for further observations because it seemed to be in the middle of nowhere. We weren't able to see the galaxy where this star was born until after the supernova light had faded."

The team observed the explosion for two years, until it faded to 1 per cent of its peak brightness. Using these measurements, they calculated that the mass of the supernova was between 50 and 100 times that of our sun (solar masses). Typically, supernovae have masses of between 8 and 15 solar masses.

"Stars with extremely large mass undergo violent pulsations before they die, shaking off a giant gas shell. This can be powered by a process called the pair instability, which has been a topic of speculation for physicists for the last 50 years," says Dr Nicholl. "If the supernova gets the timing right, it can catch up to this shell and release a huge amount of energy in the collision. We think this is one of the most compelling candidates for this process yet observed, and probably the most massive."

"SN2016aps also contained another puzzle," added Dr Nicholl. "The gas we detected was mostly hydrogen -- but such a massive star would usually have lost all of its hydrogen via stellar winds long before it started pulsating. One explanation is that two slightly less massive stars of around, say 60 solar masses, had merged before the explosion. The lower mass stars hold onto their hydrogen for longer, while their combined mass is high enough to trigger the pair instability."

"Finding this extraordinary supernova couldn't have come at a better time," according to Professor Edo Berger, a coauthor from Harvard University. "Now that we know such energetic explosions occur in nature, NASA's new James Webb Space Telescope will be able to see similar events so far away that we can look back in time to the deaths of the very first stars in the Universe."

Supernova 2016aps was first detected in data from the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS), a large-scale astronomical survey programme. The team also used data from the Hubble Space Telescope, the Keck and Gemini Observatories, in Hawaii, and the MDM and MMT Observatories in Arizona. Other collaborating institutions included Stockholm University, Copenhagen University, California Institute of Technology, and Space Telescope Science Institute.

Read more at Science Daily

Apr 13, 2020

Carbon nanostructure created that is stronger than diamonds

Researchers at the University of California, Irvine and other institutions have architecturally designed plate-nanolattices -- nanometer-sized carbon structures -- that exceed diamond in strength-to-density ratio.

In a recent study in Nature Communications, the scientists report success in conceptualizing and fabricating the material, which consists of closely connected, closed-cell plates instead of the cylindrical trusses common in such structures over the past few decades.

"Previous beam-based designs, while of great interest, had not been so efficient in terms of mechanical properties," said corresponding author Jens Bauer, a UCI researcher in mechanical & aerospace engineering. "This new class of plate-nanolattices that we've created is dramatically stronger and stiffer than the best beam-nanolattices."

According to the paper, the team's design has been shown to improve on the average performance of cylindrical beam-based architectures by up to 639 percent in strength and 522 percent in rigidity.

Members of the architected materials laboratory of Lorenzo Valdevit, UCI professor of materials science & engineering as well as mechanical & aerospace engineering, verified their findings using a scanning electron microscope and other technologies provided by the Irvine Materials Research Institute.

"Scientists have predicted that nanolattices arranged in a plate-based design would be incredibly strong," said lead author Cameron Crook, a UCI graduate student in materials science & engineering. "But the difficulty in manufacturing structures this way meant that the theory was never proven, until we succeeded in doing it."

Bauer said the team's achievement rests on a complex 3D laser printing process called two-photon polymerization direct laser writing. As a laser is focused inside a droplet of an ultraviolet-light-sensitive liquid resin, the material becomes a solid polymer where molecules are simultaneously hit by two photons. By scanning the laser or moving the stage in three dimensions, the technique is able to render periodic arrangements of cells, each consisting of assemblies of plates as thin as 160 nanometers.

One of the group's innovations was to include tiny holes in the plates that could be used to remove excess resin from the finished material. As a final step, the lattices go through pyrolysis, in which they're heated to 900 degrees Celsius in a vacuum for one hour. According to Bauer, the end result is a cube-shaped lattice of glassy carbon that has the highest strength scientists ever thought possible for such a porous material.

Bauer said that another goal and accomplishment of the study was to exploit the innate mechanical effects of the base substances. "As you take any piece of material and dramatically decrease its size down to 100 nanometers, it approaches a theoretical crystal with no pores or cracks. Reducing these flaws increases the system's overall strength," he said.

"Nobody has ever made these structures independent from scale before," added Valdevit, who directs UCI's Institute for Design and Manufacturing Innovation. "We were the first group to experimentally validate that they could perform as well as predicted while also demonstrating an architected material of unprecedented mechanical strength."

Read more at Science Daily

Time on screens has little impact on kids' social skills

Despite the time spent with smartphones and social media, young people today are just as socially skilled as those from the previous generation, a new study suggests.

Researchers compared teacher and parent evaluations of children who started kindergarten in 1998 -- six years before Facebook launched -- with those who began school in 2010, when the first iPad debuted.

Results showed both groups of kids were rated similarly on interpersonal skills such as the ability to form and maintain friendships and get along with people who are different. They were also rated similarly on self-control, such as the ability to regulate their temper.

In other words, the kids are still all right, said Douglas Downey, lead author of the study and professor of sociology at The Ohio State University.

"In virtually every comparison we made, either social skills stayed the same or actually went up modestly for the children born later," Downey said.

"There's very little evidence that screen exposure was problematic for the growth of social skills."

Downey conducted the study with Benjamin Gibbs, associate professor of sociology at Brigham Young University. The study was just published online in the American Journal of Sociology.

The idea for the study came several years ago when Downey had an argument at a pizza restaurant with his son, Nick, about whether social skills had declined among the new generation of youth.

"I started explaining to him how terrible his generation was in terms of their social skills, probably because of how much time they spent looking at screens," Downey said.

"Nick asked me how I knew that. And when I checked there really wasn't any solid evidence."

So Downey, with his colleague, decided to investigate. For their study, they used data from The Early Childhood Longitudinal Study, which is run by the National Center for Education Statistics.

The ECLS follows children from kindergarten to fifth grade. The researchers compared data on the ECLS-K cohort that included children who began kindergarten in 1998 (19,150 students) with the cohort that began kindergarten in 2010 (13,400 students).

Children were assessed by teachers six times between the start of kindergarten and the end of fifth grade. They were assessed by parents at the beginning and end of kindergarten and the end of first grade.

Downey and Gibbs focused mostly on the teacher evaluations, because they followed children all the way to fifth grade, although the results from parents were comparable.

Results showed that from the teachers' perspective, children's social skills did not decline between the 1998 and 2010 groups. And similar patterns persisted as the children progressed to fifth grade.

In fact, teachers' evaluations of children's interpersonal skills and self-control tended to be slightly higher for those in the 2010 cohort than those in the 1998 group, Downey said.

Even children within the two groups who had the heaviest exposure to screens showed similar development in social skills compared to those with little screen exposure, results showed.

There was one exception: Social skills were slightly lower for children who accessed online gaming and social networking sites many times a day.

"But even that was a pretty small effect," Downey said.

"Overall, we found very little evidence that the time spent on screens was hurting social skills for most children."

Downey said while he was initially surprised to see that time spent on screens didn't affect social skills, he really shouldn't have been.

"There is a tendency for every generation at my age to start to have concerns about the younger generation. It is an old story," he said.

These worries often involve "moral panic" over new technology, Downey explained. Adults are concerned when technological change starts to undermine traditional relationships, particularly the parent-child relationship.

"The introduction of telephones, automobiles, radio all led to moral panic among adults of the time because the technology allowed children to enjoy more autonomy," he said.

"Fears over screen-based technology likely represent the most recent panic in response to technological change."

If anything, new generations are learning that having good social relationships means being able to communicate successfully both face-to-face and online, Downey said.

Read more at Science Daily

Vaccine skeptics actually think differently than other people

In 2000, the measles virus was declared eliminated from the United States. Despite cases coming in from outside the country, there were few outbreaks because most people were vaccinated against measles. And then 2019 happened.

The U.S. saw 1,282 confirmed cases in 31 states -- the greatest number reported since 1992, with nearly three-fourths linked to recent outbreaks in New York, according to the Centers for Disease Control and Prevention. Most cases were among people who were not vaccinated against measles.

After events like this, many people express confusion about others' hesitancy or unwillingness to get vaccinated or to vaccinate their children, a concept called vaccine skepticism. As vaccine skepticism has become increasingly widespread, two researchers in the Texas Tech University Department of Psychological Sciences have suggested a possible explanation.

In an article published recently in the journal Vaccine, Mark LaCour and Tyler Davis suggest some people find vaccines risky because they overestimate the likelihood of negative events, particularly those that are rare.

The fact that these overestimations carry over through all kinds of negative events -- not just those related to vaccines -- suggests that people higher in vaccine skepticism actually may process information differently than people lower in vaccine skepticism, said Davis, an associate professor of experimental psychology and director of the Caprock FMRI Laboratory.

"We might have assumed that people who are high in vaccine skepticism would have overestimated the likelihood of negative vaccine-related events, but it is more surprising that this is true for negative, mortality-related events as a broader category," Davis said. "Here we saw an overestimation of rare events for things that don't have anything to do with vaccination. This suggests that there are basic cognitive or affective variables that influence vaccine skepticism."

In their first experiment, LaCour and Davis surveyed 158 participants to gauge their level of vaccine skepticism, based on the dangers they perceived in vaccines, their feelings of powerlessness and disillusionment, and their trust in authorities regarding vaccines. Participants then estimated the frequency of death associated with 40 different causes, ranging from cancers, animal bites and childbirth to fireworks, flooding and car accidents. LaCour and Davis found that people higher in vaccine skepticism were less accurate in their estimations of how frequently these causes of death occur. Specifically, they found that higher vaccine skepticism was associated with an overestimation of rare events.

The second experiment followed the same procedures as the first, but participants additionally estimated the frequency of neutral or positive events -- such as papal visits to the United States, triplet births or Willie Nelson concerts -- to test whether the negative tone of mortality statistics may play a role. LaCour and Davis found that people higher in vaccine skepticism were less accurate in their estimations of mortality-related events and overestimated the negative events more than the neutral/positive events.
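The underlying analysis is essentially a correlation between a skepticism score and how far off each participant's frequency estimates are. The sketch below illustrates that general idea on synthetic data; the error metric (mean absolute error on a log scale), the simulated relationship and all of the numbers are illustrative assumptions, not LaCour and Davis's actual procedure or data:

```python
# Illustrative sketch: relate a vaccine-skepticism score to how inaccurately
# people estimate the frequencies of 40 events. Everything here is synthetic.

import numpy as np

rng = np.random.default_rng(0)

n_participants = 158  # matches the first experiment's sample size
n_events = 40         # matches the 40 causes of death

# Hypothetical "true" annual frequencies for the 40 events (log-uniform spread).
true_freq = 10 ** rng.uniform(1, 5, size=n_events)

# Synthetic skepticism scores on a 1-7 survey-style scale, plus estimates in which
# higher skepticism inflates the rarer events more (built in purely for illustration).
skepticism = rng.uniform(1, 7, size=n_participants)
rarity = 5 - np.log10(true_freq)  # larger for rarer events
noise = rng.normal(0, 0.5, size=(n_participants, n_events))
log_estimates = np.log10(true_freq) + 0.1 * skepticism[:, None] * rarity + noise

# Per-participant accuracy: mean absolute error of the estimates on a log scale.
abs_log_error = np.abs(log_estimates - np.log10(true_freq)).mean(axis=1)

# A positive correlation means less accurate estimates at higher skepticism.
r = np.corrcoef(skepticism, abs_log_error)[0, 1]
print(f"Correlation between skepticism score and estimation error: r = {r:.2f}")
```

With real survey responses in place of the synthetic estimates, a positive correlation of this kind is what the reported pattern of overestimating rare events would look like.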

"My takeaway is that vaccine skeptics probably don't have the best understanding of how likely or probable different events are," said LaCour, a doctoral student in psychological sciences. "They might be more easily swayed by anecdotal horror stories. For example, your child can have a seizure from getting vaccinated. It's extremely rare, but it is within the realm of possibility. If you were so inclined, you could follow Facebook groups that publicize extremely rare events. These cognitive distortions of anecdotes into trends are probably exacerbated by decisions to subscribe to statistically non-representative information sources."

While the researchers didn't find an association between a person's education level and their vaccine skepticism, LaCour and Davis believe there is a difference in the information being consumed and used by people higher in vaccine skepticism.

"It may be the case that they are specifically seeking out biased information, for example, to confirm their skeptical beliefs," Davis said. "It could be that they have more of an attentional bias to negative, mortality-related events, which makes them remember this information better. Strategies to get the right information to people through public service announcements or formal education may work, but it doesn't seem to be an issue that people with higher vaccine skepticism are less educated in any fundamental way in terms of basic science or math education. Thus, simple increases in these alone -- without targeted informational interventions -- would seem unlikely to help."

As LaCour noted, these results leave open many new avenues for further research.

"Do some people encode scary stories -- for instance, hearing about a child that has a seizure after getting vaccinated -- more strongly than others and then consequently remember these anecdotes more easily?" he asked. "Do they instead have certain attitudes and search their memory harder for evidence to support this belief? Is it a bit of both? How can you counteract these processes?

Read more at Science Daily

Deadliest malaria strain protects itself from the immune system

The parasite causing the most severe form of human malaria uses proteins to make red blood cells sticky, making it harder for the immune system to destroy it and leading to potentially fatal blood clots. New research at the Crick has identified how the parasite may control this process.

The Nature Microbiology study looked into how the parasite, Plasmodium falciparum, evades the immune system. This parasite is responsible for more than 95% of the 400,000 deaths malaria causes each year.

Once it enters the human bloodstream, the parasite releases proteins into the host's red blood cell, which are then presented on the outside surface of the cell. These proteins stick to other blood cells and blood vessel walls so that the infected cells no longer circulate around the body and pass through the spleen. This protects the parasite, because the spleen and the immune cells inside it would otherwise destroy the infected cells.

This stickiness can also lead to blood cells lumping together into blood clots. By blocking the blood flow to vital organs, these clots can have fatal consequences, especially if they form in the brain or placenta.

Heledd Davies, co-lead author and postdoc in the Signalling in Apicomplexan Parasites Laboratory at the Crick, says: "This malaria parasite species is able to use a number of different variants of the same protein to make red blood cells sticky. So, if the body develops antibodies that stop one variant working, the parasite can simply switch to another one, leading to a constant arms race."

"A potentially more effective route for therapies could be to target the mechanism malaria uses to transport the proteins to the cell's surface, as blocking it would reduce symptoms and allow the body to clear the parasites."

In this study, the authors identified proteins called kinases that are involved in getting the sticky proteins to the cell surface. Kinases are enzymes that can turn many other proteins on or off, and they often regulate important processes in cells.

"These kinases are not released by other strains of malaria that infect humans, so we predicted that they are some of the factors that makes this species deadlier," says Hugo Belda, co-lead author and PhD student in the Signalling in Apicomplexan Parasites Laboratory at the Crick.

Read more at Science Daily

Apr 12, 2020

Money can't buy love -- or friendship

While researchers have suggested that individuals who base their self-worth on their financial success often feel lonely in everyday life, a newly published study by the University at Buffalo and Harvard Business School has taken initial steps to better understand why this link exists.

"When people base their self-worth on financial success, they experience feelings of pressure and a lack of autonomy, which are associated with negative social outcomes," says Lora Park, an associate professor of psychology at UB and one of the paper's co-authors.

"Feeling that pressure to achieve financial goals means we're putting ourselves to work at the cost of spending time with loved ones, and it's that lack of time spent with people close to us that's associated with feeling lonely and disconnected," says Deborah Ward, a UB graduate student and adjunct faculty member at the UB's psychology department who led the research on a team that also included Ashley Whillans, an assistant professor at Harvard Business School, Kristin Naragon-Gainey, at the University of Western Australia, and Han Young Jung, a former UB graduate student.

The findings, published in the journal Personality and Social Psychology Bulletin, emphasize the role of social networks and personal relationships in maintaining good mental health and why people should preserve those connections, even in the face of obstacles or pursuing challenging goals.

"Depression and anxiety are tied to isolation, and we're certainly seeing this now with the difficulties we have connecting with friends during the COVID-19 pandemic," says Ward. "These social connections are important. We need them as humans in order to feel secure, to feel mentally healthy and happy. But much of what's required to achieve success in the financial domain comes at the expense of spending time with family and friends."

Ward says it's not financial success that's problematic or the desire for money that's leading to these associations.

At the center of this research is a concept psychologists identify as Financial Contingency of Self-Worth. When people's self-worth is contingent on money, they view their financial success as being tied to the core of who they are as a person. The degree to which they succeed financially relates to how they feel about themselves -- feeling good when they think they're doing well financially, but feeling worthless if they're feeling financially insecure.

The research involved more than 2,500 participants over five different studies that looked for relationships between financial contingency of self-worth and key variables, such as time spent with others, loneliness and social disconnection. This included a daily diary study that followed participants over a two-week period to assess how they were feeling over an extended time about the importance of money and time spent engaged in various social activities.

"We saw consistent associations between valuing money in terms of who you are and experiencing negative social outcomes in previous work, so this led us to ask the question of why these associations are present," says Ward. "We see these findings as further evidence that people who base their self-worth on money are likely to feel pressured to achieve financial success, which is tied to the quality of their relationships with others."

Ward says the current study represents the beginning of efforts to uncover the processes at work with Financial Contingency of Self-Worth.

Read more at Science Daily