Aug 17, 2019

Researcher decodes the brain to help patients with mental illnesses

Approximately 1 in 5 adults in the United States experience mental illness in a given year. Severe mental illnesses make it hard for the brain to sustain cognitively effortful states, such as focusing attention over long periods, discriminating between two things that are difficult to tell apart, and responding quickly to rapidly incoming information.

A new study, published in the Journal of Neural Engineering, could improve patients' abilities to manage symptoms of mental illness.

Previous research demonstrated that applying electrical stimulation at just the right time helps the brain of a patient with a severe mental illness work through difficult cognitive tasks. However, it was done in a laboratory setting, free from the complexities of real-world activities of daily living.

Senior author Alik Widge, MD, PhD, Assistant Professor of Psychiatry at the University of Minnesota Medical School, together with investigators from Massachusetts General Hospital (MGH) and Brown University, including co-senior author David Borton, PhD, Assistant Professor of Engineering at Brown University, were the first to analyze patients' brain activity to detect precisely when a patient is focused, with attention fully devoted to a task, compared to when he or she is 'at rest'. They studied patients undergoing surgery for severe epilepsy who already had measurement electrodes implanted in the relevant brain areas.

The study, which was part of DARPA's SUBNETS program, found that specific signatures and algorithms can be used to tell when someone is focused and really trying to do a task that is hard for them, indicating that they could benefit from electrical stimulation to get an extra push.

The study also demonstrates that there is no single region of the brain that can tell when someone is in this focused, effortful state. In order to detect when the patient started to focus on a cognitive task, the researchers had to analyze the information at the network level. It was essential to look at how the activity of one region coordinated with the activity of another.

"Using the same neural signals that could drive adaptive deep brain stimulation, we have shown that it is possible to detect mental states that might be amenable to closed-loop control," said lead author Nicole Provenza, MS, PhD candidate, Brown University. "While further research is necessary to generalize our findings to real-world applications, we hope that this work will ultimately contribute to the development of more effective brain stimulation therapies for mental illness."

"We want to take a patient-centered approach to treating mental illness," explained Widge. "The job of a stimulator is not to take away the symptoms; its job is to help the patient manage his or her symptoms. It gives the power back to the individual and just gives them a little extra help when they need it."

Read more at Science Daily

How stress can curb the desire to eat in an animal model

Eating disorder researchers at The University of Texas Health Science Center at Houston (UTHealth) have discovered a neurocircuit in mice that, when activated, increased their stress levels while decreasing their desire to eat. Findings appear in Nature Communications.

The scientists believe their research could aid efforts to develop treatments for a serious eating disorder called anorexia nervosa, which has the highest mortality rate of any mental disorder, according to the National Institute of Mental Health. People with anorexia nervosa avoid food, severely restrict food, or eat very small quantities of only certain foods. Even when they are dangerously underweight, they may see themselves as overweight.

"We have identified a part of the brain in a mouse model that controls the impact of emotions on eating," said Qingchun Tong, PhD, the study's senior author and an associate professor in the Center for Metabolic and Degenerative Disease at McGovern Medical School at UTHealth.

Because mice and humans have similar nervous systems, Tong, the Cullen Chair in Molecular Medicine at UTHealth, believes their findings could shed light on the part of the human brain that regulates hunger.

The investigators believe they are among the first to demonstrate the role of this neurocircuit in the regulation of both stress and hunger.

While previous research has established that stress can both reduce and increase a person's desire to eat, the neural mechanisms by which stress-related responses regulate eating remain largely a mystery.

Tong's team focused on a neurocircuit connecting two parts of the mouse brain: the paraventricular hypothalamus, an eating-related zone in the brain, and the ventral lateral septum, an emotional zone in the brain. The neurocircuit acts as an on/off switch.

When researchers activated the neurocircuit, there was an increase in anxiety levels and a decrease in appetite. Conversely, when the investigators inhibited the neurocircuit, anxiety levels dropped and hunger increased.

The scientists used a research technique called optogenetics to turn the neurons in question on and off.

Yuanzhong Xu, PhD, the study's lead author and an instructor at McGovern Medical School, said additional preclinical tests are needed to confirm their findings.

Coauthors from UTHealth include Yungang Lu, PhD; Ryan Cassidy; Leandra Mangieri, PhD; Canjun Zhu, PhD; Zhiying Jiang, PhD; Xugen Huang, PhD; and Nicholas Justice, PhD. Also contributing to the paper were Yong Xu, MD, PhD, and Benjamin Arenkiel, PhD, of Baylor College of Medicine.

Read more at Science Daily

Aug 16, 2019

Ancient feces reveal how 'marsh diet' left Bronze Age Fen folk infected with parasites

New research published today in the journal Parasitology shows how the prehistoric inhabitants of a settlement in the freshwater marshes of eastern England were infected by intestinal worms caught from foraging for food in the lakes and waterways around their homes.

The Bronze Age settlement at Must Farm, located near what is now the fenland city of Peterborough, consisted of wooden houses built on stilts above the water. Wooden causeways connected islands in the marsh, and dugout canoes were used to travel along water channels.

The village burnt down in a catastrophic fire around 3,000 years ago, with artefacts from the houses preserved in mud below the waterline, including food, cloth, and jewellery. The site has been called "Britain's Pompeii."

Also preserved in the surrounding mud were waterlogged "coprolites" -- pieces of human faeces -- that have now been collected and analysed by archaeologists at the University of Cambridge. They used microscopy techniques to detect ancient parasite eggs within the faeces and surrounding sediment.

Very little is known about the intestinal diseases of Bronze Age Britain. The one previous study, of a farming village in Somerset, found evidence of roundworm and whipworm: parasites spread through contamination of food by human faeces.

The ancient excrement of the Anglian marshes tells a different story. "We have found the earliest evidence for fish tapeworm, Echinostoma worm, and giant kidney worm in Britain," said study lead author Dr Piers Mitchell of Cambridge's Department of Archaeology.

"These parasites are spread by eating raw aquatic animals such as fish, amphibians and molluscs. Living over slow-moving water may have protected the inhabitants from some parasites, but put them at risk of others if they ate fish or frogs."

Disposal of human and animal waste into the water around the settlement likely prevented direct faecal pollution of the fenlanders' food, and so prevented infection from roundworm -- the eggs of which have been found at Bronze Age sites across Europe.

However, water in the fens would have been quite stagnant, due in part to thick reed beds, leaving waste accumulating in the surrounding channels. Researchers say this likely provided fertile ground for other parasites to infect local wildlife, which -- if eaten raw or poorly cooked -- then spread to village residents.

"The dumping of excrement into the freshwater channel in which the settlement was built, and consumption of aquatic organisms from the surrounding area, created an ideal nexus for infection with various species of intestinal parasite," said study first author Marissa Ledger, also from Cambridge's Department of Archaeology.

Fish tapeworms can reach 10m in length, and live coiled up in the intestines. Heavy infection can lead to anemia. Giant kidney worms can reach up to a metre in length. They gradually destroy the organ as they become larger, leading to kidney failure. Echinostoma worms are much smaller, up to 1cm in length. Heavy infection can lead to inflammation of the intestinal lining.

"As writing was only introduced to Britain centuries later with the Romans, these people were unable to record what happened to them during their lives. This research enables us for the first time to clearly understand the infectious diseases experienced by prehistoric people living in the Fens," said Ledger.

The Cambridge team worked with colleagues at the University of Bristol's Organic Chemistry Unit to determine whether coprolites excavated from around the houses were human or animal. While some were human, others were from dogs.

"Both humans and dogs were infected by similar parasitic worms, which suggests the humans were sharing their food or leftovers with their dogs," said Ledger.

Other parasites that infect animals were also found at the site, including pig whipworm and Capillaria worm. It is thought that they originated from the butchery and consumption of the intestines of farmed or hunted animals, but probably did not cause humans any harm.

The researchers compared their latest data with previous studies on ancient parasites from both the Bronze Age and Neolithic. Must Farm tallies with the trend of fewer parasite species found at Bronze Age compared with Neolithic sites.

"Our study fits with the broader pattern of a shrinking of the parasite ecosystem through time," said Mitchell. "Changes in diet, sanitation and human-animal relationships over millennia have affected rates of parasitic infection." Although he points out that infections from the fish tapeworm found at Must Farm have seen a recent resurgence due to the popularity of sushi, smoked salmon and ceviche.

Read more at Science Daily

Discovery of a bottleneck relief in photosynthesis may have a major impact on food crops

Scientists have found how to relieve a bottleneck in the process by which plants transform sunlight into food, which may lead to an increase in crop production. They discovered that producing more of a protein that controls the rate at which electrons flow during photosynthesis accelerates the whole process.

"We tested the effect of increasing the production of the Rieske FeS protein, and found it increases photosynthesis by 10 percent," said lead researcher Dr Maria Ermakova from the ARC Centre of Excellence for Translational Photosynthesis (CoETP).

"The Rieske FeS protein belongs to a complex which is like a hose through which electrons flow, so the energy can be used by the carbon engine of the plant. By overexpressing this protein, we have discovered how to release the pressure of the hose, so more electrons can flow, accelerating the photosynthetic process," said Dr Ermakova, who works at The Australian National University (ANU) Centre Node.

Dr Ermakova, the lead author of the paper published this week in the journal Communications Biology, said that this is the first time that scientists have generated more of the Rieske FeS protein inside plants that use the C4 photosynthesis pathway.

Until now, the majority of efforts to improve photosynthesis have focused on species that use C3 photosynthesis, such as wheat and rice; much less has been done to enhance C4 photosynthesis.

This is despite the fact that C4 crop species -- like maize and sorghum -- play a key role in world agriculture, and are already some of the most productive crops in the world.

"These results demonstrate that changing the rate of electron transport enhances photosynthesis in the C4 model species, Setaria viridis, a close relative of maize and sorghum. It is an important proof of concept that helps us enormously to understand more about how C4 photosynthesis works," said CoETP's Deputy Director Professor Susanne von Caemmerer, one of the co-authors of this study.

The Rieske protein is particularly important in environments with high radiance, where C4 plants grow. Previous research has shown that overexpressing the Rieske protein in C3 plants improves photosynthesis, but more research was needed in C4 plants.

"It is really exciting, as we are now ready to transform this into sorghum and test the effect it has on biomass in a food crop," Professor von Caemmerer says.

The research is the result of an international collaboration with researchers from the University of Essex in the UK, who are part of the Realizing Increased Photosynthetic Efficiency (RIPE) project.

"This is a great example that we need international collaborations to solve the complex challenges faced in trying to improve crop production," said University of Essex researcher Patricia Lopez-Calcagno, who was involved in producing some of the essential genetic components for the plant transformation.

"In the last 30 years, we have learnt a lot about how C4 plants work by making them worse -- by breaking them as part of the process of discovery. However, this is the first example in which we have actually improved the plants," says Professor Robert Furbank, Director of the ARC Centre of Excellence for Translational Photosynthesis and one of the authors of the study.

"Our next steps are to assemble the whole protein FeS complex, which has many other components. There is a lot more to do and lots of things about this protein complex we still don't understand. We have reached 10 percent enhancement by overexpressing the Rieske FeS component, but we know we can do better than that," says Professor Furbank.

Read more at Science Daily

Best of both worlds: Asteroids and massive mergers

The race is on. Since the construction of technology able to detect the ripples in space and time triggered by collisions from massive objects in the universe, astronomers around the world have been searching for the bursts of light that could accompany such collisions, which are thought to be the sources of rare heavy elements.

The University of Arizona's Steward Observatory has partnered with the Catalina Sky Survey, which searches for near-Earth asteroids from atop Mount Lemmon, in an effort dubbed Searches after Gravitational Waves Using ARizona Observatories, or SAGUARO, to find optical counterparts to massive mergers.

"Catalina Sky Survey has all of this infrastructure for their asteroid survey. So we have deployed additional software to take gravitational wave alerts from LIGO (the Laser Interferometer Gravitational-Wave Observatory) and the Virgo interferometer then notify the survey to search an area of sky most likely to contain the optical counterpart," said Michael Lundquist, postdoctoral research associate and lead author on the study published today in the Astrophysical Journal Letters.

"Essentially, instead of searching the next section of sky that we would normally, we go off and observe some other area that has a higher probability of containing an optical counterpart of a gravitational wave event," said Eric Christensen, Catalina Sky Survey director and Lunar and Planetary Laboratory senior staff scientist. "The main idea is we can run this system while still maintaining the asteroid search."

The ongoing campaign began in April, and in that month alone, the team was notified of three massive collisions. Because it is difficult to tell the precise location from which the gravitational wave originated, locating optical counterparts can be difficult.

According to Lundquist, two strategies are being employed. In the first, teams with small telescopes target galaxies that are at the right approximate distance, according to the gravitational wave signal. Catalina Sky Survey, on the other hand, utilizes a 60-inch telescope with a wide field of view to scan large swaths of sky in 30 minutes.

Three alerts, on April 9, 25 and 26, triggered the team's software to search nearly 20,000 objects. Machine learning software then trimmed down the total number of potential optical counterparts to five.

The first gravitational wave event was a merger of two black holes, Lundquist said.

"There are some people who think you can get an optical counterpart to those, but it's definitely inconclusive," he said.

The second event was a merger of two neutron stars, the incredibly dense cores of collapsed giant stars. The third is thought to be a merger between a neutron star and a black hole, Lundquist said.

While no teams confirmed optical counterparts, the UA team did find several supernovae. They also used the Large Binocular Telescope Observatory to spectroscopically classify one promising target from another group. It was determined to be a supernova and not associated with the gravitational wave event.

"We also found a near-Earth object in the search field on April 25," Christensen said. "That proves right there we can do both things at the same time."

They were able to do this because Catalina Sky Survey has observations of the same swaths of sky going back many years. Many other groups don't have easy access to past photos for comparison, offering the UA team a leg up.

"We have really nice references," Lundquist said. "We subtract the new image from the old image and use that difference to look for anything new in the sky."

"The process Michael described," Christensen said, "starting with a large number of candidate detections and filtering down to whatever the true detections are, is very familiar. We do that with near-Earth objects, as well."

The team is planning on deploying a second telescope in the hunt for optical counterparts: Catalina Sky Survey's 0.7-meter Schmidt telescope. While the telescope is smaller than the 60-inch telescope, it has an even wider field of view, which allows astronomers to quickly search an even larger chunk of sky. They've also improved their machine learning software to filter out stars that regularly change in brightness.

Read more at Science Daily

How E. coli knows how to cause the worst possible infection

A pair of University of Virginia School of Medicine scientists have revealed how E. coli seeks out the most oxygen-free crevices of your colon to cause the worst infection possible. The discovery could one day let doctors prevent the infection by allowing E. coli to pass harmlessly through the body.

The new discovery shows just how the foodborne pathogen knows where and when to begin colonizing the colon on its way to making you sick. By recognizing the low-oxygen environment of the large intestine, the dangerous bacterium gives itself the best odds of establishing a robust infection -- one that is punishing for the host.

"Bacterial pathogens typically colonize a specific tissue in the host. Therefore, as part of their infection strategies, bacterial pathogens precisely time deployment of proteins and toxins to these specific colonization niches in the human host. This allows the pathogens to save energy and avoid detection by our immune systems and ultimately cause disease," said researcher Melissa Kendall, PhD, of UVA's Department of Microbiology, Immunology and Cancer Biology. "By knowing how bacterial pathogens sense where they are in the body, we may one day be able to prevent E. coli, as well as other pathogens, from knowing where it is inside a human host and allow it to pass through the body without causing an infection."

A Bacterial Goldilocks
E. coli naturally lives in our colons, and most strains do us no harm. But there are several strains that can cause cramps, diarrhea, vomiting, even kidney failure and death. Children are at particular risk. As such, E. coli outbreaks appear periodically in the news. In July, for example, people in several states were sickened by E. coli linked to ground bison meat.

Kendall and graduate student Elizabeth M. Melson have shed important light on how harmful E. coli infections establish themselves in the body. The researchers outlined a process the bacteria use to detect low oxygen levels in the large intestine and then produce proteins that allow E. coli to attach to host cells and establish infection.

Oxygen actually diffuses from the intestinal tissue into the gut, and levels are comparatively higher in the small intestine than in the large intestine. E. coli specifically waits until it has reached the low-oxygen large intestine before striking.

E. coli's vital asset is a small form of RNA that activates particular genes when oxygen levels are low enough, the researchers reveal. It's at this point that the infection really gets established. Thanks to this natural sensing process, the bacteria are able to establish infection and begin to manufacture harmful Shiga toxins.

The researchers believe that other bacterial pathogens, such as Shigella and Salmonella, likely employ a similar control mechanism, though more work needs to be done to establish that.

Read more at Science Daily

Aug 15, 2019

July 2019 was hottest month on record for the planet

Much of the planet sweltered in unprecedented heat in July, as temperatures soared to new heights in the hottest month ever recorded. The record warmth also shrank Arctic and Antarctic sea ice to historic lows.

Here's a closer look into NOAA's latest monthly global climate report:

Climate by the numbers: July 2019

The average global temperature in July was 1.71 degrees F above the 20th-century average of 60.4 degrees, making it the hottest July in the 140-year record, according to scientists at NOAA's National Centers for Environmental Information. The previous hottest month on record was July 2016.

Nine of the 10 hottest Julys have occurred since 2005 -- with the last five years ranking as the five hottest. Last month was also the 43rd consecutive July and 415th consecutive month with above-average global temperatures.

Year to date: January through July

The period from January through July produced a global temperature that was 1.71 degrees F above the 20th-century average of 56.9 degrees, tying with 2017 as the second-hottest year to date on record.

It was the hottest year to date for parts of North and South America, Asia, Australia, New Zealand, the southern half of Africa, portions of the western Pacific Ocean, western Indian Ocean and the Atlantic Ocean.

More notable stats and facts

Record-low sea ice: Average Arctic sea ice set a record low for July, running 19.8% below average -- surpassing the previous historic low of July 2012.

Average Antarctic sea-ice coverage was 4.3% below the 1981-2010 average, making it the smallest for July in the 41-year record.

Some cool spots: Parts of Scandinavia and western and eastern Russia had temperatures at least 2.7 degrees F below average.

NOAA's full climate report is available at: https://www.ncdc.noaa.gov/sotc/global/201907

From Science Daily

Moon glows brighter than sun in images from NASA's Fermi

These images show the steadily improving view of the Moon’s gamma-ray glow from NASA’s Fermi Gamma-ray Space Telescope. Each 5-by-5-degree image is centered on the Moon and shows gamma rays with energies above 31 million electron volts, or tens of millions of times that of visible light. At these energies, the Moon is actually brighter than the Sun. Brighter colors indicate greater numbers of gamma rays. This image sequence shows how longer exposure, ranging from two to 128 months (10.7 years), improved the view.

If our eyes could see high-energy radiation called gamma rays, the Moon would appear brighter than the Sun! That's how NASA's Fermi Gamma-ray Space Telescope has seen our neighbor in space for the past decade.

Gamma-ray observations are not sensitive enough to clearly see the shape of the Moon's disk or any surface features. Instead, Fermi's Large Area Telescope (LAT) detects a prominent glow centered on the Moon's position in the sky.

Mario Nicola Mazziotta and Francesco Loparco, both at Italy's National Institute of Nuclear Physics in Bari, have been analyzing the Moon's gamma-ray glow as a way of better understanding another type of radiation from space: fast-moving particles called cosmic rays.

"Cosmic rays are mostly protons accelerated by some of the most energetic phenomena in the universe, like the blast waves of exploding stars and jets produced when matter falls into black holes," explained Mazziotta.

Because the particles are electrically charged, they're strongly affected by magnetic fields, which the Moon lacks. As a result, even low-energy cosmic rays can reach the surface, turning the Moon into a handy space-based particle detector. When cosmic rays strike, they interact with the powdery surface of the Moon, called the regolith, to produce gamma-ray emission. The Moon absorbs most of these gamma rays, but some of them escape.

Mazziotta and Loparco analyzed Fermi LAT lunar observations to show how the view has improved during the mission. They rounded up data for gamma rays with energies above 31 million electron volts -- more than 10 million times greater than the energy of visible light -- and organized them over time, showing how longer exposures improve the view.
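
As a rough illustration of that kind of selection, the sketch below filters a synthetic photon list by energy and counts how many events accumulate as the exposure grows. The numbers and column layout are made up for this example and are not Fermi LAT data.

```python
import numpy as np

# Synthetic event list: arrival time (months since start of mission) and
# photon energy (MeV). Real Fermi LAT data arrive as FITS event files.
rng = np.random.default_rng(0)
times_months = rng.uniform(0, 128, size=50_000)
energies_mev = 10 + 20 * rng.pareto(1.5, size=50_000)

# Keep only gamma rays above 31 MeV, as in the lunar analysis described above.
selected_times = times_months[energies_mev > 31.0]

# Count how many photons accumulate within progressively longer exposures;
# more photons means better statistics and a sharper map.
for exposure in (2, 12, 60, 128):                 # months
    n = np.count_nonzero(selected_times <= exposure)
    print(f"{exposure:>3} months of exposure: {n} photons above 31 MeV")
```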

"Seen at these energies, the Moon would never go through its monthly cycle of phases and would always look full," said Loparco.

As NASA sets its sights on sending humans to the Moon by 2024 through the Artemis program, with the eventual goal of sending astronauts to Mars, understanding various aspects of the lunar environment takes on new importance. These gamma-ray observations are a reminder that astronauts on the Moon will require protection from the same cosmic rays that produce this high-energy gamma radiation.

While the Moon's gamma-ray glow is surprising and impressive, the Sun does shine brighter in gamma rays with energies higher than 1 billion electron volts. Cosmic rays with lower energies do not reach the Sun because its powerful magnetic field screens them out. But much more energetic cosmic rays can penetrate this magnetic shield and strike the Sun's denser atmosphere, producing gamma rays that can reach Fermi.

Read more at Science Daily

Young Jupiter was smacked head-on by massive newborn planet

A colossal, head-on collision between Jupiter and a still-forming planet in the early solar system, about 4.5 billion years ago, could explain surprising readings from NASA's Juno spacecraft, according to a study this week in the journal Nature.

Astronomers from Rice University and China's Sun Yat-sen University say their head-on impact scenario can explain Juno's previously puzzling gravitational readings, which suggest that Jupiter's core is less dense and more extended than expected.

"This is puzzling," said Rice astronomer and study co-author Andrea Isella. "It suggests that something happened that stirred up the core, and that's where the giant impact comes into play."

Isella said leading theories of planet formation suggest Jupiter began as a dense, rocky or icy planet that later gathered its thick atmosphere from the primordial disk of gas and dust that birthed our sun.

Isella said he was skeptical when study lead author Shang-Fei Liu first suggested the idea that the data could be explained by a giant impact that stirred Jupiter's core, mixing the dense contents of its core with less dense layers above. Liu, a former postdoctoral researcher in Isella's group, is now a member of the faculty at Sun Yat-sen in Zhuhai, China.

"It sounded very unlikely to me," Isella recalled, "like a one-in-a-trillion probability. But Shang-Fei convinced me, by shear calculation, that this was not so improbable."

The research team ran thousands of computer simulations and found that a fast-growing Jupiter could have perturbed the orbits of nearby "planetary embryos," protoplanets that were in the early stages of planet formation.

Liu said the calculations included estimates of the probability of collisions under different scenarios and distribution of impact angles. In all cases, Liu and colleagues found there was at least a 40% chance that Jupiter would swallow a planetary embryo within its first few million years. In addition, Jupiter's mass produced a "strong gravitational focusing" effect that made head-on collisions more common than grazing ones.

Isella said the collision scenario became even more compelling after Liu ran 3D computer models that showed how a collision would affect Jupiter's core.

"Because it's dense, and it comes in with a lot of energy, the impactor would be like a bullet that goes through the atmosphere and hits the core head-on," Isella said. "Before impact, you have a very dense core, surrounded by atmosphere. The head-on impact spreads things out, diluting the core."

Impacts at a grazing angle could result in the impacting planet becoming gravitationally trapped and gradually sinking into Jupiter's core, and Liu said smaller planetary embryos about as massive as Earth would disintegrate in Jupiter's thick atmosphere.

"The only scenario that resulted in a core-density profile similar to what Juno measures today is a head-on impact with a planetary embryo about 10 times more massive than Earth," Liu said.

Isella said the calculations suggest that even if this impact happened 4.5 billion years ago, "it could still take many, many billions of years for the heavy material to settle back down into a dense core under the circumstances suggested by the paper."

Isella, who is also a co-investigator on the Rice-based, NASA-funded CLEVER Planets project, said the study's implications reach beyond our solar system.

"There are astronomical observations of stars that might be explained by this kind of event," he said.

"This is still a new field, so the results are far from solid, but as some people have been looking for planets around distant stars, they sometimes see infrared emissions that disappear after a few years," Isella said. "One idea is that if you are looking at a star as two rocky planets collide head-on and shatter, you could create a cloud of dust that absorbs stellar light and reemits it. So, you kind of see a flash, in the sense that now you have this cloud of dust that emits light. And then after some time, the dust dissipates and that emission goes away."

The Juno mission was designed to help scientists better understand Jupiter's origin and evolution. The spacecraft, which launched in 2011, carries instruments to map Jupiter's gravitational and magnetic fields and probe the planet's deep, internal structure.

Read more at Science Daily

How many Earth-like planets are around sun-like stars?

A new study provides the most accurate estimate of how frequently planets that are similar to Earth in size and in distance from their host star occur around stars similar to our Sun. Knowing the rate at which these potentially habitable planets occur will be important for designing future astronomical missions to characterize nearby rocky planets around sun-like stars that could support life. A paper describing the model appears August 14, 2019 in The Astronomical Journal.

Thousands of planets have been discovered by NASA's Kepler space telescope. Kepler, which was launched in 2009 and retired by NASA in 2018 when it exhausted its fuel supply, observed hundreds of thousands of stars and identified planets outside of our solar system -- exoplanets -- by documenting transit events. Transit events occur when a planet's orbit passes between its star and the telescope, blocking some of the star's light so that it appears to dim. By measuring the amount of dimming and the duration between transits, and using information about the star's properties, astronomers characterize the size of the planet and the distance between the planet and its host star.
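
As a back-of-the-envelope illustration of that geometry (not the paper's method), the transit depth is roughly the squared ratio of planet to star radius, and Kepler's third law links orbital period to distance. The constants and example numbers below are standard textbook values, used here only for illustration.

```python
import math

def planet_radius_from_depth(depth, stellar_radius_solar):
    """Transit depth is roughly (Rp / Rs)^2, so Rp ≈ Rs * sqrt(depth)."""
    EARTH_RADII_PER_SOLAR_RADIUS = 109.2
    return stellar_radius_solar * EARTH_RADII_PER_SOLAR_RADIUS * math.sqrt(depth)

def orbital_distance_au(period_days, stellar_mass_solar):
    """Kepler's third law: a^3 = M * P^2 with a in AU, P in years, M in solar masses."""
    period_years = period_days / 365.25
    return (stellar_mass_solar * period_years ** 2) ** (1.0 / 3.0)

# An Earth analog transiting a Sun-like star: an ~84 ppm dip once per year.
print(planet_radius_from_depth(84e-6, 1.0))   # ~1.0 Earth radius
print(orbital_distance_au(365.25, 1.0))       # ~1.0 astronomical unit
```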

"Kepler discovered planets with a wide variety of sizes, compositions and orbits," said Eric B. Ford, professor of astronomy and astrophysics at Penn State and one of the leaders of the research team. "We want to use those discoveries to improve our understanding of planet formation and to plan future missions to search for planets that might be habitable. However, simply counting exoplanets of a given size or orbital distance is misleading, since it's much harder to find small planets far from their star than to find large planets close to their star."

To overcome that hurdle, the researchers designed a new method to infer the occurrence rate of planets across a wide range of sizes and orbital distances. The new model simulates 'universes' of stars and planets and then 'observes' these simulated universes to determine how many of the planets would have been discovered by Kepler in each 'universe.'

"We used the final catalog of planets identified by Kepler and improved star properties from the European Space Agency's Gaia spacecraft to build our simulations," said Danley Hsu, a graduate student at Penn State and the first author of the paper. "By comparing the results to the planets cataloged by Kepler, we characterized the rate of planets per star and how that depends on planet size and orbital distance. Our novel approach allowed the team to account for several effects that have not been included in previous studies."

The results of this study are particularly relevant for planning future space missions to characterize potentially Earth-like planets. While the Kepler mission discovered thousands of small planets, most are so far away that it is difficult for astronomers to learn details about their composition and atmospheres.

"Scientists are particularly interested in searching for biomarkers -- molecules indicative of life -- in the atmospheres of roughly Earth-size planets that orbit in the 'habitable-zone' of Sun-like stars," said Ford. "The habitable zone is a range of orbital distances at which the planets could support liquid water on their surfaces. Searching for evidence of life on Earth-size planets in the habitable zone of sun-like stars will require a large new space mission."

How large that mission needs to be will depend on the abundance of Earth-size planets. NASA and the National Academies of Science are currently exploring mission concepts that differ substantially in size and their capabilities. If Earth-size planets are rare, then the nearest Earth-like planets are farther away and a large, ambitious mission will be required to search for evidence of life on potentially Earth-like planets. On the other hand, if Earth-size planets are common, then there will be Earth-size exoplanets orbiting stars that are close to the sun and a relatively small observatory may be able to study their atmospheres.

"While most of the stars that Kepler observed are typically thousands of light years away from the Sun, Kepler observed a large enough sample of stars that we can perform a rigorous statistical analysis to estimate of the rate of Earth-size planets in the habitable zone of nearby sun-like stars." said Hsu.

Based on their simulations, the researchers estimate that planets very close to Earth in size, from three-quarters to one-and-a-half times the size of Earth, with orbital periods ranging from 237 to 500 days, occur around approximately one in four stars. Importantly, their model quantifies the uncertainty in that estimate. They recommend that future planet-finding missions plan for a true rate that ranges from as low as about one planet for every 33 stars to as high as nearly one planet for every two stars.

"Knowing how often we should expect to find planets of a given size and orbital period is extremely helpful for optimize surveys for exoplanets and the design of upcoming space missions to maximize their chance of success," said Ford. "Penn State is a leader in brining state-of-the-art statistical and computational methods to the analysis of astronomical observations to address these sorts of questions. Our Institute for CyberScience (ICS) and Center for Astrostatistics (CASt) provide infrastructure and support that makes these types of projects possible."

Read more at Science Daily

Aug 14, 2019

James Webb Space Telescope could begin learning about TRAPPIST-1 atmospheres in a year

New research from astronomers at the University of Washington uses the intriguing TRAPPIST-1 planetary system as a kind of laboratory to model not the planets themselves, but how the coming James Webb Space Telescope might detect and study their atmospheres, on the path toward looking for life beyond Earth.

The study, led by Jacob Lustig-Yaeger, a UW doctoral student in astronomy, finds that the James Webb telescope, set to launch in 2021, might be able to learn key information about the atmospheres of the TRAPPIST-1 worlds even in its first year of operation, unless -- as an old song goes -- clouds get in the way.

"The Webb telescope has been built, and we have an idea how it will operate," said Lustig-Yaeger. "We used computer modeling to determine the most efficient way to use the telescope to answer the most basic question we'll want to ask, which is: Are there even atmospheres on these planets, or not?"

His paper, "The Detectability and Characterization of the TRAPPIST-1 Exoplanet Atmospheres with JWST," was published online in June in the Astronomical Journal.

The TRAPPIST-1 system, 39 light-years -- or about 235 trillion miles -- away in the constellation of Aquarius, interests astronomers because of its seven orbiting rocky, or Earth-like, planets. Three of these worlds are in the star's habitable zone -- that swath of space around a star that is just right to allow liquid water on the surface of a rocky planet, thus giving life a chance.

The star, TRAPPIST-1, was much hotter when it formed than it is now, which would have subjected all seven planets to ocean, ice and atmospheric loss in the past.

"There is a big question in the field right now whether these planets even have atmospheres, especially the innermost planets," Lustig-Yaeger said. "Once we have confirmed that there are atmospheres, then what can we learn about each planet's atmosphere -- the molecules that make it up?"

Given the way he suggests the James Webb Space Telescope might search, it could learn a lot in a fairly short time, this paper finds.

Astronomers detect exoplanets when they pass in front of or "transit" their host star, resulting in a measurable dimming of starlight. Planets closer to their star transit more frequently and so are somewhat easier to study. When a planet transits its star, a bit of the star's light passes through the planet's atmosphere, from which astronomers can learn about the molecular composition of the atmosphere.

Lustig-Yaeger said astronomers can see tiny differences in the planet's size when they look in different colors, or wavelengths, of light.

"This happens because the gases in the planet's atmosphere absorb light only at very specific colors. Since each gas has a unique 'spectral fingerprint,' we can identify them and begin to piece together the composition of the exoplanet's atmosphere."

Lustig-Yaeger said the team's modeling indicates that the James Webb telescope, using a versatile onboard tool called the Near-Infrared Spectrograph, could detect the atmospheres of all seven TRAPPIST-1 planets in 10 or fewer transits -- if they have cloud-free atmospheres. And of course we don't know whether or not they have clouds.

If the TRAPPIST-1 planets have thick, globally enshrouding clouds like Venus does, detecting atmospheres might take up to 30 transits.

"But that is still an achievable goal," he said. "It means that even in the case of realistic high-altitude clouds, the James Webb telescope will still be capable of detecting the presence of atmospheres -- which before our paper was not known."

Many rocky exoplanets have been discovered in recent years, but astronomers have not yet detected their atmospheres. The modeling in this study, Lustig-Yaeger said, "demonstrates that, for this TRAPPIST-1 system, detecting terrestrial exoplanet atmospheres is on the horizon with the James Webb Space Telescope -- perhaps well within its primary five-year mission."

The team found that the Webb telescope may be able to detect signs that the TRAPPIST-1 planets lost large amounts of water in the past, when the star was much hotter. This could leave instances where abiotically produced oxygen -- not representative of life -- fills an exoplanet atmosphere, which could give a sort of "false positive" for life. If this is the case with TRAPPIST-1 planets, the Webb telescope may be able to detect those as well.

Lustig-Yaeger's co-authors, both with the UW, are astronomy professor Victoria Meadows, who is also principal investigator for the UW-based Virtual Planetary Laboratory; and astronomy doctoral student Andrew Lincowski. The work follows, in part, on previous work by Lincowski modeling possible climates for the seven TRAPPIST-1 worlds.

"By doing this study, we have looked at: What are the best-case scenarios for the James Webb Space Telescope? What is it going to be capable of doing? Because there are definitely going to be more Earth-sized planets found before it launches in 2021."

Read more at Science Daily

Males of a feather flock together

"Birds of a feather flock together" or rather "opposites attract"? The recently published study on male macaques in Thailand speaks for the former: Behavioral biologists from the German Primate Centre -- Leibniz Institute for Primate Research and psychologists from the University of Göttingen have observed that the more similar male Assamese macaques are in their personality, the closer they get and the stronger their social bonds. The scientists were able to rule out the possibility that the causality works the other way round, i.e. that close partners would become more and more similar over time, because the males' personality remained stable even if they migrated between groups and thus changed their social partners. It is suggested that this behavior provides an evolutionary advantage: If the friend has a similar personality, this facilitates communication and coordination and thus cooperation in critical situations (Animal Behaviour).

Social bonds in animals are defined as stable, equal and cooperative relationships, comparable to human friendships. Such bromance among unrelated adult males has been described in a few species. A close relationship with another male can be advantageous, as it promises support in critical situations, such as aggressive conflicts with other group mates. Personality homophily, i.e. the tendency to like others if they are similar, has been described both in humans and in a few species of animals. The advantage is obvious: The more similar our counterpart is to us, the better we can predict his reactions. This creates trust. But what are the characteristics that should be particularly similar for a relationship to succeed?

Within the framework of the Research Training Group "Understanding Social Relationships" of the German Primate Centre and the University of Göttingen, the team around PhD student Anja Ebenau obtained data on 24 free-living male Assamese macaques in the Phu Khieo Wildlife Sanctuary in Thailand over a period of almost two years. In close cooperation with the psychologists Lars Penke and Christoph von Borell, the individual personality of each male was described from detailed quantitative behavior protocols and from questionnaires like those used in human psychology, allowing the similarity of two males in the emergent personality dimensions gregariousness, aggressiveness, sociability, vigilance and confidence to be determined. It was found that the stronger the bond between two males, the more similar the animals were in terms of gregariousness. Notably, it did not matter whether the individuals were highly gregarious or not; they only had to be similar: Two rather solitary animals that avoid others can be just as close friends as two socially very central individuals.
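
One simple way to picture the kind of analysis involved is to score each pair of males by how similar their trait values are and then relate that similarity to bond strength. The sketch below does exactly that on synthetic data; it is only a cartoon, not the study's actual dyadic statistical model.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic data: one gregariousness score per male and, for every pair (dyad),
# a bond-strength score. Values are invented purely for illustration.
n_males = 24
gregariousness = rng.normal(0.0, 1.0, n_males)

similarity, bond_strength = [], []
for i in range(n_males):
    for j in range(i + 1, n_males):
        sim = -abs(gregariousness[i] - gregariousness[j])   # higher = more similar
        similarity.append(sim)
        # Pretend bonds track similarity plus noise, as the study reports.
        bond_strength.append(0.5 * sim + rng.normal(0.0, 0.5))

r = np.corrcoef(similarity, bond_strength)[0, 1]
print(f"correlation between trait similarity and bond strength: r = {r:.2f}")
```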

In order to exclude the possibility that the causality runs the other way, i.e. that friends become more and more similar in their personality over time, the characteristics of monkeys were examined before and after they had migrated to a new group and found new social partners there. It turned out that the personality of the animals remained rather stable, i.e. did not change with a new friend.

Read more at Science Daily

Sequential, concurrent multitasking is equally hard for men, women

Women and men perform equally when required to switch attention between tasks or perform two tasks simultaneously, according to a new study in the open-access journal PLOS ONE by Patricia Hirsch of Aachen University in Germany and colleagues. The finding adds to a growing literature that contradicts the widely held belief that women multitask better than men.

Multitasking -- performing several independent tasks within a short time -- requires rapidly and frequently switching attention from one task to another, increasing the cognitive demand, compared to completing single tasks in sequence. Despite scant evidence for gender differences, the popular perception is overwhelmingly that women are better at multitasking than men.

In the current study, the authors compared the abilities of 48 men and 48 women in performance of letter or number identification tasks. Some experiments required participants to pay attention to two tasks at once (concurrent multitasking), while others required them to switch attention between tasks (sequential multitasking). The researchers measured reaction time and accuracy for the multitasking experiments and for single task controls. They found that multitasking imposed a substantial cost on both speed and accuracy for both men and women, and there was no difference between the two groups in the magnitude of the cost.
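
The cost measure itself is straightforward. A minimal sketch with synthetic reaction times (the condition means below are invented, not the study's data) might look like this:

```python
import numpy as np

rng = np.random.default_rng(3)

def multitask_cost(rt_single, rt_multi):
    """Cost = mean reaction time under multitasking minus the single-task baseline."""
    return np.mean(rt_multi) - np.mean(rt_single)

# Synthetic reaction times in milliseconds for one participant per condition.
rt_single_task = rng.normal(550, 60, 200)     # single-task baseline blocks
rt_dual_task = rng.normal(800, 90, 200)       # concurrent multitasking
rt_task_switch = rng.normal(750, 90, 200)     # sequential multitasking

print(f"concurrent cost: {multitask_cost(rt_single_task, rt_dual_task):.0f} ms")
print(f"sequential cost: {multitask_cost(rt_single_task, rt_task_switch):.0f} ms")
# The study compares such per-participant costs between women and men and
# finds no reliable group difference in either speed or accuracy.
```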

The set of potential tasks and the cognitive operations underlying them is vast, and no single experiment can encompass all of them, the authors note. Discrepancies in the literature on gender differences in multi-tasking may reflect differences in the specific types of tasks assessed. However, the large sample size and lack of gender difference seen in this study indicate that at least for the underlying cognitive processes tested here -- working memory updating, the engagement and disengagement of task sets, and inhibition -- men and women do just as well, or just as poorly, when trying to multitask.

Hirsch adds: "The present findings strongly suggest that there are no substantial gender differences in multitasking performance across task-switching and dual-task paradigms, which predominantly measure cognitive control mechanisms such as working memory updating, the engagement and disengagement of task sets, and inhibition.

From Science Daily

Genes linked to Alzheimer's risk, resilience ID'd

An international team of researchers led by scientists at Washington University School of Medicine in St. Louis has identified a pair of genes that influence risk for both late-onset and early-onset Alzheimer's disease.

Most genes implicated thus far in Alzheimer's affect neurons that transmit messages, allowing different regions of the brain to communicate with one another. But the newly identified genes affect an entirely different population of cells: the brain's immune cells. The findings, published online Aug. 14 in the journal Science Translational Medicine, could provide scientists with new targets and a strategy for delaying the onset of Alzheimer's symptoms.

The genes -- known as MS4A4A and TREM2 -- operate in the microglia, the brain's immune cells. They influence Alzheimer's risk by altering levels of TREM2, a protein that is believed to help microglia cells clear excessive amounts of the Alzheimer's proteins amyloid and tau from the brain.

"The findings point to a new therapeutic strategy," said co-senior investigator Carlos Cruchaga, PhD, a professor of psychiatry and director of the NeuroGenomics and Informatics Group. "If we can do something to raise levels of the TREM2 protein in the cerebrospinal fluid, we may be able to protect against Alzheimer's disease or slow its development."

In this study, the researchers measured soluble TREM2 levels in the cerebrospinal fluid of 813 older adults, most of whom were ages 55 to 90. Of those subjects, 172 had Alzheimer's disease, 169 were cognitively normal, and another 183 had early mild cognitive impairment. They also analyzed the participants' DNA, conducting genomewide association studies to look for regions of the genome that may influence TREM2 levels in the cerebrospinal fluid.
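
Conceptually, a genome-wide association study for a quantitative trait like CSF TREM2 level tests each variant for a relationship between genotype and the measured level. The sketch below does this with a simple per-variant linear regression on synthetic data; it is only a cartoon of the approach, not the study's pipeline (which includes covariates, quality control and multiple-testing correction).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

n_subjects, n_variants = 813, 1_000
# Genotype dosages (0, 1 or 2 copies of the minor allele) and a measured CSF
# TREM2 level per subject. Entirely synthetic, for illustration only.
genotypes = rng.integers(0, 3, size=(n_subjects, n_variants))
trem2_level = rng.normal(4.0, 1.0, n_subjects)
trem2_level = trem2_level + 0.4 * genotypes[:, 123]   # plant one true association

# Test each variant for association with the quantitative trait.
pvalues = np.array([
    stats.linregress(genotypes[:, k], trem2_level).pvalue
    for k in range(n_variants)
])
top = int(np.argmin(pvalues))
print(f"strongest association: variant {top}, p = {pvalues[top]:.2e}")
```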

Although variants in TREM2 are found in a very small percentage of patients with Alzheimer's disease, the gene previously had been linked to the disorder. People who carried those previously identified risk mutations were excluded from the study. Common variants in the MS4A4A gene also had been associated with risk for Alzheimer's, but this study connects the two genes.

"We observed TREM2 risk variants more often in people who had Alzheimer's or were mildly cognitively impaired, compared with those who were cognitively normal," said co-senior investigator Celeste Karch, PhD, an assistant professor in the Department of Psychiatry. "It turns out that about 30 percent of the population in the study had variations in the MS4A4A gene that appear to affect their risk for developing Alzheimer's disease. Some variants protected people from Alzheimer's or made them more resilient while others increased their risk."

When the researchers dug further, they noted that variants in the MS4A4A gene cluster linked to an increase in risk for developing Alzheimer's disease are associated with lower levels of soluble TREM2 protein. The other variant, associated with higher levels of TREM2 in the cerebrospinal fluid, seemed to protect against Alzheimer's.

The research team validated its results in DNA from another 580 older adults. Once again, they found that higher soluble TREM2 levels in the cerebrospinal fluid seemed protective, while lower levels increased risk. And those protein levels -- whether high or low -- were linked to variants in the MS4A4A gene.

"For the past several years, we've been looking at TREM2 and increasing our focus on the involvement of the brain's immune cells in Alzheimer's disease" said another co-senior author, Bruno A. Benitez, MD, an assistant professor of psychiatry. "These findings give us a new therapeutic strategy to pursue, one focusing not only on neurons but on how the microglia may be involved in helping to clear damaging proteins, such as beta amyloid and tau, that are linked to Alzheimer's disease."

Those gene variants also may play a role in other diseases of the central nervous system, according to Laura Piccio, MD, PhD, an associate professor of neurology and another co-senior author.

Read more at Science Daily

Aug 13, 2019

Atomic 'Trojan horse' could inspire new generation of X-ray lasers and particle colliders

How do researchers explore nature on its most fundamental level? They build "supermicroscopes" that can resolve atomic and subatomic details. This won't work with visible light, but they can probe the tiniest dimensions of matter with beams of electrons, either by using them directly in particle colliders or by converting their energy into bright X-rays in X-ray lasers. At the heart of such scientific discovery machines are particle accelerators that first generate electrons at a source and then boost their energy in a series of accelerator cavities.

Now, an international team of researchers, including scientists from the Department of Energy's SLAC National Accelerator Laboratory, has demonstrated a potentially much brighter electron source based on plasma that could be used in more compact, more powerful particle accelerators.

The method, in which the electrons for the beam are released from neutral atoms inside the plasma, is referred to as the Trojan horse technique because it's reminiscent of the way the ancient Greeks are said to have invaded the city of Troy: the soldiers (electrons) hid inside a wooden horse (neutral atoms), which was then pulled into the city (the plasma).

"Our experiment shows for the first time that the Trojan horse method actually works," says Bernhard Hidding from the University of Strathclyde in Glasgow, Scotland, the principal investigator of a study published today in Nature Physics. "It's one of the most promising methods for future electron sources and could push the boundaries of today's technology."

Replacing metal with plasma


In current state-of-the-art accelerators, electrons are generated by shining laser light onto a metallic photocathode, which kicks electrons out of the metal. These electrons are then accelerated inside metal cavities, where they draw more and more energy from a radiofrequency field, resulting in a high-energy electron beam. In X-ray lasers, such as SLAC's Linac Coherent Light Source (LCLS), the beam drives the production of extremely bright X-ray light.

But metal cavities can only support a limited energy gain over a given distance, or acceleration gradient, before breaking down, and therefore accelerators for high-energy beams become very large and expensive. In recent years, scientists at SLAC and elsewhere have looked into ways to make accelerators more compact. They demonstrated, for example, that they can replace metal cavities with plasma that allows much higher acceleration gradients, potentially shrinking the length of future accelerators 100 to 1,000 times.

The new paper expands the plasma concept to the electron source of an accelerator.

"We've previously shown that plasma acceleration can be extremely powerful and efficient, but we haven't been able yet to produce beams with high enough quality for future applications," says co-author Mark Hogan from SLAC. "Improving beam quality is a top priority for the next years, and developing new types of electron sources is an important part of that."

According to previous calculations by Hidding and colleagues, the Trojan horse technique could make electron beams 100 to 10,000 times brighter than today's most powerful beams. Brighter electron beams would also make future X-ray lasers brighter and further enhance their scientific capabilities.

"If we're able to marry the two major thrusts -- high acceleration gradients in plasma and beam creation in plasma -- we could be able to build X-ray lasers that unfold the same power over a distance of a few meters rather than kilometers," says co-author James Rosenzweig, the principal investigator for the Trojan horse project at the University of California, Los Angeles.

Producing superior electron beams

The researchers carried out their experiment at SLAC's Facility for Advanced Accelerator Experimental Tests (FACET). The facility, which is currently undergoing a major upgrade, generates pulses of highly energetic electrons for research on next-generation accelerator technologies, including plasma acceleration.

First, the team flashed laser light into a mixture of hydrogen and helium gas. The light had just enough energy to strip electrons off hydrogen, turning neutral hydrogen into plasma. It wasn't energetic enough to do the same with helium, though, whose electrons are more tightly bound than those for hydrogen, so it stayed neutral inside the plasma.

Then, the scientists sent one of FACET's electron bunches through the plasma, where it produced a plasma wake, much like a motorboat creates a wake when it glides through the water. Trailing electrons can "surf" the wake and gain tremendous amounts of energy.

In this study, the trailing electrons came from within the plasma. Just when the electron bunch and its wake passed by, the researchers zapped the helium in the plasma with a second, tightly focused laser flash. This time the light pulse had enough energy to kick electrons out of the helium atoms, and the electrons were then accelerated in the wake.

The synchronization between the electron bunch, rushing through the plasma at nearly the speed of light, and the laser flash, lasting merely a few millionths of a billionth of a second, was particularly important and challenging, says UCLA's Aihua Deng, one of the study's lead authors: "If the flash comes too early, the electrons it produces will disturb the formation of the plasma wake. If it comes too late, the plasma wake has moved on and the electrons won't get accelerated."

The researchers estimate that the brightness of the electron beam obtained with the Trojan horse method can already compete with the brightness of existing state-of-the-art electron sources.

"What makes our technique transformative is the way the electrons are produced," says Oliver Karger, the other lead author, who was at the University of Hamburg, Germany, at the time of the study. When the electrons are stripped off the helium, they get rapidly accelerated in the forward direction, which keeps the beam narrowly bundled and is a prerequisite for brighter beams.

More R&D work ahead

But before applications like compact X-ray lasers could become a reality, much more research needs to be done.

Next, the researchers want to improve the quality and stability of their beam and work on better diagnostics that will allow them to measure the actual beam brightness, instead of estimating it.

Read more at Science Daily

Jurassic world of volcanoes found in central Australia

An international team of subsurface explorers from the University of Adelaide in Australia and the University of Aberdeen in Scotland have uncovered a previously undescribed 'Jurassic World' of around 100 ancient volcanoes buried deep within the Cooper-Eromanga Basins of central Australia.

The Cooper-Eromanga Basins, in the north-eastern corner of South Australia and the south-western corner of Queensland, form Australia's largest onshore oil and gas producing region. But despite about 60 years of petroleum exploration and production, this ancient underground Jurassic volcanic landscape has gone largely unnoticed.

In the study, published in the journal Gondwana Research, the researchers used advanced subsurface imaging techniques, analogous to medical CT scanning, to identify the plethora of volcanic craters and lava flows, and the deeper magma chambers that fed them. They have called the volcanic region the Warnie Volcanic Province, with a nod to Australian cricket legend Shane Warne.

The volcanoes developed in the Jurassic period, between 180 and 160 million years ago, and have been subsequently buried beneath hundreds of meters of sedimentary -- or layered -- rocks.

The Cooper-Eromanga Basins are now a dry and barren landscape, but in Jurassic times, the researchers say, they would have been a terrain of craters and fissures spewing hot ash and lava into the air, surrounded by networks of river channels that evolved into large lakes and coal swamps.

"While the majority of Earth's volcanic activity occurs at the boundaries of tectonic plates, or under the Earth's oceans, this ancient Jurassic world developed deep within the interior of the Australian continent," says co-author Associate Professor Simon Holford, from the University of Adelaide's Australian School of Petroleum.

"Its discovery raises the prospect that more undiscovered volcanic worlds reside beneath the poorly explored surface of Australia."

The research was carried out by Jonathon Hardman, then a PhD student at the University of Aberdeen, as part of the Natural Environment Research Council Centre for Doctoral Training in Oil and Gas.

The researchers say that Jurassic-aged sedimentary rocks bearing oil, gas and water have been economically important for Australia, but this latest discovery suggests a lot more volcanic activity in the Jurassic period than previously supposed.

"The Cooper-Eromanga Basins have been substantially explored since the first gas discovery in 1963," says co-author Associate Professor Nick Schofield, from the University of Aberdeen's Department of Geology and Petroleum Geology.

"This has led to a massive amount of available data from underneath the ground but, despite this, the volcanics have never been properly understood in this region until now. It changes how we understand processes that have operated in Earth's past."

The researchers have named their discovery the Warnie Volcanic Province after one of the drill holes that penetrated Jurassic volcanic rocks (Warnie East-1, itself named after a nearby waterhole), but also in recognition of the explosive talent of former Australian cricketer Shane Warne.

Read more at Science Daily

Near-Earth asteroid 2006 QV89 not a threat for next century

The existing astronomical observatories on Maunakea returned to operations this weekend, and it didn't take long for a significant result to be achieved, not only for science, but for assuring the safety of Earth.

Observations of the near-Earth asteroid 2006 QV89 made on August 11 with the Canada-France-Hawaii Telescope (CFHT) have ruled out any potential future impact threat to Earth by this asteroid for the next century.

2006 QV89 was discovered on August 29, 2006, with a telescope in Arizona, and observations were only possible through September 8, 2006, when the asteroid became unobservable from telescopes on Earth. The orbit determined from these limited observations had significant uncertainty, and it was not possible to rule out the low probability of the asteroid impacting Earth in the future, possibly as early as 2019. Last month, observations with the European Southern Observatory's Very Large Telescope (VLT) in Chile did not find the asteroid where it would have appeared if it was on a trajectory that would impact Earth this September. This ruled out an impact in 2019, but an impact for 2020 remained a possibility, along with nearly two dozen more over the next hundred years, with eight of those in the next decade.

"There is a big difference between knowing where a hazardous asteroid isn't, and knowing where it is," said David Tholen, astronomer at the University of Hawai'i's Institute for Astronomy, who led the effort to recover 2006 QV89.

This summer provided the first clear opportunity to recover the asteroid since its discovery, but the uncertainty in its position on the sky spanned roughly 30 degrees (60 times the diameter of the moon) in mid-July, growing even larger as the asteroid approached Earth. "That made the use of a large telescope with a wide-field camera absolutely essential," noted Tholen. Only a fraction of that uncertainty region had been imaged with CFHT on July 14, but operations at the existing telescopes were suspended on July 16, due to the protest on Maunakea.
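For scale, a quick sanity check of those numbers; the camera field of view below is an assumed round figure for illustration, not the actual instrument specification.

# Sanity check on the quoted search region.
# The field-of-view value is an assumed round number for illustration only.
uncertainty_deg = 30.0      # quoted span of the positional uncertainty in mid-July
moon_diameter_deg = 0.5     # apparent diameter of the full moon
print(uncertainty_deg / moon_diameter_deg)   # 60.0 -> the "60 times the diameter of the moon" quoted above

camera_width_deg = 1.0      # assumed ~1-degree-wide field for a wide-field imager
print(uncertainty_deg / camera_width_deg)    # 30.0 -> dozens of pointings just to sweep the strip once

Even with a generous field of view, covering the uncertainty region takes many telescope pointings, which is why a large telescope with a wide-field camera was essential and why only a fraction of the region had been imaged before operations were suspended.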

"We found at least a dozen asteroids in the July 14 data that fell close to the region where 2006 QV89 could have been, but the suspension of operations prevented us from confirming which, if any, of those objects was 2006 QV89," said Tholen.

With access to the Maunakea telescopes blocked, Tholen enlisted the aid of Marco Micheli of the European Space Agency's NEO Coordination Centre in Frascati, Italy. Micheli is a UH graduate who led the effort to rule out the 2019 impact scenario with ESO's VLT. He pointed a telescope in Spain at the position for the best of the candidate objects, but after two hours of data collection, the object at the predicted position could not be convincingly distinguished from electronic noise in the data. It came as a great relief to learn that CFHT would resume operations last weekend.

"Our highest priority target for Saturday night was the best 2006 QV89 candidate, and despite some thin cirrus clouds and a lot of moonlight, we needed only four minutes of data to obtain proof that we had found the right object," said Tholen.

The International Astronomical Union's Minor Planet Center announced the recovery to the world on Sunday, and the impact monitoring services at the Jet Propulsion Laboratory and the University of Pisa/SpaceDys in Italy immediately began crunching the numbers to update the impact predictions. A little over an hour later, Davide Farnocchia of the Center for Near-Earth Object Studies at NASA's Jet Propulsion Laboratory in Pasadena reported that all the impact scenarios for the next century had been eliminated.

"This result is only one example of the telescopes on Maunakea protecting Earth by observing and studying the asteroids that enter Earth's neighborhood," said Kelly Fast, manager of the Near Earth Object Observations Program in NASA's Planetary Defense Coordination Office, which supported the observations.

Read more at Science Daily

First cells may have emerged because building blocks of proteins stabilized membranes

Life on Earth arose about 4 billion years ago when the first cells formed within a primordial soup of complex, carbon-rich chemical compounds.

These cells faced a chemical conundrum. They needed particular ions from the soup in order to perform basic functions. But those charged ions would have disrupted the simple membranes that encapsulated the cells.

A team of researchers at the University of Washington has solved this puzzle using only molecules that would have been present on the early Earth. Using cell-sized, fluid-filled compartments surrounded by membranes made of fatty acid molecules, the team discovered that amino acids, the building blocks of proteins, can stabilize membranes against magnesium ions. Their results set the stage for the first cells to encode their genetic information in RNA, a molecule related to DNA that requires magnesium for its production, while maintaining the stability of the membrane.

The findings, published the week of Aug. 12 in the Proceedings of the National Academy of Sciences, go beyond explaining how amino acids could have stabilized membranes in unfavorable environments. They also demonstrate how the individual building blocks of cellular structures -- membranes, proteins and RNA -- could have co-localized within watery environments on the ancient Earth.

"Cells are made up of very different types of structures with totally different types of building blocks, and it has never been clear why they would come together in a functional way," said co-corresponding author Roy Black, a UW affiliate professor of chemistry and bioengineering. "The assumption was just that -- somehow -- they did come together."

Black came to the UW after a career at Amgen for the opportunity to fill in the crucial, missing details behind that "somehow." He teamed up with Sarah Keller, a UW professor of chemistry and an expert on membranes. Black had been inspired by the observation that fatty acid molecules can self-assemble to form membranes, and hypothesized that these membranes could act as a favorable surface to assemble the building blocks of RNA and proteins.

"You can imagine different types of molecules moving within the primordial soup as fuzzy tennis balls and hard squash balls bouncing around in a big box that is being shaken," said Keller, who is also co-corresponding author on the paper. "If you line one surface inside the box with Velcro, then only the tennis balls will stick to that surface, and they will end up close together. Roy had the insight that local concentrations of molecules could be enhanced by a similar mechanism."

The team previously showed that the building blocks of RNA preferentially attach to fatty acid membranes and, surprisingly, also stabilize the fragile membranes against detrimental effects of salt, a common compound on Earth past and present.

The team hypothesized that amino acids might also stabilize membranes. They used a variety of experimental techniques -- including light microscopy, electron microscopy and spectroscopy -- to test how 10 different amino acids interacted with membranes. Their experiments revealed that certain amino acids bind to membranes and stabilize them. Some amino acids even triggered large structural changes in membranes, such as forming concentric spheres of membranes -- much like layers of an onion.

"Amino acids were not just protecting vesicles from disruption by magnesium ions, but they also created multilayered vesicles -- like nested membranes," said lead author Caitlin Cornell, a UW doctoral student in the Department of Chemistry.

The researchers also discovered that amino acids stabilized membranes through changes in concentration. Some scientists have hypothesized that the first cells may have formed within shallow basins that went through cycles of high and low concentrations of amino acids as water evaporated and as new water washed in.

The new findings that amino acids protect membranes -- as well as prior results showing that RNA building blocks can play a similar role -- indicate that membranes may have been a site for these precursor molecules to co-localize, providing a potential mechanism to explain what brought together the ingredients for life.

Keller, Black and their team will turn their attention next to how co-localized building blocks did something even more remarkable: They bound to each other to form functional machines.

"That is the next step," said Black.

Their ongoing efforts are also forging ties across disciplines at the UW.

Read more at Science Daily

Aug 12, 2019

Arctic sea-ice loss has 'minimal influence' on severe cold winter weather

The dramatic loss of Arctic sea ice through climate change has only a "minimal influence" on severe cold winter weather across Asia and North America, new research has shown.

The possible connection between Arctic sea-ice loss and extreme cold weather -- such as the deep freezes that can grip the USA in the winter months -- has long been studied by scientists.

Observations show that when the regional sea-ice cover is reduced, swathes of Asia and North America often experience unusually cold and hazardous winter conditions.

However, previous climate modelling studies have suggested that reduced sea ice cannot fully explain the cold winters.

Now, a new study by experts from the University of Exeter, the Royal Netherlands Meteorological Institute and the Energy and Sustainability Research Institute in Groningen, has shed new light on the link between sea-ice loss and cold winters.

For the research, the international team combined observations over the past 40 years with results from sophisticated climate modelling experiments. They found that the observations and models agreed that reduced regional sea ice and cold winters often coincide with each other.

They found that the correlation between reduced sea ice and extreme winters across the mid-latitudes occurs because both are simultaneously driven by the same large-scale atmospheric circulation patterns.

Crucially, it shows that reduced sea ice only has a minimal influence on whether a harsh and severe winter will occur.

The study is published in the leading science journal Nature Climate Change.

Dr Russell Blackport, a Mathematics Research Fellow at the University of Exeter and lead author of the paper, said: "The correlation between reduced sea ice and cold winters does not mean one is causing the other. We show that the real cause is changes in atmospheric circulation which moves warm air into the Arctic and cold air into the mid-latitudes."
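As a toy illustration of that point, the short Python sketch below shows how a single shared driver can produce a strong correlation between two quantities that never influence each other directly; it is not the study's model, and all numbers are arbitrary.

# Toy example of correlation without causation: a shared "circulation" driver
# pushes sea ice down and cold-winter severity up at the same time.
# Purely illustrative; not the study's model or data.
import random

random.seed(0)
sea_ice, cold_winters = [], []
for _ in range(1000):
    circulation = random.gauss(0, 1)                         # shared atmospheric driver
    sea_ice.append(-circulation + random.gauss(0, 0.5))      # warm air into the Arctic -> less ice
    cold_winters.append(circulation + random.gauss(0, 0.5))  # cold air into mid-latitudes -> harsher winter

def corr(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print(round(corr(sea_ice, cold_winters), 2))  # strongly negative, despite no direct link

The two series end up strongly (negatively) correlated even though neither one feeds into the other, mirroring the paper's argument that sea-ice loss and cold mid-latitude winters coincide because both respond to the same circulation patterns.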

Over recent decades, the Arctic region has experienced warming temperatures through climate change, which has led to a large decline in sea-ice cover.

This reduction in sea-ice cover means that areas of open water increase, which in turn allows the ocean to lose more heat to the atmosphere in winter -- this can potentially alter the weather and climate, even well outside the Arctic.

Recent studies have suggested that the reduced sea ice or Arctic warming has contributed to recent cold winters experienced in the mid-latitude region -- and that as the sea-ice reduces further through climate change, cold winters will become more frequent and severe.

Now, this new study suggests that reduced sea ice is not the main cause of the cold winters. Instead, the cold winters are likely caused by random fluctuations in the atmospheric circulation.

Professor James Screen, an Associate Professor in Climate Science at the University of Exeter, said: "There are many reasons to be concerned about the dramatic loss of Arctic sea ice, but an increased risk of severe winters in North America and Asia is not one of them."

Read more at Science Daily

Mars: Cause of methane spikes still unknown

Wind erosion has been ruled out as the primary cause of methane gas release on Mars, Newcastle University academics have shown.

Methane can be produced over time through both geological and biological routes and since its first detection in the Martian atmosphere in 2003, there has been intense speculation about the source of the gas and the possibility that it could signal life on the planet.

Previous studies have suggested the methane may not be evenly distributed in the atmosphere around Mars, but instead appear in localised and very temporary pockets on the planet's surface. And the previous discovery of methane 'spikes' in the Martian atmosphere has further fuelled the debate.

Now research led by Newcastle University, UK, and published in Scientific Reports, has ruled out the possibility that the levels of methane detected could be produced by the wind erosion of rocks, releasing trapped methane from fluid inclusions and fractures on the planet's surface.

Principal Investigator Dr Jon Telling, a geochemist based in the School of Natural and Environmental Sciences at Newcastle University, said:

"The questions are -- where is this methane coming from, and is the source biological? That's a massive question and to get to the answer we need to rule out lots of other factors first.

"We realised one potential source of the methane that people hadn't really looked at in any detail before was wind erosion, releasing gases trapped within rocks. High resolution imagery from orbit over the last decade have shown that winds on Mars can drive much higher local rates of sand movement, and hence potential rates of sand erosion, than previously recognised.

"In fact, in a few cases, the rate of erosion is estimated to be comparable to those of cold and arid sand dune fields on Earth.

"Using the data available, we estimated rates of erosion on the surface of Mars and how important it could be in releasing methane.

"And taking all that into account we found it was very unlikely to be the source.

"What's important about this is that it strengthens the argument that the methane must be coming from a different source. Whether or not that's biological, we still don't know."

Pinpointing the source

Funded by the UK Space Agency, the research uses new data alongside previously published data to consider the likely methane contents of different rock types and whether they have the capacity to produce measurable levels of methane when worn away.

The team found that for wind erosion to be a viable mechanism to produce detectable methane in the Martian atmosphere, the methane content of any gases trapped within rocks would have to rival those of some of the richest hydrocarbon-containing shales on Earth -- a highly unlikely scenario.

Lead author Dr Emmal Safi, a postdoctoral researcher in the School of Natural and Environmental Sciences at Newcastle University, concludes that the cause of methane spikes on Mars is still unknown.

"It's still an open question. Our paper is just a little part of a much bigger story," she says.

Read more at Science Daily

Glitch in neutron star reveals its hidden secrets

Neutron stars are not only the most dense objects in the Universe, but they also rotate very fast and regularly.

Until they don't.

Occasionally these neutron stars start to spin faster, caused by portions of the inside of the star moving outwards. It's called a "glitch" and it provides astronomers a brief insight into what lies within these mysterious objects.

In a paper published today in the journal Nature Astronomy, a team from Monash University, the ARC Centre of Excellence for Gravitational Wave Discovery (OzGrav), McGill University in Canada, and the University of Tasmania studied the Vela Pulsar, a neutron star in the southern sky that is 1,000 light years away.

According to the paper's first author, Dr Greg Ashton, from the Monash School of Physics and Astronomy, and a member of OzGrav, Vela is famous -- not only because only 5% of pulsars are known to glitch but also because Vela "glitches" about once every three years, making it a favourite of "glitch hunters" like Dr Ashton and his colleague, Dr Paul Lasky, also from Monash and OzGrav.

By reanalysing data from observations of the Vela glitch in 2016 taken by co-author Dr Jim Palfreyman from the University of Tasmania, Dr Ashton and his team found that during the glitch the star actually started spinning even faster, before relaxing down to a final state.

According to Dr Lasky, an ARC Future Fellow also from the Monash School of Physics and Astronomy and a member of OzGrav, this observation (made at the Mount Pleasant Observatory in Tasmania) is particularly important because, for the first time, the scientists got a glimpse into the interior of the star -- revealing that the inside of the star actually has three different components.

"One of these components, a soup of superfluid neutrons in the inner layer of the crust, moves outwards first and hits the rigid outer crust of the star causing it to spin up," Dr Lasky said.

"But then, a second soup of superfluid that moves in the core catches up to the first causing the spin of the star to slow back down.

"This overshoot has been predicted a couple of times in the literature, but this is the first real time it's been identified in observations," he said.

One such prediction of the overshoot came from study co-author Dr Vanessa Graber from McGill University, who visited the Monash team as an OzGrav international visitor earlier this year.

Another observation, according to Dr Ashton, defies explanation.

"Immediately before the glitch, we noticed that the star seems to slow down its rotation rate before spinning back up," Dr Ashton said.

"We actually have no idea why this is, and it's the first time it's ever been seen.

Read more at Science Daily

A new timeline of Earth's cataclysmic past

Welcome to the early solar system. Just after the planets formed more than 4.5 billion years ago, our cosmic neighborhood was a chaotic place. Waves of comets, asteroids and even proto-planets streamed toward the inner solar system, with some crashing into Earth on their way.

Now, a team led by University of Colorado Boulder geologist Stephen Mojzsis has laid out a new timeline for this violent period in our planet's history.

In a study published today, the researchers homed in on a phenomenon called "giant planet migration." That's the name for a stage in the evolution of the solar system in which the largest planets, for reasons that are still unclear, began to move away from the sun.

Drawing on records from asteroids and other sources, the group estimated that this solar system-altering event occurred 4.48 billion years ago -- much earlier than some scientists had previously proposed.

The findings, Mojzsis said, could provide scientists with valuable clues about when life might have first emerged on Earth.

"We know that giant planet migration must have taken place in order to explain the current orbital structure of the outer solar system," said Mojzsis, a professor in the Department of Geological Sciences. "But until this study, nobody knew when it happened."

It's a debate that, at least in part, comes down to moon rocks collected by Apollo astronauts -- many of which seemed to be only 3.9 billion years old, hundreds of millions of years younger than the moon itself.

To explain those ages, some researchers suggested that our moon, and Earth, were slammed by a surge of comets and asteroids around that time. But not everyone agreed with the theory, Mojzsis said.

"It turns out that the part of the moon we landed on is very unusual," he said. "It is strongly affected by one big impact, the Imbrium Basin, that is about 3.9 billion years old and affects nearly everything we sampled."

To get around that bias, the researchers decided to compile the ages from an exhaustive database of meteorites that had crash landed on Earth.

"The surfaces of the inner planets have been extensively reworked both by impacts and indigenous events until about 4 billion years ago," said study coauthor Ramon Brasser of the Earth-Life Science Institute in Tokyo. "The same is not true for the asteroids. Their record goes back much further."

But those records, the team discovered, only went back to about 4.5 billion years ago.

For the researchers, that presented only one possibility: The solar system must have experienced a major bombardment just before that cut-off date. Very large impacts, Mojzsis said, can melt rocks and variably reset their radioactive ages, a bit like shaking an etch-a-sketch.

Mojzsis explained that this carnage was likely kicked off by the solar system's giant planets, which researchers believe formed much closer together than they are today. Using computer simulations, however, his group demonstrated that those bodies started to creep toward their present locations about 4.48 billion years ago.

Read more at Science Daily

Aug 11, 2019

NASA's MMS finds first interplanetary shock

The Magnetospheric Multiscale mission -- MMS -- has spent the past four years using high-resolution instruments to see what no other spacecraft can. Recently, MMS made the first high-resolution measurements of an interplanetary shock.

These shocks, made of particles and electromagnetic waves, are launched by the Sun. They provide ideal test beds for learning about larger universal phenomena, but measuring interplanetary shocks requires being at the right place at the right time. Here is how the MMS spacecraft were able to do just that.

What's in a Shock?

Interplanetary shocks are a type of collisionless shock -- ones where particles transfer energy through electromagnetic fields instead of directly bouncing into one another. These collisionless shocks are a phenomenon found throughout the universe, including in supernovae, black holes and distant stars. MMS studies collisionless shocks around Earth to gain a greater understanding of shocks across the universe.

Interplanetary shocks start at the Sun, which continually releases streams of charged particles called the solar wind.

The solar wind typically comes in two types -- slow and fast. When a fast stream of solar wind overtakes a slower stream, it creates a shock wave, just like a boat moving through a river creates a wave. The wave then spreads out across the solar system. On Jan. 8, 2018, MMS was in just the right spot to see one interplanetary shock as it rolled by.

Catching the Shock

MMS was able to measure the shock thanks to its unprecedentedly fast and high-resolution instruments. One of the instruments aboard MMS is the Fast Plasma Investigation. This suite of instruments can measure ions and electrons around the spacecraft at up to 6 times per second. Since the speeding shock waves can pass the spacecraft in just half a second, this high-speed sampling is essential to catching the shock.
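A quick arithmetic sketch shows why that cadence matters; the half-second crossing time and the six-per-second rate come from the description above, while the slower comparison cadence is an assumed value for illustration.

# How many measurements land inside a half-second shock crossing.
# The 6-per-second cadence is from the article; the slower comparison
# cadence is an assumed value for illustration only.
crossing_time_s = 0.5
fpi_samples_per_s = 6
slower_samples_per_s = 0.25   # assumed: roughly one sample every few seconds

print(crossing_time_s * fpi_samples_per_s)     # ~3 measurements inside the shock
print(crossing_time_s * slower_samples_per_s)  # ~0.1 -> such a crossing would usually be missed

With only a handful of samples available even at the Fast Plasma Investigation's rate, a slower instrument would typically see the shock as, at best, a single blurred data point.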

Looking at the data from Jan. 8, the scientists noticed a clump of ions from the solar wind. Shortly after, they saw a second clump of ions, created by ions already in the area that had bounced off the shock as it passed by. Analyzing this second population, the scientists found evidence to support a theory of energy transfer first posed in the 1980s.

MMS consists of four identical spacecraft, which fly in a tight formation that allows for the 3D mapping of space. Since the four MMS spacecraft were separated by only 12 miles (about 20 kilometers) at the time of the shock -- not hundreds of kilometers, as previous spacecraft had been -- the scientists could also see small-scale irregular patterns in the shock. The event and results were recently published in the Journal of Geophysical Research.

Read more at Science Daily

Climate change conversations can be difficult for both skeptics, environmentalists

Having productive conversations about climate change isn't only challenging when dealing with skeptics, it can also be difficult for environmentalists, according to two studies presented at the annual convention of the American Psychological Association.

The first of the studies found that reinforcing belief and trust in science may be a strategy to help shift the views of climate change skeptics and make them more open to the facts being presented by the other side.

"Within the United States, bipartisan progress on climate change has essentially come to a standstill because many conservatives doubt the findings of climate science and many liberals cannot fathom that any rational human can doubt the scientific consensus on the issue," said Carly D. Robinson, MEd, of Harvard University, who presented the research. "These opposing perspectives do not create a starting point for productive conversations to help our country address climate change. Our goal was to find an intervention that might change the current situation."

Though previous research has shown that social pressure to disbelieve in climate change stems from the political right and that conservatives' trust in science has eroded, Robinson and her colleagues theorized that most people would find at least some branches of science credible. Leveraging those beliefs could lead climate skeptics to shift their views, they said.

"When people are faced with two or more opposing beliefs, ideas and values, it tends to create discomfort, which can lead people to becoming more open-minded about a particular issue," said Christine Vriesema, PhD, of the University of California, Santa Barbara and a co-author of the study.

The researchers surveyed nearly 700 participants from the U.S. Half were given surveys about their belief in science (e.g., "How credible is the medical data that germs are a primary cause of disease?" and "How certain are you that physicists' theory of gravity accurately explains why objects fall when dropped?") and their belief in climate science (e.g., "How credible is the climate science data that ocean temperatures are rising?" and "How certain are you that global warming explains many of the new weather patterns we are seeing today?"). The other half was only surveyed about their belief in climate science. All participants reported if they considered themselves politically liberal, moderate or conservative.

"As we predicted in our pre-registration, conservatives reported a greater belief in climate science if they were asked questions first about their belief in other areas of science," said Robinson. "For climate skeptics, it likely became awkward to report on our survey that they believed in science while at the same time, denying the findings of climate science. That dissonance led many to adjust their beliefs to show greater support for the existence of climate change."

The findings showed that beliefs in climate science are malleable and not fixed, said Robinson.

"We were pleasantly surprised that a brief, two-minute survey changed skeptics' views on climate change," said Robinson. "It is exciting to know that in real-world settings, we might be able to have more productive climate conversations by starting from a place of common belief."

The second study showed that igniting a sense of resilience and perseverance can increase action and engagement around climate change for people who work in aquariums, national parks and zoos.

"Many educators working at these institutions reported wanting to talk about climate change and visitors reported wanting to hear about it, yet many educators still felt uncomfortable bringing the topic into their conversations because they were worried about being able to communicate effectively," said Nathaniel Geiger, PhD, of Indiana University who presented the research.

The study included 203 science educators from zoos, aquariums and national parks who were part of a yearlong communication training program from the National Network of Ocean and Climate Change Interpretation designed to build participants' confidence in talking about climate change. The training consisted of study groups, group assignments, readings, discussions and weekend retreats. During the last six months of the program, participants worked to integrate what they had learned into their jobs.

Survey data were collected one month before and one month after the training program and again six to nine months later.

Geiger and his colleagues examined two components of hopeful thinking to see which one might drive the success of the training program -- agency (e.g., enthusiasm, a sense of determination) and pathways (e.g., resilience and perseverance strategies) -- and how each influenced participants' reports of engagement with climate change.

Participants rated their "agency thinking" (e.g., "I energetically do all I can do to discuss climate change" and "I anticipate that efforts to discuss climate change will be pretty successful") and their "pathways thinking" (e.g., "I can think of many ways to discuss climate change") in each survey. The science educators also reported the frequency with which they discussed climate change with the general public and visitors to their institutions, ranging from never to daily.

Geiger and his team found that pathways thinking was more successful at inspiring conversations about climate change than agency.

"Our findings suggested that portions of the training that taught how to persevere and be resilient in the face of difficult climate change conversations may have been the most effective at promoting discussion," Geiger said.

The training program also increased the frequency with which the science educators spoke about climate change with visitors, from less than once per month prior to the training to more than two or three times per month afterward, he said.

Read more at Science Daily