Mar 14, 2020

Turbulent convection at the heart of stellar activity

Different stars can exhibit different levels of activity. On an astronomical scale, the Sun's activity is rather feeble; other stars are up to ten times more active. While researchers have identified magnetic fields, generated in the interiors of stars by a dynamo process, as the drivers of activity, the exact workings of this dynamo are unclear. Scientists now find that a common, turbulence-dependent dynamo mechanism plays a crucial role in stellar activity at all stages of stellar evolution.

In their interiors, stars are structured in a layered, onion-like fashion. In those with solar-like temperatures, the core is surrounded by the radiation zone, where heat from within is carried outward by radiation. Farther out, where the stellar plasma is cooler, heat transport is dominated by plasma flows: hot plasma from within rises to the surface, cools, and sinks down again. This process is called convection. At the same time, the star's rotation, which varies with stellar latitude, introduces shear motions. Together, both processes twist and twirl magnetic field lines and create a star's complex magnetic fields in a dynamo process that is not yet fully understood.

"Unfortunately, we cannot look directly into the Sun and other stars to see these processes in action, but have to resort to more indirect methods," says Dr. Jyri Lehtinen from the Max Planck Institute for Solar System Research (MPS) in Germany, first author of the new paper published today in Nature Astronomy. In their current study, the researchers compared different stars' activity levels on the one hand, and their rotational and convective properties on the other. The goal was to determine, which properties have a strong influence on activity. This can help to understand the specifics of the dynamo process within.

Several models of the stellar dynamo have been proposed in the past, but two main paradigms prevail. While one of them puts a greater emphasis on rotation and assumes only subtle effects of convective flows, the other depends crucially on turbulent convection. In this type of convection, the hot stellar plasma does not rise to the surface in large-scale, sedate motions. Rather, small-scale, vigorous flows dominate.

In order to find evidence for one or the other of the two paradigms, Lehtinen and his colleagues for the first time took a look at 224 very different stars. Their sample contained both main sequence stars, which are, so to speak, in the prime of their lives, and older, more evolved giant stars. Typically, both the convective and rotational properties of stars change as they age. Compared to main sequence stars, evolved stars exhibit a thicker convection zone, often extending over much of the star's diameter and sometimes replacing the radiation zone completely. This leads to longer turnover times for convective heat transport. At the same time, rotation usually slows down.

For their study, the researchers analyzed a data set obtained at Mount Wilson Observatory in California (USA), which over several years recorded the stars' emission at wavelengths characteristic of calcium ions in the stellar plasma. These emissions are not only correlated with the stars' activity levels; complex data processing also made it possible to infer the stars' rotation periods.

Like the Sun, stars are sometimes dappled with regions of extremely high magnetic field strength, so-called active regions, which are often associated with dark spots on the stars' visible surface. "As a star rotates, these regions come into view and pass out of it, leading to a periodic rise and fall in emission brightness," explains Prof. Dr. Maarit Käpylä from Aalto University in Finland, who also heads the research group "Solar and Stellar Dynamos" at MPS. However, since stellar emissions can also fluctuate due to other effects, identifying periodic variations -- especially over long periods -- is tricky.

"Some of the stars we studied show rotation periods of several hundreds of days, and surprisingly still a magnetic activity level similar to the other stars, and remarkably even magnetic cycles like the Sun," says Dr. Nigul Olspert from MPS, who analyzed the data. The Sun, in comparison, rotates rather briskly with a rotation period of only approximately 25 days at the solar equator. The convective turnover times were calculated by means of stellar structure modelling taking into account each star's mass, chemical composition, and evolutionary stage.

Read more at Science Daily

New type of pulsating star discovered

A star that pulsates on just one side has been discovered in the Milky Way about 1500 light years from Earth. It is the first of its kind to be found and scientists expect to find many more similar systems as technology to listen inside the beating hearts of stars improves.

"What first caught my attention was the fact it was a chemically peculiar star," said co-author Dr Simon Murphy from the Sydney Institute for Astronomy at the University of Sydney. "Stars like this are usually fairly rich with metals -- but this is metal poor, making it a rare type of hot star."

Dr Murphy shared the find with international collaborators, only to discover that others had already started to study the star, known as HD74423, which is about 1.7 times the mass of the Sun.

Together they have published their findings today in Nature Astronomy.

"We've known theoretically that stars like this should exist since the 1980s," said co-author Professor Don Kurtz from the University of Central Lancashire in Britain.

"I've been looking for a star like this for nearly 40 years and now we have finally found one," said Professor Kurtz, who is the inaugural Hunstead Distinguished Visitor at the University of Sydney.

Stars that pulsate have been known in astronomy for a long time. Our own Sun dances to its own rhythms. These rhythmic pulsations of the stellar surface occur in young and in old stars, and can have long or short periods, a wide range of strengths and different causes.

There was, however, one thing that all these stars thus far had in common: the oscillations were always visible on all sides of the star. Now an international team, including researchers from the University of Sydney, has discovered a star that oscillates largely over one hemisphere.

The scientists have identified the cause of the unusual single-sided pulsation: the star is located in a binary star system with a red dwarf. Its close companion distorts the oscillations with its gravitational pull. The clue that led to its discovery came from citizen scientists poring over public data from NASA's TESS satellite, which is hunting for planets around distant stars.

The orbital period of the binary system, at less than two days, is so short that the larger star is being distorted into a tear-drop shape by the gravitational pull of the companion.

Professor Gerald Handler from the Nicolaus Copernicus Astronomical Centre in Poland and lead author said: "The exquisite data from the TESS satellite meant that we could observe variations in brightness due to the gravitational distortion of the star as well as the pulsations."

To their surprise, the team observed that the strength of the pulsations depended on the aspect angle from which the star was observed and on the corresponding orientation of the star within the binary. This means the pulsation strength varies with the same period as the binary's orbit.

"As the binary stars orbit each other we see different parts of the pulsating star," said Dr David Jones at the Instituto de Astrofisica de Canarias and co-author of the study. "Sometimes we see the side that points towards the companion star, and sometimes we see the outer face."

This is how the astronomers could be certain that the pulsations were only found on one side of the star, with the tiny fluctuations in brightness always appearing in their observations when the same hemisphere of the star was pointed towards the telescope.

The discovery of the unusual behaviour of the star was initially made by citizen scientists. These amateur astronomy sleuths painstakingly inspected the enormous amounts of data that TESS regularly supplies, as they search for new and interesting phenomena.

Read more at Science Daily

Mar 13, 2020

Gorillas display territorial behavior

Scientists have discovered that gorillas really are territorial -- and their behaviour is very similar to our own.

Published in the journal Scientific Reports, the research shows for the first time that groups of gorillas recognise "ownership" of specific regions. They are also more likely to avoid contact with other groups the closer they are to the centre of their neighbours' home range, for fear of conflict.

The study, which was carried out by academics from the University of Cambridge, Anglia Ruskin University (ARU), the University of Barcelona, SPAC Scientific Field Station Network, and the University of Vienna, involved monitoring the movements of groups of western lowland gorillas (Gorilla gorilla gorilla).

Western lowland gorillas are difficult to track on foot because they live in dense forests. Instead, the scientists followed eight groups of gorillas using a network of cameras placed at 36 feeding "hotspots" across a 60 km² area of the Odzala-Kokoua National Park in the Republic of Congo.

It was previously thought that gorillas were non-territorial, due to the overlap of home ranges and their tolerance of other groups. This is markedly different to chimpanzees, which display extreme territorial-based violence.

However, this new research discovered that gorillas display more nuanced behaviours, and their movements are strongly influenced by the location of their neighbours -- they are less likely to feed at a site visited by another group that day -- and by the distance from the centre of their neighbours' home range.

Lead author Dr Robin Morrison, who carried out the study during her PhD at the University of Cambridge, said: "Our findings indicate that there is an understanding among gorillas of 'ownership' of areas and the location of neighbouring groups restricts their movement.

"Gorillas don't impose hard boundaries like chimpanzees. Instead, gorilla groups may have regions of priority or even exclusive use close to the centre of their home range, which could feasibly be defended by physical aggression.

"At the same time groups can overlap and even peacefully co-exist in other regions of their ranges. The flexible system of defending and sharing space implies the presence of a complex social structure in gorillas."

Co-author Dr Jacob Dunn, Reader in Evolutionary Biology at Anglia Ruskin University (ARU), said: "This new research changes what we know about how groups of gorillas interact and has implications for what we understand about human evolution.

"Almost all comparative research into human evolution compares us to chimpanzees, with the extreme territorial violence observed in chimpanzees used as evidence that their behaviour provides an evolutionary basis for warfare among humans.

Read more at Science Daily

How stem cells repair damage from heart attacks

Mayo Clinic researchers have uncovered stem cell-activated mechanisms of healing after a heart attack. Stem cells restored cardiac muscle to its condition before the heart attack, in turn providing a blueprint for how stem cells may work.

The study, published in NPJ Regenerative Medicine, finds that human cardiopoietic cells zero in on damaged proteins to reverse complex changes caused by a heart attack. Cardiopoietic cells are derived from adult stem cell sources of bone marrow.

"The extent of change caused by a heart attack is too great for the heart to repair itself or to prevent further damage from occurring. Notably, however, cardiopoietic stem cell therapy reversed, either fully or partially, two-thirds of these disease-induced changes, such that 85% of all cellular functional categories affected by disease responded favorably to treatment," says Andre Terzic, M.D., Ph.D., director of Mayo Clinic's Center for Regenerative Medicine. Dr. Terzic is the senior author of the study.

This new understanding of how stem cells restore heart health could provide the framework for broader applications of stem cell therapy across various conditions.

"The actual mode of action of stem cells in repairing a diseased organ has until now been poorly understood, limiting adoption in clinical care. This study sheds light on the most intimate, yet comprehensive, regenerative mechanisms ? paving a road map for responsible and increasingly informed stem cell application," says Dr. Terzic.

Heart disease is a leading cause of death in the U.S. Every 40 seconds, someone in the U.S. has a heart attack, according to the Centers for Disease Control and Prevention. During a heart attack, cardiac tissue dies, weakening the heart.

"The response of the diseased heart to cardiopoietic stem cell treatment revealed development and growth of new blood vessels, along with new heart tissue," adds Kent Arrell, Ph.D., a Mayo Clinic cardiovascular researcher and first author of the study.

The research


Researchers compared the diseased hearts of mice that did not receive human cardiopoietic stem cell therapy with those that did. Using a data science approach to map all the proteins in the heart muscle, researchers identified 4,000 cardiac proteins, more than 10% of which were damaged by the heart attack.

"While we anticipated that the stem cell treatment would produce a beneficial outcome, we were surprised how far it shifted the state of diseased hearts away from disease and back toward a healthy, pre-disease state," says Dr. Arrell.

Cardiopoietic stem cells are being tested in advanced clinical trials in heart patients.

Read more at Science Daily

COVID-19 appears less severe in children

As outbreaks of COVID-19 disease caused by the novel severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) continue worldwide, there's reassuring evidence that children have fewer symptoms and less severe disease. That's among the insights provided by an expert review in The Pediatric Infectious Disease Journal, the official journal of The European Society for Paediatric Infectious Diseases. The journal is published in the Lippincott portfolio by Wolters Kluwer.

Like previous epidemic coronaviruses, "SARS-CoV-2 [seems] to cause fewer symptoms and less severe disease in children compared with adults," according to the review by Petra Zimmerman, MD, PhD, of the University of Fribourg, Switzerland and Nigel Curtis, FRCPCH, PhD, of The University of Melbourne, Australia. They summarize available evidence on coronavirus infections in children, including COVID-19.

"There is some suggestion that children are just as likely as adults to become infected with the virus but are less likely to be unwell or develop severe symptoms," Drs. Zimmerman and Curtis write. "However, the importance of children in transmitting the virus remains uncertain."

The Evidence on SARS-CoV-2 -- Focusing on Risks to Children

Coronaviruses are a large family of viruses that can cause infection and disease in animals. "Coronaviruses are capable of rapid mutation and recombination, leading to novel coronaviruses that can spread from animals to humans," Drs. Zimmerman and Curtis write. There are four coronaviruses that circulate in humans, mostly causing respiratory and gastrointestinal symptoms -- ranging from the common cold to severe disease.

Over the past two decades, there have been three major disease outbreaks due to novel coronaviruses: SARS-CoV in 2002, MERS-CoV in 2012, and now SARS-CoV-2 in 2019. Arising in the Chinese city of Wuhan, SARS-CoV-2 spread rapidly around the world and has been declared a pandemic by the World Health Organization. "The term COVID-19 is used for the clinical disease caused by SARS-CoV-2," according to the authors. Transmission of SARS-CoV-2 appears similar to that of the related SARS and MERS coronaviruses, but with a lower fatality rate. SARS-CoV-2 can still cause serious and life-threatening infections -- particularly in older people and those with pre-existing health conditions.

What are the risks for children from SARS-CoV-2? It's a pressing question for pediatric infectious disease specialists and concerned parents alike. Children appear to have milder clinical symptoms than adults and to be at substantially lower risk of severe disease -- which was also true in the SARS and MERS epidemics.

In Chinese data from February 2020, children and adolescents accounted for only two percent of SARS-CoV-2 hospitalizations, Drs. Zimmerman and Curtis write. However, because children are less frequently symptomatic and have less severe symptoms, they are less often tested, which might lead to an underestimate of the true numbers infected. Also, children are less frequently exposed to the main sources of transmission.

Again based on Chinese data, "Most infected children recover one to two weeks after the onset of symptoms, and no deaths had been reported by February 2020," the researchers add. Most reported infections with SARS-CoV-2 have occurred in children with a documented household contact. Children with COVID-19 may be more likely to develop gastrointestinal symptoms.

The experts also review the diagnostic findings (laboratory tests and imaging studies) in children with COVID-19. Whole genome sequencing approaches have enabled rapid development of molecular diagnostic tests for SARS-CoV-2. For now, treatment is supportive; no specific antiviral medications are available.

Read more at Science Daily

Heat stress may affect more than 1.2 billion people annually by 2100

Heat stress from extreme heat and humidity will annually affect areas now home to 1.2 billion people by 2100, assuming current greenhouse gas emissions, according to a Rutgers study.

That's more than four times the number of people affected today, and more than 12 times the number who would have been affected without industrial era global warming.

The research is published in the journal Environmental Research Letters.

Rising global temperatures are increasing exposure to heat stress, which harms human health, agriculture, the economy and the environment. Most climate studies on projected heat stress have focused on heat extremes but not considered the role of humidity, another key driver.

"When we look at the risks of a warmer planet, we need to pay particular attention to combined extremes of heat and humidity, which are especially dangerous to human health," said senior author Robert E. Kopp, director of the Rutgers Institute of Earth, Ocean, and Atmospheric Sciences and a professor in the Department of Earth and Planetary Sciences in the School of Arts and Sciences at Rutgers University-New Brunswick.

"Every bit of global warming makes hot, humid days more frequent and intense. In New York City, for example, the hottest, most humid day in a typical year already occurs about 11 times more frequently than it would have in the 19th century," said lead author Dawei Li, a former Rutgers post-doctoral associate now at the University of Massachusetts.

Heat stress is caused by the body's inability to cool down properly through sweating. Body temperature can rise rapidly, and high temperatures may damage the brain and other vital organs. Heat stress ranges from milder conditions like heat rash and heat cramps to heat exhaustion, the most common type. Heat stroke, the most serious heat-related illness, can kill or cause permanent disability without emergency treatment, according to the U.S. Centers for Disease Control and Prevention.

The study looked at how combined extremes of heat and humidity increase on a warming Earth, using 40 climate simulations to get statistics on rare events. The study focused on a measure of heat stress that accounts for temperature, humidity and other environmental factors, including wind speed, sun angle and solar and infrared radiation.
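
The paper defines its own heat-stress measure; for illustration only, one widely used index of this kind is the outdoor wet-bulb globe temperature (WBGT), which folds humidity and wind into a natural wet-bulb reading and folds sun angle and radiation into a black-globe reading. A minimal sketch, assuming the component temperatures are already measured:

    def wbgt_outdoor(t_natural_wet_bulb, t_globe, t_air):
        """Standard outdoor wet-bulb globe temperature, in degrees Celsius.

        t_natural_wet_bulb -- natural wet-bulb temperature (captures humidity and wind)
        t_globe            -- black-globe temperature (captures solar and infrared radiation)
        t_air              -- dry-bulb air temperature
        """
        return 0.7 * t_natural_wet_bulb + 0.2 * t_globe + 0.1 * t_air

    # A hot, humid, sunny afternoon -- above the thresholds commonly cited as
    # unsafe for sustained heavy outdoor work.
    print(wbgt_outdoor(t_natural_wet_bulb=28.0, t_globe=45.0, t_air=35.0))  # 32.1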

Annual exposure to extreme heat and humidity in excess of safety guidelines is projected to affect areas currently home to about 500 million people if the planet warms by 1.5 degrees Celsius (2.7 degrees Fahrenheit) and nearly 800 million at 2 degrees Celsius (3.6 degrees Fahrenheit). The planet has already warmed by about 1.2 degrees Celsius (2.2 degrees Fahrenheit) above late 19th century levels.

An estimated 1.2 billion people would be affected with 3 degrees Celsius (5.4 degrees Fahrenheit) of warming, as expected by the end of this century under current global policies.

In New York City, extreme heat and humidity, comparable to the worst day in a typical year today, is projected to occur on four days in a typical year with global warming of 1.5 degrees Celsius (2.7 degrees Fahrenheit) and about eight days per year with warming of 2 degrees Celsius (3.6 degrees Fahrenheit). With 3 degrees Celsius (5.4 degrees Fahrenheit) of warming, extreme heat and humidity are projected to occur for about 24 days in a typical year.

Read more at Science Daily

Mar 12, 2020

Water, carbon and nitrogen were not immediately supplied to Earth

Spearheaded by earth scientists of the University of Cologne, an international team of geologists has found evidence that a large proportion of the elements that are important for the formation of oceans and life, such as water, carbon and nitrogen, were delivered to Earth very late in its history. Previously, many scientists believed that these elements were already present when the Earth began to form. However, geological investigations have now shown that most of the water was in fact only delivered to Earth when its formation was almost complete.

The new findings, which are a result of collaboration among scientists from Germany, Denmark, Wales, Australia and Japan, will be published in Nature under the title 'Ruthenium isotope vestige of Earth's pre-late veneer mantle preserved in Archean rocks' on 11 March 2020.

It is a generally accepted fact that volatile elements such as water originate from asteroids, the 'planetary building blocks' that formed in the outer solar system. However, there is ongoing discussion among experts as to when precisely they came to Earth. 'We have now been able to narrow down the timeframe much more precisely', said first author Dr. Mario Fischer-Gödde from the Institute of Geology and Mineralogy at the University of Cologne. 'To do so, we compared the composition of the oldest, approximately 3.8 billion-year-old mantle rocks from the Archean Eon with the composition of the asteroids from which they may have formed, and with the present-day composition of the Earth's mantle.'

To constrain the delivery of the so-called 'volatile' elements to Earth, the researchers measured the isotope abundances of a very rare platinum metal called ruthenium, which was already present in Earth's mantle by Archean time. Like a genetic fingerprint, this rare platinum metal is an indicator for the late growth phase of the Earth. 'Platinum group metals like ruthenium have an extremely high tendency to combine with iron. Therefore, when the Earth formed all ruthenium must have been completely sequestered into the Earth's metallic core', said Fischer-Gödde.

Professor Dr. Carsten Münker added: 'If we still find traces of the rare platinum metals in the Earth's mantle, we can assume that they were only added after the formation of the core was completed. They were certainly added during later collisions of the Earth with asteroids or smaller protoplanets, so-called planetesimals.'

Scientists refer to these very late building blocks of the Earth, which were delivered by these collisions, as the 'late veneer'. If ruthenium was added during this stage, it is distributed and well mixed into Earth's mantle by now. The old Archean mantle relics in Greenland, on the other hand, have still preserved Earth's pristine composition.

'The up to 3.8 billion-year-old rocks from Greenland are the oldest preserved mantle rocks. They allow us a glimpse into the early history of the Earth as if through a window', Fischer-Gödde said. Interestingly, Earth's oldest mantle is openly accessible in surface outcrops in southwest Greenland, allowing the geologists to easily collect rock samples.

The pristine ruthenium preserved in the old mantle rocks most likely originates from the inner part of the solar system, the two Cologne-based geologists report. It is presumably the same material that -- for the most part -- also formed Mercury and Venus. The reference values for the asteroidal ruthenium were previously obtained from meteorites found on Earth.

'Our findings suggest that water and other volatile elements such as carbon and nitrogen did indeed arrive on Earth very late, during the "late veneer" phase', Fischer-Gödde concluded. This result is surprising because the scientific community had previously assumed that water-bearing planetary building blocks were already delivered to Earth during the early stages of its formation.

Read more at Science Daily

Environmental DNA in rivers offers new tool for detecting wildlife communities

Ecologists in England and Scotland, collaborating with ecologists Christopher Sutherland and Joseph Drake at the University of Massachusetts Amherst, report this week on a new method of identifying an "entire community of mammals" -- including elusive and endangered species that are otherwise difficult to monitor -- by collecting DNA from river water.

"Some mammal species are notoriously difficult to monitor," says environmental conservation Ph.D. student Drake. He adds that traditional survey methods are often tailored to a specific species, and therefore don't guarantee the detection of many other important species that are also present. Camera traps have improved the way conservation scientists study wildlife, but environmental DNA (eDNA) methods may offer a monitoring tool that could revolutionize conservation and ecology research, Sutherland adds, but the method required testing.

He adds, "We knew the potential of eDNA was massive, but when it comes to conservation, it is extremely important that we validate new approaches, and that's what we set out to do in this study." Details of their international collaborative work are in the Journal of Applied Ecology.

Naiara Sales of the University of Salford, U.K. and UMass's Drake led this study, in collaboration with researchers from the University of Tromsø, Norway, the universities of Aberdeen, Hull and Sheffield, and Liverpool John Moores University in the U.K. The research takes advantage of the fact that DNA shed from animals, either directly in the water or washed into the river, provides a snapshot of the local mammal community.

Mammalogist Allan McDevitt of the University of Salford points out, "We currently use many ways of detecting and monitoring mammals, from looking for signs such as footprints or feces, to using camera traps to take photos of them over several weeks. Now, we may just simply need to collect a few bottles of water and take it to the laboratory and look at the DNA we find."

To test this, the researchers collected water and sediment from streams and rivers in Scotland and England. They found DNA from over 20 wild British mammals and compared the results to historical records, field signs such as fecal samples and cameras. They report that eDNA "provided a similar or better performance in detecting water voles, for example, when compared to looking for water voles using field signs or cameras."

They add that accurately assessing the conservation status and distribution of mammals is increasingly important as many species' populations decline worldwide. Further, surveys using traps, trail cameras and field signs are time-consuming and costly.

McDevitt's group, which has collaborated on several studies with Sutherland's group at UMass Amherst as well as with researchers at the universities of Aberdeen and Hull, now believes that sampling water bodies is an effective way of capturing all the mammals present within a watershed. McDevitt says, "We are always looking for ways to improve biodiversity assessments and monitoring, and we need to find methods which can be applied universally and cost-effectively."

Read more at Science Daily

New minor planets found beyond Neptune

Using data from the Dark Energy Survey (DES), researchers have found more than 300 trans-Neptunian objects (TNOs), minor planets located in the far reaches of the solar system, including more than 100 new discoveries. Published in The Astrophysical Journal Supplement Series, the study also describes a new approach for finding similar types of objects and could aid future searches for the hypothetical Planet Nine and other undiscovered planets. The work was led by graduate student Pedro Bernardinelli and professors Gary Bernstein and Masao Sako.

The goal of DES, which completed six years of data collection in January, is to understand the nature of dark energy by collecting high-precision images of the southern sky. While DES wasn't specifically designed with TNOs in mind, its breadth and depth of coverage made it particularly adept at finding new objects beyond Neptune. "The number of TNOs you can find depends on how much of the sky you look at and what's the faintest thing you can find," says Bernstein.

Because DES was designed to study galaxies and supernovas, the researchers had to develop a new way to track movement. Dedicated TNO surveys take measurements as frequently as every hour or two, which allows researchers to more easily track their movements. "Dedicated TNO surveys have a way of seeing the object move, and it's easy to track them down," says Bernardinelli. "One of the key things we did in this paper was figure out a way to recover those movements."

Using the first four years of DES data, Bernardinelli started with a dataset of 7 billion "dots," all of the possible objects detected by the software that were above the image's background levels. He then removed any objects that were present on multiple nights -- things like stars, galaxies, and supernovae -- to build a "transient" list of 22 million objects before commencing a massive game of "connect the dots," looking for nearby pairs or triplets of detected objects to help determine where the object would appear on subsequent nights.
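
The paper's linking algorithm is far more sophisticated than this (it fits physically plausible orbits rather than straight lines), but the gist of the "connect the dots" step can be sketched as follows; the motion cap and matching tolerance below are invented purely for illustration.

    from dataclasses import dataclass

    @dataclass
    class Detection:
        night: float  # observation epoch, in days
        ra: float     # right ascension, in degrees
        dec: float    # declination, in degrees

    MAX_RATE = 0.05   # assumed cap on apparent motion (degrees/day) for a slow, distant object
    TOLERANCE = 0.01  # assumed matching tolerance (degrees) around the extrapolated position

    def link_candidates(transients):
        """Toy linker: pair detections with TNO-like motion, extrapolate linearly,
        and keep triplets where a later detection lands near the prediction."""
        triplets = []
        for i, a in enumerate(transients):
            for b in transients[i + 1:]:
                dt = b.night - a.night
                if dt <= 0:
                    continue
                rate_ra = (b.ra - a.ra) / dt
                rate_dec = (b.dec - a.dec) / dt
                if max(abs(rate_ra), abs(rate_dec)) > MAX_RATE:
                    continue  # moving too fast to be a distant trans-Neptunian object
                for c in transients:
                    dt2 = c.night - b.night
                    if dt2 <= 0:
                        continue
                    if (abs(c.ra - (b.ra + rate_ra * dt2)) < TOLERANCE
                            and abs(c.dec - (b.dec + rate_dec * dt2)) < TOLERANCE):
                        triplets.append((a, b, c))
        return triplets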

With the 7 billion dots whittled down to a list of around 400 candidates that were seen over at least six nights of observation, the researchers then had to verify their results. "We have this list of candidates, and then we have to make sure that our candidates are actually real things," Bernardinelli says.

To filter their list of candidates down to actual TNOs, the researchers went back to the original dataset to see if they could find more images of the object in question. "Say we found something on six different nights," Bernstein says. "For TNOs that are there, we actually pointed at them for 25 different nights. That means there's images where that object should be, but it didn't make it through the first step of being called a dot."

Bernardinelli developed a way to stack multiple images to create a sharper view, which helped confirm whether a detected object was a real TNO. They also verified that their method was able to spot known TNOs in the areas of the sky being studied and that they were able to spot fake objects that were injected into the analysis. "The most difficult part was trying to make sure that we were finding what we were supposed to find," says Bernardinelli.
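
The stacking step is described only qualitatively here; under the simplifying assumptions of integer-pixel shifts and plain averaging, the general shift-and-stack idea might look like this:

    import numpy as np

    def shift_and_stack(cutouts, offsets):
        """Co-add image cutouts after shifting each one by the candidate's predicted
        motion, so a faint moving source builds up while the background averages out.

        cutouts -- list of 2D arrays centred on the predicted positions
        offsets -- list of (dy, dx) integer pixel shifts derived from the candidate's motion
        """
        stacked = np.zeros_like(cutouts[0], dtype=float)
        for image, (dy, dx) in zip(cutouts, offsets):
            stacked += np.roll(image, shift=(-dy, -dx), axis=(0, 1))
        return stacked / len(cutouts)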

After many months of method-development and analysis, the researchers found 316 TNOs, including 245 discoveries made by DES and 139 new objects that were not previously published. With only 3,000 objects currently known, this DES catalog represents 10% of all known TNOs. Pluto, the best-known TNO, is 40 times farther away from the sun than Earth is, and the TNOs found using the DES data range from 30 to 90 times Earth's distance from the sun. Some of these objects are on extremely long-distance orbits that will carry them far beyond Pluto.

Now that DES is complete, the researchers are rerunning their analysis on the entire DES dataset, this time with a lower threshold for object detection at the first filtering stage. This means that there's an even greater potential for finding new TNOs, possibly as many as 500, based on the researchers' estimates, in the near future.

The method developed by Bernardinelli can also be used to search for TNOs in upcoming astronomy surveys, including the new Vera C. Rubin Observatory. This observatory will survey the entire southern sky and will be able to detect even fainter and more distant objects than DES. "Many of the programs we've developed can be easily applied to any other large datasets, such as what the Rubin Observatory will produce," says Bernardinelli.

This catalog of TNOs will also be a useful scientific tool for research about the solar system. Because DES collects a wide spectrum of data on each detected object, researchers can attempt to figure out where each TNO originated, since objects that formed closer to the Sun are expected to have different colors than those that originated in more distant and colder locations. And, by studying the orbits of these objects, researchers might be one step closer to finding Planet Nine, a hypothesized Neptune-sized planet that's thought to exist beyond Pluto.

Read more at Science Daily

'Spillway' for electrons could keep lithium metal batteries from catching fire

Nanoengineers at the University of California San Diego developed a safety feature that prevents lithium metal batteries from rapidly heating up and catching fire in case of an internal short circuit.

The team made a clever tweak to the part of the battery called the separator, which serves as a barrier between the anode and cathode, so that it slows down the flow of energy (and thus heat) that builds up inside the battery when it short circuits.

The researchers, led by UC San Diego nanoengineering professor Ping Liu and his Ph.D. student Matthew Gonzalez, detail their work in a paper published in Advanced Materials.

"We're not trying to stop battery failure from happening. We're making it much safer so that when it does fail, the battery doesn't catastrophically catch on fire or explode," said Gonzalez, who is the paper's first author.

Lithium metal batteries fail because of the growth of needle-like structures called dendrites on the anode after repeated charging. Over time, dendrites grow long enough to pierce through the separator and create a bridge between the anode and cathode, causing an internal short circuit. When that happens, the flow of electrons between the two electrodes gets out of control, causing the battery to instantly overheat and stop working.

The separator that the UC San Diego team developed essentially softens this blow. One side is covered by a thin, partially conductive web of carbon nanotubes that intercepts any dendrites that form. When a dendrite punctures the separator and hits this web, electrons now have a pathway through which they can slowly drain out rather than rush straight towards the cathode all at once.

Gonzalez compared the new battery separator to a spillway at a dam.

"When a dam starts to fail, a spillway is opened up to let some of the water trickle out in a controlled fashion so that when the dam does break and spill out, there's not a lot of water left to cause a flood," he said. "That's the idea with our separator. We are draining out the charge much, much slower and prevent a 'flood' of electrons to the cathode. When a dendrite gets intercepted by the separator's conductive layer, the battery can begin to self-discharge so that when the battery does short, there's not enough energy left to be dangerous."

Other battery research efforts focus on building separators out of materials that are strong enough to block dendrites from breaking through. But a problem with this approach is that it just prolongs the inevitable, Gonzalez said. These separators still need to have pores that let ions flow through in order for the battery to work. As a consequence, when the dendrites eventually make it through, the short circuit will be even worse.

Rather than block dendrites, the UC San Diego team sought to mitigate their effects.

In tests, lithium metal batteries equipped with the new separator showed signs of gradual failure over 20 to 30 cycles. Meanwhile, batteries with a normal (and slightly thicker) separator experienced abrupt failure in a single cycle.

"In a real use case scenario, you wouldn't have any advance warning that the battery is going to fail. It could be fine one second, then catch on fire or short out completely the next. It's unpredictable," Gonzalez said. "But with our separator, you would get advance warning that the battery is getting a little bit worse, a little bit worse, a little bit worse, each time you charge it."

Read more at Science Daily

Mar 11, 2020

Dinosaur stomping ground in Scotland reveals thriving middle Jurassic ecosystem

During the Middle Jurassic Period, the Isle of Skye in Scotland was home to a thriving community of dinosaurs that stomped across the ancient coastline, according to a study published March 11, 2020 in the open-access journal PLOS ONE by Paige dePolo and Stephen Brusatte of the University of Edinburgh, Scotland and colleagues.

The Middle Jurassic Period was a time of major evolutionary diversification in many dinosaur groups, but dinosaur fossils from this period are generally rare. The Isle of Skye in Scotland is an exception, yielding body and trace fossils of diverse Middle Jurassic ecosystems and serving as a valuable location for paleontological science as well as tourism.

In this paper, dePolo and colleagues describe two recently discovered fossil sites preserving around 50 dinosaur footprints on ancient coastal mudflats. These include the first record on the Isle of Skye of a track type called Deltapodus, most likely created by a stegosaurian (plate-backed) dinosaur. These are the oldest Deltapodus tracks known, and the first strong evidence that stegosaurian dinosaurs were part of the island's Middle Jurassic fauna. Additionally, three-toed footprints represent multiple sizes of early carnivorous theropods and a series of other large tracks are tentatively identified as some of the oldest evidence of large-bodied herbivorous ornithopod dinosaurs.

All tracks considered, these two sites expand the known diversity of what was apparently a thriving ecosystem of Middle Jurassic dinosaurs in Scotland, including at least one type of dinosaur (stegosaurs) not previously known from the region. These findings reflect the importance of footprints as a source of information supplemental to body fossils. Furthermore, the authors stress the importance of revisiting previously explored sites; these new sites were found in an area that has long been popular for fossil prospecting, but the trackways were only recently revealed by storm activity.

Lead author dePolo says: "These new tracksites help us get a better sense of the variety of dinosaurs that lived near the coast of Skye during the Middle Jurassic than what we can glean from the island's body fossil record. In particular, Deltapodus tracks give good evidence that stegosaurs lived on Skye at this time."

Author Brusatte adds: "These new tracksites give us a much clearer picture of the dinosaurs that lived in Scotland 170 million years ago. We knew there were giant long-necked sauropods and jeep-sized carnivores, but we can now add plate-backed stegosaurs to that roster, and maybe even primitive cousins of the duck-billed dinosaurs too. These discoveries are making Skye one of the best places in the world for understanding dinosaur evolution in the Middle Jurassic."

Read more at Science Daily

Scientists find Earth and moon not identical oxygen twins

Scientists at The University of New Mexico have found that the Earth and Moon have distinct oxygen compositions and are not identical in oxygen as previously thought, according to a new study released today in Nature Geoscience.

The paper, titled Distinct oxygen isotope compositions of the Earth and Moon, may challenge the current understanding of the formation of the Moon.

Previous research led scientists to develop the Giant Impact Hypothesis, suggesting the Moon was formed from debris following a giant collision between the early Earth and a proto-planet named Theia. The Earth and Moon are geochemically similar; samples returned from the Moon by the Apollo missions showed a near-identical composition in oxygen isotopes.

Although the Giant Impact Hypothesis can nicely explain many of the geochemical similarities between Earth and Moon, the extreme similarity in oxygen isotopes has been difficult to rationalize with this scenario: either the two bodies were compositionally identical in oxygen isotopes to start with, which is unlikely, or their oxygen isotopes were fully mixed in the aftermath of the impact, which has been difficult to model in simulations.

"Our findings suggest that the deep lunar mantle may have experienced the least mixing and is most representative of the impactor Theia," said Erick Cano. "The data imply the distinct oxygen isotope compositions of Theia and Earth were not completely homogenized by the Moon-forming impact and provides quantitative evidence that Theia could have formed farther from the Sun than did Earth."

To arrive at their findings, Cano, a research scientist, along with colleagues Zach Sharp and Charles Shearer from UNM's Department of Earth and Planetary Sciences, conducted high-precision measurements of the oxygen isotopic composition of a range of lunar samples at UNM's Center for Stable Isotopes (CSI). The samples included basalts, highland anorthosites, norites and volcanic glass, a product of uncrystallized rapidly cooled magma.

They found that the oxygen isotopic composition varied depending on the type of rock tested. This may be due to the degree of mixing between the molten Moon and the vapor atmosphere following the impact. Oxygen isotopes from samples taken from the deep lunar mantle were the most different from oxygen isotopes from Earth.

"This data suggests that the deep lunar mantle may have experienced the least mixing and is most representative of the impactor Theia," said Sharp. "Based on the results from our isotopic analysis, Theia would have an origin farther out from the Sun relative to Earth and shows that Theia's distinct oxygen isotope composition was not completely lost through homogenization during the giant impact."

Read more at Science Daily

Exoplanet where it rains iron discovered

Researchers using ESO's Very Large Telescope (VLT) have observed an extreme planet where they suspect it rains iron. The ultra-hot giant exoplanet has a day side where temperatures climb above 2400 degrees Celsius, high enough to vaporise metals. Strong winds carry iron vapour to the cooler night side where it condenses into iron droplets.

"One could say that this planet gets rainy in the evening, except it rains iron," says David Ehrenreich, a professor at the University of Geneva in Switzerland. He led a study, published today in the journal Nature, of this exotic exoplanet. Known as WASP-76b, it is located some 640 light-years away in the constellation of Pisces.

This strange phenomenon happens because the 'iron rain' planet only ever shows one face, its day side, to its parent star, its cooler night side remaining in perpetual darkness. Like the Moon on its orbit around the Earth, WASP-76b is 'tidally locked': it takes as long to rotate around its axis as it does to go around the star.

On its day side, it receives thousands of times more radiation from its parent star than the Earth does from the Sun. It's so hot that molecules separate into atoms, and metals like iron evaporate into the atmosphere. The extreme temperature difference between the day and night sides results in vigorous winds that bring the iron vapour from the ultra-hot day side to the cooler night side, where temperatures decrease to around 1500 degrees Celsius.

Not only does WASP-76b have different day-night temperatures, it also has distinct day-night chemistry, according to the new study. Using the new ESPRESSO instrument on ESO's VLT in the Chilean Atacama Desert, the astronomers identified for the first time chemical variations on an ultra-hot gas giant planet. They detected a strong signature of iron vapour at the evening border that separates the planet's day side from its night side. "Surprisingly, however, we do not see the iron vapour in the morning," says Ehrenreich. The reason, he says, is that "it is raining iron on the night side of this extreme exoplanet."

"The observations show that iron vapour is abundant in the atmosphere of the hot day side of WASP-76b," adds María Rosa Zapatero Osorio, an astrophysicist at the Centre for Astrobiology in Madrid, Spain, and the chair of the ESPRESSO science team. "A fraction of this iron is injected into the night side owing to the planet's rotation and atmospheric winds. There, the iron encounters much cooler environments, condenses and rains down."

This result was obtained from the very first science observations done with ESPRESSO, in September 2018, by the scientific consortium who built the instrument: a team from Portugal, Italy, Switzerland, Spain and ESO.

ESPRESSO -- the Echelle SPectrograph for Rocky Exoplanets and Stable Spectroscopic Observations -- was originally designed to hunt for Earth-like planets around Sun-like stars. However, it has proven to be much more versatile. "We soon realised that the remarkable collecting power of the VLT and the extreme stability of ESPRESSO made it a prime machine to study exoplanet atmospheres," says Pedro Figueira, ESPRESSO instrument scientist at ESO in Chile.

Read more at Science Daily

Community factors influence how long you'll live, study shows

While lifestyle choices and genetics go a long way toward predicting longevity, a new study shows that certain community characteristics also play important roles. American communities with more fast food restaurants, a larger share of extraction industry-based jobs, or higher population density have shorter life expectancies, according to researchers from Penn State, West Virginia, and Michigan State Universities. Their findings can help communities identify and implement changes that may promote longer lifespans among their residents.

"American life expectancy recently declined for the first time in decades, and we wanted to explore the factors contributing to this decline. Because of regional variation in life expectancy, we knew community-level factors must matter," said Elizabeth Dobis, a postdoctoral scholar at the Penn State-based Northeast Regional Center for Rural Development (NERCRD), and lead author of the study. "By analyzing place-based factors alongside personal factors, we were able to draw several conclusions about which community characteristics contribute most strongly to this variation in life expectancy."

Life expectancy refers to the length of time a person born in a given year can expect to live. Dobis and her colleagues analyzed, on a county-by-county basis, how life expectancy in 2014 had changed from a 1980 baseline, using data from more than 3,000 U.S. counties.

They developed a statistical model to determine the relationship between a dozen community variables and each county's 2014 life expectancy, while controlling for personal variables that are known to be important, such as sex, race, education, single-parent status, obesity, and alcohol use.

The community variables they examined included health care access, population growth and density, fast food restaurants, healthy food access, employment by sector, urbanization, and social capital, which measures the networks and bonds providing social cohesion among residents. They looked at each variable in isolation while holding others constant, allowing them to determine which variables independently exert the strongest effect on life expectancy.
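
The published analysis is considerably richer (it reports separate estimates for men and women and tests for spatial clustering, for example), but the core idea of regressing county life expectancy on community variables while holding the 1980 baseline and personal covariates constant can be sketched roughly as below; the file name and every column name are invented for illustration.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical county-level table; all variable names here are assumptions.
    counties = pd.read_csv("county_life_expectancy.csv")

    model = smf.ols(
        "life_exp_2014 ~ life_exp_1980 + fast_food_share + pop_density"
        " + extraction_jobs_share + social_capital + physician_access"
        " + obesity_rate + alcohol_use + single_parent_share",
        data=counties,
    ).fit()

    # Each coefficient is the change in 2014 life expectancy (in years) per unit
    # change in that variable, holding the other variables constant.
    print(model.summary())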

The researchers found that a county's 1980 life expectancy value strongly predicted variations in the 2014 value, but it didn't account for all of the variation.

"When we controlled for historical life expectancy, we found three additional community factors that each exert a significant negative effect -- a greater number of fast food restaurants, higher population density, and a greater share of jobs in mining, quarrying, and oil and gas extraction," Dobis said. "For example, for every one percentage point increase in the number of fast food restaurants in a county, life expectancy declined by .004 years for men and .006 years for women."

This represents a life span 15 to 20 days shorter for every man, woman and child in a community for each 10-percentage-point increase in fast food restaurants -- or 150 to 200 days shorter if the number of fast food restaurants were to double.

Similarly, a one percent increase in a county's share of jobs in the mining, quarrying, oil and gas sectors was found to decrease average life expectancy by .04 years for men (or 15 days) and .06 years (22 days) for women.
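
The day counts quoted in the two paragraphs above follow directly from converting the per-point coefficients from years into days; a quick back-of-the-envelope check:

    DAYS_PER_YEAR = 365.25

    # Fast food restaurants: years of life expectancy lost per one-percentage-point increase.
    print(10 * 0.004 * DAYS_PER_YEAR)  # ~14.6 days for men, per 10-point increase
    print(10 * 0.006 * DAYS_PER_YEAR)  # ~21.9 days for women, per 10-point increase

    # Extraction-sector jobs: years lost per one-point increase in job share.
    print(0.04 * DAYS_PER_YEAR)  # ~14.6 days for men
    print(0.06 * DAYS_PER_YEAR)  # ~21.9 days for women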

The research, which was published recently in Social Science and Medicine, also revealed several community factors that are positively related to life expectancy, including a growing population, good access to physicians, and a greater level of social cohesion.

"We were surprised by the strong positive contribution of social capital to life expectancy within communities," said NERCRD Director Stephan Goetz, professor of agricultural economics and regional economics at Penn State and a co-author on the study. "Places with residents who stick together more on a community or social level also appear to do a better of job of helping people in general live longer."

"Another interesting finding was that lower population density, or living in more rural areas, is associated with higher life expectancy," Goetz said. "This suggests that living in large, densely-settled metropolitan areas, with all of their amenities and other advantages, comes at the expense of lower life expectancy, at least in a statistical sense."

In addition to being the first life-expectancy study to include community variables in a county-level analysis, this also was the first study to statistically analyze the extent to which disparities in life expectancy are geographically clustered. This analysis revealed some striking patterns.

"We found exceptionally low life expectancies in the areas of the Pine Ridge and Rosebud Reservations in South Dakota," Dobis said. "We found similar 'cold spots' of low life expectancy in the arctic and interior portions of Alaska, the Deep South surrounding the Mississippi River, and in the Appalachian regions of Kentucky and West Virginia."

The research also revealed four "hot spots" of high life expectancy: a section of the Northeast spanning from Philadelphia to New England, southern Minnesota and the eastern Dakotas into Nebraska, an area in Colorado, and an area spanning central Idaho into the upper Rocky Mountains.

The team's findings have important policy implications, as they suggest that certain aspects of the built environment can be changed to enhance life expectancy. For example, public places that promote social interaction could increase a community's social capital levels, which in turn promote longer lifespans.

Read more at Science Daily

Mar 10, 2020

How plants protect themselves from sun damage

For plants, sunlight can be a double-edged sword. They need it to drive photosynthesis, the process that allows them to store solar energy as sugar molecules, but too much sun can dehydrate and damage their leaves.

A primary strategy that plants use to protect themselves from this kind of photodamage is to dissipate the extra light as heat. However, there has been much debate over the past several decades about how plants actually achieve this.

"During photosynthesis, light-harvesting complexes play two seemingly contradictory roles. They absorb energy to drive water-splitting and photosynthesis, but at the same time, when there's too much energy, they have to also be able to get rid of it," says Gabriela Schlau-Cohen, the Thomas D. and Virginia W. Cabot Career Development Assistant Professor of Chemistry at MIT.

In a new study, Schlau-Cohen and colleagues at MIT, the University of Pavia, and the University of Verona directly observed, for the first time, one of the possible mechanisms that have been proposed for how plants dissipate energy. The researchers used a highly sensitive type of spectroscopy to determine that excess energy is transferred from chlorophyll, the pigment that gives leaves their green color, to other pigments called carotenoids, which can then release the energy as heat.

"This is the first direct observation of chlorophyll-to-carotenoid energy transfer in the light-harvesting complex of green plants," says Schlau-Cohen, who is the senior author of the study. "That's the simplest proposal, but no one's been able to find this photophysical pathway until now."

MIT graduate student Minjung Son is the lead author of the study, which appears today in Nature Communications. Other authors are Samuel Gordon '18, Alberta Pinnola of the University of Pavia, in Italy, and Roberto Bassi of the University of Verona.

Excess energy

When sunlight strikes a plant, specialized proteins known as light-harvesting complexes absorb light energy in the form of photons, with the help of pigments such as chlorophyll. These photons drive the production of sugar molecules, which store the energy for later use.

Much previous research has shown that plants are able to quickly adapt to changes in sunlight intensity. In very sunny conditions, they convert only about 30 percent of the available sunlight into sugar, while the rest is released as heat. If this excess energy is allowed to remain in the plant cells, it creates harmful molecules called free radicals that can damage proteins and other important cellular molecules.

"Plants can respond to fast changes in solar intensity by getting rid of extra energy, but what that photophysical pathway is has been debated for decades," Schlau-Cohen says.

The simplest hypothesis for how plants get rid of these extra photons is that once the light-harvesting complex absorbs them, chlorophylls pass them to nearby molecules called carotenoids. Carotenoids, which include lycopene and beta-carotene, are very good at getting rid of excess energy through rapid vibration. They are also skillful scavengers of free radicals, which helps to prevent damage to cells.

A similar type of energy transfer has been observed in bacterial proteins that are related to chlorophyll, but until now, it had not been seen in plants. One reason why it has been hard to observe this phenomenon is that it occurs on a very fast time scale (femtoseconds, or quadrillionths of a second). Another obstacle is that the energy transfer spans a broad range of energy levels. Until recently, existing methods for observing this process could only measure a small swath of the spectrum of visible light.

In 2017, Schlau-Cohen's lab developed a modification to a femtosecond spectroscopic technique that allows them to look at a broader range of energy levels, spanning red to blue light. This meant that they could monitor energy transfer between chlorophylls, which absorb red light, and carotenoids, which absorb blue and green light.

In this study, the researchers used this technique to show that photons move from an excited state, which is spread over multiple chlorophyll molecules within a light-harvesting complex, to nearby carotenoid molecules within the complex.

"By broadening the spectral bandwidth, we could look at the connection between the blue and the red ranges, allowing us to map out the changes in energy level. You can see energy moving from one excited state to another," Schlau-Cohen says.

Once the carotenoids accept the excess energy, they release most of it as heat, preventing light-induced damage to the cells.

Boosting crop yields


The researchers performed their experiments in two different environments -- one in which the proteins were in a detergent solution, and one in which they were embedded in a special type of self-assembling membrane called a nanodisc. They found that the energy transfer occurred more rapidly in the nanodisc, suggesting that environmental conditions affect the rate of energy dissipation.

It remains a mystery exactly how excess sunlight triggers this mechanism within plant cells. Schlau-Cohen's lab is now exploring whether the organization of chlorophylls and carotenoids within the chloroplast membrane plays a role in activating the photoprotection system.

A better understanding of plants' natural photoprotection system could help scientists develop new ways to improve crop yields, Schlau-Cohen says. A 2016 paper from University of Illinois researchers showed that by overproducing all of the proteins involved in photoprotection, crop yields could be boosted by 15 to 20 percent. That paper also suggested that production could be further increased to a theoretical maximum of about 30 percent.

Read more at Science Daily

Our brains are powerful -- but secretive -- forecasters of video virality

When Stanford University neuroscientist Brian Knutson tracked his smartphone usage, he was shocked to learn that he spent twice as much time on his phone as he had anticipated.

"In many of our lives, every day, there is often a gap between what we actually do and what we intend to do," said Knutson, who is a professor of psychology in the School of Humanities and Sciences, reflecting on his smartphone habits. "We want to understand how and why people's choices lead to unintended consequences -- like wasting money or even time -- and also whether processes that generate individual choice can tell us something about choices made by large groups of people."

Toward that end, Knutson and colleagues are investigating an approach he calls "neuroforecasting" -- in which they use brain data from individuals who are in the process of making decisions to forecast how larger groups of unrelated people will respond to the same choices. His lab's latest neuroforecasting work in collaboration with researchers at Stanford's Graduate School of Business, published Mar. 9 in the journal Proceedings of the National Academy of Sciences, focused on how people spend time watching videos online.

By scanning people's brains as they selected and watched videos, the researchers discovered that both neural and behavioral responses to a video could forecast how long other people would watch that same video on the internet. When forecasting video popularity on the internet, however, brain responses were the only measure that mattered.

"Here, we have a case where there is information contained in subjects' brain activity that allows us to forecast the behavior of other, unrelated, people -- but it's not necessarily reflected in their self-reports or behavior," explained Lester Tong, a graduate student in the Knutson lab. "One of the key takeaways here is that brain activity matters, and can even reveal hidden information."

Cerebral secrets

The researchers analyzed data from 36 participants, who watched videos while being scanned with a brain imaging technique known as fMRI. The researchers also monitored participants' behavior -- like whether they chose to skip a video -- and asked them questions about each video, like how it made them feel and whether they thought it would be popular. Then, they examined how those same videos performed on the internet in terms of daily views and average duration of viewings.

Because videos are complex and change over time, the researchers specifically examined brain responses to the start and end of videos, as well as average responses to each video. They focused on activity in brain regions previously shown to predict people's willingness to spend money.

The researchers found that longer video views were associated with activity in reward-sensitive regions of the brain, while shorter video views were associated with activity in regions sensitive to arousal or punishment. The subjects' answers to questions about the videos also predicted their own behavior.

When it came to forecasting the behavior of others online, however, the data told a different story. Both the group's behavior and brain activity forecasted how long people would watch the videos online. However, only group brain activity forecasted the popularity (or views per day) of each video online. During just the first four seconds of watching each video, more activity in the brain region associated with anticipating reward forecasted a video's popularity online, whereas heightened activity in the region associated with anticipating punishment forecasted decreased popularity.
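
As a rough sketch of what such a forecast looks like in practice -- not the authors' published analysis pipeline -- one can regress each video's online popularity on group-averaged early brain activity; every variable below is toy, placeholder data.

    # Minimal sketch (toy data, not the published analysis): forecast online popularity
    # from group-averaged early activity in reward- and punishment-related regions.
    import numpy as np

    rng = np.random.default_rng(0)
    n_videos = 40
    # Placeholder predictors: group-averaged activity during each video's first seconds.
    reward_activity = rng.normal(size=n_videos)        # e.g., a reward-sensitive region
    punishment_activity = rng.normal(size=n_videos)    # e.g., a punishment-sensitive region
    # Placeholder outcome: log daily views of the same videos online (toy data).
    log_views = 1.0 * reward_activity - 0.5 * punishment_activity + rng.normal(scale=0.5, size=n_videos)

    # Ordinary least-squares fit: log_views ~ intercept + reward + punishment
    X = np.column_stack([np.ones(n_videos), reward_activity, punishment_activity])
    coefs, *_ = np.linalg.lstsq(X, log_views, rcond=None)
    print(coefs)   # toy data recovers a positive reward weight and a negative punishment weight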

"If we examine our subjects' choices to watch the video or even their reported responses to the videos, they don't tell us about the general response online. Only brain activity seems to forecast a video's popularity on the internet," explained Knutson, who co-leads the NeuroChoice Initiative of the Stanford Wu Tsai Neurosciences Institute.

This and related research indicate that some steps of the choice process may prove more useful for broad neuroforecasting than others. By teasing out the specifics of which steps matter, the researchers think neuroforecasting might even apply across groups of different ages, genders, races or cultures when they show similar early neural responses.

Valuable choices

These findings suggest similarities between neuroforecasting how people spend time and how they spend money online, which the team has previously studied in non-traditional markets, including online markets for micro-loans and crowdfunding.

Knutson and his NeuroChoice colleagues have also been investigating neural mechanisms of choice in the context of drug addiction. In the future, they aim to continue exploring when brain data can complement behavioral data, and in which situations.

Read more at Science Daily

How a virus forms its symmetric shells

Viruses -- small disease-causing parasites that can infect all types of life forms -- have been well studied, but many mysteries linger. One such mystery is how a spherical virus circumvents energy barriers to form symmetric shells.

A research team led by physicist Roya Zandi at the University of California, Riverside, has made progress in solving this mystery. The team reports in a paper published in ACS Nano that an interplay of energies at the molecular level makes the formation of a shell possible.

Understanding the factors that contribute to viral assembly could enable biomedical attempts to block viral replication and infection. A better understanding of how viral shells -- nature's nano-containers -- form is of vital importance to material scientists and a crucial step in the design of engineered nano-shells that could serve as vehicles for delivering drugs to specific targets in the body.

Zandi's team explored the role of protein concentration and elastic energy in the self-organization of proteins on the curved shell surface to understand how a virus circumvents many energy barriers.

"Understanding the combined effect of elastic energy, genome-protein interaction, and protein concentration in the viral assembly constitutes the breakthrough of our work," said Zandi, a professor in the Department of Physics and Astronomy. "Our study shows that if a messy shell forms because of the high protein concentration or strong attractive interaction, then, as the shell grows larger, the cost of elastic energy becomes so high that several bonds can get broken, resulting in the disassembly and subsequent reassembly of a symmetric shell."

What is a virus?

The simplest physical object in biology, a virus consists of a protein shell called the capsid, which protects its nucleic acid genome -- RNA or DNA. Viruses can be thought of as mobile containers of RNA or DNA that insert their genetic material into living cells. They then take over the cells' reproductive machinery to reproduce their own genome and capsid.

Capsid formation is one of the most crucial steps in the process of viral infection. The capsid can be cylindrical or conical in shape, but more commonly it assumes an icosahedral structure, like a soccer ball.

An icosahedron is a geometrical structure with 12 vertices, 20 faces, and 30 edges. An official soccer ball is a kind of icosahedron called a truncated icosahedron; it has 32 panels cut into the shape of 20 hexagons and 12 pentagons, with the pentagons separated from each other by hexagons.
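
As a quick check, both solids satisfy Euler's polyhedron formula V - E + F = 2 (the soccer ball's 60 vertices and 90 edges follow from its 12 pentagons and 20 hexagons):

    # Quick consistency check of the vertex/edge/face counts using Euler's formula.
    shapes = {
        "icosahedron": (12, 30, 20),                        # vertices, edges, faces
        "truncated icosahedron (soccer ball)": (60, 90, 32),
    }
    for name, (v, e, f) in shapes.items():
        print(name, "V - E + F =", v - e + f)               # both print 2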

Viral assembly is not well understood because viruses are very small, measuring in nanometers, a nanometer being one-billionth of a meter. The assembly also happens very quickly, typically in milliseconds, a millisecond being one-thousandth of a second. Theoretical work and simulations are necessary to understand how a virus grows.

"A viral shell is highly symmetric," Zandi said. "If one pentagonal defect forms in the wrong location, it breaks down the symmetry. Despite this sensitivity, viral shells are often assembled into well-defined symmetric structures."

Nano vehicles

Zandi explained that due to a lack of experimental data, the virus assembly process is not well understood. The new work found the elastic properties of capsid proteins and the attractive interaction between them go hand in hand to form highly symmetric configurations that are energetically very stable.

"By fine-tuning these parameters, we can control the final structure and stability of viral capsids," she said. "These viral capsids can be used as nano-containers for transporting drugs as cargo to specific targets. What makes them highly promising for drug delivery and gene delivery purposes is that they are stable, have a high uptake efficiency, and have low toxicity."

Already, some experimental groups are working with pharmaceutical companies to design drugs that interfere with or block viral assembly. Zandi's lab is working with international collaborators to design simulations to better understand virus assembly.

Read more at Science Daily

Knowing more about a virus threat may not satisfy you

People who rate themselves as highly knowledgeable about a new infectious disease threat could also be more likely to believe they don't know enough, a new study suggests.

In the case of this study, the infectious disease threat was the Zika virus. But the authors of the new study, published recently in the journal Risk Analysis, say the results could apply to the recent novel coronavirus (COVID-19) outbreak.

"The Zika virus and the coronavirus have important things in common," said Shelly Hovick, co-author of the study and assistant professor of communication at The Ohio State University.

"In both cases, they are shrouded in uncertainty and have received a lot of media attention. Our research looks at how people seek and process information when there is so much uncertainty."

One of the key findings of the new study: With limited information about Zika available, more knowledge was not that comforting.

"We found that the more people thought they knew, the more they realized they didn't know enough," said Austin Hubner, lead author of the study and a doctoral student in communication at Ohio State.

"With the Zika virus, even the experts themselves didn't know much at the time. That's the same thing we're seeing with the coronavirus, and that's scary for people who believe they are at risk."

For the study, the researchers conducted an online survey of 494 people of childbearing age living in Florida in December 2016.

Florida residents were recruited for the study because the state had the highest number of locally transmitted cases of Zika in the United States at the time.

Although most people infected with Zika don't have symptoms, pregnant women with the virus have a higher likelihood of their child being born with microcephaly, a serious birth defect.

Zika is primarily spread by mosquitoes, but it can also be transmitted from men and women to their sexual partners and through blood transfusions.

In the survey, respondents were asked a variety of questions about their knowledge and attitudes toward seeking information, how they processed what they learned about the Zika virus, and their plans for seeking more information.

As expected, participants who were pregnant or wanted to get pregnant (and men whose wives were in those situations) felt more at risk from Zika and were more likely to say they felt scared of Zika. But they weren't the only ones who felt worried about Zika.

"Novel risks like Zika or coronavirus may make some people react differently than well-known risks like cancer or the flu," Hovick said.

"Even if the data suggest someone is at low risk, the lack of information may make some people feel they are at high risk."

The findings showed that people who felt they didn't know enough about Zika didn't intend to spend more time than others seeking information. That was probably because they realized that there wasn't more information available, Hovick said.

But they did spend more time processing the information they uncovered and were more likely to agree with statements like "After I encounter information about Zika, I am likely to stop and think about it."

These findings suggest it is important for public health agencies to continuously update the public, Hovick said. Those who are worried or concerned about risks such as Zika are likely to process the information they encounter deeply, but they may not seek information on their own.

Participants were also more likely to intend to seek information about Zika if they believed other people expected them to do so. They were more likely to want to search for information if they agreed with statements like "People in my life whose opinions I value seek information about Zika."

"We should aim not just to provide information, but also shape messages that encourage people to stay on top of the situation, particularly in high-uncertainty environments," Hovick said.

"You have to make it clear that seeking more knowledge is something that their friends and family expect of them."

Hovick said the team has considered trying to replicate the study with the current coronavirus outbreak, but noted that the Zika outbreak was slower to develop.

Read more at Science Daily

Mar 9, 2020

Cosmic impact caused destruction of one of world's earliest human settlements

Before the Tabqa Dam impounded the Euphrates River in northern Syria in the 1970s, an archaeological site named Abu Hureyra bore witness to the moment ancient nomadic people first settled down and started cultivating crops. A large mound marks the settlement, which now lies under Lake Assad.

But before the lake formed, archaeologists were able to carefully extract and describe much material, including parts of houses, food and tools -- an abundance of evidence that allowed them to identify the transition to agriculture nearly 12,800 years ago. It was one of the most significant events in our Earth's cultural and environmental history.

Abu Hureyra, it turns out, has another story to tell. Found among the cereals and grains and splashed on early building material and animal bones was meltglass, some features of which suggest it formed at extremely high temperatures -- far higher than anything humans could achieve at the time, and higher than could be attributed to fire, lightning or volcanism.

"To help with perspective, such high temperatures would completely melt an automobile in less than a minute," said James Kennett, a UC Santa Barbara emeritus professor of geology. Such intensity, he added, could only have resulted from an extremely violent, high-energy, high-velocity phenomenon, something on the order of a cosmic impact.

Based on materials collected before the site was flooded, Kennett and his colleagues contend Abu Hureyra is the first site to document the direct effects of a fragmented comet on a human settlement. These fragments are all part of the same comet that likely slammed into Earth and exploded in the atmosphere at the end of the Pleistocene epoch, according to Kennett. This impact contributed to the extinction of most large animals, including mammoths, American horses and camels; the disappearance of the North American Clovis culture; and the abrupt onset of the end-glacial Younger Dryas cooling episode.

The team's findings are highlighted in a paper published in the Nature journal Scientific Reports.

"Our new discoveries represent much more powerful evidence for very high temperatures that could only be associated with a cosmic impact," said Kennett, who with his colleagues first reported evidence of such an event in the region in 2012.

Abu Hureyra lies at the easternmost sector of what is known as the Younger Dryas Boundary (YDB) strewnfield, which encompasses about 30 other sites in the Americas, Europe and parts of the Middle East. These sites hold evidence of massive burning, including a widespread carbon-rich "black mat" layer that contains millions of nanodiamonds, high concentrations of platinum and tiny metallic spherules formed at very high temperatures. The YDB impact hypothesis has gained more traction in recent years because of many new discoveries, including a very young impact crater beneath the Hiawatha Glacier of the Greenland ice sheet, and high-temperature meltglass and other similar evidence at an archaeological site in Pilauco, located in southern Chile.

"The Abu Hureyra village would have been abruptly destroyed," Kennett said. Unlike the evidence from Pilauco, which was limited to human butchering of large animals up to but not younger than the YDB impact burn layer, Abu Hureyra shows direct evidence of the disaster on this early human settlement. An impact or an airburst must have occurred sufficiently close to send massive heat and molten glass over the entire early village, Kennett noted.

The glass was analyzed for geochemical composition, shape, structure, formation temperature, magnetic characteristics and water content. Results from the analysis showed that it formed at very high temperatures and included minerals rich in chromium, iron, nickel, sulfides, titanium and even platinum- and iridium-rich melted iron -- all of which form at temperatures higher than 2200 degrees Celsius.

"The critical materials are extremely rare under normal temperatures, but are commonly found during impact events," Kennett said. According to the study, the meltglass was formed "from the nearly instantaneous melting and vaporization of regional biomass, soils and floodplain deposits, followed by instantaneous cooling." Additionally, because the materials found are consistent with those found in the YDB layers at the other sites across the world, it's likely that they resulted from a fragmented comet, as opposed to impacts caused by individual comets or asteroids.

Read more at Science Daily

Safety zone saves giant moons from fatal plunge

Numerical simulations showed that the temperature gradient in the disk of gas around a young gas giant planet could play a critical role in the development of a satellite system dominated by a single large moon, similar to Titan around Saturn. Researchers found that dust in the circumplanetary disk can create a "safety zone," which keeps the moon from falling into the planet as the system evolves.

Astronomers believe that many of the moons we see in the Solar System, especially large moons, formed along with their parent planet. In this scenario, moons form from the gas and dust spinning around the still-forming planet. But previous simulations have resulted in either all large moons falling into the planet and being swallowed up, or in multiple large moons remaining. The situation we observe around Saturn, with many small moons but only one large moon, fits neither of these models.

Yuri Fujii, a Designated Assistant Professor at Nagoya University, and Masahiro Ogihara, a Project Assistant Professor at the National Astronomical Observatory of Japan (NAOJ), created a new model of circumplanetary disks with a more realistic temperature distribution by considering multiple sources of opacities including dust and ice. Then, they simulated the orbital migration of moons considering pressure from disk gas and the gravity of other satellites.

Their simulations show that there is a "safety zone" where a moon is pushed away from the planet. In this area, warmer gas inside the orbit pushes the satellite outward and prevents it from falling into the planet.
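
A toy sketch of the general idea of such a migration trap -- not the authors' disk model -- is that the net radial push on a moon reverses sign at some radius, so an inward-drifting moon stalls there instead of reaching the planet:

    # Toy migration trap (illustrative only, not the published disk model):
    # the net radial drift reverses sign at a hypothetical "safety zone" radius,
    # so an inward-drifting moon parks there rather than falling into the planet.
    R_TRAP = 0.3                     # hypothetical trap radius (arbitrary units)

    def drift_per_step(a):
        # inward (negative) outside the trap radius, outward (positive) inside it
        return -0.05 * (a - R_TRAP)

    a = 1.0                          # moon starts far out in the disk
    for _ in range(300):
        a += drift_per_step(a)
    print(round(a, 3))               # ~0.3: the moon stalls at the safety zone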

"We demonstrated for the first time that a system with only one large moon around a giant planet can form," says Fujii. "This is an important milestone to understand the origin of Titan."

But Ogihara cautions, "It would be difficult to examine whether Titan actually experienced this process. Our scenario could be verified through research of satellites around extrasolar planets. If many single-exomoon systems are found, the formation mechanisms of such systems will become a red-hot issue."

From Science Daily

'Strange' glimpse into neutron stars and symmetry violation

New results from precision particle detectors at the Relativistic Heavy Ion Collider (RHIC) offer a fresh glimpse of the particle interactions that take place in the cores of neutron stars and give nuclear physicists a new way to search for violations of fundamental symmetries in the universe. The results, just published in Nature Physics, could only be obtained at a powerful ion collider such as RHIC, a U.S. Department of Energy (DOE) Office of Science user facility for nuclear physics research at DOE's Brookhaven National Laboratory.

The precision measurements reveal that the binding energy holding together the components of the simplest "strange-matter" nucleus, known as a "hypertriton," is greater than the value obtained by previous, less precise experiments. The new value could have important astrophysical implications for understanding the properties of neutron stars, where the presence of particles containing so-called "strange" quarks is predicted to be common.

The second measurement was a search for a difference between the mass of the hypertriton and its antimatter counterpart, the antihypertriton (the first nucleus containing an antistrange quark, discovered at RHIC in 2010). Physicists have never found a mass difference between matter-antimatter partners so seeing one would be a big discovery. It would be evidence of "CPT" violation -- a simultaneous violation of three fundamental symmetries in nature pertaining to the reversal of charge, parity (mirror symmetry), and time.

"Physicists have seen parity violation, and violation of CP together (each earning a Nobel Prize for Brookhaven Lab[ -- ), but never CPT," said Brookhaven physicist Zhangbu Xu, co-spokesperson of RHIC's STAR experiment, where the hypertriton research was done.

But no one has looked for CPT violation in the hypertriton and antihypertriton, he said, "because no one else could yet."

The heaviest nuclei previously subjected to a CPT test were ordinary helium-3 and antihelium-3, whose masses were compared by the ALICE collaboration at Europe's Large Hadron Collider (LHC). The result, showing no significant difference, was published in Nature Physics in 2015.

Spoiler alert: The STAR results also reveal no significant mass difference between the matter-antimatter partners explored at RHIC, so there's still no evidence of CPT violation. But the fact that STAR physicists could even make the measurements is a testament to the remarkable capabilities of their detector.

Strange matter

The simplest normal-matter nuclei contain just protons and neutrons, with each of those particles made of ordinary "up" and "down" quarks. In hypertritons, one neutron is replaced by a particle called a lambda, which contains one strange quark along with the ordinary up and down varieties.

Such strange matter replacements are common in the ultra-dense conditions created in RHIC's collisions -- and are also likely in the cores of neutron stars where a single teaspoon of matter would weigh more than 1 billion tons. That's because the high density makes it less costly energy-wise to make strange quarks than the ordinary up and down varieties.

For that reason, RHIC collisions give nuclear physicists a way to peer into the subatomic interactions within distant stellar objects without ever leaving Earth. And because RHIC collisions create hypertritons and antihypertritons in nearly equal amounts, they offer a way to search for CPT violation as well.

But finding those rare particles among the thousands that stream from each RHIC particle smashup -- with collisions happening thousands of times each second -- is a daunting task. Add to the challenge the fact that these unstable particles decay almost as soon as they form -- within centimeters of the center of the four-meter-wide STAR detector.

Precision detection

Fortunately, detector components added to STAR for tracking different kinds of particles made the search a relative cinch. These components, called the "Heavy-Flavor Tracker," are located very close to the STAR detector's center. They were developed and built by a team of STAR collaborators led by scientists and engineers at DOE's Lawrence Berkeley National Laboratory (Berkeley Lab). These inner components allow scientists to match up tracks created by decay products of each hypertriton and antihypertriton with their point of origin just outside the collision zone.

"What we look for are the 'daughter' particles -- the decay products that strike detector components at the outer edges of STAR," said Berkeley Lab physicist Xin Dong. Identifying tracks of pairs or triplets of daughter particles that originate from a single point just outside the primary collision zone allows the scientists to pick these signals out from the sea of other particles streaming from each RHIC collision.

"Then we calculate the momentum of each daughter particle from one decay (based on how much they bend in STAR's magnetic field), and from that we can reconstruct their masses and the mass of the parent hypertriton or antihypertriton particle before it decayed," explained Declan Keane of Kent State University (KSU). Telling the hypertriton and antihypertriton apart is easy because they decay into different daughters, he added.

"Keane's team, including Irakli Chakeberia, has specialized in tracking these particles through the detectors to 'connect the dots,'" Xu said. "They also provided much needed visualization of the events."

As noted, compiling data from many collisions revealed no mass difference between the matter and antimatter hypernuclei, so there's no evidence of CPT violation in these results.

But when STAR physicists looked at their results for the binding energy of the hypertriton, it turned out to be larger than previous measurements from the 1970s had found.

The STAR physicists derived the binding energy by subtracting their value for the hypertriton mass from the combined known masses of its building-block particles: a deuteron (a bound state of a proton and a neutron) and one lambda.
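
In symbols, the lambda separation energy is B_Lambda = (m_deuteron + m_lambda) - m_hypertriton. The sketch below uses standard reference values for the deuteron and lambda masses and leaves the measured hypertriton mass as an input; the example number passed in is purely illustrative, not the STAR result.

    # Sketch of the subtraction described above (all masses in MeV/c^2).
    M_DEUTERON = 1875.613    # bound proton-neutron pair, reference value
    M_LAMBDA = 1115.683      # lambda hyperon, reference value

    def lambda_binding_energy(m_hypertriton):
        # Binding energy = how much lighter the hypertriton is than its parts.
        return (M_DEUTERON + M_LAMBDA) - m_hypertriton

    # Example with a purely illustrative hypertriton mass (not the STAR measurement):
    print(lambda_binding_energy(2991.0))   # ~0.3 MeV of binding in this toy example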

"The hypertriton weighs less than the sum of its parts because some of that mass is converted into the energy that is binding the three nucleons together," said Fudan University STAR collaborator Jinhui Chen, whose PhD student, Peng Liu, analyzed the large datasets to arrive at these results. "This binding energy is really a measure of the strength of these interactions, so our new measurement could have important implications for understanding the 'equation of state' of neutron stars," he added.

Read more at Science Daily

Ancient shell shows days were half-hour shorter 70 million years ago

Earth turned faster at the end of the time of the dinosaurs than it does today, rotating 372 times a year, compared to the current 365, according to a new study of fossil mollusk shells from the late Cretaceous. This means a day lasted only 23 and a half hours, according to the new study in AGU's journal Paleoceanography and Paleoclimatology.
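
The arithmetic is straightforward: the year's total duration has not changed, so spreading the same number of hours over 372 days instead of roughly 365 gives a shorter day.

    # Length of a Cretaceous day if the (unchanged) year held 372 days instead of ~365.25.
    hours_in_a_year = 365.25 * 24        # the year's duration itself has not changed
    cretaceous_day = hours_in_a_year / 372
    print(round(cretaceous_day, 2))      # ~23.56 hours, about half an hour shorter than today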

The ancient mollusk, from an extinct and wildly diverse group known as rudist clams, grew fast, laying down daily growth rings. The new study used lasers to sample minute slices of shell and count the growth rings more accurately than human researchers with microscopes.

The growth rings allowed the researchers to determine the number of days in a year and more accurately calculate the length of a day 70 million years ago. The new measurement informs models of how the Moon formed and how close to Earth it has been over the 4.5-billion-year history of the Earth-Moon gravitational dance.

The new study also found corroborating evidence that the mollusks harbored photosynthetic symbionts that may have fueled reef-building on the scale of modern-day corals.

The high resolution obtained in the new study combined with the fast growth rate of the ancient bivalves revealed unprecedented detail about how the animal lived and the water conditions it grew in, down to a fraction of a day.

"We have about four to five datapoints per day, and this is something that you almost never get in geological history. We can basically look at a day 70 million years ago. It's pretty amazing," said Niels de Winter, an analytical geochemist at Vrije Universiteit Brussel and the lead author of the new study.

Climate reconstructions of the deep past typically describe long term changes that occur on the scale of tens of thousands of years. Studies like this one give a glimpse of change on the timescale of living things and have the potential to bridge the gap between climate and weather models.

Chemical analysis of the shell indicates ocean temperatures were warmer in the Late Cretaceous than previously appreciated, reaching 40 degrees Celsius (104 degrees Fahrenheit) in summer and exceeding 30 degrees Celsius (86 degrees Fahrenheit) in winter. The summer high temperatures likely approached the physiological limits for mollusks, de Winter said.

"The high fidelity of this data-set has allowed the authors to draw two particularly interesting inferences that help to sharpen our understanding of both Cretaceous astrochronology and rudist palaeobiology," said Peter Skelton, a retired lecturer of palaeobiology at The Open University and a rudist expert unaffiliated with the new study.

Ancient reef-builders

The new study analyzed a single individual that lived for over nine years in a shallow seabed in the tropics -- a location which is now, 70 million years later, dry land in the mountains of Oman.

Torreites sanchezi mollusks look like tall pint glasses with lids shaped like bear claw pastries. The ancient mollusks had two shells, or valves, that met in a hinge, like asymmetrical clams, and grew in dense reefs, like modern oysters. They thrived in water several degrees warmer worldwide than modern oceans.

In the late Cretaceous, rudists like T. sanchezi dominated the reef-building niche in tropical waters around the world, filling the role held by corals today. They disappeared in the same event that killed the non-avian dinosaurs 66 million years ago.

"Rudists are quite special bivalves. There's nothing like it living today," de Winter said. "In the late Cretaceous especially, worldwide most of the reef builders are these bivalves. So they really took on the ecosystem building role that the corals have nowadays."

The new method focused a laser on small bits of shell, making holes 10 micrometers in diameter, or about as wide as a red blood cell. Trace elements in these tiny samples reveal information about the temperature and chemistry of the water at the time the shell formed. The analysis provided accurate measurements of the width and number of daily growth rings as well as seasonal patterns. The researchers used seasonal variations in the fossilized shell to identify years.

The new study found the composition of the shell changed more over the course of a day than over seasons, or with the cycles of ocean tides. The fine-scale resolution of the daily layers shows the shell grew much faster during the day than at night.

"This bivalve had a very strong dependence on this daily cycle, which suggests that it had photosymbionts," de Winter said. "You have the day-night rhythm of the light being recorded in the shell."

This result suggests daylight was more important to the lifestyle of the ancient mollusk than might be expected if it fed itself primarily by filtering food from the water, like modern day clams and oysters, according to the authors. De Winter said the mollusks likely had a relationship with an indwelling symbiotic species that fed on sunlight, similar to living giant clams, which harbor symbiotic algae.

"Until now, all published arguments for photosymbiosis in rudists have been essentially speculative, based on merely suggestive morphological traits, and in some cases were demonstrably erroneous. This paper is the first to provide convincing evidence in favor of the hypothesis," Skelton said, but cautioned that the new study's conclusion was specific to Torreites and could not be generalized to other rudists.

Moon retreat

De Winter's careful count of the number of daily layers found 372 for each yearly interval. This was not a surprise, because scientists know days were shorter in the past. The result is, however, the most accurate now available for the late Cretaceous, and has a surprising application to modeling the evolution of the Earth-Moon system.

The length of a year has been constant over Earth's history, because Earth's orbit around the Sun does not change. But the number of days in a year has been decreasing over time because days have been growing longer. The length of a day has been growing steadily longer as friction from ocean tides, caused by the Moon's gravity, slows Earth's rotation.

The pull of the tides accelerates the Moon a little in its orbit, so as Earth's spin slows, the Moon moves farther away. The Moon is pulling away from Earth at 3.82 centimeters (1.5 inches) per year. Precise laser measurements of distance to the Moon from Earth have demonstrated this increasing distance since the Apollo program left helpful reflectors on the Moon's surface.

But scientists conclude the Moon could not have been receding at this rate throughout its history, because projecting its progress linearly back in time would put the Moon inside the Earth only 1.4 billion years ago. Scientists know from other evidence that the Moon has been with us much longer, most likely coalescing in the wake of a massive collision early in Earth's history, over 4.5 billion years ago. So the Moon's rate of retreat has changed over time, and information from the past, like a year in the life of an ancient clam, helps researchers reconstruct that history and model the formation of the Moon.

Read more at Science Daily