Sep 26, 2020

Marine heatwaves are human-made

 A marine heatwave (ocean heatwave) is an extended period of time in which the water temperature in a particular ocean region is abnormally high. In recent years, heatwaves of this kind have caused considerable changes to the ecosystems in the open seas and at the coast. Their list of negative effects is long: Marine heatwaves can lead to increased mortality among birds, fish and marine mammals, they can trigger harmful algal blooms, and greatly reduce the supply of nutrients in the ocean. Heatwaves also lead to coral bleaching, trigger movements of fish communities to colder waters, and may contribute to the sharp decline of the polar icecaps.

Researchers led by Bern-based marine scientist Charlotte Laufkötter have been investigating the question of how anthropogenic climate change has been affecting major marine heatwaves in recent decades. In a study recently published in the well-known scientific journal Science, Charlotte Laufkötter, Jakob Zscheischler and Thomas Frölicher concluded that the probability of such events has increased massively as a result of global warming. The analysis has shown that in the past 40 years, marine heatwaves have become considerably longer and more pronounced in all of the world's oceans. "The recent heatwaves have had a serious impact on marine ecosystems, which need a long time to recover afterwards -- if they ever fully recover," explains Charlotte Laufkötter.

A huge increase since the 1980s

In its investigations, the Bern team studied satellite measurements of the sea surface temperature between 1981 and 2017. It was found that in the first decade of the study period, 27 major heatwaves occurred which lasted 32 days on average. They reached maximum temperatures of 4.8 degrees Celsius above the long-term average temperature. In the most recent decade to be analyzed, however, 172 major events occurred, lasting an average of 48 days and reaching peaks of 5.5 degrees above the long-term average temperature. The temperatures in the sea usually fluctuate only slightly. Week-long deviations of 5.5 degrees over an area of 1.5 million square kilometers -- an area 35 times the size of Switzerland -- present an extraordinary change to the living conditions of marine organisms.
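
As a quick arithmetic check on the comparison above (using Switzerland's area of roughly 41,285 square kilometers, a figure not given in the text), the ratio works out as stated:

```python
# Sanity check of the "35 times the size of Switzerland" comparison.
# Switzerland's area (~41,285 km^2) is an assumed reference value, not from the article.
heatwave_area_km2 = 1.5e6
switzerland_km2 = 41_285
print(round(heatwave_area_km2 / switzerland_km2))  # ~36, consistent with "35 times"
```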

Statistical analyses demonstrate human influence

For the seven marine heatwaves with the greatest impact, researchers at the University of Bern carried out what is referred to as attribution studies. Statistical analyses and climate simulations are used to assess the extent to which anthropogenic climate change is responsible for the occurrence of individual extremes in the weather conditions or the climate. Attribution studies typically demonstrate how the frequency of the extremes has changed through human influence.
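
Attribution results of this kind are commonly summarized with a probability ratio and a "fraction of attributable risk" (FAR). The snippet below is a generic illustration of that arithmetic with made-up probabilities, not values from the Bern study:

```python
# Generic attribution-study arithmetic (illustrative numbers only).
# p1: probability of an extreme of a given magnitude in the factual (current) climate.
# p0: probability of the same extreme in a counterfactual climate without human influence.
# Both are typically estimated from large ensembles of climate-model simulations.
p1 = 0.20
p0 = 0.01

probability_ratio = p1 / p0   # how much more likely the event has become
far = 1 - p0 / p1             # fraction of the event's risk attributable to human influence

print(f"probability ratio: {probability_ratio:.0f}x")   # 20x
print(f"fraction of attributable risk: {far:.2f}")      # 0.95
```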

Read more at Science Daily

Solving the strange storms on Jupiter

 At the south pole of Jupiter lurks a striking sight -- even for a gas giant planet covered in colorful bands that sports a red spot larger than the earth. Down near the south pole of the planet, mostly hidden from the prying eyes of humans, is a collection of swirling storms arranged in an unusually geometric pattern.

Since they were first spotted by NASA's Juno space probe in 2019, the storms have presented something of a mystery to scientists. The storms are analogous to hurricanes on Earth. However, on our planet, hurricanes do not gather themselves at the poles and twirl around each other in the shape of a pentagon or hexagon, as do Jupiter's curious storms.

Now, a research team working in the lab of Andy Ingersoll, Caltech professor of planetary science, has discovered why Jupiter's storms behave so strangely. They did so using math derived from a proof written by Lord Kelvin, a British mathematical physicist and engineer, nearly 150 years ago.

Ingersoll, who was a member of the Juno team, says Jupiter's storms are remarkably similar to the ones that lash the East Coast of the United States every summer and fall, just on a much larger scale.

"If you went below the cloud tops, you would probably find liquid water rain drops, hail, and snow," he says. "The winds would be hurricane-force winds. Hurricanes on Earth are a good analog of the individual vortices within these arrangements we see on Jupiter, but there is nothing so stunningly beautiful here."

As on Earth, Jupiter's storms tend to form closer to the equator and then drift toward the poles. However, Earth's hurricanes and typhoons dissipate before they venture too far from the equator. Jupiter's just keep going until they reach the poles.

"The difference is that on the earth hurricanes run out of warm water and they run into continents," Ingersoll says. Jupiter has no land, "so there's much less friction because there's nothing to rub against. There's just more gas under the clouds. Jupiter also has heat left over from its formation that is comparable to the heat it gets from the sun, so the temperature difference between its equator and its poles is not as great as it is on Earth."

However, Ingersoll says, this explanation still does not account for the behavior of the storms once they reach Jupiter's south pole, which is unusual even compared to other gas giants. Saturn, which is also a gas giant, has one enormous storm at each of its poles, rather than a geometrically arranged collection of storms.

The answer to the mystery of why Jupiter has these geometric formations and other planets do not, Ingersoll and his colleagues discovered, could be found in the past, specifically in work conducted in 1878 by Alfred Mayer, an American physicist, and by Lord Kelvin. Mayer had placed floating circular magnets in a pool of water and observed that they would spontaneously arrange themselves into geometric configurations, similar to those seen on Jupiter, with shapes that depended on the number of magnets. Kelvin used Mayer's observations to develop a mathematical model to explain the magnets' behavior.

"Back in the 19th century, people were thinking about how spinning pieces of fluid would arrange themselves into polygons," Ingersoll says. "Although there were lots of laboratory studies of these fluid polygons, no one had thought of applying that to a planetary surface."

To do so, the research team used a set of equations known as the shallow-water equations to build a computer model of what might be happening on Jupiter, and began to run simulations.
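
For reference, the rotating shallow-water equations mentioned here are usually written in the following standard form (the specific configuration used in the Jupiter simulations is not detailed in the article):

\[
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u} + f\,\hat{\mathbf{z}}\times\mathbf{u} = -g\,\nabla h,
\qquad
\frac{\partial h}{\partial t} + \nabla\cdot(h\,\mathbf{u}) = 0,
\]

where \(\mathbf{u}\) is the horizontal velocity, \(h\) the thickness of the fluid layer, \(f\) the Coriolis parameter and \(g\) the gravitational acceleration.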

"We wanted to explore the combination of parameters that makes these cyclones stable," says Cheng Li (Phd '17), lead author and 51 Pegasi b postdoctoral fellow at UC Berkeley. "There are established theories that predict that cyclones tend to merge at the pole due to the rotation of the planet and so we found in the initial trial runs."

Eventually, however, the team found that a Jupiter-like stable geometric arrangement of storms would form if the storms were each surrounded by a ring of winds that turned in the opposite direction from the spinning storms, or a so-called anticyclonic ring. The presence of anticyclonic rings causes the storms to repel each other, rather than merge.

Ingersoll says the research could help scientists better understand how weather on Earth behaves.

Read more at Science Daily

Sep 25, 2020

Gravity causes homogeneity of the universe

 The temporal evolution of the universe, from the Big Bang to the present, is described by Einstein's field equations of general relativity. However, there are still a number of open questions about cosmological dynamics, whose origins lie in supposed discrepancies between theory and observation. One of these open questions is: Why is the universe in its present state so homogeneous on large scales?
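
For completeness, the field equations referred to here take the standard form

\[
G_{\mu\nu} + \Lambda\, g_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu},
\]

where \(G_{\mu\nu}\) is the Einstein tensor describing the curvature of spacetime, \(\Lambda\) the cosmological constant, and \(T_{\mu\nu}\) the energy-momentum tensor of matter and radiation.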

From the Big Bang to the present

It is assumed that the universe was in an extreme state shortly after the Big Bang, characterized in particular by strong fluctuations in the curvature of spacetime. During the long process of expansion, the universe then evolved towards its present state, which is homogeneous and isotropic on large scales -- in simple terms: the cosmos looks the same everywhere. This is inferred, among other things, from the measurement of the so-called background radiation, which appears highly uniform in every direction of observation. This homogeneity is surprising in that even two regions of the universe that were causally decoupled from each other -- i.e., they could not exchange information -- still exhibit identical values of background radiation.

Alternative theories

To resolve this supposed contradiction, the so-called inflation theory was developed, which postulates a phase of extremely rapid expansion immediately after the Big Bang, which in turn can explain the homogeneity in the background radiation.

However, explaining this phase in the context of Einstein's theory requires a number of modifications of the theory that seem artificial and cannot be verified directly.

New findings: Homogenization by gravitation

Up to now it was not clear whether the homogenization of the universe can be explained completely by Einstein's equations. The reason for this is the complexity of the equations and the associated difficulty to analyze their solutions -- models for the universe -- and to predict their behavior.

In the concrete problem, the time evolution of the originally strong deviations from the homogeneous state, in the form of cosmological gravitational waves, has to be analyzed mathematically. It has to be shown that they decay in the course of the expansion, thus allowing the universe to attain its homogeneous structure.

Read more at Science Daily

Primate brain size does not predict their intelligence

 Chimpanzees, gorillas and orangutans are our closest relatives, and like us they have relatively large brains and they are very intelligent. But do animals with larger brains really perform better in cognitive tests? A research team from the German Primate Center (DPZ) -- Leibniz Institute for Primate Research in Göttingen has for the first time systematically investigated the cognitive abilities of lemurs, which have relatively small brains compared to other primates. Conducting systematic tests with identical methods revealed that cognitive abilities of lemurs hardly differ from those of monkeys and great apes. Instead, this study revealed that the relationship between brain size and cognitive abilities cannot be generalized and it provides new insights into the evolution of cognitive abilities in primates.

Humans and non-human primates are among the most intelligent living beings. Their brain size may underlie their intelligence, as primates have relatively large brains in relation to their body size. For example, it is assumed that larger brains enable faster learning and better memory capacities. Within primates, however, species can differ up to 200-fold in brain size. A team of researchers from the German Primate Center (DPZ) has now investigated whether the cognitive performance of lemurs, which have relatively small brains, differs from that of other primates.

Using a comprehensive standardized test series of cognitive experiments, the so-called "Primate Cognition Test Battery" (PCTB), small children, great apes as well as baboons and macaques have already been tested for their cognitive abilities in the physical and social domain. Cognitive skills in the physical domain include the understanding of spatial, numerical and causal relationships between inanimate objects, while cognitive skills in the social domain deal with intentional actions, perceptions and the understanding of the knowledge of other living beings. Initial studies have shown that children possess a better social intelligence than non-human primates. In the physical domain, however, the species hardly differed even though they show great variation in their relative brain sizes.

For the first time, researchers of the "Behavioral Ecology and Sociobiology Unit" of the DPZ have now tested three lemur species with the PCTB. Lemurs are the most basal living primates and represent the evolutionary link between primates and other mammals, which is why they serve as a living model of primates' origin of cognitive abilities. The study examined ring-tailed lemurs, black-and-white ruffed lemurs and grey mouse lemurs, which differ in their social system, diet and brain size, not only among each other, but also compared to the previously tested Old World monkeys and great apes.

The results of the new study show that, despite their smaller brains, lemurs' average cognitive performance in the tests of the PCTB was not fundamentally different from the performance of the other primate species. This is even true for mouse lemurs, which have brains about 200 times smaller than those of chimpanzees and orangutans. Only in tests examining spatial reasoning did primate species with larger brains perform better. However, no systematic differences in species' performance were found for the understanding of causal and numerical relationships, nor in tests of the social domain. Neither diet, social system nor brain size could explain the results from the PCTB experiments. "With our study we show that cognitive abilities cannot be generalized, but that species instead differ in domain-specific cognitive skills," says Claudia Fichtel, one of the two first authors of the study funded by the German Research Foundation. "Accordingly, the relationship between brain size and cognitive abilities cannot be generalized."

Read more at Science Daily

Unusual climate conditions influenced WWI mortality and subsequent influenza pandemic

 Scientists have spotted a once-in-a-century climate anomaly during World War I that likely increased mortality during the war and the influenza pandemic in the years that followed.

Well-documented torrential rains and unusually cold temperatures affected the outcomes of many major battles on the Western Front during the war years of 1914 to 1918. Most notably, the poor conditions played a role in the battles of Verdun and the Somme, during which more than one million soldiers were killed or wounded.

The bad weather may also have exacerbated the influenza pandemic that claimed 50 to 100 million lives between 1917 and 1919, according to the new study. Scientists have long studied the spread of the H1N1 influenza strain that caused the pandemic, but little research has focused on whether environmental conditions played a role.

In a new study in AGU's journal GeoHealth, scientists analyzed an ice core taken from a glacier in the European Alps to reconstruct climate conditions during the war years. They found an extremely unusual influx of air from the North Atlantic Ocean affected weather on the European continent from 1914 to 1919. The incessant rain and cold caused by this influx of ocean air hung over major battlefields on the Western Front but also affected the migratory patterns of mallard ducks, the main animal host for H1N1 flu virus strains.

Mallard ducks likely stayed put in western Europe in the autumns of 1917 and 1918 because of the bad weather, rather than migrating northeast to Russia as they normally do, according to the new study. This kept them close to military and civilian populations and may have allowed the birds to transfer a particularly virulent strain of H1N1 influenza to humans through bodies of water.

The findings help scientists better understand the factors that contributed to making the war and pandemic so deadly, according to Alexander More, a climate scientist and historian at the Harvard University/Climate Change Institute, associate professor of environmental health at Long Island University and lead author of the new study.

"I'm not saying that this was 'the' cause of the pandemic, but it was certainly a potentiator, an added exacerbating factor to an already explosive situation," More said.

"It's interesting to think that very heavy rainfall may have accelerated the spread of the virus," said Philip Landrigan, director of the Global Public Health Program at Boston College who was not connected to the new study. "One of the things we've learned in the COVID pandemic is that some viruses seem to stay viable for longer time periods in humid air than in dry air. So it makes sense that if the air in Europe were unusually wet and humid during the years of World War I, transmission of the virus might have been accelerated."

War and weather

The rainy, cold, muddy landscapes of the Western Front are well documented by historians. Poet Mary Borden described it as "the liquid grave of our armies" in her poem "The Song of the Mud" about 1916's Battle of the Somme.

Historical accounts of early battles in France describe how the intense rain affected British, French and German troops. Newly dug trenches and tunnels filled with rainwater; muddy fields slowed the movement of troops during the day; and cold nighttime temperatures caused thousands to endure frostbite. However, little research has been done on the environmental conditions that may have caused the torrential rains and unusual cold.

In the new study, More and his colleagues reconstructed the environmental conditions over Europe during the war using data from an ice core taken from the Alps. They then compared the environmental conditions to historical records of deaths during the war years.

They found mortality in Europe peaked three times during the war, and these peaks occurred during or soon after periods of cold temperatures and heavy rain caused by extremely unusual influxes of ocean air in the winters of 1915, 1916 and 1918.

"Atmospheric circulation changed and there was much more rain, much colder weather all over Europe for six years," More said. "In this particular case, it was a once in a 100-year anomaly."

The new ice core record corroborates historical accounts of torrential rain on battlefields of the Western Front, which caused many soldiers to die from drowning, exposure, pneumonia and other infections.

Interestingly, the results suggest the bad weather may have kept mallard ducks and other migratory birds in Europe during the war years, where they could easily transmit influenza to humans by water contaminated with their fecal droppings. Mallard ducks are the main animal reservoir of H1N1 flu viruses and as many as 60 percent of mallard ducks can be infected with H1N1 every year. Previous research has shown that migratory patterns of mallards and other birds are disrupted during bouts of unusual weather.

"Mallards have been shown to be very sensitive to climate anomalies in their migration patterns," More said. "So it is likely is that they stayed put for much of that period."

The first wave of H1N1 influenza infection in Europe occurred in the spring of 1918, most likely originating among allied troops arriving in France from Asia in the fall and winter of 1917, according to previous research. The new study found the deadliest wave of the pandemic in Europe began in the autumn of 1918, closely following a period of heavy precipitation and cold temperatures.

"These atmospheric reorganizations happen and they affect people," More said. "They affect how we move, how much water is available, what animals are around. Animals bring their own diseases with them in their movements, and their migrations are due to the environment and how it changes, or how we change it."

Read more at Science Daily

Some severe COVID-19 cases linked to genetic mutations or antibodies that attack the body

 

People infected by the novel coronavirus can have symptoms that range from mild to deadly. Now, two new analyses suggest that some life-threatening cases can be traced to weak spots in patients' immune systems.

At least 3.5 percent of study patients with severe COVID-19, the disease caused by the novel coronavirus, have mutations in genes involved in antiviral defense. And at least 10 percent of patients with severe disease create "auto-antibodies" that attack the immune system, instead of fighting the virus. The results, reported in two papers in the journal Science on September 24, 2020, identify some root causes of life-threatening COVID-19, says study leader Jean-Laurent Casanova, a Howard Hughes Medical Institute Investigator at The Rockefeller University.

Seeing these harmful antibodies in so many patients -- 101 out of 987 -- was "a stunning observation," he says. "These two papers provide the first explanation for why COVID-19 can be so severe in some people, while most others infected by the same virus are okay."

The work has immediate implications for diagnostics and treatment, Casanova says. If someone tests positive for the virus, they should "absolutely" be tested for the auto-antibodies, too, he adds, "with medical follow-up if those tests are positive." It's possible that removing such antibodies from the blood could ease symptoms of the disease.

A global effort

Casanova's team, in collaboration with clinicians around the world, first began enrolling COVID-19 patients in their study in February. At the time, they were seeking young people with severe forms of the disease to investigate whether these patients might have underlying weaknesses in their immune systems that made them especially vulnerable to the virus.

The plan was to scan patients' genomes -- in particular, a set of 13 genes involved in interferon immunity against influenza. In healthy people, interferon molecules act as the body's security system. They detect invading viruses and bacteria and sound the alarm, which brings other immune defenders to the scene.

Casanova's team has previously discovered genetic mutations that hinder interferon production and function. People with these mutations are more vulnerable to certain pathogens, including those that cause influenza. Finding similar mutations in people with COVID-19, the team thought, could help doctors identify patients at risk of developing severe forms of the disease. It could also point to new directions for treatment, he says.

In March, Casanova's team was aiming to enroll 500 patients with severe COVID-19 worldwide in their study. By August, they had more than 1,500, and they now have over 3,000. As the researchers began analyzing patient samples, they started to uncover harmful mutations, in people young and old. The team found that 23 out of 659 patients studied carried errors in genes involved in producing antiviral interferons.

Without a full complement of these antiviral defenders, COVID-19 patients wouldn't be able to fend off the virus, the researchers suspected. That thought sparked a new idea. Maybe other patients with severe COVID-19 also lacked interferons -- but for a different reason. Maybe some patients' bodies were harming these molecules themselves. As in autoimmune disorders such as type 1 diabetes and rheumatoid arthritis, some patients might be making antibodies that target the body. "That was the eureka moment for us," Casanova says.

The team's analysis of 987 patients with life-threatening COVID-19 revealed just that. At least 101 of the patients had auto-antibodies against an assortment of interferon proteins. "We said, 'bingo'!" Casanova remembers. These antibodies blocked interferon action and were not present in patients with mild COVID-19 cases, the researchers discovered.

"It's an unprecedented finding," says study co-author Isabelle Meyts, a pediatrician at the University Hospitals KU Leuven, in Belgium, who earlier this year helped enroll patients in the study, gather samples, and perform experiments. By testing for the presence of these antibodies, she says, "you can almost predict who will become severely ill."

The vast majority -- 94 percent -- of patients with the harmful antibodies were men, the team found. Men are more likely to develop severe forms of COVID-19, and this work offers one explanation for that gender variability, Meyts says.

Casanova's lab is now looking for the genetic driver behind those auto-antibodies. They could be linked to mutations on the X chromosome, he says. Such mutations might not affect women, because they have a second X chromosome to compensate for any defects in the first. But for men, who carry only a single X, even small genetic errors can be consequential.

Looking ahead

Clinically, the team's new work could change how doctors and health officials think about vaccination distribution strategies, and even potential treatments. A clinical trial could examine, for instance, whether infected people who have the auto-antibodies benefit from treatment with one of the 17 interferons not neutralized by the auto-antibodies, or with plasmapheresis, a medical procedure that strips the antibodies from patients' blood. Either method could potentially counteract the effect of these harmful antibodies, Meyts says.

Read more at Science Daily

Sep 24, 2020

Uncovering new understanding of Earth's carbon cycle

 A new study led by a University of Alberta PhD student -- and published in Nature -- is examining the Earth's carbon cycle in new depth, using diamonds as breadcrumbs of insight into some of Earth's deepest geologic mechanisms.

"Geologists have recently come to the realization that some of the largest, most valuable diamonds are from the deepest portions of our planet," said Margo Regier, PhD student in the Department of Earth and Atmospheric Sciences under the supervision of Graham Pearson and Thomas Stachel. "While we are not yet certain why diamonds can grow to larger sizes at these depths, we propose a model where these 'superdeep' diamonds crystallize from carbon-rich magmas, which may be critical for them to grow to their large sizes."

Beyond their beauty and industrial applications, diamonds provide unique windows into the deep Earth, allowing scientists to examine the transport of carbon through the mantle.

"The vast majority of Earth's carbon is actually stored in its silicate mantle, not in the atmosphere," Regier explained. "If we are to fully understand Earth's whole carbon cycle then we need to understand this vast reservoir of carbon deep underground."

The study revealed that the carbon-rich oceanic crust that sinks into the deep mantle releases most of its carbon before it gets to the deepest portion of the mantle. This means that most carbon is recycled back to the surface and only small amounts of carbon will be stored in the deep mantle -- with significant implications for how scientists understand the Earth's carbon cycle. The mechanism is important to understand for a number of reasons, as Regier explained.

"The movement of carbon between the surface and mantle affects Earth's climate, the composition of its atmosphere, and the production of magma from volcanoes," said Regier. "We do not yet understand if this carbon cycle has changed over time, nor do we know how much carbon is stored in the deepest parts of our planet. If we want to understand why our planet has evolved into its habitable state it is today and how the surfaces and atmospheres of other planets may be shaped by their interior processes, we need to better understand these variables."

From Science Daily

Shadow of black hole in M87 galaxy is wobbling and has been for a while

 Analysis of previously unpublished data from observations of M87* between 2009 and 2013 by scientists at the Event Horizon Telescope (EHT) has revealed that the crescent shadow of the black hole is wobbling, and has rotated significantly over the past ten years of observation. Published today in The Astrophysical Journal, and led by scientists from the Center for Astrophysics | Harvard & Smithsonian (CfA), the study focused on the morphology of the black hole over time, and was made possible by advances in analysis and understanding achieved as a result of EHT's groundbreaking black hole photo in 2019.

"EHT can detect changes in the M87 morphology on timescales as short as a few days, but its general geometry should be constant on long timescales," said Maciek Wielgus, an astronomer at CfA, Black Hole Initiative (BHI) Fellow, and lead author on the paper. "In 2019, we saw the shadow of a black hole for the first time, but we only saw images observed during a one-week window, which is too short to see a lot of changes."

Combining previous data from 2009-2013 with data leading up to 2019 revealed that M87* adheres to theoretical predictions. The shape of the black hole's shadow has remained consistent, and its diameter remains in agreement with Einstein's theory of general relativity for a black hole of 6.5 billion solar masses. "In this study, we show that the general morphology, or presence of an asymmetric ring, most likely persists on timescales of several years," said Kazu Akiyama, a scientist at the MIT Haystack Observatory, and a participant on the project. "This is an important confirmation of theoretical expectations as the consistency throughout multiple observational epochs gives us more confidence than ever about the nature of M87* and the origin of the shadow."
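
The quoted agreement with general relativity can be checked with a back-of-the-envelope estimate. The sketch below uses the standard Schwarzschild shadow diameter of about 2√27 gravitational radii and a distance to M87 of roughly 16.8 megaparsecs; the distance is an assumption drawn from published EHT work, not from this article:

```python
import math

# Rough estimate of the M87* shadow angular diameter for a 6.5-billion-solar-mass black hole.
# Assumes a non-spinning (Schwarzschild) black hole and a distance of ~16.8 Mpc.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg
pc = 3.086e16        # parsec, m

M = 6.5e9 * M_sun                 # black hole mass
D = 16.8e6 * pc                   # assumed distance to M87
r_g = G * M / c**2                # gravitational radius

shadow_diameter = 2 * math.sqrt(27) * r_g         # ~10.4 gravitational radii
theta_rad = shadow_diameter / D                   # angular size in radians
theta_uas = theta_rad * (180 / math.pi) * 3600e6  # radians -> microarcseconds

print(f"{theta_uas:.0f} microarcseconds")  # ~40, close to the ~42 measured by the EHT
```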

While the crescent diameter remained consistent, new data also proves it was hiding a surprise: the ring is wobbling, and that means big news for scientists. For the first time, scientists will be able to catch a glimpse of the dynamical structure of the black hole's accretion flow; studying this region holds the key to understanding phenomena like launching relativistic jets. "The morphology of a relativistic jet -- low density outflow of tremendously energetic particles and fields -- for example, is key to understanding the interactions with the surrounding medium in a black hole's host galaxy," said Richard Anantua, a postdoc at the Center for Astrophysics | Harvard & Smithsonian and BHI Fellow, adding that studying morphology weaves an important story about black holes and their hosts.

The gas falling onto a black hole heats up to billions of degrees, ionizes and becomes turbulent in the presence of magnetic fields. This turbulence causes the appearance of the black hole to vary over time. "Because the flow of matter falling onto a black hole is turbulent, we can see that the ring wobbles with time," said Wielgus. "The dynamics of this wobbling will allow us to constrain the accretion flow." Anantua added that it is important to constrain accretion flows because, "The accretion flow contains matter that gets close enough to the black hole to allow us to observe the effects of strong gravity, and in some circumstances, allows us to test predictions from general relativity, like we've done in this study."

In the current study, multiple years of data allow scientists to perceive the amount of variability in the ring's appearance. "Actually, we see quite a lot of variation there, and not all theoretical models of accretion flow allow for this much variability," said Wielgus. "As we obtain more measurements in the future, we will be able to confidently put constraints on models and rule some of them out."

Early data in the EHT collaboration were taken by just a few telescopes and a few dozen people. The CfA's Submillimeter Array (SMA) -- a radio telescope located on Mauna Kea, Hawai'i -- was among the small group that started the collaboration and captured the early data used for the current study. Simon Radford, Operations Director at the SMA said, "Hawai'i telescopes pioneered this technique over the past decade and were crucial to the success of early EHT experiments," adding that the combination of the technology, telescopes, and location are what made the early data useful and meaningful.

Ten years later the data has become an invaluable tool to understanding not only M87, but all black holes. "These early EHT experiments provide us with a treasure trove of long-term observations that the current EHT, even with its remarkable imaging capability, cannot match," said Shep Doeleman, Founding Director, EHT. "When we first measured the size of M87 in 2009, we couldn't have foreseen that it would give us the first glimpse of black hole dynamics. If you want to see a black hole evolve over a decade, there is no substitute for having a decade of data." Wielgus added that the continued analysis of past observations, along with new observations "will lead to a better understanding of the dynamical properties of M87, and black holes in general."

Read more at Science Daily

Seismic data explains continental collision beneath Tibet

 In addition to being the last horizon for adventurers and spiritual seekers, the Himalaya region is a prime location for understanding geological processes. It hosts world-class mineral deposits of copper, lead, zinc, gold and silver, as well as rarer elements like lithium, antimony and chrome, that are essential to modern technology. The uplift of the Tibetan plateau even affects global climate by influencing atmospheric circulation and the development of seasonal monsoons.

Yet despite its importance, scientists still don't fully understand the geological processes contributing to the region's formation. "The physical and political inaccessibility of Tibet has limited scientific study, so most field experiments have either been too localized to understand the big picture or they've lacked sufficient resolution at depths to properly understand the processes," said Simon Klemperer, a geophysics professor at Stanford's School of Earth, Energy & Environmental Sciences (Stanford Earth).

Now, new seismic data gathered by Klemperer and his colleagues provides the first west-to-east view of the subsurface where India and Asia collide. The research contributes to an ongoing debate over the structure of the Himalaya collision zone, the only place on Earth where continental plates continue crashing today -- and the source of catastrophes like the 2015 Gorkha earthquake that killed about 9,000 people and injured thousands more.

The new seismic images suggest that two competing processes are simultaneously operating beneath the collision zone: movement of one tectonic plate under another, as well as thinning and collapse of the crust. The research, conducted by scientists at Stanford University and the Chinese Academy of Geological Sciences, was published in Proceedings of the National Academy of Sciences Sept. 21.

The study marks the first time that scientists have collected truly credible images of what's called an along-strike, or longitudinal, variation in the Himalaya collision zone, co-author Klemperer said.

As the Indian plate collides with Asia it forms Tibet, the highest and largest mountain plateau on the planet. This process started very recently in geological history, about 57 million years ago. Researchers have proposed various explanations for its formation, such as a thickening of the Earth's crust caused by the Indian plate forcing its way beneath the Tibetan Plateau.

To test these hypotheses, researchers began the major logistical effort of installing new seismic recorders in 2011 in order to resolve details that might have been previously overlooked. Importantly, the new recorders were installed from east to west across Tibet; traditionally, they had only been deployed from north to south because that is the direction the country's valleys are oriented and thus the direction that roads have historically been built.

The final images, pieced together from recordings by 159 new seismometers closely spaced along two 620-mile long profiles, reveal where the Indian crust has deep tears associated with the curvature of the Himalayan arc.

"We're seeing at a much finer scale what we never saw before," Klemperer said. "It took a heroic effort to install closely spaced seismometers across the mountains, instead of along the valleys, to collect data in the west-east direction and make this research possible."

Building and breaking

As the Indian tectonic plate moves from the south, the mantle, the thickest and strongest part of the plate, is dipping beneath the Tibetan plateau. The new analyses reveal that this process is causing small parts of the Indian plate to break off beneath two of the surface rifts, likely creating tears in the plate -- similar to how a truck barreling through a narrow gap between two trees might chip off pieces of tree trunk. The location of such tears can be critical for understanding how far a major earthquake like Gorkha will spread.

"These transitions, these jumps between the faults, are so important and they're at a scale that we don't normally notice until after an earthquake has happened," Klemperer said.

An unusual aspect of Tibet involves the occurrence of very deep earthquakes, more than 40 miles below the surface. Using their seismic data, the researchers found associations between the plate tears and the occurrence of those deep quakes.

The research also explains why the strength of gravity varies in different parts of the collision zone. The co-authors hypothesized that after the small pieces dropped off of the Indian plate, softer material from underneath bubbled up, creating mass imbalances in the India-Tibet collision zone.

Read more at Science Daily

Scientists shine light on tiny crystals behind unexpected violent eruptions

 In a new study of volcanic processes, Bristol scientists have demonstrated the role nanolites play in the creation of violent eruptions at otherwise 'calm' and predictable volcanoes.

The study, published in Science Advances, describes how nano-sized crystals (nanolites), 10,000 times smaller than the width of a human hair, can have a significant impact on the viscosity of erupting magma, resulting in previously unexplained and explosive eruptions.

"This discovery provides an eloquent explanation for violent eruptions at volcanos that are generally well behaved but occasionally present us with a deadly surprise, such as the 122 BC eruption of Mount Etna," said Dr Danilo Di Genova from the University of Bristol's School of Earth Sciences.

"Volcanoes with low silica magma compositions have very low viscosity, which usually allows the gas to gently escape. However, we've shown that nanolites can increase the viscosity for a limited time, which would trap gas in the sticky liquid, leading to a sudden switch in behaviour that was previously difficult to explain."

Dr Richard Brooker also from Earth Sciences, said: "We demonstrated the surprising effect of nanolites on magma viscosity, and thereby volcanic eruptions, using cutting-edge nano-imaging and Raman spectroscopy to hunt for evidence of these almost invisible particles in ash erupted during very violent eruptions."

"The next stage was to re-melt these rocks in the laboratory and recreate the correct cooling rate to produce nanolites in the molten magma. Using the scattering of extremely bright synchrotron source radiation (10 billion times brighter than the sun) we were able to document nanolite growth."

"We then produced a nanolite-bearing basaltic foam (pumice) under laboratory conditions, also demonstrating how these nanolites can be produced by undercooling as volatiles are exsolved from magma, lowering the liquidus."

Professor Heidy Mader added: "By conducting new experiments on analogue synthetic materials, at low shear rates relative to volcanic systems, we were able to demonstrate the possibility of extreme viscosities for nanolite-bearing magma, extending our understanding of the unusual (non-Newtonian) behaviour of nanofluids, which have remained enigmatic since the term was coined 25 years ago."

Read more at Science Daily

Sep 23, 2020

Living in an anoxic world: Microbes using arsenic are a link to early life

 Much of life on planet Earth today relies on oxygen to exist, but before oxygen was present on our blue planet, lifeforms likely used arsenic instead. These findings are detailed in research published today in Communications Earth and Environment.

A key component of the oxygen cycle is where plants and some types of bacteria essentially take sunlight, water and CO2 and convert them to carbohydrates and oxygen which are then cycled and used by other organisms that breathe oxygen. This oxygen serves as a vehicle for electrons, gaining and donating electrons as it powers through the metabolic processes. However, for half of the time life has existed on Earth, there was no oxygen present and for the first 1.5 billion years we really do not know how these systems worked, says lead author of the study and UConn Professor of Marine Sciences and Geosciences Pieter Visscher.

Light-driven, photosynthetic organisms appear in the fossil record as layered carbonate rocks called stromatolites dating to around 3.7 billion years ago, says Visscher. Stromatolite mats are deposited over the eons by microbial ecosystems, with each layer holding clues about life at that time. There are contemporary examples of microbes that photosynthesize in the absence of oxygen using a variety of elements to complete the process, however it is not clear how this happened in the earliest life forms.

Theories as to how life's processes functioned in the absence of oxygen have mostly relied on hydrogen, sulfur, or iron as the elements that ferried electrons around to fulfill the metabolic needs of organisms.
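
The electron-carrying role described here can be made concrete with van Niel's generalized photosynthesis equation, a standard textbook formulation rather than something taken from the study itself:

\[
\mathrm{CO_2} + 2\,\mathrm{H_2A} \;\xrightarrow{\text{light}}\; [\mathrm{CH_2O}] + \mathrm{H_2O} + 2\,\mathrm{A}
\]

With H2A = H2O this is ordinary oxygenic photosynthesis, releasing O2; with electron donors such as H2S, H2, Fe(II) or, as proposed here, arsenic, photosynthesis can proceed without producing any oxygen.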

Visscher explains that these theories are contested: for example, photosynthesis is possible with iron, but researchers do not find evidence of that in the fossil record before oxygen appeared some 2.4 billion years ago. Hydrogen has also been suggested, yet the energetics and the competition for hydrogen between different microbes show it to be highly unfeasible.

Arsenic is another theoretical possibility, and evidence for that was found in 2008. Visscher says the link with arsenic was strengthened in 2014 when he and colleagues found evidence of arsenic-based photosynthesis in deep time. To further support their theory, the researchers needed to find a modern analog to study the biogeochemistry and element cycling.

Finding an analog to the conditions on early Earth is a challenge for a number of reasons, besides the fact that oxygen is abundant on modern earth. For instance, the evidence shows early microbes captured atmospheric carbon and produced organic matter at a time when volcanic eruptions were frequent, UV light was intense in the absence of the ozone layer, and oceans were essentially a toxic soup.

Another challenging aspect of working within the fossil record, especially those as ancient as some stromatolites, is that there are few left due to the cycling of rock as continents move and time marches on. However, a breakthrough happened when the team discovered an active microbial mat, currently existing in the harsh conditions in Laguna La Brava in the Atacama Desert in Chile.

The mats have not been studied previously but present an otherworldly set of conditions, like those of early Earth. They sit in a unique environment that leaves them in a permanently oxygen-free state at high altitude, where they are exposed to wild daily temperature swings and high UV conditions. The mats therefore serve as powerful and informative tools for understanding life under the conditions of early Earth.

Visscher explains, "We started working in Chile, where I found a blood red river. The red sediments are made up by anoxogenic photosynthetic bacteria. The water is very high in arsenic as well. The water that flows over the mats contains hydrogen sulfide that is volcanic in origin and it flows very rapidly over these mats. There is absolutely no oxygen."

The team also showed that the mats were making carbonate deposits and creating a new generation of stromatolites. The carbonate materials also showed evidence for arsenic cycling -- that arsenic is serving as a vehicle for electrons -- proving that the microbes are actively metabolizing arsenic much like oxygen in modern systems. Visscher says that these findings, along with the fossil evidence, give a strong indication of what was seen on early Earth.

"Arsenic-based life has been a question in terms of does it have biological role or is it just a toxic compound?" says Visscher. That question appears to be answered, "I have been working with microbial mats for about 35 years or so. This is the only system on Earth where I could find a microbial mat that worked absolutely in the absence of oxygen."

Read more at Science Daily

Animals lose fear of predators rapidly after they start encountering humans

 Most wild animals show a suite of predator avoidance behaviors such as vigilance, freezing, and fleeing. But these are quickly reduced after the animals come into contact with humans through captivity, domestication, or urbanization, according to a study led by Benjamin Geffroy from MARBEC (Institute of Marine Biodiversity, Exploitation and Conservation), publishing September 22nd in the open-access journal PLOS Biology.

The international team of researchers analyzed the results of 173 peer-reviewed studies investigating antipredator traits (behavioral and physiological) in 102 species of domesticated, captive, and urbanized mammals, birds, reptiles, fish and molluscs, while taking into account their position in the Tree of Life.

The scientists found that contact with humans led to a rapid loss of the animals' antipredator traits, but at the same time the variability between individuals initially increased and then gradually decreased over the generations spent in contact with humans. The authors suppose that this two-step process is caused by reduced pressure from natural selection as a result of living in a safer environment, followed by artificial selection by humans for docility in the case of domestication.

Animals showed immediate changes in antipredator responses in the first generation after contact with humans, suggesting that the initial response is a result of behavioral flexibility, which may later be accompanied by genetic changes if contact continues over many generations. The researchers also found that domestication altered animal antipredator responses three times faster than urbanization, while captivity resulted in the slowest changes. The results also showed that herbivores changed behavior more rapidly than carnivores and that solitary species tended to change more quickly than group-living animals.

The study demonstrates that domestication and urbanization exert similar pressures on animals and can result in rapid behavioral changes. The loss of antipredator behaviors can cause problems when those domesticated or urbanized species encounter predators or when captive animals are released back into the wild. Understanding how animals respond to contact with humans has important implications for conservation and urban planning, captive breeding programs, and livestock management.

Read more at Science Daily

Some polar bears in far north are getting short-term benefit from thinning ice

 A small subpopulation of polar bears lives on what used to be thick, multiyear sea ice far above the Arctic Circle. The roughly 300 to 350 bears in Kane Basin, a frigid channel between Canada's Ellesmere Island and Greenland, make up about 1-2% of the world's polar bears.

New research shows that Kane Basin polar bears are doing better, on average, in recent years than they were in the 1990s. The study, published Sept. 23 in Global Change Biology, finds the bears are healthier as conditions are warming because thinning and shrinking multiyear sea ice is allowing more sunlight to reach the ocean surface, which makes the system more ecologically productive.

"We find that a small number of the world's polar bears that live in multiyear ice regions are temporarily benefiting from climate change," said lead author Kristin Laidre, a polar scientist at the University of Washington Applied Physics Laboratory's Polar Science Center.

If greenhouse gases continue to build up in the atmosphere and the climate keeps warming, within decades these polar bears will likely face the same fate as their southern neighbors already suffering from declining sea ice.

"The duration of these benefits is unknown. Under unmitigated climate change, we expect the Kane Basin bears to run into the same situation as polar bears in the south -- it's just going to happen later," Laidre said. "They'll be one of the last subpopulations that will be negatively affected by climate change."

All of the world's 19 polar bear subpopulations, including Kane Basin, are experiencing a shorter on-ice hunting season, according to a 2016 study led by Laidre. This makes it hard for the animals, which can weigh more than 1,200 pounds as adults, to meet their nutritional needs. Polar bears venture out on sea ice to catch seals. In summer, when the sea ice melts, the polar bears fast on land.

Laidre led a recent study showing that in the Baffin Bay polar bear subpopulation, which includes about 2,800 bears living just south of Kane Basin, adult females are thinner and are having fewer cubs as the summer open-water season -- when they must fast on land -- grows longer.

"Kane Basin is losing its multiyear ice, too, but that doesn't have the same effect on the polar bears' ability to hunt," Laidre said. "Multiyear ice becomes annual ice, whereas annual ice becomes open water, which is not good for polar bears."

The new paper looked at Kane Basin bears using satellite tracking data and direct physical measurements to compare the period from 1993 to 1997 with a more recent period, from 2012 to 2016. Body condition, or fatness, improved for males and females of all ages. The average number of cubs per litter, another measure of the animals' overall health, was unchanged.

Satellite tags showed the Kane Basin polar bears traveled across larger areas in recent years, covering twice as much distance and ranging farther from their home territory.

"They now have to move over larger areas," Laidre said. "The region is transitioning into this annual sea ice that is more productive but also more dynamic and broken up."

Observations show a profound shift in the sea ice in Kane Basin between the two study periods. In the 1990s, about half the area was covered in multiyear ice in the peak of summer, while in the 2010s the region was almost completely annual ice, which melts to open water in summer.

Even though there's now more open water, the marine ecosystem has become more productive. Annual sea ice allows more sunlight through, so more algae grow, which supports more fish and in turn attracts seals.

"Two decades ago, scientists hypothesized that climate change could temporarily benefit polar bears in multiyear ice regions over the short term, and our observations support that," Laidre said.

The subpopulation on the other side of Ellesmere Island, in Canada's Norwegian Bay, could be in a similar situation, she said, though no data exist for those animals.

If conditions continue to warm, these northernmost polar bears will likely face the same fate as their southern neighbors. Farther north, Kane Basin polar bears would have only much deeper water to turn to.

"It's important not to jump to conclusions and suggest that the High Arctic, which historically was covered by multiyear sea ice, is going to turn into a haven for polar bears," said Laidre, who is also an associate professor in the UW School of Aquatic and Fishery Sciences. "The Arctic Ocean around the North Pole is basically an abyss, with very deep waters that will never be as productive as the shallower waters to the south where most polar bears live.

Read more at Science Daily

Sport and memory go hand in hand

If sport is good for the body, it also seems to be good for the brain. By evaluating memory performance following a sport session, neuroscientists from the University of Geneva (UNIGE) demonstrate that an intensive physical exercise session as short as 15 minutes on a bicycle improves memory, including the acquisition of new motor skills. How? Through the action of endocannabinoids, molecules known to increase synaptic plasticity. This study, published in the journal Scientific Reports, highlights the virtues of sport for both health and education. School programmes and strategies aimed at reducing the effects of neurodegeneration on memory could indeed benefit from it.

Very often, right after a sporting exercise -- especially endurance such as running or cycling -- one feels physical and psychological well-being. This feeling is due to endocannabinoids, small molecules produced by the body during physical exertion. "They circulate in the blood and easily cross the blood-brain barrier. They then bind to specialised cellular receptors and trigger this feeling of euphoria. In addition, these same molecules bind to receptors in the hippocampus, the main brain structure for memory processing," says Kinga Igloi, lecturer in the laboratory of Professor Sophie Schwartz, at UNIGE Faculty of Medicine's Department of Basic Neurosciences, who led this work. "But what is the link between sport and memory? This is what we wanted to understand," she continues.

Intense effort is more effective

To test the effect of sport on motor learning, scientists asked a group of 15 young and healthy men, who were not athletes, to take a memory test under three conditions of physical exercise: after 30 minutes of moderate cycling, after 15 minutes of intensive cycling (defined as 80% of their maximum heart rate), or after a period of rest. "The exercise was as follows: a screen showed four points placed next to each other. Each time one of the dots briefly changed into a star, the participant had to press the corresponding button as quickly as possible," explains Blanca Marin Bosch, researcher in the same laboratory. "It followed a predefined and repeated sequence in order to precisely evaluate how movements were learnt. This is very similar to what we do when, for example, we learn to type on a keyboard as quickly as possible. After an intensive sports session, the performance was much better."
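
To put the "80% of maximum heart rate" threshold in concrete terms, here is a quick illustration using the common 220-minus-age estimate of maximum heart rate; this rule of thumb is an assumption made for illustration, as the study presumably measured each participant's maximum directly:

```python
# Illustrative target heart rate for "intensive" cycling at 80% of maximum.
# The 220-minus-age rule is a rough population estimate, not a value from the study.
age = 25
max_hr = 220 - age          # estimated maximum heart rate, beats per minute
target_hr = 0.8 * max_hr    # 80% of maximum, the study's "intensive" threshold
print(f"~{target_hr:.0f} beats per minute")  # ~156 bpm for a 25-year-old
```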

In addition to the results of the memory tests, the scientists observed changes in the activation of brain structures with functional MRI and performed blood tests to measure endocannabinoid levels. The different analyses concur: the faster individuals are, the more they activate their hippocampus (the brain area of memory) and the caudate nucleus (a brain structure involved in motor processes). Moreover, their endocannabinoid levels follow the same curve: the higher the level after intense physical effort, the more the brain is activated and the better the brain's performance. "These molecules are involved in synaptic plasticity, i.e. the way in which neurons are connected to each other, and thus may act on long-term potentiation, the mechanism for optimal consolidation of memory," says Blanca Marin Bosch.

Improving school learning or preventing Alzheimer's disease

In a previous study, the research team had already shown the positive effect of sport on another type of memory, associative memory. However, contrary to what is shown here, they had observed that a sport session of moderate intensity produced better results. It therefore shows that, as not all forms of memory use the same brain mechanisms, not all sports intensities have the same effects. It should be noted that in all cases, physical exercise improves memory more than inaction.

By providing precise neuroscientific data, these studies make it possible to envisage new strategies for improving or preserving memory. "Sports activity can be an easy to implement, minimally invasive and inexpensive intervention. For example, would it be useful to schedule a sports activity at the end of a school morning to consolidate memory and improve learning?"

Read more at Science Daily

Sep 22, 2020

Wild birds as offerings to the Egyptian gods

 Millions of ibis and birds of prey mummies, sacrificed to the Egyptian gods Horus, Ra or Thoth, have been discovered in the necropolises of the Nile Valley. Such a quantity of mummified birds raises the question of their origin: were they bred, like cats, or were they hunted? Scientists from the CNRS, the Université Claude Bernard Lyon 1 and the C2RMF have carried out extensive geochemical analyses on mummies from the Musée des Confluences, Lyon. According to their results, published on 22nd September 2020 in the journal Scientific Reports, they were wild birds.

Mammals, reptiles, birds: the tens of millions of animal mummies deposited as offerings in the necropolises of the Nile Valley bear witness to an intense religious fervour, and to the practices of collecting and preparing animals that undoubtedly contributed significantly to the economy from the Old Kingdom (3rd millennium BC) to Roman Egypt (1st-3rd centuries AD). However, the origin of these animals and the methods of supply remain unknown. For some tamed species, such as the cat, breeding was probably the most efficient way of supplying large numbers of animals for mummification. But unlike cats, bird mummies cover all stages of development, from egg to adult, which may indicate more opportunistic sourcing practices.

In order to determine the origin -- breeding or hunting -- of the mummified birds, tiny fragments of feathers, bones and embalming strips were taken from 20 ibis and bird of prey mummies in the collections of the Musée des Confluences, Lyon. If these birds, which migrate in the wild, had been bred, their diet would have been homogeneous, of local origin and reflected in a uniform isotopic composition of the animal remains, regardless of whether that diet had been produced specifically for them or derived from that of coexisting humans.

The various tissues were therefore dated using the carbon-14 method, and the isotopic compositions of oxygen, carbon, nitrogen, sulfur and strontium were measured, interpreted in terms of food sources and compared with those of contemporaneous human mummies. Far from being homogeneous, however, these isotopic compositions showed high variability and "exotic" signatures compared with those of ancient Egyptian humans: the birds were wild, migrating seasonally out of the Nile Valley.

Read more at Science Daily

Forest margins may be more resilient to climate change than previously thought

 A warming climate and more frequent wildfires do not necessarily mean the western United States will see the forest loss that many scientists expect. Dry forest margins may be more resilient to climate change than previously thought if managed appropriately, according to Penn State researchers.

"The basic narrative is it's just a matter of time before we lose these dry, low elevation forests," said Lucas Harris, a postdoctoral scholar who worked on the project as part of his doctoral dissertation. "There's increasing evidence that once disturbances like drought or wildfire remove the canopy and shrub cover in these dry forests, the trees have trouble coming back. On the other hand, there's growing evidence that there's a lot of spatial variability in how resilient these forests are to disturbances and climate change."

The researchers studied forest regeneration at four sites that had experienced wildfires in the eastern Sierra Nevada Mountains in California. The sites sit at the forest margin, a drier area where forest meets sagebrush grassland. These dry forest margins may be the most vulnerable to climate change-driven forest loss, according to the researchers.

Large fires in the area tend to consume the forest starting from the steppe margin then sweeping up the mountain, said Alan Taylor, professor of geography and ecology who has worked in the area for decades.

"You wouldn't see forest anymore over 10 or 20 years, and it seemed like the lower forest margin was getting pushed way up in elevation because it's so dry near the sagebrush boundary," Taylor said. "My research group wanted to look at this in detail because no one had actually done it."

Harris and Taylor's research team measured tree diameters and litter depth, counted the number of seedlings and saplings and identified tree species at the research sites. They also quantified fire severity, the amount of moisture available for plant growth and water deficit, an indicator of drought intensity. They then fed the data into five models to see how the probability for tree regeneration varied based on fire severity, climate and location, and remaining vegetation and canopy cover. They report their results today (Sept. 21) in Ecosphere.

The researchers found that 50% of the plots at the sites showed signs of tree regeneration, and water balance projections through the end of the current century indicate that there will be enough moisture available to support tree seedlings. The key is to prevent severe fire disturbances through proper management, according to the researchers, because tree regeneration was strongly associated with mature trees that survived fires.
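
The article does not name the model family the team used; purely as an illustration of how one such regeneration-probability model might be set up, the sketch below fits a logistic regression to hypothetical plot-level data. The predictor names, coefficients and values are invented for this example.

    # Minimal sketch (not the authors' code): one way to model the probability of
    # post-fire tree regeneration from the kinds of predictors named in the article.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_plots = 200

    # Hypothetical plot-level predictors (units are illustrative only).
    fire_severity = rng.uniform(0, 1, n_plots)      # 0 = unburned, 1 = stand-replacing
    water_deficit = rng.uniform(100, 600, n_plots)  # mm, a drought-intensity proxy
    canopy_cover = rng.uniform(0, 80, n_plots)      # percent surviving canopy

    # Hypothetical outcome: 1 if any seedlings or saplings were recorded on the plot.
    logit = -1.5 - 2.0 * fire_severity - 0.004 * water_deficit + 0.05 * canopy_cover
    regenerated = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    X = np.column_stack([fire_severity, water_deficit, canopy_cover])
    model = LogisticRegression().fit(X, regenerated)

    # Predicted regeneration probability for a low-severity plot that kept some
    # mature canopy and sits at a moderate water deficit.
    print(model.predict_proba([[0.2, 300.0, 40.0]])[0, 1])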

"In these marginal or dry forest areas, management approaches like prescribed burning or fuel treatments that thin the forest can prevent the severe fires that would push this ecosystem to a non-forest condition," said Taylor, who also holds an appointment in the Earth and Environmental Systems Institute. "The study suggests that these low-severity disturbances could actually create very resilient conditions in places where most people have been suggesting that we'll see forest loss."

The researchers also noticed a shift in tree composition from fire-resistant yellow pines to less fire-resistant but more drought-resistant species like pinyon pine. They attributed the shift to drying and fire exclusion policies in effect over the last century.

"The shift could be beneficial if the species moving in is better suited to present and near-future climates," said Harris. "However, it could be dangerous if a bunch of fire-sensitive species move into a place and then it all burns up. Many trees would die, and we could see lasting forest loss."

California's climate is projected to warm, but many climate models also forecast an average increase in winter precipitation, especially in the northern part of the state and in the mountains, continued Harris.

"On the one hand, you have greater drought intensity for sure, but also you're going to have these wetter periods where there's more moisture available for tree growth in the spring and maybe into the early summer," he said. "So if the trees are able to survive that drought stress and take advantage of the additional moisture present in some years, they might be able to maintain or even expand their distribution."

This forest system is important for recreation, carbon storage, biodiversity and wildlife habitat, said Taylor. It also forms part of the western side of the Great Basin, the largest area of contiguous watersheds in North America that do not drain into an ocean.

"There's not much forest in the Great Basin, which is a huge area of sagebrush grassland in Utah, Idaho, Oregon, Nevada and Arizona," Taylor said. "So the forests of the eastern Sierra Nevada Mountains represent a significant component of the forest found in that system."

Read more at Science Daily

Asteroid Ryugu's rocky past

 Researchers have found evidence that the asteroid Ryugu was likely born from the destruction of a larger parent asteroid millions of years ago. Thanks to the Hayabusa2 spacecraft, the international team was able to study certain surface features in detail. Variations in the kinds of boulders scattered across Ryugu tell researchers about the processes involved in its creation. Studying asteroids such as Ryugu also informs research on the evolution of life on Earth.

The asteroid Ryugu may look like a solid piece of rock, but it's more accurate to liken it to an orbiting pile of rubble. Given the relative fragility of this collection of loosely bound boulders, researchers believe that Ryugu and similar asteroids probably don't last very long due to disruptions and collisions from other asteroids. Ryugu is estimated to have adopted its current form around 10 million to 20 million years ago, which sounds like a lot compared to a human lifespan, but makes it a mere infant when compared to larger solar system bodies.

"Ryugu is too small to have survived the whole 4.6 billion years of solar system history," said Professor Seiji Sugita from the Department of Earth and Planetary Science at the University of Tokyo. "Ryugu-sized objects would be disrupted by other asteroids within several hundred million years on average. We think Ryugu spent most of its life as part of a larger, more solid parent body. This is based on observations by Hayabusa2 which show Ryugu is very loose and porous. Such bodies are likely formed from reaccumulations of collision debris."

As well as giving researchers data to measure Ryugu's density, Hayabusa2 also collected information about the spectral properties of the asteroid's surface features. For this study in particular, the team was keen to explore the subtle differences between the various kinds of boulders on or embedded in the surface. They determined that there are two kinds of bright boulders on Ryugu, and the nature of these gives away how the asteroid may have formed.

"Ryugu is considered a C-type, or carbonaceous, asteroid, meaning it's primarily composed of rock that contains a lot of carbon and water," said postdoctoral researcher Eri Tatsumi. "As expected, most of the surface boulders are also C-type; however, there are a large number of S-type, or siliceous, rocks as well. These are silicate-rich, lack water-rich minerals and are more often found in the inner, rather than outer, solar system."

Given the presence of S- as well as C-type rocks on Ryugu, researchers are led to believe the little rubble-pile asteroid likely formed from the collision between a small S-type asteroid and Ryugu's larger C-type parent asteroid. If the nature of this collision had been the other way around, the ratio of C- to S-type material in Ryugu would also be reversed. Hayabusa2 is now on its return journey to Earth and is expected to deliver its cargo of samples on Dec. 6 of this year. Researchers are keen to study this material to add evidence for this hypothesis and to elucidate many other things about our little rocky neighbor.

"We used the optical navigation camera on Hayabusa2 to observe Ryugu's surface in different wavelengths of light, and this is how we discovered the variation in rock types. Among the bright boulders, C and S types have different albedos, or reflective properties," said Tatsumi. "But I eagerly await the analysis of the return samples, as this will confirm theories and improve the accuracy of our knowledge about Ryugu. What will be really interesting is knowing how Ryugu differs from meteorites on Earth, as this could in turn tell us something new about the history of Earth and the solar system as a whole."

Ryugu is not the only near-Earth asteroid scientists are currently exploring with probes, though. Another international team under NASA is currently studying the asteroid Bennu with the OSIRIS-REx spacecraft in orbit around it. Tatsumi also collaborates with researchers on that project and the teams share their research findings.

Read more at Science Daily

Water on exoplanet cloud tops could be found with hi-tech instrumentation

 University of Warwick astronomers have shown that water vapour can potentially be detected in the atmospheres of exoplanets by peering literally over the tops of their impenetrable clouds.

By applying the technique to models based upon known cloudy exoplanets, the team has demonstrated in principle that high resolution spectroscopy can be used to examine the atmospheres of exoplanets that were previously too difficult to characterise because their clouds are too dense for sufficient light to pass through.

Their technique is described in a paper for the Monthly Notices of the Royal Astronomical Society and provides another method for detecting the presence of water vapour in an exoplanet's atmosphere -- as well as other chemical species that could be used in future to assess potential signs of life. The research received funding from the Science and Technologies Facilities Council (STFC), part of UK Research and Innovation (UKRI).

Astronomers use light from a planet's host star to learn what its atmosphere is composed of. As the planet passes in front of the star, they observe the transmission of the stellar light as it skims through the upper atmosphere, which alters its spectrum. They can then analyse this spectrum for wavelengths that carry the spectral signatures of specific chemicals. These chemicals, such as water vapour, methane and ammonia, are only present in trace quantities in these hydrogen- and helium-rich planets.

However, dense clouds can block that light from passing through the atmosphere, leaving astronomers with a featureless spectrum. High resolution spectroscopy is a relatively recent technique that is being used in ground-based observatories to observe exoplanets in greater detail, and the Warwick researchers wanted to explore whether this technology could be used to detect the trace chemicals present in the thin atmospheric layer right above those clouds.

While astronomers have been able to characterise the atmospheres of many larger and hotter exoplanets that orbit close to their stars, termed 'hot Jupiters', smaller exoplanets are now being discovered at cooler temperatures (less than 700°C). Many of these planets, which are the size of Neptune or smaller, have shown much thicker cloud.

They modelled two previously known 'warm Neptunes' and simulated how the light from their host stars would be detected by a high resolution spectrograph. GJ3470b is a cloudy planet that astronomers had previously been able to characterise, while GJ436b has been harder to characterise due to a much thicker cloud layer. Both simulations demonstrated that, at high resolution, chemicals such as water vapour, ammonia and methane can be detected easily with just a few nights of observations on a ground-based telescope.
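
The paper's exact analysis pipeline is not described in the article; as a rough, self-contained sketch of the cross-correlation approach generally used in ground-based high resolution spectroscopy, the snippet below Doppler-shifts a toy molecular template across trial velocities and recovers the shift hidden in a noisy simulated spectrum. All wavelengths, line positions and noise levels are made up for illustration.

    # Minimal cross-correlation sketch: many weak spectral lines add up into one peak.
    import numpy as np

    c = 299_792.458  # speed of light, km/s

    # Toy wavelength grid (microns) and a template with a few "water" lines.
    wave = np.linspace(1.40, 1.41, 4000)
    line_centers = [1.4012, 1.4035, 1.4061, 1.4083]
    template = np.sum([np.exp(-0.5 * ((wave - l) / 2e-5) ** 2) for l in line_centers], axis=0)

    def shift(spectrum, velocity_km_s):
        """Return the spectrum Doppler-shifted by the given radial velocity."""
        return np.interp(wave, wave * (1 + velocity_km_s / c), spectrum)

    # Fake "observation": the same lines shifted by +15 km/s, buried in noise.
    true_velocity = 15.0
    observed = shift(template, true_velocity)
    observed += np.random.default_rng(1).normal(0, 0.5, wave.size)

    # Cross-correlate over trial velocities; the peak recovers the planet's shift.
    velocities = np.arange(-100.0, 100.0, 1.0)
    ccf = [np.dot(observed, shift(template, v)) for v in velocities]
    print("best-fit velocity (km/s):", velocities[int(np.argmax(ccf))])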

The technique works differently from the method recently used to detect phosphine on Venus, but could potentially be used to search for any type of molecule in the clouds of a planet outside of our solar system, including phosphine.

Lead author Dr Siddharth Gandhi of the Department of Physics at the University of Warwick said: "We have been investigating whether ground-based high resolution spectroscopy can help us to constrain the altitude in the atmosphere where we have clouds, and constrain chemical abundances despite those clouds.

"What we are seeing is that a lot of these planets have got water vapour on them, and we're starting to see other chemicals as well, but the clouds are preventing us from seeing these molecules clearly. We need a way to detect these species and high resolution spectroscopy is a potential way of doing that, even if there is a cloudy atmosphere.

"The chemical abundances can tell you quite a lot about how the planet may have formed because it leaves its chemical fingerprint on the molecules in the atmosphere. Because these are gas giants, detecting the molecules at the top of the atmosphere also offers a window into the internal structure as the gases mix with the deeper layers."

The majority of observations of exoplanets have been done using space-based telescopes such as Hubble or Spitzer, and their resolution is too low to detect sufficient signal from above the clouds. High resolution spectroscopy's advantage is that it is capable of probing a wider range of altitudes.

Read more at Science Daily

Parkinson's disease is not one, but two diseases

 Although the name may suggest otherwise, Parkinson's disease is not one but two diseases, starting either in the brain or in the intestines. This explains why patients with Parkinson's describe widely differing symptoms, and it points towards personalised medicine as the way forward for people with Parkinson's disease.

This is the conclusion of a study which has just been published in the leading neurology journal Brain.

The researchers behind the study are Professor Per Borghammer and Medical Doctor Jacob Horsager from the Department of Clinical Medicine at Aarhus University and Aarhus University Hospital, Denmark.

"With the help of advanced scanning techniques, we've shown that Parkinson's disease can be divided into two variants, which start in different places in the body. For some patients, the disease starts in the intestines and spreads from there to the brain through neural connections. For others, the disease starts in the brain and spreads to the intestines and other organs such as the heart," explains Per Borghammer.

He also points out that the discovery could be very significant for the treatment of Parkinson's disease in the future, as this ought to be based on the individual patient's disease pattern.

Parkinson's disease is characterised by slow deterioration of the brain due to accumulated alpha-synuclein, a protein that damages nerve cells. This leads to the slow, stiff movements which many people associate with the disease.

In the study, the researchers used advanced PET and MRI imaging techniques to examine people with Parkinson's disease. People who have not yet been diagnosed but are at high risk of developing the disease were also included in the study, among them people diagnosed with REM sleep behaviour disorder, who have an increased risk of developing Parkinson's disease.

The study showed that some patients had damage to the brain's dopamine system before damage in the intestines and heart occurred. In other patients, scans revealed damage to the nervous systems of the intestines and heart before the damage in the brain's dopamine system was visible.

This knowledge is important and it challenges the understanding of Parkinson's disease that has been prevalent until now, says Per Borghammer.

"Until now, many people have viewed the disease as relatively homogeneous and defined it based on the classical movement disorders. But at the same time, we've been puzzled about why there was such a big difference between patient symptoms. With this new knowledge, the different symptoms make more sense and this is also the perspective in which future research should be viewed," he says.

The researchers refer to the two types of Parkinson's disease as body-first and brain-first. In the case of body-first, it may be particularly interesting to study the composition of bacteria in the intestines known as the microbiota.

"It has long since been demonstrated that Parkinson's patients have a different microbiome in the intestines than healthy people, without us truly understanding the significance of this. Now that we're able to identify the two types of Parkinson's disease, we can examine the risk factors and possible genetic factors that may be different for the two types. The next step is to examine whether, for example, body-first Parkinson's disease can be treated by treating the intestines with faeces transplantation or in other ways that affect the microbiome," says Per Borghammer.

"The discovery of brain-first Parkinson's is a bigger challenge. This variant of the disease is probably relatively symptom-free until the movement disorder symptoms appear and the patient is diagnosed with Parkinson's. By then the patient has already lost more than half of the dopamine system, and it will therefore be more difficult to find patients early enough to be able to slow the disease," says Per Borghammer.

The study from Aarhus University is longitudinal, i.e. the participants are called in again after three and six years so that all of the examinations and scans can be repeated. According to Per Borghammer, this makes the study the most comprehensive ever, and it provides researchers with valuable knowledge and clarification about Parkinson's disease -- or diseases.

"Previous studies have indicated that there could be more than one type of Parkinson's, but this has not been demonstrated clearly until this study, which was specifically designed to clarify this question. We now have knowledge that offers hope for better and more targeted treatment of people who are affected by Parkinson's disease in the future," says Per Borghammer.

According to the Danish Parkinson's Disease Association, there are 8,000 people with Parkinson's disease in Denmark and up to eight million diagnosed patients worldwide.

Read more at Science Daily

Sep 21, 2020

Why there is no speed limit in the superfluid universe

 Physicists from Lancaster University have established why objects moving through superfluid helium-3 lack a speed limit in a continuation of earlier Lancaster research.

Helium-3 is a rare isotope of helium, in which one neutron is missing. It becomes superfluid at extremely low temperatures, enabling unusual properties such as a lack of friction for moving objects.

It was thought that the speed of objects moving through superfluid helium-3 was fundamentally limited to the critical Landau velocity, and that exceeding this speed limit would destroy the superfluid. Prior experiments in Lancaster had found that this is not a strict rule and that objects can move at much greater speeds without destroying the fragile superfluid state.

Now scientists from Lancaster University have found the reason for the absence of the speed limit: exotic particles that stick to all surfaces in the superfluid.

The discovery may guide applications in quantum technology, even quantum computing, where multiple research groups already aim to make use of these unusual particles.

To shake the bound particles into sight, the researchers cooled superfluid helium-3 to within one ten-thousandth of a degree of absolute zero (0.0001 K, where absolute zero is -273.15°C). They then moved a wire through the superfluid at high speed and measured how much force was needed to move the wire. Apart from an extremely small force related to moving the bound particles around when the wire starts to move, the measured force was zero.

Lead author Dr Samuli Autti said: "Superfluid helium-3 feels like vacuum to a rod moving through it, although it is a relatively dense liquid. There is no resistance, none at all. I find this very intriguing."

PhD student Ash Jennings added: "By making the rod change its direction of motion we were able to conclude that the rod will be hidden from the superfluid by the bound particles covering it, even when its speed is very high." "The bound particles initially need to move around to achieve this, and that exerts a tiny force on the rod, but once this is done, the force just completely disappears," said Dr Dmitry Zmeev, who supervised the project.

Read more at Science Daily

Astronomers discover an Earth-sized 'pi planet' with a 3.14-day orbit

 In a delightful alignment of astronomy and mathematics, scientists at MIT and elsewhere have discovered a "pi Earth" -- an Earth-sized planet that zips around its star every 3.14 days, in an orbit reminiscent of the universal mathematics constant.

The researchers discovered signals of the planet in data taken in 2017 by the NASA Kepler Space Telescope's K2 mission. By zeroing in on the system earlier this year with SPECULOOS, a network of ground-based telescopes, the team confirmed that the signals were of a planet orbiting its star. And indeed, the planet appears to still be circling its star today, with a pi-like period, every 3.14 days.

"The planet moves like clockwork," says Prajwal Niraula, a graduate student in MIT's Department of Earth, Atmospheric and Planetary Sciences (EAPS), who is the lead author of a paper published today in the Astronomical Journal.

"Everyone needs a bit of fun these days," says co-author Julien de Wit, of both the paper title and the discovery of the pi planet itself.

Planet extraction

The new planet is labeled K2-315b; it's the 315th planetary system discovered within K2 data -- just one system shy of an even more serendipitous place on the list.

The researchers estimate that K2-315b has a radius 0.95 times that of Earth, making it just about Earth-sized. It orbits a cool, low-mass star that is about one-fifth the size of the sun. The planet circles its star every 3.14 days, moving at a blistering 81 kilometers per second, or about 181,000 miles per hour.
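
As a quick sanity check on those figures (and assuming a circular orbit, which the article does not state), the quoted period and speed together pin down how small the orbit must be:

    # Back-of-the-envelope orbit size for K2-315b from its period and speed.
    import math

    period_s = 3.14 * 86400   # orbital period in seconds
    speed_km_s = 81.0         # orbital speed in km/s

    circumference_km = speed_km_s * period_s
    radius_km = circumference_km / (2 * math.pi)
    radius_au = radius_km / 1.496e8  # convert to astronomical units

    # Roughly 3.5 million km, only a few percent of the Earth-Sun distance,
    # consistent with the scorching surface temperature quoted below.
    print(f"orbital radius ~ {radius_km:.2e} km ({radius_au:.3f} AU)")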

While its mass is yet to be determined, scientists suspect that K2-315b is terrestrial, like the Earth. But the pi planet is likely not habitable, as its tight orbit brings the planet close enough to its star to heat its surface up to 450 kelvins, or around 350 degrees Fahrenheit -- perfect, as it turns out, for baking actual pie.

"This would be too hot to be habitable in the common understanding of the phrase," says Niraula, who adds that the excitement around this particular planet, aside from its associations with the mathematical constant pi, is that it may prove a promising candidate for studying the characteristics of its atmosphere.

"We now know we can mine and extract planets from archival data, and hopefully there will be no planets left behind, especially these really important ones that have a high impact," says de Wit, who is an assistant professor in EAPS, and a member of MIT's Kavli Institute for Astrophysics and Space Research.

Niraula and de Wit's MIT co-authors include Benjamin Rackham and Artem Burdanov, along with a team of international collaborators.

Dips in the data

The researchers are members of SPECULOOS (the Search for habitable Planets EClipsing ULtra-cOOl Stars), a project named for its network of four 1-meter telescopes in Chile's Atacama Desert, which scan the sky across the southern hemisphere. Most recently, the network added a fifth telescope, Artemis, the first to be located in the northern hemisphere -- a project spearheaded by researchers at MIT.

The SPECULOOS telescopes are designed to search for Earth-like planets around nearby, ultracool dwarfs -- small, dim stars that offer astronomers a better chance of spotting an orbiting planet and characterizing its atmosphere, as these stars lack the glare of much larger, brighter stars.

"These ultracool dwarfs are scattered all across the sky," Burdanov says. "Targeted ground-based surveys like SPECULOOS are helpful because we can look at these ultracool dwarfs one by one."

In particular, astronomers look at individual stars for signs of transits, or periodic dips in a star's light, that signal a possible planet crossing in front of the star, and briefly blocking its light.

Earlier this year, Niraula came upon a cool dwarf, slightly warmer than the commonly accepted threshold for an ultracool dwarf, in data collected by the K2 campaign -- the Kepler Space Telescope's second observing mission, which monitored slivers of the sky as the spacecraft orbited around the sun.

Over several months in 2017, the Kepler telescope observed a part of the sky that included the cool dwarf, labeled in the K2 data as EPIC 249631677. Niraula combed through this period and found around 20 dips in the light of this star that seemed to repeat every 3.14 days.

The team analyzed the signals, testing different potential astrophysical scenarios for their origin, and confirmed that the signals were likely of a transiting planet, and not a product of some other phenomena such as a binary system of two spiraling stars.

The researchers then planned to get a closer look at the star and its orbiting planet with SPECULOOS. But first, they had to identify a window of time when they would be sure to catch a transit.

"Nailing down the best night to follow up from the ground is a little bit tricky," says Rackham, who developed a forecasting algorithm to predict when a transit might next occur. "Even when you see this 3.14 day signal in the K2 data, there's an uncertainty to that, which adds up with every orbit."

With Rackham's forecasting algorithm, the group narrowed in on several nights in February 2020 during which they were likely to see the planet crossing in front of its star. They then pointed SPECULOOS' telescopes in the direction of the star and were able to see three clear transits: two with the network's Southern Hemisphere telescopes, and the third from Artemis, in the Northern Hemisphere.

The researchers say the new pi planet may be a promising candidate to follow up with the James Webb Space Telescope (JWST), to see details of the planet's atmosphere. For now, the team is looking through other datasets, such as from NASA's TESS mission, and are also directly observing the skies with Artemis and the rest of the SPECULOOS network, for signs of Earthlike planets.

Read more at Science Daily

Researchers discover new molecules for tracking Parkinson's disease

 For many of the 200,000 patients diagnosed with Parkinson's disease in the United States every year, the diagnosis often occurs only after the appearance of severe symptoms such as tremors or speech difficulties. With the goal of recognizing and treating neurological diseases earlier, researchers are looking for new ways to image biological molecules that indicate disease progression before symptoms appear. One such candidate, and a known hallmark of Parkinson's disease, is the formation of clumps of alpha-synuclein protein, and, while this protein was identified more than 20 years ago, a reliable way to track alpha-synuclein aggregates in the brain has yet to be developed.

Now, a new study published in Chemical Science describes an innovative approach for identifying molecules that can help track the progression of Parkinson's disease. Conducted by researchers in the labs of E. James Petersson, Robert Mach, and Virginia Lee, this proof-of-concept study could change the paradigm for how researchers screen and test new molecules for studying a wide range of neurodegenerative diseases.

Studying these types of protein aggregates requires new tracers for positron emission tomography (PET): radioactive molecules that clinicians use to image tissues and organs. Mach, a senior researcher in the field of PET tracer development, and his group worked for several years with the Michael J. Fox Foundation to develop an alpha-synuclein tracer, but without data on the protein's structure they were unable to find candidates that were selective enough to be used as a diagnostic tool.

Then, with the first publication of alpha-synuclein's structure and an increase in tools available from the field of computational chemistry, Mach and Petersson started collaborating on developing an alpha-synuclein PET tracer. By combining their respective expertise in radiochemistry and protein engineering, they were able to confirm experimentally where on the alpha-synuclein protein potential tracer molecules were able to bind, crucial information to help them discover and design molecules that would be specific to alpha-synuclein.

In their latest study, the researchers developed a high-throughput computational method, allowing them to screen millions of candidate molecules, to see which ones will bind to the known binding sites on alpha-synuclein. Building off a previously published method, their approach first identifies an "exemplar," a pseudo-molecule that fits perfectly into the binding site of alpha-synuclein. Then, that exemplar is compared to actual molecules that are commercially available to see which ones have a similar structure. The researchers then use other computer programs to help narrow down the list of candidates for testing in the lab.
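
The authors' exemplar pipeline is not spelled out in the article; as a loose stand-in for the step that compares a reference structure against commercially available compounds, the sketch below ranks a few hypothetical molecules by fingerprint similarity to a reference using RDKit. The SMILES strings and the Tanimoto metric are placeholders for illustration, not the molecules or method from the study.

    # Rank hypothetical candidate molecules by similarity to a reference structure.
    from rdkit import Chem
    from rdkit.Chem import AllChem, DataStructs

    reference_smiles = "c1ccc2c(c1)ccc1ccccc12"   # hypothetical reference scaffold
    candidate_smiles = [
        "c1ccc2c(c1)ccc1ccncc12",                 # hypothetical candidates
        "CCOc1ccc2ccccc2c1",
        "CC(C)Cc1ccccc1",
    ]

    def fingerprint(smiles):
        """Morgan (circular) fingerprint used for the similarity comparison."""
        return AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smiles), 2, nBits=2048)

    ref_fp = fingerprint(reference_smiles)
    ranked = sorted(
        ((DataStructs.TanimotoSimilarity(ref_fp, fingerprint(s)), s) for s in candidate_smiles),
        reverse=True,
    )
    for score, smiles in ranked:
        print(f"{score:.2f}  {smiles}")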

To evaluate the performance of their screening method, the scientists identified a small subset of 20 promising candidates from the 7 million compounds that were screened and found that two had extremely high binding affinity to alpha-synuclein. The researchers also used mouse brain tissues provided by the Lee group to further validate this new method. The researchers were impressed, and pleasantly surprised, by their success rate, which they attribute to the specific nature of their search method. "There's certainly a bit of luck involved as well," Petersson adds, "Probably the biggest surprise is just how well it worked."

The idea of using the exemplar method to tackle this problem came to first author and Ph.D. graduate John "Jack" Ferrie while he was learning computational chemistry methods at the Institute for Protein Design at the University of Washington as part of a Parkinson's Foundation Summer Fellowship. "The summer fellowship is designed to train students in new methods that can be applied to Parkinson's disease research, and that's exactly what happened here," says Petersson. "The ideas that Jack came back with formed the basis of a big effort in both my lab and Bob Mach's lab to identify PET tracers computationally."

Read more at Science Daily

Comet discovered to have its own northern lights

[Image: a mosaic of four NAVCAM images taken from 19 miles (31 kilometers) from the center of comet 67P/Churyumov-Gerasimenko on Nov. 20, 2014; resolution 10 feet (3 meters) per pixel.]

Data from NASA instruments aboard the ESA (European Space Agency) Rosetta mission have helped reveal that comet 67P/Churyumov-Gerasimenko has its own far-ultraviolet aurora. It is the first time such electromagnetic emissions in the far-ultraviolet have been documented on a celestial object other than a planet or moon. A paper on the findings was released today in the journal Nature Astronomy.

On Earth, auroras (also known as the northern or southern lights) are generated when electrically charged particles speeding from the Sun hit the upper atmosphere to create colorful shimmers of green, white, and red. Elsewhere in the solar system, Jupiter and some of its moons -- as well as Saturn, Uranus, Neptune, and even Mars -- have all exhibited their own versions of the northern lights. But the phenomenon had yet to be documented on comets.

Rosetta is space exploration's most traveled and accomplished comet hunter. Launched in 2004, it orbited comet 67P/Churyumov-Gerasimenko (67P/C-G) from Aug. 2014 until its dramatic end-of-mission comet landing in Sept. 2016. The data for this most recent study centers on what mission scientists initially interpreted as "dayglow," a process caused by photons of light interacting with the envelope of gas -- known as the coma -- that radiates from, and surrounds, the comet's nucleus. But new analysis of the data paints a very different picture.

"The glow surrounding 67P/C-G is one of a kind," said Marina Galand of Imperial College London and lead author of the study. "By linking data from numerous Rosetta instruments, we were able to get a better picture of what was going on. This enabled us to unambiguously identify how 67P/C-G's ultraviolet atomic emissions form."

The data indicate 67P/C-G's emissions are actually auroral in nature. Electrons streaming out in the solar wind -- the stream of charged particles flowing out from the Sun -- interact with the gas in the comet's coma, breaking apart water and other molecules. The resulting atoms give off a distinctive far-ultraviolet light. Invisible to the naked eye, far-ultraviolet has the shortest wavelengths of radiation in the ultraviolet spectrum.

Exploring the emission of 67P/C-G will enable scientists to learn how the particles in the solar wind change over time, something that is crucial for understanding space weather throughout the solar system. By providing better information on how the Sun's radiation affects the space environment they must travel through, such knowledge could ultimately help protect satellites and spacecraft, as well as astronauts traveling to the Moon and Mars.

"Rosetta is the gift that keeps on giving," said Paul Feldman, an investigator on Alice at the Johns Hopkins University in Baltimore and a co-author of the paper. "The treasure trove of data it returned over its two-year visit to the comet have allowed us to rewrite the book on these most exotic inhabitants of our solar system -- and by all accounts there is much more to come."

NASA Instruments Aboard ESA's Rosetta

NASA-supplied instruments contributed to this investigation. The Ion and Electron Sensor (IES) instrument detected the amount and energy of electrons near the spacecraft, the Alice instrument measured the ultraviolet light emitted by the aurora, and the Microwave Instrument for the Rosetta Orbiter (MIRO) measured the amount of water molecules around the comet (the MIRO instrument includes contributions from France, Germany, and Taiwan). Other instruments aboard the spacecraft used in the research were the Italian Space Agency's Visible and InfraRed Thermal Imaging Spectrometer (VIRTIS), the Langmuir Probe (LAP) provided by Sweden, and the Rosetta Orbiter Spectrometer for Ion and Neutral Analysis (ROSINA) provided by Switzerland.

Read more at Science Daily

Sep 20, 2020

The Phish Scale: New tool helps IT staff see why users click on fraudulent emails

 Researchers at the National Institute of Standards and Technology (NIST) have developed a new tool called the Phish Scale that could help organizations better train their employees to avoid a particularly dangerous form of cyberattack known as phishing.

By 2021, global cybercrime damages will cost $6 trillion annually, up from $3 trillion in 2015, according to estimates from the 2020 Official Annual Cybercrime Report by Cybersecurity Ventures.

One of the more prevalent types of cybercrime is phishing, a practice where hackers send emails that appear to be from an acquaintance or trustworthy institution. A phishing email (or phish) can tempt users with a variety of scenarios, from the promise of free gift cards to urgent alerts from upper management. If users click on links in a phishing email, the links can take them to websites that could deposit dangerous malware into the organization's computers.

Many organizations have phishing training programs in which employees receive fake phishing emails generated by the employees' own organization to teach them to be vigilant and to recognize the characteristics of actual phishing emails. Chief information security officers (CISOs), who often oversee these phishing awareness programs, then look at the click rates, or how often users click on the emails, to determine if their phishing training is working. Higher click rates are generally seen as bad because it means users failed to notice the email was a phish, while low click rates are often seen as good.

However, numbers alone don't tell the whole story. "The Phish Scale is intended to help provide a deeper understanding of whether a particular phishing email is harder or easier for a particular target audience to detect," said NIST researcher Michelle Steves. The tool can help explain why click rates are high or low.

The Phish Scale uses a rating system based on the message content of a phishing email. This includes cues that should tip users off about the legitimacy of the email, as well as how well the premise of the scenario aligns with the target audience, meaning whether the tactics the email uses would be effective for that audience. These audiences can vary widely, including universities, businesses, hospitals and government agencies.

The new method rates five elements relating to the scenario's premise, each on a 5-point scale. The overall score is then used by the phishing trainer to help analyze the data and rank the phishing exercise as low, medium or high difficulty.
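
NIST's published formula is not reproduced in the article, so the sketch below only illustrates the idea: rate five premise-related elements on a 5-point scale, sum them, and map the total onto a difficulty band. The element names and cut-offs are hypothetical placeholders.

    # Toy difficulty scoring in the spirit of the Phish Scale (not NIST's actual scale).
    DIFFICULTY_BANDS = [(10, "low"), (18, "medium")]  # hypothetical cut-offs; above 18 -> "high"

    def phish_difficulty(element_ratings):
        """Sum five 1-5 ratings and map the total onto a difficulty band."""
        if len(element_ratings) != 5 or not all(1 <= r <= 5 for r in element_ratings.values()):
            raise ValueError("expected five ratings, each between 1 and 5")
        total = sum(element_ratings.values())
        for upper_bound, label in DIFFICULTY_BANDS:
            if total <= upper_bound:
                return label
        return "high"

    # Example: a phishing email whose premise fits the target audience well.
    ratings = {
        "workplace_relevance": 4,
        "alignment_with_expectations": 5,
        "plausibility_of_premise": 4,
        "urgency_or_pressure": 3,
        "prior_exposure_to_similar_phish": 2,
    }
    print(phish_difficulty(ratings))  # -> "medium" with the cut-offs above (total = 18)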

The significance of the Phish Scale is to give CISOs a better understanding of their click-rate data instead of relying on the numbers alone. A low click rate for a particular phishing email can have several causes: The phishing training emails are too easy or do not provide relevant context to the user, or the phishing email is similar to a previous exercise. Data like this can create a false sense of security if click rates are analyzed on their own without understanding the phishing email's difficulty.

By using the Phish Scale to analyze click rates and collecting feedback from users on why they clicked on certain phishing emails, CISOs can better understand their phishing training programs, especially if they are optimized for the intended target audience.

The Phish Scale is the culmination of years of research, and the data used for it comes from an "operational" setting, very much the opposite of a laboratory experiment with controlled variables. "As soon as you put people into a laboratory setting, they know," said Steves. "They're outside of their regular context, their regular work setting, and their regular work responsibilities. That is artificial already. Our data did not come from there."

This type of operational data is both beneficial and in short supply in the research field. "We were very fortunate that we were able to publish that data and contribute to the literature in that way," said NIST researcher Kristen Greene.

As for next steps, Greene and Steves say they need even more data. All of the data used for the Phish Scale came from NIST. The next step is to expand the pool and acquire data from other organizations, including nongovernmental ones, and to make sure the Phish Scale performs as it should over time and in different operational settings. "We know that the phishing threat landscape continues to change," said Greene. "Does the Phish Scale hold up against all the new phishing attacks? How can we improve it with new data?" NIST researcher Shaneé Dawkins and her colleagues are now working to make those improvements and revisions.

In the meantime, the Phish Scale provides a new method for computer security professionals to better understand their organization's phishing click rates, and ultimately improve training so their users are better prepared against real phishing scenarios.

Read more at Science Daily