Apr 30, 2020

New insight into bacterial structure to help fight against superbugs

Scientists from the University of Sheffield have produced the first high-resolution images of the structure of the cell wall of bacteria, in a study that could further understanding of antimicrobial resistance.

The research, published in Nature, revealed a new and unexpected structure in the outer layers of the bacterium Staphylococcus aureus.

The findings set a new framework for understanding how bacteria grow and how antibiotics work, overturning previous theories about the structure of the outer bacterial layers.

The images give unprecedented insight into the composition of the bacterial cell wall and will inform new approaches to developing antibiotics in order to combat antibiotic resistance. There are no other examples of studies of the cell wall in any organism at comparable resolution, down to the molecular scale.

Laia Pasquina Lemonche, a PhD Researcher from the University of Sheffield's Department of Physics and Astronomy, said: "Many antibiotics work by inhibiting the bacteria's production of a cell wall, a strong but permeable skin around the bacteria which is critical for its survival.

"We still don't understand how antibiotics like penicillin kill bacteria, but this isn't surprising because until now we had remarkably little information about the actual organisation of the bacterial cell wall. This study provides that essential stepping stone which we hope will lead to both a better understanding of how antibiotics work and to the future development of new approaches to combat antimicrobial resistance."

The team used an advanced microscopy technique called Atomic Force Microscopy (AFM), which works by using a sharp needle to feel the shape of a surface and build an image similar to a contour map, but at the scale of individual molecules.

Professor Jamie Hobbs, Professor of Physics at the University of Sheffield, said: "It is by physicists and biologists working together that we've been able to make these breakthroughs in our understanding of the bacterial cell wall."

The researchers are now using the same techniques to understand how antibiotics change the architecture of the cell wall and also how changes in the cell wall are important in antimicrobial resistance.

From Science Daily

A new machine learning method streamlines particle accelerator operations

Each year, researchers from around the world visit the Department of Energy's SLAC National Accelerator Laboratory to conduct hundreds of experiments in chemistry, materials science, biology and energy research at the Linac Coherent Light Source (LCLS) X-ray laser. LCLS creates ultrabright X-rays from high-energy beams of electrons produced in a giant linear particle accelerator.

Experiments at LCLS run around the clock, in two 12-hour shifts per day. At the start of each shift, operators must tweak the accelerator's performance to prepare the X-ray beam for the next experiment. Sometimes, additional tweaking is needed during a shift as well. In the past, operators have spent hundreds of hours each year on this task, called accelerator tuning.

Now, SLAC researchers have developed a new tool, using machine learning, that may make part of the tuning process five times faster compared to previous methods. They described the method in Physical Review Letters on March 25.

Tuning the beam

Producing LCLS's powerful X-ray beam starts with the preparation of a high-quality electron beam. Some of the electrons' energy then gets converted into X-ray light inside special magnets. The properties of the electron beam, which needs to be dense and tightly focused, are a critical factor in how good the X-ray beam will be.

"Even a small difference in the density of the electron beam can have a huge difference in the amount of X-rays you get out at the end," says Daniel Ratner, head of SLAC's machine learning initiative and a member of the team that developed the new technique.

The accelerator uses a series of 24 special magnets, called quadrupole magnets, to focus the electron beam similarly to how glass lenses focus light. Traditionally, human operators carefully turned knobs to adjust individual magnets between shifts to make sure the accelerator was producing the X-ray beam needed for a particular experiment. This process took up a lot of the operators' time -- time they could spend on other important tasks that improve the beam for experiments.

A few years ago, LCLS operators adopted a computer algorithm that automated and sped up this magnet tuning. However, it came with its own disadvantages. It aimed at improving the X-ray beam by making random adjustments to the magnets' strengths. But unlike human operators, this algorithm had no prior knowledge of the accelerator's structure and couldn't make educated guesses in its tuning that might have ultimately led to even better results.

This is why SLAC researchers decided to develop a new algorithm that combines machine learning -- "smart" computer programs that learn how to get better over time -- with knowledge about the physics of the accelerator.

"The machine learning approach is trying to tie this all together to give operators better tools so that they can focus on other important problems," says Joseph Duris, a SLAC scientist who led the new study.

A better beam, faster

The new approach uses a technique called a Gaussian process, which predicts the effect a particular accelerator adjustment has on the quality of the X-ray beam. It also generates uncertainties for its predictions. The algorithm then decides which adjustments to try for the biggest improvements.

For example, it may decide to try a dramatic adjustment whose outcome is very uncertain but could lead to a big payoff. That means this new, adventurous algorithm has a better chance than the previous algorithm of making the tweaks needed to create the best possible X-ray beam.

The SLAC researchers also used data from previous LCLS operations to teach the algorithm which magnet strengths have typically led to brighter X-rays, giving the algorithm a way of making educated guesses about the adjustments it should try. This equips the algorithm with knowledge and expertise that human operators naturally have, and that the previous algorithm lacked.
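To make this concrete, below is a minimal, hypothetical sketch of a Gaussian-process tuning loop of the kind described above, written in Python with scikit-learn. It is not the SLAC code: the single "magnet strength" knob, the measure_beam_intensity() stand-in for a beam diagnostic, the seed points standing in for archived operating data, and the upper-confidence-bound selection rule are all illustrative assumptions.

    # Illustrative sketch (not the SLAC tool): Bayesian optimization of one
    # "magnet strength" with a Gaussian process. The model predicts beam
    # intensity and its uncertainty, and an upper-confidence-bound rule picks
    # the next setting to try, trading exploitation against exploration.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    def measure_beam_intensity(strength):
        # Hypothetical stand-in for reading back X-ray pulse energy;
        # here a noisy synthetic peak centered at 0.3.
        return np.exp(-(strength - 0.3) ** 2 / 0.05) + 0.01 * np.random.randn()

    # Seed the model with a few "historical" settings, mimicking the use of
    # archived operating data to give the optimizer an informed starting point.
    X = np.array([[-1.0], [0.0], [1.0]])
    y = np.array([measure_beam_intensity(x[0]) for x in X])

    kernel = RBF(length_scale=0.5) + WhiteKernel(noise_level=1e-3)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)

    candidates = np.linspace(-1.5, 1.5, 301).reshape(-1, 1)
    for step in range(20):
        gp.fit(X, y)
        mean, std = gp.predict(candidates, return_std=True)
        # Favor settings that are either predicted to be good or still
        # very uncertain (the "adventurous" adjustments described above).
        ucb = mean + 2.0 * std
        x_next = candidates[np.argmax(ucb)]
        y_next = measure_beam_intensity(x_next[0])
        X = np.vstack([X, x_next.reshape(1, 1)])
        y = np.append(y, y_next)

    best = X[np.argmax(y)][0]
    print(f"best strength found: {best:.3f}, intensity: {y.max():.3f}")

In the real system the search runs over many coupled quadrupole strengths at once, and the kernel and priors are built from accelerator physics and years of logged tuning data rather than the toy peak used here.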

"We can rely on that physics knowledge, that institutional knowledge, in order to improve the predictions," Duris says.

Insights into the magnets' relationships to each other also improved the technique. The quadrupole magnets work in pairs, and to increase their focusing power, the strength of one magnet in a pair must be increased while the other's is decreased.

With the new process, tuning the quadrupole magnets has become about three to five times faster, the researchers estimate. It also tends to produce higher-intensity beams than the previously used algorithm.

"Our ability to increase our tuning efficiency is really, really critical to being able to deliver a beam faster and with better quality to people who are coming from all over the world to run experiments," says Jane Shtalenkova, an accelerator operator at SLAC who worked with Duris, Ratner and others to develop the new tool.

Beyond LCLS

The same method can be extended to tune other electron or X-ray beam properties that scientists may want to optimize for their experiments. For example, researchers could apply the technique to maximize the signal they get out of their sample after it's hit by LCLS's X-ray beam.

This flexibility also makes the new algorithm useful for other facilities.

Read more at Science Daily

Computational techniques explore 'the dark side of amyloid aggregation in the brain'

As physicians and families know too well, though Alzheimer's disease has been intensely studied for decades, too much is still not known about molecular processes in the brain that cause it. Now researchers at the University of Massachusetts Amherst say new insights from analytic theory and molecular simulation techniques offer a better understanding of amyloid fibril growth and brain pathology.

As senior author Jianhan Chen notes, the "amyloid hypothesis" was promising -- amyloid protein fibrils are a central feature in Alzheimer's, Parkinson's disease and other neurodegenerative diseases. "But the process is really difficult to study," he says. "For many years people thought the fibril might be the harmful factor in the brain. But after billions of dollars of investment failed to deliver an Alzheimer's drug, that thinking is really questioned. We now believe that the fibril is not the toxic species, but it's the earlier forms, soluble oligomers or proto-fibrils. That's what we wanted to study."

Chen and first author Zhiguang Jia, a research scientist in Chen's computational biophysics lab, explored how building-block peptides form fibrils. "We are really proud of this work because, to the best of our knowledge, for the first time we have described the comprehensive process of how fibril growth can happen. We illustrate that the effects of disease-causing mutations often arise from the cumulative effects of many small perturbations. A comprehensive description is absolutely critical to generate reliable and testable hypotheses," he adds. Details of their multi-scale approach with many atomistic simulations are in Proceedings of the National Academy of Sciences.

Chen adds, "The process is slow and very complex. All the nonproductive pathways are usually hidden and have never been described in a comprehensive and quantitative fashion. It is like the dark side of the moon."

Chen says their model is "parameter-free and purely based on physics, with no fitting or assumptions needed. We provide a complete description of the process and the physics just comes out naturally. It's really satisfying; we feel it's a real breakthrough."

He and Jia focused first on how peptides in disordered solution behave. The process starts with peptides in a partially unfolded conformation, Chen notes. They describe both productive and non-productive aggregation processes and point out that non-productive ones can take a very long time to disengage from interactions. "It's like hiking in the woods without a path," Chen says. "It's like a maze. And if one peptide takes a mistaken pathway, it has to start over and retry many, many times."

A key insight was to account for these many non-productive pathways -- too many possibilities -- that slow movement and cause a "kinetic bottleneck," he says. Another important insight, Chen points out, is that the "energy landscape," as biophysicists call it, is crucial. Despite their great complexity, "usual" structured proteins fold quickly because the underlying energy landscape is well structured to support quick, efficient folding.

By contrast, fibril growth occurs in a "really flat" energy landscape, he adds. "There are many, many mistakes before you fall into the hole that will lead to fibril formation." Biochemists call it "unguided search," he says, adding that "bumbling" is a good way to describe it.

Modeling and characterizing such unguided systems are extremely difficult, the biophysicist notes. "To use a simulation to predict the process, you need a complete description of the whole maze or you can never grasp it, and this is almost impossible. To describe comprehensively the search space, you must compromise resolution of peptide modeling. When you limit the resolution of the model, you'll not be able to faithfully capture the impacts of disease mutations, for example."

He says these conflicting requirements -- resolution and completeness -- must be satisfied at the same time. "Our approach is the first to satisfy both. This is one of our technical breakthroughs," Chen says.

Read more at Science Daily

Reduced obesity for weighted-vest wearers

Scientists from the University of Gothenburg, Sweden, have found a new method of reducing human body weight and fat mass using weighted vests. The new study indicates that there is something comparable to built-in bathroom scales that contributes to keeping our body weight and, by the same token, fat mass constant.

The researchers hypothesized that loading the vests with weights would result in a compensatory body-weight decrease. Sixty-nine people with a body mass index (BMI) of 30-35, the lowest obesity category, took part in the clinical study. Their instructions were to wear a weighted vest eight hours a day for three weeks, and otherwise live as usual.

All the study participants wore weighted vests but, by drawing of lots, they were assigned to one of two groups. The control group wore only light vests weighing 1 kg, while the treatment group wore heavy vests weighing some 11 kg. When the three weeks had passed, the experimental subjects who wore the heavier vests had lost 1.6 kg in weight, while those wearing the light vests had lost 0.3 kg.

"We think it's very interesting that the treatment with the heavier weighted vests reduced fat mass while muscle mass simultaneously remained intact," says Professor Claes Ohlsson of Sahlgrenska Academy, University of Gothenburg.

"The effect on fat mass we found, from this short experiment, exceeded what's usually observed after various forms of physical training. But we weren't able to determine whether the reduction was in subcutaneous fat (just under the skin) or the dangerous visceral kind (belly fat) in the abdominal cavity that's most strongly associated with cardiovascular diseases and diabetes," says Professor John-Olov Jansson of Sahlgrenska Academy, University of Gothenburg.

In previous animal studies published in 2018, the scientists showed that there is an energy balance system that endeavors to keep body weight constant: the "gravitostat," as they have dubbed it. In mice, this regulation takes place partly by influencing appetite. To work, the system must contain a kind of personal weighing machine. The researchers' new clinical study shows that similar built-in scales exist in humans as well.

If people do a lot of sitting, what seems to happen is that the reading on our personal scales falls too low. This may explain why sitting is so clearly associated with obesity and ill-health. Weighted vests can raise the reading on the scales, resulting in weight loss.

Many questions about how the gravitostat works remain for the researchers to answer. Aspects they want to study include whether, in wearers of weighted vests, changed energy expenditure, appetite and mobility help them to lose weight. The scientists also want to see whether the weight reduction continues for the vest wearers over periods longer than three weeks, and whether the dangerous visceral fat is reduced by the treatment.

From Science Daily

Apr 29, 2020

Evidence of Late Pleistocene human colonization of isolated islands beyond Wallace's Line

A new article published in Nature Communications applies stable isotope analysis to a collection of fossil human teeth from the islands of Timor and Alor in Wallacea to study the ecological adaptations of the earliest members of our species to reach this isolated part of the world. Because the Wallacean islands are considered extreme, resource poor settings, archaeologists believed that early seafaring populations would have moved rapidly through this region without establishing permanent communities. Nevertheless, this has so far been difficult to test.

This study, led by scientists from the Department of Archaeology, Max Planck Institute for the Science of Human History (MPI SHH), alongside colleagues from the Australian National University and Universitas Gadjah Mada, used an isotopic methodology that reveals the resources consumed by humans during the period of tooth formation. They demonstrate that the earliest human fossil so far found in the region, dating to around 42,000-39,000 years ago, relied upon coastal resources. Yet, from 20,000 years ago, humans show an increasing reliance on tropical forest environments, away from the island coasts. The results support the idea that one distinguishing characteristic of Homo sapiens is high ecological flexibility, especially when compared to other hominins known from the same region.

Pleistocene hominin adaptations in Southeast Asia

Over the last two decades, archaeological evidence from deserts, high-altitude settings, tropical rainforests, and maritime habitats has increasingly suggested that Late Pleistocene humans rapidly adapted to a number of extreme environments. By contrast, our closest hominin relatives, such as Homo erectus and Neanderthals, apparently used various mixtures of forests and grasslands, albeit from as far apart as the Levant, Siberia, and Java. However, this apparent distinction needs testing, especially as remains of another closely related hominin, the Denisovans, have been found on the high-altitude Tibetan Plateau.

As one of the corresponding authors on the new paper, Sue O'Connor of the Australian National University says, "The islands beyond Wallace's Line are ideal places to test the adaptive differences between our species and other hominins. These islands were never connected to mainland Southeast Asia during the Pleistocene, and would have ensured that hominins had to make water crossings to reach them." Tropical forest settings like those in Wallacea are often considered barriers to human expansion and are a far cry from the sweeping 'savannahs' with an abundance of medium to large mammals that hominins are believed to have relied on.

Fossils and stone tools show that hominins made it to Wallacean islands at least one million years ago, including the famous 'Hobbit,' or Homo floresiensis, on the island of Flores. When our own species arrived 45,000 years ago (or perhaps earlier), it is thought to have quickly developed the specialized use of marine habitats, as evidenced by one of the world's earliest fish hooks found in the region. Nevertheless, as co-author Ceri Shipton puts it, "the extent of this maritime adaptation has remained hotly debated and difficult to test using snapshots based on often poorly preserved animal remains."

Stable isotope analysis and Late Pleistocene humans

This new paper uses stable carbon isotopes measured from fossil human teeth to directly reconstruct the long-term diets of past populations. Although this method has been used to study the diets and environments of African hominins for nearly half a century, it has thus far been scarcely applied to the earliest members of our own species expanding within and beyond Africa. Using the principle 'you are what you eat,' researchers analyzed powdered hominin tooth enamel from 26 individuals dated between 42,000 and 1,000 years ago to explore the types of resources they consumed during tooth formation.
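As a brief aside for readers unfamiliar with the method, the sketch below shows how a delta-13C value is derived from a measured 13C/12C ratio; it is illustrative only, not the paper's workflow. The reference ratio used here is a commonly quoted value for the VPDB standard, and the interpretation in the study (less negative enamel values pointing toward marine or coastal resources, more negative values toward C3 tropical-forest resources) relies on calibrated regional baselines rather than any fixed cutoff.

    # A minimal sketch, assuming a commonly quoted VPDB reference ratio:
    # converting a measured 13C/12C ratio into a delta-13C value in per mil.
    R_VPDB = 0.0112372  # 13C/12C of the VPDB reference standard (commonly quoted value)

    def delta13C(r_sample):
        """delta-13C in per mil relative to VPDB."""
        return (r_sample / R_VPDB - 1.0) * 1000.0

    print(round(delta13C(0.0110800), 1))  # about -14.0 per mil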

The new paper shows that the earliest human fossil available from the region, excavated from the site of Asitau Kuru on Timor, was indeed reliant on maritime resources, suggesting a well-tuned adaptation to the colonization of coastal areas. "This fits with our existing models of rapid human movement through Wallacea on the way to Australia," says co-author Shimona Kealy of the Australian National University.

From around 20,000 years ago, however, human diets seem to have switched inland, towards the supposedly impoverished resources of the island forests. Although some individuals maintained the use of coastal habitats, the majority seemingly began to adapt to the populations of small mammals and tropical forest plants in the region. As co-author Mahirta at Universitas Gadjah Mada puts it, "Coastal resources such as shellfish and reef fish are easy to exploit and available year-round, however growing populations likely forced early island occupants to look inland to other resources."

A species defined by flexibility

This study provides the first direct insights into the adaptations of our own species as it settled in a series of challenging island environments in Wallacea. "Early human populations here, and elsewhere, could not only successfully use the enormous variety of often-extreme Pleistocene environments," suggests Patrick Roberts, lead author of the study and Group Leader at MPI SHH, "they could also specialize in them over substantial periods of time. As a result, even if some local populations did fail, the species as a whole would go on to become tremendously prolific."

Read more at Science Daily

Asteroid 1998 OR2 to safely fly past Earth this week

A large near-Earth asteroid will safely pass by our planet on Wednesday morning, providing astronomers with an exceptional opportunity to study the 1.5-mile-wide (2-kilometer-wide) object in great detail.

The asteroid, called 1998 OR2, will make its closest approach at 5:55 a.m. EDT (2:55 a.m. PDT). While this is known as a "close approach" by astronomers, it's still very far away: The asteroid will get no closer than about 3.9 million miles (6.3 million kilometers), passing more than 16 times farther away than the Moon.

Asteroid 1998 OR2 was discovered by the Near-Earth Asteroid Tracking program at NASA's Jet Propulsion Laboratory in July 1998, and for the past two decades astronomers have tracked it. As a result, we understand its orbital trajectory very precisely, and we can say with confidence that this asteroid poses no possibility of impact for at least the next 200 years. Its next close approach to Earth will occur in 2079, when it will pass by closer -- only about four times the lunar distance.

Despite this, 1998 OR2 is still categorized as a large "potentially hazardous asteroid" because, over the course of millennia, very slight changes in the asteroid's orbit may cause it to present more of a hazard to Earth than it does now. This is one of the reasons why tracking this asteroid during its close approach -- using telescopes and especially ground-based radar -- is important, as observations such as these will enable an even better long-term assessment of the hazard presented by this asteroid.

Close approaches by large asteroids like 1998 OR2 are quite rare. The previous close approach by a large asteroid was made by asteroid Florence in September 2017. That 3-mile-wide (5-kilometer-wide) object zoomed past Earth at 18 lunar distances. On average, we expect asteroids of this size to fly by our planet this close roughly once every five years.

Since they are bigger, asteroids of this size reflect much more light than smaller asteroids and are therefore easier to detect with telescopes. Almost all near-Earth asteroids (about 98%) of the size of 1998 OR2 or larger have already been discovered, tracked and cataloged. It is extremely unlikely there could be an impact over the next century by one of these large asteroids, but efforts to discover all asteroids that could pose an impact hazard to Earth continue.

Read more at Science Daily

Newly discovered exoplanet dethrones former king of Kepler-88 planetary system

Our solar system has a king. The planet Jupiter, named for the king of the gods in Roman mythology, has bossed around the other planets through its gravitational influence. With roughly three times the mass of Saturn and about 300 times that of Earth, Jupiter makes its slightest movement felt by all the other planets. Jupiter is thought to be responsible for the small size of Mars, the presence of the asteroid belt, and a cascade of comets that delivered water to young Earth.

Do other planetary systems have gravitational gods like Jupiter?

A team of astronomers led by the University of Hawaii Institute for Astronomy (UH IfA) has discovered a planet three times the mass of Jupiter in a distant planetary system.

The discovery is based on six years of data taken at W. M. Keck Observatory on Maunakea in Hawaii. Using the High-Resolution Echelle Spectrometer (HIRES) instrument on the 10-meter Keck I telescope, the team confirmed that the planet, named Kepler-88 d, orbits its star every four years, and its orbit is not circular, but elliptical. At three times the mass of Jupiter, Kepler-88 d is the most massive planet in this system.

The system, Kepler-88, was already famous among astronomers for two planets that orbit much closer to the star, Kepler-88 b and c (planets are typically named alphabetically in the order of their discovery).

Those two planets have a bizarre and striking dynamic called mean motion resonance. The sub-Neptune sized planet b orbits the star in just 11 days, which is almost exactly half the 22-day orbital period of planet c, a Jupiter-mass planet. The clockwork-like nature of their orbits is energetically efficient, like a parent pushing a child on a swing. Every two laps planet b makes around the star, it gets pumped. The outer planet, Kepler-88 c, is twenty times more massive than planet b, and so its force results in dramatic changes in the orbital timing of the inner planet.

Astronomers observed these changes, called transit timing variations, with the NASA Kepler space telescope, which detected the precise times when Kepler-88 b crossed (or transited) between the star and the telescope. Although transit timing variations (TTVs for short) have been detected in a few dozen planetary systems, Kepler-88 b has some of the largest timing variations. With transits arriving up to half a day early or late, the system is known as "the King of TTVs."

The newly discovered planet adds another dimension to astronomers' understanding of the system.

"At three times the mass of Jupiter, Kepler-88 d has likely been even more influential in the history of the Kepler-88 system than the so-called King, Kepler-88 c, which is only one Jupiter mass," says Dr. Lauren Weiss, Beatrice Watson Parrent Postdoctoral Fellow at UH IfA and lead author on the discovery team. "So maybe Kepler-88 d is the new supreme monarch of this planetary empire -- the empress."

Read more at Science Daily

Spitzer telescope reveals the precise timing of a black hole dance

This image shows two massive black holes in the OJ 287 galaxy. The smaller black hole orbits the larger one, which is also surrounded by a disk of gas. When the smaller black hole crashes through the disk, it produces a flare brighter than 1 trillion stars.

Black holes aren't stationary in space; in fact, they can be quite active in their movements. But because they are completely dark and can't be observed directly, they're not easy to study. Scientists have finally figured out the precise timing of a complicated dance between two enormous black holes, revealing hidden details about the physical characteristics of these mysterious cosmic objects.

The OJ 287 galaxy hosts one of the largest black holes ever found, with over 18 billion times the mass of our Sun. Orbiting this behemoth is another black hole with about 150 million times the Sun's mass. Twice every 12 years, the smaller black hole crashes through the enormous disk of gas surrounding its larger companion, creating a flash of light brighter than a trillion stars -- brighter, even, than the entire Milky Way galaxy. The light takes 3.5 billion years to reach Earth.

But the smaller black hole's orbit is oblong, not circular, and it's irregular: It shifts position with each loop around the bigger black hole and is tilted relative to the disk of gas. When the smaller black hole crashes through the disk, it creates two expanding bubbles of hot gas that move away from the disk in opposite directions, and in less than 48 hours the system appears to quadruple in brightness.

Because of the irregular orbit, the black hole collides with the disk at different times during each 12-year orbit. Sometimes the flares appear as little as one year apart; other times, as much as 10 years apart. Attempts to model the orbit and predict when the flares would occur took decades, but in 2010, scientists created a model that could predict their occurrence to within about one to three weeks. They demonstrated that their model was correct by predicting the appearance of a flare in December 2015 to within three weeks.

Then, in 2018, a group of scientists led by Lankeswar Dey, a graduate student at the Tata Institute of Fundamental Research in Mumbai, India, published a paper with an even more detailed model they claimed would be able to predict the timing of future flares to within four hours. In a new study published in the Astrophysical Journal Letters, those scientists report that their accurate prediction of a flare that occurred on July 31, 2019, confirms the model is correct.

The observation of that flare almost didn't happen. Because OJ 287 was on the opposite side of the Sun from Earth, out of view of all telescopes on the ground and in Earth orbit, the black hole wouldn't come back into view of those telescopes until early September, long after the flare had faded. But the system was within view of NASA's Spitzer Space Telescope, which the agency retired in January 2020.

After 16 years of operations, the spacecraft's orbit had placed it 158 million miles (254 million kilometers) from Earth, or more than 600 times the distance between Earth and the Moon. From this vantage point, Spitzer could observe the system from July 31 (the same day the flare was expected to appear) to early September, when OJ 287 would become observable to telescopes on Earth.

"When I first checked the visibility of OJ 287, I was shocked to find that it became visible to Spitzer right on the day when the next flare was predicted to occur," said Seppo Laine, an associate staff scientist at Caltech/IPAC in Pasadena, California, who oversaw Spitzer's observations of the system. "It was extremely fortunate that we would be able to capture the peak of this flare with Spitzer, because no other human-made instruments were capable of achieving this feat at that specific point in time."

Ripples in Space

Scientists regularly model the orbits of small objects in our solar system, like a comet looping around the Sun, taking into account the factors that will most significantly influence their motion. For that comet, the Sun's gravity is usually the dominant force, but the gravitational pull of nearby planets can change its path, too.

Determining the motion of two enormous black holes is much more complex. Scientists must account for factors that might not noticeably impact smaller objects; chief among them are something called gravitational waves. Einstein's theory of general relativity describes gravity as the warping of space by an object's mass. When an object moves through space, the distortions turn into waves. Einstein predicted the existence of gravitational waves in 1916, but they weren't observed directly until 2015 by the Laser Interferometer Gravitational Wave Observatory (LIGO).

The larger an object's mass, the larger and more energetic the gravitational waves it creates. In the OJ 287 system, scientists expect the gravitational waves to be so large that they can carry enough energy away from the system to measurably alter the smaller black hole's orbit -- and therefore the timing of the flares.

While previous studies of OJ 287 have accounted for gravitational waves, the 2018 model is the most detailed yet. By incorporating information gathered from LIGO's detections of gravitational waves, it refines the window in which a flare is expected to occur to just 1 1/2 days.

To further refine the prediction of the flares to just four hours, the scientists folded in details about the larger black hole's physical characteristics. Specifically, the new model incorporates something called the "no-hair" theorem of black holes.

Published in the 1960s by a group of physicists that included Stephen Hawking, the theorem makes a prediction about the nature of black hole "surfaces." While black holes don't have true surfaces, scientists know there is a boundary around them beyond which nothing -- not even light -- can escape. Some ideas posit that the outer edge, called the event horizon, could be bumpy or irregular, but the no-hair theorem posits that the "surface" has no such features, not even hair (the theorem's name was a joke).

In other words, if one were to cut the black hole down the middle along its rotational axis, the surface would be symmetric. (The Earth's rotational axis is almost perfectly aligned with its North and South Poles. If you cut the planet in half along that axis and compared the two halves, you would find that our planet is mostly symmetric, though features like oceans and mountains create some small variations between the halves.)

Finding Symmetry

In the 1970s, Caltech professor emeritus Kip Thorne described how this scenario -- a satellite orbiting a massive black hole -- could potentially reveal whether the black hole's surface was smooth or bumpy. By correctly anticipating the smaller black hole's orbit with such precision, the new model supports the no-hair theorem, meaning our basic understanding of these incredibly strange cosmic objects is correct. The OJ 287 system, in other words, supports the idea that black hole surfaces are symmetric along their rotational axes.

So how does the smoothness of the massive black hole's surface impact the timing of the smaller black hole's orbit? That orbit is determined mostly by the mass of the larger black hole. If it grew more massive or shed some of its heft, that would change the size of the smaller black hole's orbit. But the distribution of mass matters as well. A massive bulge on one side of the larger black hole would distort the space around it differently than if the black hole were symmetric. That would then alter the smaller black hole's path as it orbits its companion and measurably change the timing of the black hole's collision with the disk on that particular orbit.

"It is important to black hole scientists that we prove or disprove the no-hair theorem. Without it, we cannot trust that black holes as envisaged by Hawking and others exist at all," said Mauri Valtonen, an astrophysicist at University of Turku in Finland and a coauthor on the paper.

Read more at Science Daily

Apr 28, 2020

TAMA300 blazes trail for improved gravitational wave astronomy

Researchers at the National Astronomical Observatory of Japan (NAOJ) have used the infrastructure of the former TAMA300 gravitational wave detector in Mitaka, Tokyo to demonstrate a new technique to reduce quantum noise in detectors. This new technique will help increase the sensitivity of the detectors comprising a collaborative worldwide gravitational wave network, allowing them to observe fainter waves.

When it began observations in 2000, TAMA300 was one of the world's first large-scale interferometric gravitational wave detectors. At that time TAMA300 had the highest sensitivity in the world, setting an upper limit on the strength of gravitational wave signals; but the first detection of actual gravitational waves was made 15 years later in 2015 by LIGO. Since then detector technology has improved to the point that modern detectors are observing several signals per month. The scientific results obtained from these observations are already impressive and many more are expected in the next decades. TAMA300 is no longer participating in observations, but is still active, acting as a testbed for new technologies to improve other detectors.

The sensitivity of current and future gravitational wave detectors is limited at almost all frequencies by quantum noise caused by the effects of vacuum fluctuations of the electromagnetic fields. But even this inherent quantum noise can be sidestepped. It is possible to manipulate the vacuum fluctuations to redistribute the quantum uncertainties, decreasing one type of noise at the expense of increasing a different, less obstructive type of noise. This technique, known as vacuum squeezing, has already been implemented in gravitational wave detectors, greatly increasing their sensitivity to higher frequency gravitational waves. But the optomechanical interaction between the electromagnetic field and the mirrors of the detector causes the effects of vacuum squeezing to change depending on the frequency. So at low frequencies vacuum squeezing increases the wrong type of noise, actually degrading sensitivity.

To overcome this limitation and achieve reduced noise at all frequencies, a team at NAOJ composed of members of the in-house Gravitational Wave Science Project and the KAGRA collaboration (but also including researchers of the Virgo and GEO collaborations) has recently demonstrated the feasibility of a technique known as frequency dependent vacuum squeezing at the frequencies useful for gravitational wave detectors. Because the detector itself interacts with the electromagnetic fields differently depending on the frequency, the team used the infrastructure of the former TAMA300 detector to create a squeezed field that itself varies with frequency. A normal (frequency independent) squeezed vacuum field is reflected off a 300-m-long optical cavity, such that a frequency dependence is imprinted and it is able to counteract the optomechanical effect of the interferometer.

Read more at Science Daily

Red-flagging misinformation could slow the spread of fake news on social media

The dissemination of fake news on social media is a pernicious trend with dire implications for the 2020 presidential election. Indeed, research shows that public engagement with spurious news is greater than with legitimate news from mainstream sources, making social media a powerful channel for propaganda.

A new study on the spread of disinformation reveals that pairing headlines with credibility alerts from fact-checkers, the public, news media and even AI can reduce people's intention to share. However, the effectiveness of these alerts varies with political orientation and gender. The good news for truth seekers? Official fact-checking sources are overwhelmingly trusted.

The study, led by Nasir Memon, professor of computer science and engineering at the New York University Tandon School of Engineering and Sameer Patil, visiting research professor at NYU Tandon and assistant professor in the Luddy School of Informatics, Computing, and Engineering at Indiana University Bloomington, goes further, examining the effectiveness of a specific set of inaccuracy notifications designed to alert readers to news headlines that are inaccurate or untrue.

The work, "Effects of Credibility Indicators on Social Media News Sharing Intent," published in the Proceedings of the 2020 ACM CHI Conference on Human Factors in Computing Systems, involved an online study of around 1,500 individuals to measure the effectiveness among different groups of four so-called "credibility indicators" displayed beneath headlines:

  • Fact Checkers: "Multiple fact-checking journalists dispute the credibility of this news"
  • News Media: "Major news outlets dispute the credibility of this news"
  • Public: "A majority of Americans disputes the credibility of this news"
  • AI: "Computer algorithms using AI dispute the credibility of this news"

"We wanted to discover whether social media users were less apt to share fake news when it was accompanied by one of these indicators and whether different types of credibility indicators exhibit different levels of influence on people's sharing intent," says Memon. "But we also wanted to measure the extent to which demographic and contextual factors like age, gender, and political affiliation impact the effectiveness of these indicators."

Participants -- over 1,500 U.S. residents -- saw a sequence of 12 true, false, or satirical news headlines. Only the false or satirical headlines included a credibility indicator below the headline in red font. For all of the headlines, respondents were asked if they would share the corresponding article with friends on social media, and why.

"Upon initial inspection, we found that political ideology and affiliation were highly correlated to responses and that the strength of individuals' political alignments made no difference, whether Republican or Democrat," says Memon. "The indicators impacted everyone regardless of political orientation, but the impact on Democrats was much larger compared to the other two groups."

The most effective of the credibility indicators, by far, was Fact Checkers: Study respondents intended to share 43% fewer non-true headlines with this indicator versus 25%, 22%, and 22% for the "News Media," "Public," and "AI" indicators, respectively.
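As a purely illustrative aside, a "percent fewer" figure of this kind can be read as the relative drop in sharing rate with an indicator versus without one; the counts below are made up, and the assumption that the baseline is a no-indicator control condition is ours, not a detail taken from the paper.

    # Hypothetical counts, not the study's data: computing a "percent fewer
    # non-true headlines shared" figure relative to a no-indicator baseline.
    def share_rate(shared, shown):
        return shared / shown

    control = share_rate(shared=300, shown=1000)        # headline shown with no indicator
    fact_checkers = share_rate(shared=171, shown=1000)  # headline shown with "Fact Checkers" indicator

    reduction = (control - fact_checkers) / control
    print(f"{reduction:.0%} fewer non-true headlines shared")  # -> 43% fewer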

Effects of Political Affiliation

The team found a strong correlation between political affiliation and the propensity of each of the credibility indicators to influence intention to share. In fact, the AI credibility indicator actually induced Republicans to increase their intention to share non-true news:

  • Democrats intended to share 61% fewer non-true headlines with the Fact Checkers indicator (versus 40% for Independents and 19% for Republicans)
  • Democrats intended to share 36% fewer non-true headlines with the News Media indicator (versus 29% for Independents and 4.5% for Republicans)
  • Democrats intended to share 37% fewer non-true headlines with the Public indicator, (versus 17% for Independents and 6.7% for Republicans)
  • Democrats intended to share 40% fewer non-true headlines with the AI indicator (versus 16% for Independents)
  • Republicans intended to share 8.1% more non-true news with the AI indicator

Republicans are less likely to be influenced by credibility indicators and more inclined to share fake news on social media.

Patil says that while fact-checkers are the most effective kind of indicator, regardless of political affiliation and gender, fact-checking is very labor-intensive. He says the team was surprised by the fact that Republicans were more inclined to share news that was flagged as not credible using the AI indicator.

"We were not expecting that, although conservatives may tend to trust more traditional means of flagging the veracity of news," he says, adding that the team will next examine how to make the most effective credibility indicator -- fact-checkers -- efficient enough to handle the scale inherent in today's news climate.

"This could include applying fact checks to only the most-needed content, which might involve applying natural language algorithms. So, it is a question, broadly speaking, of how humans and AI could co-exist," he explains.

The team also found that males intended to share non-true headlines one and a half times more than females, with the differences largest for the Public and News Media indicators.

Men are less likely to be influenced by credibility indicators and more inclined to share fake news on social media. But indicators, especially those from fact-checkers, reduce intention to share fake news across the board.

Read more at Science Daily

How mistakes help us recognize things

We learned it as children: to cross the street in exemplary fashion, we must first look to the left, then to the right, and finally once more to the left. If we see a car and a cyclist approaching when we first look to the left, this information is stored in our short-term memory. During the second glance to the left, our short-term memory reports: bicycle and car were there before, they are the same ones, they are still far enough away. We cross the street safely.

This is, however, not at all true. Our short-term memory deceives us. When looking to the left the second time, our eyes see something completely different: the bicycle and the car do not have the same colour anymore because they are just now passing through the shadow of a tree, they are no longer in the same location, and the car is perhaps moving more slowly. The fact that we nonetheless immediately recognise the bicycle and the car is due to the fact that the memory of the first leftward look biases the second one.

Scientists at Goethe University, led by psychologist Christoph Bledowski and doctoral student Cora Fischer reconstructed the traffic situation -- very abstractly -- in the laboratory: student participants were told to remember the motion direction of green or red dots moving across a monitor. During each trial, the test person saw two moving dot fields in short succession and had to subsequently report the motion direction of one of these dot fields. In additional tests, both dot fields were shown simultaneously next to each other. The test persons all completed numerous successive trials.

The Frankfurt scientists were very interested in the mistakes made by the test persons and how these mistakes were systematically connected in successive trials. If for example the observed dots moved in the direction of 10 degrees and in the following trial in the direction of 20 degrees, most people reported 16 to 18 degrees for the second trial. However, if 0 degrees were correct for the following trial, they reported 2 to 4 degrees for the second trial. The direction of the previous trial therefore distorted the perception of the following one -- "not very much, but systematically," says Christoph Bledowski.

He and his team extended previous studies by investigating the influence of contextual information of the dot fields like colour, spatial position (right or left) and sequence (shown first or second). "In this way we more closely approximate real situations, in which we acquire different types of visual information from objects," Bledowski explains. This contextual information, especially space and sequence, contributes significantly to the distortion of successive perception in short-term memory. First author Cora Fischer says: "The contextual information helps us to differentiate among different objects and consequently to integrate information of the same object through time."
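For a concrete picture, the bias pattern reported above can be mimicked with a toy model in which each report is pulled a small fraction of the way toward the previous trial's direction. This sketch is an illustration, not the authors' analysis; the pull strength k, the noise level, and the function names are assumed.

    # Toy model (not the study's analysis): the reported direction is pulled
    # a fraction k toward the previous trial's direction, plus response noise.
    # With k = 0.3 this reproduces biases of a few degrees, e.g. ~17 degrees
    # reported when 20 degrees follows 10 degrees. Circular wrap-around of
    # angles is ignored for simplicity.
    import numpy as np

    rng = np.random.default_rng(0)

    def reported_direction(true_deg, previous_deg, k=0.3, noise_sd=1.0):
        return true_deg + k * (previous_deg - true_deg) + rng.normal(0.0, noise_sd)

    print(reported_direction(true_deg=20, previous_deg=10))  # around 17 degrees
    print(reported_direction(true_deg=0, previous_deg=10))   # around 3 degrees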

Read more at Science Daily

How the heart affects our perception

The first mechanism establishes a relationship between the phase of the heartbeat and conscious experience. In a regular rhythm, the heart contracts in the so-called systolic phase and pumps blood into the body. In a second phase, the diastolic phase, the blood flows back and the heart fills up again. In a previous publication from the MPI CBS, it was reported that perception of external stimuli changes with the heartbeat. In systole, we are less likely to detect a weak electric stimulus in the finger compared to diastole.

Now, in a new study, Esra Al and colleagues have found the reason for this change in perception: brain activity changes over the heart cycle. In systole, a specific component of brain activity associated with consciousness, the so-called P300 component, is suppressed. In other words, it seems that -- in systole -- the brain makes sure that certain information is kept out of conscious experience. The brain seems to take into account the pulse which floods the body in systole and predicts that pulse-associated bodily changes are "not real" but rather due to the pulse. Normally, this helps us to not be constantly disturbed by our pulse. However, when it comes to weak stimuli which coincide with systole, we might miss them, although they are real.

During their investigations on heart-brain interactions, Al and colleagues also revealed a second effect of heartbeat on perception: If a person's brain shows a higher response to the heartbeat, the processing of the stimulus in the brain is attenuated -- the person detects the stimulus less. "This seems to be a result of directing our attention between external environmental signals and internal bodily signals," explains study author Al. In other words, a large heartbeat-evoked potential seems to reflect a "state of mind" in which we are more focused on the functioning of our inner organs, such as the blood circulation, but less aware of stimuli from the outside world.

The results have implications for our understanding of heart-brain interactions not only in healthy persons, but also in patients. The senior author, Arno Villringer, explains: "The new results might help to explain why patients after stroke often suffer from cardiac problems and why patients with cardiac disease often have impaired cognitive function."

Read more at Science Daily

Apr 27, 2020

Spread of early dairy farming across Western Europe

A study has tracked the shift from hunter-gatherer lifestyles to early farming that occurred in prehistoric Europe over a period of around 1,500 years.

An international team of scientists, led by researchers at the University of York, analysed the molecular remains of food left in pottery used by the first farmers who settled along the Atlantic Coast of Europe from 7,000 to 6,000 years ago.

The researchers report evidence of dairy products in 80% of the pottery fragments from the Atlantic coast of what is now Britain and Ireland. In comparison, dairy farming on the Southern Atlantic coast of what is now Portugal and Spain seems to have been much less intensive, and with a greater use of sheep and goats rather than cows.

The study confirms that the earliest farmers to arrive on the Southern Atlantic coast exploited animals for their milk but suggests that dairying only really took off when it spread to northern latitudes, with progressively more dairy products processed in ceramic vessels.

Prehistoric farmers colonising Northern areas with harsher climates may have had a greater need for the nutritional benefits of milk, including vitamin D and fat, the authors of the study suggest.

Senior author of the paper, Professor Oliver Craig from the Department of Archaeology at the University of York, said: "Latitudinal differences in the scale of dairy production might also be important for understanding the evolution of adult lactase persistence across Europe. Today, the genetic change that allows adults to digest the lactose in milk is at much higher frequency in Northwestern Europeans than their southern counterparts."

The research team examined organic residues preserved in Early Neolithic pottery from 24 archaeological sites situated between Portugal and Normandy as well as in the Western Baltic.

They found surprisingly little evidence for marine foods in pottery even from sites located close to the Atlantic shoreline, with plenty of opportunities for fishing and shellfish gathering. An exception was in the Western Baltic where dairy foods and marine foods were both prepared in pottery.

Lead author of the paper, Dr Miriam Cubas, said: "This surprising discovery could mean that many prehistoric farmers shunned marine foods in favour of dairy, but perhaps fish and shellfish were simply processed in other ways."

Read more at Science Daily

'Elegant' solution reveals how the universe got its structure

The universe is full of billions of galaxies -- but their distribution across space is far from uniform. Why do we see so much structure in the universe today and how did it all form and grow?

A 10-year survey of tens of thousands of galaxies made using the Magellan Baade Telescope at Carnegie's Las Campanas Observatory in Chile provided a new approach to answering this fundamental mystery. The results, led by Carnegie's Daniel Kelson, are published in Monthly Notices of the Royal Astronomical Society.

"How do you describe the indescribable?" asks Kelson. "By taking an entirely new approach to the problem."

"Our tactic provides new -- and intuitive -- insights into how gravity drove the growth of structure from the universe's earliest times," said co-author Andrew Benson. "This is a direct, observation-based test of one of the pillars of cosmology."

The Carnegie-Spitzer-IMACS Redshift Survey was designed to study the relationship between galaxy growth and the surrounding environment over the last 9 billion years, when modern galaxies' appearances were defined.

The first galaxies were formed a few hundred million years after the Big Bang, which started the universe as a hot, murky soup of extremely energetic particles. As this material expanded outward from the initial explosion, it cooled, and the particles coalesced into neutral hydrogen gas. Some patches were denser than others and, eventually, their gravity overcame the universe's outward trajectory and the material collapsed inward, forming the first clumps of structure in the cosmos.

The density differences that allowed for structures both large and small to form in some places and not in others have been a longstanding topic of fascination. But until now, astronomers' abilities to model how structure grew in the universe over the last 13 billion years faced mathematical limitations.

"The gravitational interactions occurring between all the particles in the universe are too complex to explain with simple mathematics," Benson said.

So, astronomers either used mathematical approximations -- which compromised the accuracy of their models -- or large computer simulations that numerically model all the interactions between galaxies, but not all the interactions occurring between all of the particles, which was considered too complicated.

"A key goal of our survey was to count up the mass present in stars found in an enormous selection of distant galaxies and then use this information to formulate a new approach to understanding how structure formed in the universe," Kelson explained.

The research team -- which also included Carnegie's Louis Abramson, Shannon Patel, Stephen Shectman, Alan Dressler, Patrick McCarthy, and John S. Mulchaey, as well as Rik Williams, now of Uber Technologies -- demonstrated for the first time that the growth of individual proto-structures can be calculated and then averaged over all of space.

Doing this revealed that denser clumps grew faster, and less-dense clumps grew more slowly.

They were then able to work backward and determine the original distributions and growth rates of the fluctuations in density, which would eventually become the large-scale structures that determined the distributions of galaxies we see today.

In essence, their work provided a simple, yet accurate, description of why and how density fluctuations grow the way they do in the real universe, as well as in the computational-based work that underpins our understanding of the universe's infancy.

"And it's just so simple, with a real elegance to it," added Kelson.

The findings would not have been possible without the allocation of an extraordinary number of observing nights at Las Campanas.

"Many institutions wouldn't have had the capacity to take on a project of this scope on their own," said Observatories Director John Mulchaey. "But thanks to our Magellan Telescopes, we were able to execute this survey and create this novel approach to answering a classic question."

Read more at Science Daily

Disappearance of animal species takes mental, cultural and material toll on humans

For thousands of years, indigenous hunting societies have subsisted on specific animals for their survival. How have these hunter-gatherers been affected when these animals migrate or go extinct?

To answer this and other questions, Tel Aviv University (TAU) researchers conducted a broad survey of several hunter-gatherer societies across history in a retrospective study published on January 30 in Time and Mind. The study, led by Eyal Halfon and Prof. Ran Barkai of TAU's Department of Archeology and Ancient Near Eastern Cultures, sheds new light on the deep, multidimensional connection between humans and animals.

"There has been much discussion of the impact of people on the disappearance of animal species, mostly through hunting," explains Halfon. "But we flipped the issue to discover how the disappearance of animals -- either through extinction or migration -- has affected people."

The research reveals that these societies expressed a deep emotional and psychological connection with the animal species they hunted, especially after their disappearance. The study will help anthropologists and others understand the profound environmental changes taking place in our own lifetimes.

Halfon and Prof. Barkai conducted a survey of different historical periods and geographical locations, focusing on hunter-gatherer societies that hunted animals as the basis for their subsistence. They also investigated situations in which these animals became extinct or moved to more hospitable regions as a result of climate change.

"We found that humans reacted to the loss of the animal they hunted -- a significant partner in deep, varied and fundamental ways," Halfon says.

The new research explores hunter-gatherer societies throughout human history, from those dating back hundreds of thousands of years to modern-day societies that still function much the way prehistoric groups did. Ten case studies illustrate the deep connection -- existential, physical, spiritual and emotional -- between humans and animals they hunted.

"Many hunter-gatherer populations were based on one type of animal that provided many necessities such as food, clothing, tools and fuel," Prof. Barkai says. "For example, until 400,000 years ago prehistoric humans in Israel hunted elephants. Up to 40,000 years ago, residents of Northern Siberia hunted the woolly mammoth. When these animals disappeared from those areas, this had major ramifications for humans, who needed to respond and adapt to a new situation. Some had to completely change their way of life to survive."

According to the study, human groups adapted in different ways. Siberian residents seeking sustenance after the disappearance of mammoths migrated east and became the first settlers of Alaska and northern Canada. Cave dwellers in central Israel's Qesem Cave (excavated by Prof. Barkai) hunted fallow deer, far smaller than elephants, which required agility and social connections instead of robust physical strength. This necessitated far-reaching changes in their material and social culture and, subsequently, physical structure.

Halfon stresses the emotional reaction to an animal group's disappearance. "Humans felt deeply connected to the animals they hunted, considering them partners in nature, and appreciating them for the livelihood and sustenance they provided," he says. "We believe they never forgot these animals -- even long after they disappeared from the landscape."

An intriguing example of this kind of memory can be found in engravings from the Late Paleolithic period in Europe, which feature animals like mammoths and seals. Studies show that most of these depictions were created long after these two animals disappeared from the vicinity.

"These depictions reflect a simple human emotion we all know very well: longing," says Halfon. "Early humans remembered the animals that disappeared and perpetuated them, just like a poet who writes a song about his beloved who left him."

Read more at Science Daily

Scientists unveil how general anesthesia works

Hailed as one of the most important medical advances, the discovery of general anesthetics -- compounds which induce unconsciousness, prevent control of movement and block pain -- helped transform dangerous and traumatic operations into safe and routine surgery. But despite their importance, scientists still don't understand exactly how general anesthetics work.

Now, in a study published this week in the Journal of Neuroscience, researchers from the Okinawa Institute of Science and Technology Graduate University (OIST) and Nagoya University have revealed how a commonly used general anesthetic called isoflurane weakens the transmission of electrical signals between neurons, at junctions called synapses.

"Importantly, we found that isoflurane did not block the transmission of all electrical signals equally; the anesthetic had the strongest effect on higher frequency impulses that are required for functions such as cognition or movement, whilst it had minimal effect on low frequency impulses that control life-supporting functions, such as breathing," said Professor Tomoyuki Takahashi, who leads the Cellular and Molecular Synaptic Function (CMSF) Unit at OIST. "This explains how isoflurane is able to cause anesthesia, by preferentially blocking the high frequency signals."

At synapses, signals are sent by presynaptic neurons and received by postsynaptic neurons. At most synapses, communication occurs via chemical messengers -- or neurotransmitters.

When an electrical nerve impulse, or action potential, arrives at the end of the presynaptic neuron, it causes synaptic vesicles -- tiny membrane 'packets' that contain neurotransmitters -- to fuse with the terminal membrane, releasing the neurotransmitters into the gap between neurons. When the postsynaptic neuron senses enough neurotransmitter, a new action potential is triggered in that neuron.

The CMSF unit used rat brain slices to study a giant synapse called the calyx of Held. The scientists induced electrical signals at different frequencies and then detected the action potentials generated in the postsynaptic neuron. They found that as they increased the frequency of electrical signals, isoflurane had a stronger effect on blocking transmission.

To corroborate his unit's findings, Takahashi reached out to Dr. Takayuki Yamashita, a researcher from Nagoya University who conducted experiments on synapses, called cortico-cortical synapses, in the brains of living mice.

Yamashita found that the anesthetic affected cortico-cortical synapses in a similar way to the calyx of Held. When the mice were anesthetized using isoflurane, high frequency transmission was strongly reduced whilst there was less effect on low frequency transmission.

"These experiments both confirmed how isoflurane acts as a general anesthetic," said Takahashi. "But we wanted to understand what underlying mechanisms isoflurane targets to weaken synapses in this frequency-dependent manner."

Tracking down the targets

Digging deeper, the researchers found that isoflurane reduced the amount of neurotransmitter released, both by lowering the probability that vesicles are released and by reducing the maximum number of vesicles that can be released at a time.

The scientists therefore examined whether isoflurane affected calcium ion channels, which are key in the process of vesicle release. When action potentials arrive at the presynaptic terminal, calcium ion channels in the membrane open, allowing calcium ions to flood in. Synaptic vesicles then detect this rise in calcium, and they fuse with the membrane. The researchers found that isoflurane lowered calcium influx by blocking calcium ion channels, which in turn reduced the probability of vesicle release.

"However, this mechanism alone could not explain how isoflurane reduces the number of releasable vesicles, or the frequency-dependent nature of isoflurane's effect," said Takahashi.

The scientists hypothesized that isoflurane could reduce the number of releasable vesicles either by directly blocking exocytosis, the process of vesicle release, or by indirectly blocking vesicle recycling, in which vesicles are reformed by endocytosis and then refilled with neurotransmitter, ready to be released again.

By electrically measuring the changes in the surface area of the presynaptic terminal membrane, which is increased by exocytosis and decreased by endocytosis, the scientists concluded that isoflurane only affected vesicle release by exocytosis, likely by blocking exocytic machinery.

"Crucially, we found that this block only had a major effect on high frequency signals, suggesting that this block on exocytic machinery is the key to isoflurane's anesthetizing effect," said Takahashi.

The scientists proposed that high frequency action potentials trigger such a massive influx of calcium into the presynaptic terminal that isoflurane cannot effectively reduce the calcium concentration. Synaptic strength is therefore weakened predominantly by the direct block of exocytic machinery rather than a reduced probability of vesicle release.

Meanwhile, low frequency impulses trigger less exocytosis, so isoflurane's block on the exocytic machinery has little effect. And although isoflurane still reduces calcium entry into the presynaptic terminal, the resulting drop in vesicle release probability is not, by itself, powerful enough to block postsynaptic action potentials at the calyx of Held, and it has only a minor effect at cortico-cortical synapses. Low frequency transmission is therefore maintained.
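To make that frequency-dependence argument concrete, here is a deliberately crude toy calculation (all numbers are invented for illustration; this is not a model or data from the paper). It treats release per impulse as the product of the releasable vesicle pool and the calcium-dependent release probability, then applies isoflurane's two effects -- a modest drop in release probability, and a pool reduction that only bites when exocytosis is heavy -- to a low-frequency and a high-frequency case:

    # Toy numbers only: invented to restate the qualitative logic above,
    # not measurements or parameters from the study.

    AP_THRESHOLD = 40  # hypothetical vesicles per impulse needed to fire the postsynaptic cell

    def vesicles_released(pool, p_release):
        # Release per impulse = releasable pool size x release probability.
        return pool * p_release

    cases = {
        # (releasable pool, release probability)
        "low frequency, control":     (100, 0.50),
        "high frequency, control":    (100, 0.90),  # calcium builds up, probability near saturation
        "low frequency, isoflurane":  ( 95, 0.45),  # calcium-channel block lowers probability;
                                                    # little exocytosis, so the pool is barely touched
        "high frequency, isoflurane": ( 40, 0.85),  # calcium influx is too large for the probability
                                                    # to drop much, but the exocytosis block caps the pool
    }

    for name, (pool, p) in cases.items():
        released = vesicles_released(pool, p)
        outcome = "postsynaptic spike" if released >= AP_THRESHOLD else "transmission fails"
        print(f"{name}: {released:.0f} vesicles -> {outcome}")

With these invented numbers, low-frequency transmission survives isoflurane while high-frequency transmission fails -- the same qualitative picture the experiments point to.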

Overall, the series of experiments provides compelling evidence for how isoflurane weakens synapses to induce anesthesia.

Read more at Science Daily

Apr 26, 2020

How hearing loss in old age affects the brain

If your hearing deteriorates in old age, the risk of dementia and cognitive decline increases. So far, it hasn't been clear why. A team of neuroscientists has examined what happens in the brain when hearing gradually deteriorates: key areas of the brain are reorganized, and this affects memory.

Daniela Beckmann, Mirko Feldmann, Olena Shchyglo and Professor Denise Manahan-Vaughan from the Department of Neurophysiology of the Medical Faculty collaborated on the study.

When sensory perception fades

The researchers studied the brains of mice that exhibit hereditary hearing loss, similar to age-related hearing loss in humans. The scientists analysed the density of neurotransmitter receptors in the brain that are crucial for memory formation. They also researched the extent to which information storage in the brain's most important memory organ, the hippocampus, was affected.

Adaptability of the brain suffers

Memory is enabled by a process called synaptic plasticity. In the hippocampus, synaptic plasticity was chronically impaired by progressive hearing loss. The distribution and density of neurotransmitter receptors in sensory and memory regions of the brain also changed constantly. The stronger the hearing impairment, the poorer were both synaptic plasticity and memory ability.

"Our results provide new insights into the putative cause of the relationship between cognitive decline and age-related hearing loss in humans," said Denise Manahan-Vaughan. "We believe that the constant changes in neurotransmitter receptor expression caused by progressive hearing loss create shifting sands at the level of sensory information processing that prevent the hippocampus from working effectively," she adds.

From Science Daily

Insects: Largest study to date finds declines on land, but recoveries in freshwater

A worldwide compilation of long-term insect abundance studies shows that the number of land-dwelling insects is in decline. On average, there is a global decrease of 0.92% per year, which translates to approximately 24% over 30 years. At the same time, the number of insects living in freshwater, such as midges and mayflies, has increased on average by 1.08% each year. This is possibly due to effective water protection policies. Despite these overall averages, local trends are highly variable, and areas that have been less impacted by humans appear to have weaker trends. These are the results from the largest study of insect change to date, including 1676 sites across the world, now published in the journal Science.

The study was led by researchers from the German Centre for Integrative Biodiversity Research (iDiv), Leipzig University (UL) and Martin Luther University Halle-Wittenberg (MLU). It fills key knowledge gaps in the context of the much-discussed issue of "insect declines."

Over the past few years, a number of studies have been published that show dramatic declines in insect numbers through time. The most prominent, from nature reserves in Western Germany, suggested remarkable declines of flying insect biomass (>75% decrease over 27 years). This was published in 2017 and sparked a media storm suggesting a widespread "insect apocalypse." Since then, there have been several follow-up publications from different places across the world, most showing strong declines, others less so, and some even showing increases. But so far, no one has combined the available data on insect abundance trends across the globe to investigate just how widespread and severe insect declines are. Until now.

Largest data compilation to date

An international team of scientists collaborated to compile data from 166 long-term surveys performed at 1676 sites worldwide, between 1925 and 2018, to investigate trends in insect abundances (number of individuals, not species). The complex analysis revealed a high variation in trends, even among nearby sites. For example, in countries where many insect surveys have taken place, such as Germany, the UK and the US, some places experienced declines while others quite close by indicated no changes, or even increases. However, when all of the trends across the world were combined, the researchers were able to estimate how total insect abundances were changing on average across time. They found that for terrestrial insects (insects that spend their whole lives on land, like butterflies, grasshoppers and ants), there was an average decrease of 0.92% per year.
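As a rough illustration of what "combining trends" can look like in practice, the sketch below fits a log-linear trend to each site's abundance time series and then averages the per-site slopes. This is a hedged illustration only: the sites, numbers and the simple unweighted averaging are invented, and this is not the statistical model used in the Science paper.

    import numpy as np

    def annual_trend_percent(years, abundances):
        # Fit log(abundance) ~ year; a slope of b per year corresponds to
        # (exp(b) - 1) * 100 percent change per year.
        slope, _intercept = np.polyfit(years, np.log(abundances), 1)
        return (np.exp(slope) - 1) * 100

    # Toy time series for three hypothetical sites (invented data)
    sites = {
        "declining site":  (np.arange(1990, 2019), 100 * 0.98  ** np.arange(29)),
        "stable site":     (np.arange(2000, 2019),  50 * 1.005 ** np.arange(19)),
        "increasing site": (np.arange(1995, 2019),  80 * 1.02  ** np.arange(24)),
    }

    per_site = {name: annual_trend_percent(y, a) for name, (y, a) in sites.items()}
    for name, trend in per_site.items():
        print(f"{name}: {trend:+.2f}% per year")
    print(f"simple average across sites: {np.mean(list(per_site.values())):+.2f}% per year")

The published analysis is of course far more careful than this sketch, but the quantity it reports is of the same kind: an average annual rate of change across many sites.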

Insects disappear quietly

First author Dr Roel van Klink, a scientist at iDiv and UL, said: "0.92% may not sound like much, but in fact it means 24% fewer insects in 30 years' time and 50% fewer over 75 years. Insect declines happen in a quiet way and we don't take notice from one year to the next. It's like going back to the place where you grew up. It's only because you haven't been there for years that you suddenly realise how much has changed, and all too often not for the better."
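The compounding behind those figures is easy to verify; an annual rate r applied over n years gives a cumulative change of (1 + r)^n - 1. The snippet below is just a quick arithmetic check, not part of the study, and it also reproduces the 38% freshwater increase quoted further down.

    def cumulative_change(annual_rate_percent, years):
        # Compound an annual percentage rate over a number of years and
        # return the total percentage change.
        return ((1 + annual_rate_percent / 100) ** years - 1) * 100

    print(cumulative_change(-0.92, 30))  # about -24  (terrestrial insects over 30 years)
    print(cumulative_change(-0.92, 75))  # about -50  (terrestrial insects over 75 years)
    print(cumulative_change(+1.08, 30))  # about +38  (freshwater insects over 30 years)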

Insect declines were strongest in some parts of the US (West and Midwest) and in Europe, particularly in Germany. For Europe in general, trends became on average more negative over time, with the strongest declines since 2005.

Fewer insects in the air

When reporting about "insect decline," the mass media have often referred to the "windscreen phenomenon": people's perception that there are fewer insects being splattered on the windscreens of their cars now compared to some decades ago. The new study confirms this observation, at least on average. Last author Jonathan Chase, professor at iDiv and MLU, said: "Many insects can fly, and it's those that get smashed by car windshields. Our analysis shows that flying insects have indeed decreased on average. However, the majority of insects are less conspicuous and live out of sight -- in the soil, in tree canopies or in the water."

For the new study, the researchers also analysed data from many of these hidden habitats. This showed that on average, there are fewer insects living in the grass and on the ground today than in the past -- similar to the flying insects. By contrast, the number of insects living in tree canopies has, on average, remained largely unchanged.

Freshwater insects have recovered

At the same time, studies of insects that live (part of) their lives under water, like midges and mayflies, showed an average annual increase of 1.08%. This corresponds to a 38% increase over 30 years. This positive trend was particularly strong in Northern Europe, in the Western US and, since the early 1990s, in Russia. For Jonathan Chase this is a good sign. He said: "These numbers show that we can reverse these negative trends. Over the past 50 years, several measures have been taken to clean up our polluted rivers and lakes in many places across the world. This may have allowed the recovery of many freshwater insect populations. It makes us hopeful that we can reverse the trend for populations that are currently declining."

Roel van Klink added: "Insect populations are like logs of wood that are pushed under water. They want to come up, while we keep pushing them further down. But we can reduce the pressure so they can rise again. The freshwater insects have shown us this is possible. It's just not always easy to identify the causes of declines, and thus the most effective measures to reverse them. And these may also differ between locations."

No simple solutions

Ann Swengel, co-author of the study, has spent the last 34 years studying butterfly populations across hundreds of sites in Wisconsin and nearby states in the US. She stresses how complex the observed abundance trends are and what they mean for effective conservation management: "We've seen so much decline, including on many protected sites. But we've also observed some sites where butterflies are continuing to do well. It takes lots of years and lots of data to understand both the failures and the successes, species by species and site by site. A lot is beyond the control of any one person, but the choices we each make in each individual site really do matter."

Habitat destruction most likely causes insect declines

Although the scientists were unable to say for certain exactly why such trends -- both negative and positive -- emerged, they were able to point to a few possibilities. Most importantly, they found that destruction of natural habitats -- particularly through urbanisation -- is associated with the declines of terrestrial insects. Other reports, such as the IPBES Global Assessment, also noted that land-use change and habitat destruction are a main cause of global biodiversity change.

Read more at Science Daily