Aug 14, 2021

For trees, carbs are key to surviving insect defoliation, study finds

A recent multi-year outbreak of an invasive moth killed thousands of acres of oak trees across southern New England. But interspersed among the wreckage were thousands of trees that survived. A new study published today in Functional Ecology sheds light on why. Research by scientists from Harvard, UMass Amherst, Boston University, and MIT reveals that a tree's carbohydrate reserves are crucial to surviving an onslaught of hungry caterpillars.

The biology of trees makes them resilient to even the most severe stressors. "Oak trees are planners, in a way," says Meghan Blumstein, NSF Post-doctoral Research Fellow at MIT and a co-author of the study. "Some of the food they make during the growing season is used immediately for energy and some is stored in the stems and roots for a rainy day. With stored carbs, they are able to immediately create a new flush of leaves after an insect outbreak."

But trees are not invincible, and the new study reveals the specific threshold of reserves necessary for them to survive: 1.5 percent carbohydrates in their dried wood -- or about 20-25% of their normal storage capacity. The repeated emergence of Lymantria dispar (an insect formerly known as "gypsy moth") from 2016 to 2018 challenged trees' resilience by defoliating them year after year.
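The two figures above imply a "normal" reserve level that the article doesn't state directly: if 1.5 percent of dry wood represents 20-25% of normal storage capacity, normal reserves work out to roughly 6-7.5% of dry wood. A quick sketch of that arithmetic (the implied range is an inference from the quoted numbers, not a figure reported in the study):

```python
# Back-of-envelope check of the survival threshold, using the numbers
# quoted above. The implied "normal" reserve level is an inference.
threshold_pct = 1.5                # carbs as % of dried wood at the survival threshold
fraction_of_normal = (0.20, 0.25)  # threshold as a fraction of normal storage capacity

# Implied normal reserve level, in % of dried wood
normal_range = tuple(threshold_pct / f for f in fraction_of_normal)
print(f"Implied normal reserves: {normal_range[1]:.1f}-{normal_range[0]:.1f}% of dry wood")
```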

"The trees that died were the trees that were out of reserves," says lead author Audrey Barker Plotkin, a Senior Scientist at the Harvard Forest. But the location of the trees mattered, too. The research team found that trees growing along forest edges tended to have more reserves, even at the same level of defoliation, making them more resilient than interior forest trees. The research team posits that forest edge trees may have simply experienced less severe defoliation in the years before 2018. And, because edge trees get lots of light, they may also be able to rebound without drawing down their reserves as much as their interior forest counterparts.

The new study provides direct evidence -- lacking until now -- that trees can indeed starve to death when insects invade. This more nuanced understanding will help improve forest resilience models as new pests and a shifting climate continue to drive change in the region.

Read more at Science Daily

17-year study of children associates poverty with smaller, slower-growing subcortical regions

Children in poverty are more likely to have cognitive and behavioral difficulties than their better-off peers. Plenty of past research has looked into the physical effects of childhood poverty, or documented mental health disparities between socioeconomic classes. But Deanna Barch, chair and professor in the Department of Psychological & Brain Sciences in Arts & Sciences at Washington University in St. Louis, and her colleague Joan Luby, MD, the Samuel and Mae S. Ludwig Professor of Child Psychiatry in the School of Medicine, wanted to look at a suite of outcomes to determine whether poverty continues to affect people as they enter adulthood.

And if so, how?

To answer these questions, Luby and Barch, who is also a professor of radiology and the Gregory B. Couch Professor of Psychiatry in the School of Medicine, and colleagues collected data for 17 years from families who agreed to participate, including 216 preschoolers who were followed through early adulthood. During the course of the study, the young participants underwent brain imaging to help tease out the role of their socioeconomic status in preschool, and provided information on a host of outcomes -- including cognitive, social and psychiatric -- in early adulthood.

The results were published July 14 in the journal Biological Psychiatry: Cognitive Neuroscience and Neuroimaging.

"First and foremost: yes," Barch said. "Early poverty sadly continues to predict worse outcomes in all of these domains." That holds true even if a child's socioeconomic status changes before adulthood.

The risks for these outcomes, the research showed, are mediated through brain development.

"We think poverty and all of the things associated with it" -- such as stress, inadequate nutrition and less access to health care -- "impact brain development," she said. "If we can prevent poverty, we can help circumvent some of these negative outcomes."

For the study, the researchers recruited primary caregivers and their 3- to 5-year-old children, using a screening questionnaire designed to oversample children with elevated symptoms of depression. This would later allow the researchers to separate the effects of poverty from those of existing psychological disorders.

The children were interviewed annually, and once they were at least 16, researchers tested them for cognitive function, psychiatric disorders, high-risk behaviors, educational function and social function. During the 17 years, the participants also received five brain scans that measured the volumes of local and global brain matter, giving the researchers a unique insight into whether brain development was a mediating factor -- are changes to the brain the way that poverty "gets into" someone?

After controlling for variables including preschool psychopathology and any significant life events throughout the years, the researchers were able to show socioeconomic status in preschool was associated with cognitive function, high-risk behaviors, social function and educational function 13+ years after the then-children joined the study.

Brain-scan results showed the physical marks of poverty.

The children who were living below the poverty level as preschoolers had smaller volumes of certain subcortical brain regions, including the hippocampus, caudate, putamen, and thalamus. "But also they had less growth in these regions over time," Barch said. "So they're starting out smaller and not growing as much."

Subcortical regions aren't a prime research target because they are not necessarily responsible for a specific cognitive or emotional function. Instead, information must travel through them in order to reach regions of the brain associated with higher-order functioning.

"The thalamus, for example, doesn't always get a lot of love in the literature," Barch said, "but it's a very important relay structure that helps coordinate the transfer of information from the brainstem to higher-order cortical areas.

"These brain regions are like important waypoints on the highway of the brain," Barch said. And they are particularly sensitive to environmental factors such as pollutants or poor nutrition, factors more likely to affect those living in poverty.

To be clear, this data does not paint a deterministic picture. "Plenty of kids have wonderful outcomes despite growing up in poverty," Barch said. That is often because they have had additional support and additional resources. She's putting this theory to the test in upcoming research where she and her colleagues will be tracking the effects of the child tax credit on children's development.

"Growing up in poverty makes things harder for people, but it is preventable," Barch said. "That's the good news: We can do something about this."

Read more at Science Daily

Aug 13, 2021

Black hole size revealed by its eating pattern

The feeding patterns of black holes offer insight into their size, researchers report. A new study revealed that the flickering in the brightness observed in actively feeding supermassive black holes is related to their mass.

Supermassive black holes are millions to billions of times more massive than the sun and usually reside at the center of massive galaxies. When dormant and not feeding on the gas and stars surrounding them, SMBHs emit very little light; the only way astronomers can detect them is through their gravitational influences on stars and gas in their vicinity. However, in the early universe, when SMBHs were rapidly growing, they were actively feeding -- or accreting -- materials at intensive rates and emitting an enormous amount of radiation -- sometimes outshining the entire galaxy in which they reside, the researchers said.

The new study, led by the University of Illinois Urbana-Champaign astronomy graduate student Colin Burke and professor Yue Shen, uncovered a definitive relationship between the mass of actively feeding SMBHs and the characteristic timescale in the light-flickering pattern. The findings are published in the journal Science.

The observed light from an accreting SMBH is not constant. Due to physical processes that are not yet understood, it displays a ubiquitous flickering over timescales ranging from hours to decades. "There have been many studies that explored possible relations of the observed flickering and the mass of the SMBH, but the results have been inconclusive and sometimes controversial," Burke said.

The team compiled a large data set of actively feeding SMBHs to study the variability pattern of flickering. They identified a characteristic timescale, over which the pattern changes, that tightly correlates with the mass of the SMBH. The researchers then compared the results with accreting white dwarfs, the remnants of stars like our sun, and found that the same timescale-mass relation holds, even though white dwarfs are millions to billions of times less massive than SMBHs.

The light flickers are random fluctuations in a black hole's feeding process, the researchers said. Astronomers can quantify this flickering pattern by measuring the power of the variability as a function of timescales. For accreting SMBHs, the variability pattern changes from short timescales to long timescales. This transition of variability pattern happens at a characteristic timescale that is longer for more massive black holes.
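The paper's actual analysis is not reproduced here, but the idea of a characteristic timescale in flickering can be sketched with a damped random walk (an Ornstein-Uhlenbeck process), a statistical model often used for accretion-disk light curves. In this toy version, the variance of brightness differences grows with time lag and then flattens once the lag exceeds the damping timescale `tau`; all parameter values are arbitrary illustrations, not numbers from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def damped_random_walk(n_steps, tau, sigma, dt=1.0):
    """Simulate a damped random walk (Ornstein-Uhlenbeck process).
    `tau` is the characteristic damping timescale, in units of `dt`;
    the stationary standard deviation of the output is `sigma`."""
    decay = np.exp(-dt / tau)
    kick = sigma * np.sqrt(1.0 - decay**2)
    x = np.zeros(n_steps)
    for i in range(1, n_steps):
        x[i] = x[i - 1] * decay + kick * rng.standard_normal()
    return x

# The "structure function" (mean squared brightness difference) grows with
# time lag, then flattens for lags longer than the damping timescale.
lc = damped_random_walk(200_000, tau=50.0, sigma=1.0)
for lag in (5, 50, 500):
    sf = np.mean((lc[lag:] - lc[:-lag]) ** 2)
    print(f"lag {lag:>3}: structure function = {sf:.2f}")  # ~0.19, ~1.26, ~2
```

In a real light curve the transition lag is what correlates with the accretor's mass; here it is simply the `tau` we put in.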

The team compared black hole feeding to our eating or drinking activity by equating this transition to a human belch. Babies frequently burp while drinking milk, while adults can hold in the burp for a more extended amount of time. Black holes kind of do the same thing while feeding, they said.

"These results suggest that the processes driving the flickering during accretion are universal, whether the central object is a supermassive black hole or a much more lightweight white dwarf," Shen said.

"The firm establishment of a connection between the observed light flicker and fundamental properties of the accretor will certainly help us better understand accretion processes," said Yan-Fei Jiang, a researcher at the Flatiron Institute and study co-author.

Astrophysical black holes come in a broad spectrum of mass and size. In between the population of stellar-mass black holes, which weigh less than several tens of times the mass of the sun, and SMBHs, there is a population of black holes called intermediate-mass black holes that weigh between about 100 and 100,000 times the mass of the sun.

IMBHs are expected to form in large numbers throughout the history of the universe, and they may provide the seeds necessary to grow into SMBHs later. Observationally, however, this population of IMBHs is surprisingly elusive. There is only one indisputably confirmed IMBH, which weighs about 150 times the mass of the sun -- and it was discovered serendipitously, through the gravitational waves emitted by the coalescence of two less-massive black holes.

"Now that there is a correlation between the flickering pattern and the mass of the central accreting object, we can use it to predict what the flickering signal from an IMBH might look like," Burke said.

Astronomers worldwide are waiting for the official kickoff of an era of massive surveys that monitor the dynamic and variable sky. The Legacy Survey of Space and Time, conducted at the Vera C. Rubin Observatory in Chile, will survey the sky over a decade and collect light-flickering data for billions of objects, starting in late 2023.

"Mining the LSST data set to search for flickering patterns that are consistent with accreting IMBHs has the potential to discover and fully understand this long-sought mysterious population of black holes," said co-author Xin Liu, an astronomy professor at the U. of I.

Read more at Science Daily

Genetic enigma solved: Inheritance of coat color patterns in dogs

An international team of researchers including scientists from the Institute of Genetics of the University of Bern has unraveled the enigma of inheritance of coat color patterns in dogs. The researchers discovered that a genetic variant responsible for a very light coat in dogs and wolves originated more than two million years ago in a now extinct relative of the modern wolf.

The inheritance of several coat color patterns in dogs has been controversially debated for decades. Researchers including Tosso Leeb from the Institute of Genetics of the University of Bern have now finally been able to solve the puzzle. Not only did they clarify how the coat color patterns are genetically controlled, but the researchers also discovered that the light coat color in white arctic wolves and many modern dogs is due to a genetic variant originating in a species that went extinct a long time ago. The study has just been published in the scientific journal Nature Ecology and Evolution.

Two pigments and a "switch" for all coat colors

Wolves and dogs can make two different types of pigment: a black one, called eumelanin, and a yellow one, called pheomelanin. A precisely regulated production of these two pigments at the right time and at the right place on the body gives rise to very different coat color patterns. Prior to the study, four different patterns had been recognized in dogs, and several genetic variants had been theorized to cause them. However, commercial genetic testing of these variants in many thousands of dogs yielded conflicting results, indicating that the existing knowledge on the inheritance of coat color patterns was incomplete and not entirely correct.

During the formation of coat color, the so-called agouti signaling protein represents the body's main switch for the production of yellow pheomelanin. If the agouti signaling protein is present, the pigment producing cells will synthesize yellow pheomelanin. If no agouti signaling protein is present, black eumelanin will be formed. "We realized early on that the causative genetic variants have to be regulatory variants which modulate the rate of protein production and lead to higher or lower amounts of agouti signal protein," Tosso Leeb explains.

Five instead of four distinct coat color patterns

The gene for agouti signaling protein has several initiation sites for reading the genetic information, which are called promoters. Dogs, on the one hand, have a ventral promoter, which is responsible for the production of agouti signaling protein at the belly. On the other hand, dogs have an additional hair cycle-specific promoter that mediates the production of agouti signaling protein during specific stages of hair growth and enables the formation of banded hair.

For the first time, the researchers characterized these two promoters in detail, in hundreds of dogs. They discovered two variants of the ventral promoter. One of the variants conveys the production of normal amounts of agouti signaling protein. The other variant has higher activity and causes the production of an increased amount of agouti signaling protein. The researchers even identified three different variants of the hair cycle-specific promoter. Starting with these variants at the individual promoters, the researchers identified a total of five different combinations, which cause different coat color patterns in dogs. "The textbooks have to be rewritten as there are five instead of the previously accepted four different patterns in dogs," Leeb says.

Unexpected insights on the evolution of wolves

As many genomes from wolves of different regions of the world have become publicly available, the researchers further investigated whether the identified genetic variants also exist in wolves. These analyses demonstrated that the variants for the overactive ventral and hair cycle-specific promoters were already present in wolves prior to the domestication of modern dogs, which started approximately 40,000 years ago. Most likely, these genetic variants facilitated the adaptation of lighter-coated wolves to snow-rich environments during past ice ages. Today, the completely white arctic wolves and the light-colored wolves of the Himalayas still carry these genetic variants.

Further comparisons of the gene sequences with other species of the family Canidae yielded very surprising results. The researchers were able to show that the overactive variant of the hair cycle-specific promoter in light-colored dogs and wolves shared more similarities with very distantly related species, such as the golden jackal or the coyote, than with the European grey wolf.

"The only plausible explanation for this unexpected finding is an ancient origin of this variant, more than two million years ago, in a now extinct relative of wolves," Leeb says. The gene segment must have been introgressed more than two million years ago into wolves by hybridization events with this now extinct relative of wolves. Thus, a small piece of DNA from this extinct species is still found today in yellow dogs and white arctic wolves. "This is reminiscent of the spectacular finding that modern humans carry a small proportion of DNA in their genomes from the now extinct Neandertals," Leeb adds.

The study was enabled by a sabbatical that Prof. Danika Bannasch spent at the University of Bern, which has a longstanding research focus on the genetics of coat color in domestic animals. Bannasch, a professor in veterinary genetics at the University of California, Davis, filtered the relevant promoter variants from thousands of other, functionally neutral genetic variants. The evolutionary analyses were conducted by Christopher Kaelin and Gregory Barsh of the HudsonAlpha Institute and Stanford University.

Read more at Science Daily

Global warming begets more warming, new paleoclimate study finds

It is increasingly clear that the prolonged drought conditions, record-breaking heat, sustained wildfires, and frequent, more extreme storms experienced in recent years are a direct result of rising global temperatures brought on by humans' addition of carbon dioxide to the atmosphere. And a new MIT study on extreme climate events in Earth's ancient history suggests that today's planet may become more volatile as it continues to warm.

The study, appearing today in Science Advances, examines the paleoclimate record of the last 66 million years, during the Cenozoic era, which began shortly after the extinction of the dinosaurs. The scientists found that during this period, fluctuations in the Earth's climate experienced a surprising "warming bias." In other words, there were far more warming events -- periods of prolonged global warming, lasting thousands to tens of thousands of years -- than cooling events. What's more, warming events tended to be more extreme, with greater shifts in temperature, than cooling events.

The researchers say a possible explanation for this warming bias may lie in a "multiplier effect," whereby a modest degree of warming -- for instance from volcanoes releasing carbon dioxide into the atmosphere -- naturally speeds up certain biological and chemical processes that enhance these fluctuations, leading, on average, to still more warming.

Interestingly, the team observed that this warming bias disappeared about 5 million years ago, around the time when ice sheets started forming in the Northern Hemisphere. It's unclear what effect the ice has had on the Earth's response to climate shifts. But as today's Arctic ice recedes, the new study suggests that a multiplier effect may kick back in, and the result may be a further amplification of human-induced global warming.

"The Northern Hemisphere's ice sheets are shrinking, and could potentially disappear as a long-term consequence of human actions," says the study's lead author Constantin Arnscheidt, a graduate student in MIT's Department of Earth, Atmospheric and Planetary Sciences. "Our research suggests that this may make the Earth's climate fundamentally more susceptible to extreme, long-term global warming events such as those seen in the geologic past."

Arnscheidt's study co-author is Daniel Rothman, professor of geophysics at MIT, and co-founder and co-director of MIT's Lorenz Center.

A volatile push

For their analysis, the team consulted large databases of sediments containing deep-sea benthic foraminifera -- single-celled organisms that have been around for hundreds of millions of years and whose hard shells are preserved in sediments. The composition of these shells is affected by the ocean temperatures as organisms are growing; the shells are therefore considered a reliable proxy for the Earth's ancient temperatures.

For decades, scientists have analyzed the composition of these shells, collected from all over the world and dated to various time periods, to track how the Earth's temperature has fluctuated over millions of years.

"When using these data to study extreme climate events, most studies have focused on individual large spikes in temperature, typically of a few degrees Celsius warming," Arnscheidt says. "Instead, we tried to look at the overall statistics and consider all the fluctuations involved, rather than picking out the big ones."

The team first carried out a statistical analysis of the data and observed that, over the last 66 million years, the distribution of global temperature fluctuations didn't resemble a standard bell curve, with symmetric tails representing an equal probability of extreme warm and extreme cool fluctuations. Instead, the curve was noticeably lopsided, skewed toward more warm than cool events. The curve also exhibited a noticeably longer tail, representing warm events that were more extreme, or of higher temperature, than the most extreme cold events.

"This indicates there's some sort of amplification relative to what you would otherwise have expected," Arnscheidt says. "Everything's pointing to something fundamental that's causing this push, or bias toward warming events."

"It's fair to say that the Earth system becomes more volatile, in a warming sense," Rothman adds.

A warming multiplier

The team wondered whether this warming bias might have been a result of "multiplicative noise" in the climate-carbon cycle. Scientists have long understood that higher temperatures, up to a point, tend to speed up biological and chemical processes. Because the carbon cycle, which is a key driver of long-term climate fluctuations, is itself composed of such processes, increases in temperature may lead to larger fluctuations, biasing the system towards extreme warming events.

In mathematics, there exists a set of equations that describes such general amplifying, or multiplicative effects. The researchers applied this multiplicative theory to their analysis to see whether the equations could predict the asymmetrical distribution, including the degree of its skew and the length of its tails.

In the end, they found that the data, and the observed bias toward warming, could be explained by the multiplicative theory. In other words, it's very likely that, over the last 66 million years, periods of modest warming were on average further enhanced by multiplier effects, such as the response of biological and chemical processes that further warmed the planet.
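As a crude illustration of how multiplicative noise skews fluctuations toward warming, consider a toy relaxation model for a temperature anomaly (this is not the authors' model; the equations, parameters, and feedback term are invented for illustration). The same random kicks produce symmetric fluctuations when the noise is additive, but a warm-skewed distribution when the noise amplitude grows with temperature:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
dt, sigma, k = 0.01, 0.5, 0.8  # arbitrary illustrative values, not fitted

# Toy relaxation model for a temperature anomaly T:
#   additive noise:        dT = -T dt + sigma dW
#   multiplicative noise:  dT = -T dt + sigma (1 + k T) dW
# The (1 + k T) factor crudely mimics feedbacks that strengthen as the
# climate warms (the "multiplier effect" described above).
T_add = np.zeros(n)
T_mult = np.zeros(n)
for i in range(1, n):
    dW = np.sqrt(dt) * rng.standard_normal()
    T_add[i] = T_add[i - 1] * (1 - dt) + sigma * dW
    T_mult[i] = T_mult[i - 1] * (1 - dt) + sigma * (1 + k * T_mult[i - 1]) * dW

def skew(x):
    """Sample skewness: third central moment over variance^(3/2)."""
    d = x - x.mean()
    return np.mean(d**3) / np.mean(d**2) ** 1.5

print(f"additive noise skew:       {skew(T_add):+.2f}")   # near zero: symmetric
print(f"multiplicative noise skew: {skew(T_mult):+.2f}")  # positive: warm-skewed
```

The lopsided, long-warm-tailed distribution the team found in the foraminifera record is qualitatively what the multiplicative case produces.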

As part of the study, the researchers also looked at the correlation between past warming events and changes in Earth's orbit. Over hundreds of thousands of years, Earth's orbit around the sun regularly becomes more or less elliptical. But scientists have wondered why many past warming events appeared to coincide with these changes, and why these events feature outsized warming compared with what the change in Earth's orbit could have wrought on its own.

So, Arnscheidt and Rothman incorporated the Earth's orbital changes into the multiplicative model and their analysis of Earth's temperature changes, and found that multiplier effects could predictably amplify, on average, the modest temperature rises due to changes in Earth's orbit.

"Climate warms and cools in synchrony with orbital changes, but the orbital cycles themselves would predict only modest changes in climate," Rothman says. "But if we consider a multiplicative model, then modest warming, paired with this multiplier effect, can result in extreme events that tend to occur at the same time as these orbital changes."

Read more at Science Daily

Engineers uncover the secrets of fish fins

Peer into any fishbowl, and you'll see that pet goldfish and guppies have nimble fins. With a few flicks of these appendages, aquarium swimmers can turn in circles, dive deep down or even bob to the surface.

New research led by the University of Colorado Boulder has uncovered the engineering secrets behind what makes fish fins so strong yet flexible. The team's insights could one day lead to new designs for robotic surgical tools or even airplane wings that change their shape with the push of a button.

The researchers published their results Aug. 11 in the journal Science Robotics.

Francois Barthelat, senior author of the study, noted that fins are remarkable because they can achieve feats of dexterity even though they don't contain a single muscle. (Fish move these structures by twitching sets of muscles located at the base of the fins).

"If you look at a fin, you'll see that it's made of many stiff 'rays,'" said Barthelat, professor in the Paul M. Rady Department of Mechanical Engineering. "Each of those rays can be manipulated individually just like your fingers, but there are 20 or 30 of them in each fin."

In their latest research, Barthelat and his colleagues drew on a range of approaches, including computer simulations and 3D-printed materials, to dive deep into the biomechanics of these agile structures. They report that the key to fish fins may lie in their unique design: each ray in a fin is made up of multiple segments of a hard material that stack on top of much softer collagen, striking a fine balance between bouncy and stiff.

"You get this dual capability where fins can morph, and yet they're still quite stiff when they push water," he said.

Armor and airplanes

Barthelat is no stranger to looking into aquariums. He previously studied how fish scales can help engineers to design better body armor for humans, and how seashells might inspire tougher glasses.

Fins may be just as useful. When it comes to engineering, Barthelat explained, materials that are both stiff and flexible are a hot commodity. Airplane designers, for example, have long been interested in developing wings that can morph on command, giving planes more ability to maneuver while still keeping them in the air.

"Airplanes do this now, to some extent, when they drop their flaps," Barthelat said. "But that's in a rigid way. A wing made out of morphing materials, in contrast, could change its shape more radically and in a continuous manner, much like a bird."

To understand how ordinary, run-of-the-mill goldfish achieve similar feats every day, take a close look at these structures under the microscope. Each of the rays in a fin has a layered structure, a bit like a bakery éclair: the spikes include two layers of stiff, mineralized materials called hemitrichs that surround an inner layer of spongy collagen.

But, Barthelat said, those layers of hemitrichs aren't solid. They're divided into segments, as if someone had cut up the éclair into bite-sized pieces.

"Until recently, the function of those segments hadn't been clear," he said.

Swimming, flying and walking

The engineer and his team decided to use computer simulations to examine the mechanical properties of fins. They discovered that those segments can make all the difference.

Pretend for a moment, Barthelat explained, that fish fins were made entirely of collagen. They would bend easily, but wouldn't give fish much traction in the water because hydrodynamic forces would collapse them. Rays made of solid, non-segmented hemitrichs, in contrast, would have the opposite problem -- they'd be far too stiff.

"All of the segments, essentially, create these tiny hinges along the ray," Barthelat said. "When you try to compress or pull on those bony layers, they have a very high stiffness. This is critical for the ray to resist and produce hydrodynamic forces that push on water. But if you try to bend individual bony layers, they're very compliant, and that part is critical for the rays to deform easily from the base muscles."

The researchers further tested the theory by using a 3D printer to produce model fish fins made from plastic, some with those hinges built in and some without. The idea panned out: The team found that the segmented design provided better combinations of stiffness and morphing capabilities.

Barthelat added that he and his colleagues have only scratched the surface of the wide diversity of fins in the fish world. Flying fish, for example, deploy their fins to glide above the water, while mudskippers use their fins like legs to walk on land.

"We like to pick up where the biologists and zoologists have left off, using our background in the mechanics of materials to further our understanding of the amazing properties of the natural world," Barthelat said.

Read more at Science Daily

Aug 12, 2021

NASA spacecraft provides insight into asteroid Bennu's future orbit

In a study released Wednesday, NASA researchers used precision-tracking data from the agency's Origins, Spectral Interpretation, Resource Identification, Security-Regolith Explorer (OSIRIS-REx) spacecraft to better understand the movements of the potentially hazardous asteroid Bennu through the year 2300. The work significantly reduces uncertainties in Bennu's future orbit and improves scientists' ability to determine total impact probabilities and predict the orbits of other asteroids.

The study, titled "Ephemeris and hazard assessment for near-Earth asteroid (101955) Bennu based on OSIRIS-REx data," was published in the journal Icarus.

"NASA's Planetary Defense mission is to find and monitor asteroids and comets that can come near Earth and may pose a hazard to our planet," said Kelly Fast, program manager for the Near-Earth Object Observations Program at NASA Headquarters in Washington. "We carry out this endeavor through continuing astronomical surveys that collect data to discover previously unknown objects and refine our orbital models for them. The OSIRIS-REx mission has provided an extraordinary opportunity to refine and test these models, helping us better predict where Bennu will be when it makes its close approach to Earth more than a century from now."

In 2135, asteroid Bennu will make a close approach with Earth. Although the near-Earth object will not pose a danger to our planet at that time, scientists must understand Bennu's exact trajectory during that encounter in order to predict how Earth's gravity will alter the asteroid's path around the Sun -- and affect the hazard of Earth impact.

Using NASA's Deep Space Network and state-of-the-art computer models, scientists were able to significantly shrink the uncertainties in Bennu's orbit, determining that its total impact probability through the year 2300 is about 1 in 1,750 (or 0.057%). The researchers were also able to identify Sept. 24, 2182, as the most significant single date in terms of a potential impact, with an impact probability of 1 in 2,700 (or about 0.037%).
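The quoted odds and percentages are two ways of stating the same probabilities; a quick check of the conversion:

```python
# Odds-to-percentage conversion of the impact probabilities quoted above.
cumulative = 1 / 1750   # total impact probability through 2300
single_date = 1 / 2700  # Sept. 24, 2182, the most significant single date

print(f"through 2300:   {cumulative:.3%}")   # 0.057%
print(f"Sept. 24, 2182: {single_date:.3%}")  # 0.037%
```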

Although the chances of it hitting Earth are very low, Bennu remains one of the two most hazardous known asteroids in our solar system, along with another asteroid called 1950 DA.

Before leaving Bennu on May 10, 2021, OSIRIS-REx spent more than two years in close proximity to the asteroid, gathering information about its size (it is about one-third of a mile, or 500 meters, wide), shape, mass, and composition, while monitoring its spin and orbital trajectory. The spacecraft also scooped up a sample of rock and dust from the asteroid's surface, which it will deliver to Earth on Sept. 24, 2023, for further scientific investigation.

"The OSIRIS-REx data give us so much more precise information, we can test the limits of our models and calculate the future trajectory of Bennu to a very high degree of certainty through 2135," said study lead Davide Farnocchia, of the Center for Near Earth Object Studies (CNEOS), which is managed by NASA's Jet Propulsion Laboratory in Southern California. "We've never modeled an asteroid's trajectory to this precision before."

Gravitational keyholes

The precision measurements on Bennu help to better determine how the asteroid's orbit will evolve over time and whether it will pass through a "gravitational keyhole" during its 2135 close approach. These keyholes are areas in space that would set Bennu on a path toward a future impact with Earth if the asteroid were to pass through them at certain times, due to the effect of Earth's gravitational pull.

To calculate exactly where the asteroid will be during its 2135 close approach -- and whether it might pass through a gravitational keyhole -- Farnocchia and his team evaluated various types of small forces that may affect the asteroid as it orbits the Sun. Even the smallest force can significantly deflect its orbital path over time, causing it to pass through or completely miss a keyhole.

Among those forces, the Sun's heat plays a crucial role. As an asteroid travels around the Sun, sunlight heats up its dayside. Because the asteroid spins, the heated surface will rotate away and cool down when it enters the nightside. As it cools, the surface releases infrared energy, which generates a small amount of thrust on the asteroid -- a phenomenon called the Yarkovsky effect. Over short timeframes, this thrust is minuscule, but over long periods, the effect on the asteroid's position builds up and can play a significant role in changing an asteroid's path.

"The Yarkovsky effect will act on all asteroids of all sizes, and while it has been measured for a small fraction of the asteroid population from afar, OSIRIS-REx gave us the first opportunity to measure it in detail as Bennu travelled around the Sun," said Steve Chesley, senior research scientist at JPL and study co-investigator. "The effect on Bennu is equivalent to the weight of three grapes constantly acting on the asteroid -- tiny, yes, but significant when determining Bennu's future impact chances over the decades and centuries to come."

The team considered many other perturbing forces as well, including the gravity of the Sun, the planets, their moons, and more than 300 other asteroids, the drag caused by interplanetary dust, the pressure of the solar wind, and Bennu's particle-ejection events. The researchers even evaluated the force OSIRIS-REx exerted when performing its Touch-And-Go (TAG) sample collection event Oct. 20, 2020, to see if it might have slightly altered Bennu's orbit, ultimately confirming previous estimates that the TAG event had a negligible effect.

"The force exerted on Bennu's surface during the TAG event were tiny even in comparison to the effects of other small forces considered," said Rich Burns, OSIRIS-REx project manager at NASA's Goddard Space Flight Center in Greenbelt, Maryland. "TAG did not alter Bennu's likelihood of impacting Earth."

Tiny risk, huge gain

Although a 0.057% impact probability through the year 2300 and an impact probability of 0.037% on Sept. 24, 2182, are low, this study highlights the crucial role that OSIRIS-REx operations played in precisely characterizing Bennu's orbit.

"The orbital data from this mission helped us better appreciate Bennu's impact chances over the next couple of centuries and our overall understanding of potentially hazardous asteroids -- an incredible result," said Dante Lauretta, OSIRIS-REx principal investigator and professor at the University of Arizona. "The spacecraft is now returning home, carrying a precious sample from this fascinating ancient object that will help us better understand not only the history of the solar system but also the role of sunlight in altering Bennu's orbit since we will measure the asteroid's thermal properties at unprecedented scales in laboratories on Earth."

Read more at Science Daily

Protecting Earth from space storms

"There are only two natural disasters that could impact the entire U.S.," according to Gabor Toth, professor of Climate and Space Sciences and Engineering at the University of Michigan. "One is a pandemic and the other is an extreme space weather event."

We're currently seeing the effects of the first in real-time.

The last major space weather event struck the Earth in 1859. Smaller, but still significant, space weather events occur regularly. These fry electronics and power grids, disrupt global positioning systems, cause shifts in the range of the Aurora Borealis, and raise the risk of radiation to astronauts or passengers on planes crossing over the poles.

"We have all these technological assets that are at risk," Toth said. "If an extreme event like the one in 1859 happened again, it would completely destroy the power grid and satellite and communications systems -- the stakes are much higher."

Motivated by the White House National Space Weather Strategy and Action Plan and the National Strategic Computing Initiative, in 2020 the National Science Foundation (NSF) and NASA created the Space Weather with Quantified Uncertainties (SWQU) program. It brings together research teams from across scientific disciplines to advance the latest statistical analysis and high performance computing methods within the field of space weather modeling.

"We are very proud to have launched the SWQU projects by bringing together expertise and supports across multiple scientific domains in a joint effort between NSF and NASA," said Vyacheslav (Slava) Lukin, the Program Director for Plasma Physics at NSF. "The need has been recognized for some time, and the portfolio of six projects, Gabor Toth's among them, engages not only the leading university groups, but also NASA Centers, Department of Defense and Department of Energy National Laboratories, as well as the private sector."

Toth helped develop today's preeminent space weather prediction model, which is used for operational forecasting by the National Oceanic and Atmospheric Administration (NOAA). On February 3, 2021, NOAA began using the Geospace Model Version 2.0, which is part of the University of Michigan's Space Weather Modeling Framework, to predict geomagnetic disturbances.

"We're constantly improving our models," Toth said. The new model replaces version 1.5 which has been in operations since November 2017. "The main change in version 2 was the refinement of the numerical grid in the magnetosphere, several improvements in the algorithms, and a recalibration of the empirical parameters."

The Geospace Model is based on a global representation of Earth's Geospace environment that includes magnetohydrodynamics -- the properties and behavior of electrically conducting fluids like plasma interacting with magnetic fields, which plays a key role in the dynamics of space weather.

The Geospace model predicts magnetic disturbances on the ground resulting from geospace interactions with solar wind. Such magnetic disturbances induce a geoelectric field that can damage large-scale electrical conductors, such as the power grid.

Short-term advance warning from the model provides forecasters and power grid operators with situational awareness about harmful currents and allows time to mitigate the problem and maintain the integrity of the electric power grid, NOAA announced at the time of the launch.

As advanced as the Geospace Model is, it provides only about 30 minutes of advance warning. Toth's team is one of several groups working to increase lead time to one to three days. Doing so means understanding how activity on the surface of the Sun leads to events that can impact the Earth.

"We're currently using data from a satellite measuring plasma parameters one million miles away from the Earth," Toth explained. Researchers hope to start from the Sun, using remote observation of the Sun's surface -- in particular, coronal mass ejections that produce flares that are visible in X-rays and UV light. "That happens early on the Sun. From that point, we can run a model and predict the arrival time and impact of magnetic events."

Improving the lead time of space weather forecasts requires new methods and algorithms that can compute far faster than those used today and can be deployed efficiently on high performance computers. Toth uses the Frontera supercomputer at the Texas Advanced Computing Center -- the fastest academic system in the world and the 10th most powerful overall -- to develop and test these new methods.

"I consider myself really good at developing new algorithms," Toth said. "I apply these to space physics, but many of the algorithms I develop are more general and not restricted to one application."

A key algorithmic improvement made by Toth involved finding a novel way to combine the kinetic and fluid aspects of plasmas in one simulation model. "People tried it before and failed. But we made it work. We go a million times faster than brute-force simulations by inventing smart approximations and algorithms," Toth said.

The new algorithm dynamically adapts the region covered by the kinetic model based on the simulation results. The model identifies the regions of interest and concentrates the kinetic model, and the computational resources, on them. This can result in a 10- to 100-fold speedup for space weather models.

As part of the NSF SWQU project, Toth and his team have been working on making the Space Weather Modeling Framework run efficiently on future supercomputers that rely heavily on graphics processing units (GPUs). As a first goal, they set out to port the Geospace model to GPUs using the NVIDIA Fortran compiler with OpenACC directives.

They recently managed to run the full Geospace model faster than real-time on a single GPU. They used TACC's GPU-enabled Longhorn machine to reach this milestone. Running the model at the same speed on a traditional supercomputer requires at least 100 CPU cores.

"It took a whole year of code development to make this happen, Toth said. "The goal is to run an ensemble of simulations fast and efficiently to provide a probabilistic space weather forecast."

This type of probabilistic forecasting is important for another aspect of Toth's research: localizing predictions in terms of the impact on the surface of Earth.

"Should we worry in Michigan or only in Canada? What is the maximum induced current particular transformers will experience? How long will generators need to be shut off? To do this accurately, you need a model you believe in," he said. "Whatever we predict, there's always some uncertainty. We want to give predictions with precise probabilities, similar to terrestrial weather forecasts."

Toth and his team run their code in parallel on thousands of cores on Frontera for each simulation. They plan to run thousands of simulations over the coming years to see how model parameters affect the results to find the best model parameters and to be able to attach probabilities to simulation results.

Read more at Science Daily

Scrap the nap: Study shows short naps don’t relieve sleep deprivation

A nap during the day won't make up for a sleepless night, says the latest study from Michigan State University's Sleep and Learning Lab.

"We are interested in understanding cognitive deficits associated with sleep deprivation. In this study, we wanted to know if a short nap during the deprivation period would mitigate these deficits," said Kimberly Fenn, associate professor of MSU, study author and director of MSU's Sleep and Learning Lab. "We found that short naps of 30 or 60 minutes did not show any measurable effects."

The study was published in the journal Sleep and is among the first to measure the effectiveness of shorter naps -- which are often all people have time to fit into their busy schedules.

"While short naps didn't show measurable effects on relieving the effects of sleep deprivation, we found that the amount of slow-wave sleep that participants obtained during the nap was related to reduced impairments associated with sleep deprivation," Fenn said.

Slow-wave sleep, or SWS, is the deepest and most restorative stage of sleep. It is marked by high amplitude, low frequency brain waves and is the sleep stage when your body is most relaxed; your muscles are at ease, and your heart rate and respiration are at their slowest.

"SWS is the most important stage of sleep," Fenn said. "When someone goes without sleep for a period of time, even just during the day, they build up a need for sleep; in particular, they build up a need for SWS. When individuals go to sleep each night, they will soon enter into SWS and spend a substantial amount of time in this stage."

Fenn's research team -- including MSU colleague Erik Altmann, professor of psychology, and Michelle Stepan, a recent MSU alumna currently working at the University of Pittsburgh -- recruited 275 college-aged participants for the study.

The participants completed cognitive tasks when arriving at MSU's Sleep and Learning Lab in the evening and were then randomly assigned to one of three groups: the first was sent home to sleep; the second stayed at the lab overnight and had the opportunity to take either a 30- or a 60-minute nap; and the third, in the deprivation condition, did not nap at all.

The next morning, participants reconvened in the lab to repeat the cognitive tasks, which measured attention and placekeeping, or the ability to complete a series of steps in a specific order without skipping or repeating them -- even after being interrupted.

"The group that stayed overnight and took short naps still suffered from the effects of sleep deprivation and made significantly more errors on the tasks than their counterparts who went home and obtained a full night of sleep," Fenn said. "However, every 10-minute increase in SWS reduced errors after interruptions by about 4%."

These numbers may seem small but when considering the types of errors that are likely to occur in sleep-deprived operators -- like those of surgeons, police officers or truck drivers -- a 4% decrease in errors could potentially save lives, Fenn said.
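The reported relationship, roughly 4% fewer post-interruption errors per 10 minutes of slow-wave sleep, can be sketched numerically. The baseline error count and the multiplicative compounding below are illustrative assumptions, not figures from the study:

```python
# Illustration of the reported association: every 10-minute increase in
# slow-wave sleep (SWS) corresponded to ~4% fewer errors after interruptions.
# The baseline of 50 errors and the compounding form are assumptions made
# purely for illustration.
def expected_errors(baseline_errors, sws_minutes, reduction_per_10min=0.04):
    """Scale a baseline error count down 4% per 10 minutes of SWS."""
    increments = sws_minutes / 10
    return baseline_errors * (1 - reduction_per_10min) ** increments

baseline = 50  # hypothetical error count with no SWS
for sws in (0, 10, 30, 60):
    print(f"{sws:2d} min SWS -> ~{expected_errors(baseline, sws):.1f} errors")
```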

"Individuals who obtained more SWS tended to show reduced errors on both tasks. However, they still showed worse performance than the participants who slept," she said.

Read more at Science Daily

Potential new treatment for deadly blood cancer

A drug used to treat certain advanced breast cancers may offer a new treatment option for a deadly blood cancer known as myelofibrosis, new research from UVA Cancer Center suggests.

The drug, palbociclib, may be able to prevent the scarring of bone marrow that existing treatments for myelofibrosis cannot. This scarring disrupts the marrow's production of blood cells and causes severe anemia that leaves patients weak and fatigued. The scarring also reduces the number of platelets in the blood, making clotting difficult, and often causes an enlarged spleen.

"Current therapies only provide symptomatic relief without offering significant improvement of bone marrow fibrosis. So, there is a critical need to develop more effective therapy for myelofibrosis," said senior researcher Golam Mohi, PhD, of the University of Virginia School of Medicine's Department of Biochemistry and Molecular Genetics. "We have identified CDK6, a regulator of cell cycle, as a new therapeutic target in myelofibrosis. We demonstrate that CDK4/6 inhibitor palbociclib in combination with ruxolitinib markedly inhibits myelofibrosis, suggesting this drug combination could be an effective therapeutic strategy against this devastating blood disorder."

Myelofibrosis: A Dangerous Cancer

Myelofibrosis is a form of leukemia. It occurs in approximately 1 to 1.5 of every 100,000 people, primarily those who are middle-aged or older. Patients with intermediate or high-risk cases typically survive only 16 to 35 months.

Existing treatments for myelofibrosis do not address the bone marrow scarring that is a hallmark of the disease. The drug ruxolitinib is used to relieve patients' symptoms, but Mohi's new research suggests that pairing the drug with palbociclib may make a far superior treatment.

Palbociclib, by itself, reduced bone marrow scarring in two different mouse models of myelofibrosis. It also decreased the abnormally high levels of white blood cells seen in myelofibrosis and shrank the mice's enlarged spleens.

Combining the drug with ruxolitinib offered even more benefits, restoring the bone marrow and white blood cell counts to normal and dramatically reducing the size of the mice's enlarged spleens.

Additional research is needed to determine if the findings will hold true in human patients. But Mohi and his team are hopeful. They note that palbociclib is known to quiet the activity of bone marrow in patients with metastatic breast cancer (cancer that has spread to other parts of the body), and they hope there will be beneficial effects in patients with myelofibrosis.

"A combinatorial therapeutic approach involving palbociclib and ruxolitinib will enable lowering the doses of each of the inhibitors and thus reducing toxicities while enhancing the therapeutic efficacy," they write in a new scientific paper outlining their findings.

New treatments for myelofibrosis are particularly needed because ruxolitinib treatment does not offer significant reduction in bone marrow fibrosis and often loses its effectiveness with prolonged use, the researchers note.

Read more at Science Daily

Aug 11, 2021

Magnetic patterns hidden in meteorites reveal early Solar System dynamics

Researchers have developed a novel technique to investigate the dynamics of the early Solar System by analyzing magnetites in meteorites utilizing the wave nature of electrons.

Within meteorites, the magnetic fields associated with the particles that make up the object can act as a historical record. By analyzing such magnetic fields, scientists can deduce the probable events that affected the object and reconstruct a time-lapse of what events occurred on the meteorite and when.

"Primitive meteorites are time capsules of primordial materials formed at the beginning of our Solar System," said Yuki Kimura, an associate professor at the Institute of Low Temperature Science at Hokkaido University in Japan who led the study. "To understand the physical and chemical history of the Solar System, it is crucial to analyze various types of meteorites with different origins."

While there are many meteorites available for study here on Earth, most of them originated from the asteroid belt, between Mars and Jupiter. These samples are used to study what the early Solar System looked like. However, it becomes difficult to reconstruct events that happened farther out in the Solar System, well past the asteroid belt.

This is where the research team took great strides in understanding outer Solar System dynamics soon after the system formed. The paper, published in The Astrophysical Journal Letters, details a novel technique to study the remnant magnetization of particles in the Tagish Lake meteorite, believed to have been formed in the cold outer Solar System.

Using the technique, together with numerical simulation, the team showed that the parent body of the Tagish Lake meteorite was formed in the Kuiper Belt, a region in the outer Solar System, sometime around 3 million years after the first Solar System minerals formed. It then moved to the orbit of the asteroid belt as a result of the formation of Jupiter. The magnetite was formed when the parent body was heated to about 250°C by radiogenic heating and by an energetic impact that is thought to have occurred during the body's transit from the Kuiper Belt to the asteroid belt.

"Our results help us infer the early dynamics of Solar System bodies that occurred several million years after the formation of the Solar System, and imply a highly efficient formation of the outer bodies of the Solar System, including Jupiter," says Kimura.

The new technique, called "nanometer-scale paleomagnetic electron holography," involves using the wave nature of electrons to examine their interference patterns, known as a hologram, to extract high resolution information from the structure of the meteorites. This high-resolution technique adds another crucial tool to the toolbox of researchers working to understand the early dynamics of the entire Solar System.

Read more at Science Daily

Researchers use artificial intelligence to unlock extreme weather mysteries

From lake-draining drought in California to bridge-breaking floods in China, extreme weather is wreaking havoc. Preparing for weather extremes in a changing climate remains a challenge, however, because their causes are complex and their response to global warming is often not well understood. Now, Stanford researchers have developed a machine learning tool to identify conditions for extreme precipitation events in the Midwest, which account for over half of all major U.S. flood disasters. Published in Geophysical Research Letters, their approach is one of the first examples using AI to analyze causes of long-term changes in extreme events and could help make projections of such events more accurate.

"We know that flooding has been getting worse," said study lead author Frances Davenport, a PhD student in Earth system science in Stanford's School of Earth, Energy & Environmental Sciences (Stanford Earth). "Our goal was to understand why extreme precipitation is increasing, which in turn could lead to better predictions about future flooding."

Among other impacts, global warming is expected to drive heavier rain and snowfall by creating a warmer atmosphere that can hold more moisture. Scientists hypothesize that climate change may affect precipitation in other ways, too, such as changing when and where storms occur. Revealing these impacts has remained difficult, however, in part because global climate models do not necessarily have the spatial resolution to model these regional extreme events.

"This new approach to leveraging machine learning techniques is opening new avenues in our understanding of the underlying causes of changing extremes," said study co-author Noah Diffenbaugh, the Kara J Foundation Professor in the School of Earth, Energy & Environmental Sciences. "That could enable communities and decision makers to better prepare for high-impact events, such as those that are so extreme that they fall outside of our historical experience."

Davenport and Diffenbaugh focused on the upper Mississippi watershed and the eastern part of the Missouri watershed. The highly flood-prone region, which spans parts of nine states, has seen extreme precipitation days and major floods become more frequent in recent decades. The researchers started by using publicly available climate data to calculate the number of extreme precipitation days in the region from 1981 to 2019. Then they trained a machine learning algorithm designed for analyzing grid data, such as images, to identify large-scale atmospheric circulation patterns associated with extreme precipitation (above the 95th percentile).
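The labeling step described above, flagging days whose precipitation exceeds the 95th percentile of the record, is straightforward to sketch. The synthetic rainfall values below stand in for the 1981-2019 observations; the actual study then trained a machine learning model on gridded circulation fields paired with these labels:

```python
# Minimal sketch of 95th-percentile extreme-day labeling. The synthetic
# precipitation record is a stand-in for real 1981-2019 daily data.
import random

random.seed(0)
# ~38 years of synthetic daily precipitation totals (mm/day)
daily_precip = [random.expovariate(0.5) for _ in range(14_000)]

# 95th-percentile threshold: the value below which ~95% of days fall
ordered = sorted(daily_precip)
threshold = ordered[int(0.95 * len(ordered))]

# days above the threshold become the "extreme" class for training
extreme_days = [p for p in daily_precip if p > threshold]
print(f"Threshold: {threshold:.1f} mm/day")
print(f"Extreme days: {len(extreme_days)} of {len(daily_precip)}")
```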

"The algorithm we use correctly identifies over 90 percent of the extreme precipitation days, which is higher than the performance of traditional statistical methods that we tested," Davenport said.

The trained machine learning algorithm revealed that multiple factors are responsible for the recent increase in Midwest extreme precipitation. During the 21st century, the atmospheric pressure patterns that lead to extreme Midwest precipitation have become more frequent, increasing at a rate of about one additional day per year, although the researchers note that the changes are much weaker going back further in time to the 1980s.

However, the researchers found that when these atmospheric pressure patterns do occur, the amount of precipitation that results has clearly increased. As a result, days with these conditions are more likely to have extreme precipitation now than they did in the past. Davenport and Diffenbaugh also found that increases in the precipitation intensity on these days were associated with higher atmospheric moisture flows from the Gulf of Mexico into the Midwest, bringing the water necessary for heavy rainfall in the region.

The researchers hope to extend their approach to look at how these different factors will affect extreme precipitation in the future. They also envision redeploying the tool to focus on other regions and types of extreme events, and to analyze distinct extreme precipitation causes, such as weather fronts or tropical cyclones. These applications will help further parse climate change's connections to extreme weather.

Read more at Science Daily

System trains drones to fly around obstacles at high speeds

If you follow autonomous drone racing, you likely remember the crashes as much as the wins. In drone racing, teams compete to see which vehicle is better trained to fly fastest through an obstacle course. But the faster drones fly, the more unstable they become, and at high speeds their aerodynamics can be too complicated to predict. Crashes, therefore, are a common and often spectacular occurrence.

But if they can be pushed to be faster and more nimble, drones could be put to use in time-critical operations beyond the race course, for instance to search for survivors in a natural disaster.

Now, aerospace engineers at MIT have devised an algorithm that helps drones find the fastest route around obstacles without crashing. The new algorithm combines simulations of a drone flying through a virtual obstacle course with data from experiments of a real drone flying through the same course in a physical space.

The researchers found that a drone trained with their algorithm flew through a simple obstacle course up to 20 percent faster than a drone trained on conventional planning algorithms. Interestingly, the new algorithm didn't always keep a drone ahead of its competitor throughout the course. In some cases, it chose to slow a drone down to handle a tricky curve, or save its energy in order to speed up and ultimately overtake its rival.

"At high speeds, there are intricate aerodynamics that are hard to simulate, so we use experiments in the real world to fill in those black holes to find, for instance, that it might be better to slow down first to be faster later," says Ezra Tal, a graduate student in MIT's Department of Aeronautics and Astronautics. "It's this holistic approach we use to see how we can make a trajectory overall as fast as possible."

"These kinds of algorithms are a very valuable step toward enabling future drones that can navigate complex environments very fast," adds Sertac Karaman, associate professor of aeronautics and astronautics, and director of the Laboratory for Information and Decision Systems at MIT. "We are really hoping to push the limits in a way that they can travel as fast as their physical limits will allow."

Tal, Karaman, and MIT graduate student Gilhyun Ryou have published their results in the International Journal of Robotics Research.

Fast effects


Training drones to fly around obstacles is relatively straightforward if they are meant to fly slowly. That's because aerodynamics such as drag don't generally come into play at low speeds, and they can be left out of any modeling of a drone's behavior. But at high speeds, such effects are far more pronounced, and how the vehicles will handle is much harder to predict.

"When you're flying fast, it's hard to estimate where you are," Ryou says. "There could be delays in sending a signal to a motor, or a sudden voltage drop which could cause other dynamics problems. These effects can't be modeled with traditional planning approaches."

To get an understanding for how high-speed aerodynamics affect drones in flight, researchers have to run many experiments in the lab, setting drones at various speeds and trajectories to see which fly fast without crashing -- an expensive, and often crash-inducing training process.

Instead, the MIT team developed a high-speed flight-planning algorithm that combines simulations and experiments, in a way that minimizes the number of experiments required to identify fast and safe flight paths.

The researchers started with a physics-based flight planning model, which they developed to first simulate how a drone is likely to behave while flying through a virtual obstacle course. They simulated thousands of racing scenarios, each with a different flight path and speed pattern. They then charted whether each scenario was feasible (safe), or infeasible (resulting in a crash). From this chart, they could quickly zero in on a handful of the most promising scenarios, or racing trajectories, to try out in the lab.
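The simulate-then-screen step above can be sketched schematically: run many cheap low-fidelity scenarios, mark each feasible or infeasible, and keep only the fastest feasible candidates for real-world testing. The feasibility rule and timing model below are invented stand-ins for the team's physics-based model:

```python
# Schematic version of the screening step: simulate many candidate
# (speed, aggressiveness) scenarios cheaply, discard infeasible ones, and
# keep the fastest survivors to try in the lab. The crash-risk and lap-time
# formulas are toy stand-ins, not the researchers' actual model.
import random

random.seed(1)

def simulate(speed, aggressiveness):
    """Toy low-fidelity model: faster, more aggressive paths tend to crash."""
    crash_risk = 0.02 * speed + 0.5 * aggressiveness
    feasible = crash_risk < 1.0
    lap_time = 100.0 / speed  # simple course-time model
    return feasible, lap_time

candidates = [(random.uniform(5, 40), random.uniform(0, 1))
              for _ in range(1000)]
results = [(simulate(s, a), (s, a)) for s, a in candidates]
feasible = [(t, params) for (ok, t), params in results if ok]

# the handful of most promising trajectories to fly in experiments
best = sorted(feasible)[:5]
for lap_time, (speed, aggr) in best:
    print(f"lap {lap_time:5.2f} s  speed {speed:4.1f} m/s  aggr {aggr:.2f}")
```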

"We can do this low-fidelity simulation cheaply and quickly, to see interesting trajectories that could be both fast and feasible. Then we fly these trajectories in experiments to see which are actually feasible in the real world," Tal says. "Ultimately we converge to the optimal trajectory that gives us the lowest feasible time."

Going slow to go fast


To demonstrate their new approach, the researchers simulated a drone flying through a simple course with five large, square-shaped obstacles arranged in a staggered configuration. They set up this same configuration in a physical training space, and programmed a drone to fly through the course at speeds and trajectories that they previously picked out from their simulations. They also ran the same course with a drone trained on a more conventional algorithm that does not incorporate experiments into its planning.

Overall, the drone trained on the new algorithm "won" every race, completing the course in a shorter time than the conventionally trained drone. In some scenarios, the winning drone finished the course 20 percent faster than its competitor, even though it took a trajectory with a slower start, for instance taking a bit more time to bank around a turn. This kind of subtle adjustment was not taken by the conventionally trained drone, likely because its trajectories, based solely on simulations, could not entirely account for aerodynamic effects that the team's experiments revealed in the real world.

The researchers plan to fly more experiments, at faster speeds, and through more complex environments, to further improve their algorithm. They also may incorporate flight data from human pilots who race drones remotely, and whose decisions and maneuvers might help zero in on even faster yet still feasible flight plans.

Read more at Science Daily

New findings on how ketamine prevents depression

The discovery that the anaesthetic ketamine can help people with severe depression has raised hopes of finding new treatment options for the disease. Researchers at Karolinska Institutet in Sweden have now identified novel mechanistic insights into how the drug exerts its antidepressant effect. The findings have been published in the journal Molecular Psychiatry.

According to the World Health Organization, depression is a leading cause of disability worldwide and the disease affects more than 360 million people every year.

The risk of developing depression is affected by both genetic and environmental factors. The most commonly prescribed antidepressants, such as SSRIs, affect nerve signalling via monoamines in the brain. However, it can take a long time for these drugs to help, and over 30 percent of sufferers experience no relief at all.

The need for new types of antidepressants with faster action and wider effect is therefore considerable. An important breakthrough is the anaesthetic ketamine, which has been registered for some years in the form of a nasal spray for the treatment of intractable depression.

Unlike classic antidepressants, ketamine affects the nerve signalling that occurs via the glutamate system, but it is unclear exactly how the antidepressant effect is mediated. When the medicine has an effect, it relieves depressive symptoms and suicidal thoughts very quickly.

However, ketamine can cause unwanted side effects such as hallucinations and delusions, and there may be a risk of abuse, so alternative medicines are needed. The researchers want to better understand how ketamine works in order to find substances that can have the same rapid effect but without the side effects.

In a new study, researchers at Karolinska Institutet have further investigated the molecular mechanisms underlying ketamine's antidepressant effects. Using experiments on both cells and mice, the researchers were able to show that ketamine reduced so-called presynaptic activity and the persistent release of the neurotransmitter glutamate.

"Elevated glutamate release has been linked to stress, depression and other mood disorders, so lowered glutamate levels may explain some of the effects of ketamine," says Per Svenningsson, professor at the Department of Clinical Neuroscience, Karolinska Institutet, and the study's last author.

When nerve signals are transmitted, the transmission from one neuron to the next occurs via synapses, a small gap where the two neurons meet.

The researchers were able to see that ketamine directly stimulated AMPA receptors, which sit postsynaptically, that is, on the part of the nerve cell that receives signals. This stimulation leads to increased release of the neurotransmitter adenosine, which in turn inhibits presynaptic glutamate release.

The effects of ketamine could be counteracted by the researchers inhibiting presynaptic adenosine A1 receptors.

"This suggests that the antidepressant action of ketamine can be regulated by a feedback mechanism. It is new knowledge that can explain some of the rapid effects of ketamine," says Per Svenningsson

In collaboration with Rockefeller University, the same research group has also recently reported on the disease mechanism in depression.

Read more at Science Daily

Aug 10, 2021

New findings on the evolution of galaxies

Emirati national Aisha Al Yazeedi, a research scientist at the NYU Abu Dhabi (NYUAD) Center for Astro, Particle, and Planetary Physics, has published her first research paper, featuring some key findings on the evolution of galaxies.

Galaxies eventually undergo a phase in which they lose most of their gas, which results in a change in their properties over the course of their evolution. Current models for galaxy evolution suggest this should eventually happen to all galaxies, including our own Milky Way; Al Yazeedi and her team are delving into this process.

Commenting on the findings, Al Yazeedi said: "The evolution of galaxies is directly linked to the activity of their central supermassive black hole (SMBH). However, the connection between the activity of SMBHs and the ejection of gas from the entire galaxy is poorly understood. Observational studies, including our research, are essential to clarify how the central SMBH can influence the evolution of its entire host galaxy and prove key theoretical concepts in the field of astrophysics."

Titled "The impact of low luminosity AGN on their host galaxies: A radio and optical investigation of the kpc-scale outflow in MaNGA 1-166919," the paper has been published in The Astronomical Journal. Its findings outline gas ejection mechanisms, outflow properties, and how they are related to the activity of the supermassive black hole (SMBH) at the center of the host galaxy.

To that end, the paper presents a detailed optical and radio study of the MaNGA 1-166919 galaxy, which appears to have an Active Galactic Nucleus (AGN). Radio morphology shows two lobes (jets) emanating from the center of the galaxy, a clear sign of AGN activity that could be driving the optical outflow. By measuring the outflow properties, the NYUAD researchers documented how the extent of the optical outflow matches the extent of radio emission.

Al Yazeedi is a member of NYUAD's Kawader program, a national capacity-building research fellowship that allows outstanding graduates to gain experience in cutting-edge academic research. The three-year, individually tailored, intensive program is designed for graduates considering a graduate degree or a career in research.

Her paper adds to the growing body of UAE space research and activities. The UAE has sent an Emirati into space, a spacecraft around Mars and recently announced plans to send a robotic rover to the Moon in 2022, ahead of the ultimate goal to build a city on Mars by 2117.

Read more at Science Daily

High BMI causes depression – and both physical and social factors play a role

A large-scale new study provides further evidence that being overweight causes depression and lowers wellbeing, and indicates that both social and physical factors may play a role in the effect.

With one in four adults estimated to be obese in the UK, and growing numbers of children affected, obesity is a global health challenge. While the dangers of obesity for physical health are well known, researchers are now discovering that being overweight can also have a significant impact on mental health.

The new study, published in Human Molecular Genetics, sought to investigate why a body of evidence now indicates that higher BMI causes depression. The team used a genetic analysis technique known as Mendelian randomisation to examine whether the causal link is the result of psychosocial pathways, such as societal influences and social stigma, or physical pathways, such as metabolic conditions linked to higher BMI. Such conditions include high blood pressure, type 2 diabetes and cardiovascular disease.

In research led by the University of Exeter and funded by the Academy of Medical Sciences, the team examined genetic data from more than 145,000 participants from the UK Biobank with detailed mental health data available. In a multifaceted study, the researchers analysed genetic variants linked to higher BMI, as well as outcomes from a clinically-relevant mental health questionnaire designed to assess levels of depression, anxiety and wellbeing.

To examine which pathways may be active in causing depression in people with higher BMI, the team also interrogated two sets of previously discovered genetic variants. One set of genes makes people fatter, yet metabolically healthier, meaning they were less likely to develop conditions linked to higher BMI, such as high blood pressure and type 2 diabetes. The second set of genes analysed makes people fatter and metabolically unhealthy, or more prone to such conditions. The team found little difference between the two sets of genetic variants, indicating that both physical and social factors play a role in higher rates of depression and poorer wellbeing.
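The comparison of the two variant sets follows standard Mendelian randomization logic, which can be sketched as a toy two-step calculation: a variant's effect on depression divided by its effect on BMI estimates the causal effect of BMI on depression, because genotypes are assigned at conception and are largely free of lifestyle confounding. All effect sizes below are fabricated for illustration; this is not the study's actual analysis.

```python
# Toy Mendelian randomization sketch with fabricated effect sizes.
# A variant's effect on the outcome divided by its effect on the
# exposure gives a per-instrument causal estimate (the Wald ratio).

def wald_ratio(beta_variant_on_outcome, beta_variant_on_exposure):
    """Causal effect estimate from a single genetic instrument."""
    return beta_variant_on_outcome / beta_variant_on_exposure

# Fabricated per-variant effects: (effect on BMI in SD units,
# effect on depression in log-odds) for the two instrument sets.
metabolically_favourable = [(0.10, 0.012), (0.08, 0.010)]
metabolically_unfavourable = [(0.12, 0.016), (0.09, 0.011)]

def mean_causal_estimate(variants):
    """Average the Wald ratios across a set of instruments."""
    ratios = [wald_ratio(outcome, exposure) for exposure, outcome in variants]
    return sum(ratios) / len(ratios)

print("favourable-adiposity estimate:  ",
      round(mean_causal_estimate(metabolically_favourable), 3))
print("unfavourable-adiposity estimate:",
      round(mean_causal_estimate(metabolically_unfavourable), 3))
```

In this framing, similar estimates from the "fat but metabolically healthy" and the "fat and metabolically unhealthy" instrument sets would point to psychosocial as well as metabolic pathways, which mirrors the study's conclusion.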

Lead author Jess O'Loughlin, at the University of Exeter Medical School, said: "Obesity and depression are both major global health challenges, and our study provides the most robust evidence to date that higher BMI causes depression. Understanding whether physical or social factors are responsible for this relationship can help inform effective strategies to improve mental health and wellbeing. Our research suggests that being fatter leads to a higher risk of depression, regardless of the role of metabolic health. This suggests that physical health and social factors, such as social stigma, both play a role in the relationship between obesity and depression."

Read more at Science Daily

Researchers develop real-time lyric generation technology to inspire song writing

Music artists can find inspiration and new creative directions for their song writing with technology developed by Waterloo researchers.

LyricJam, a real-time system that uses artificial intelligence (AI) to generate lyric lines for live instrumental music, was created by members of the University's Natural Language Processing Lab.

The lab, led by Olga Vechtomova, a Waterloo Engineering professor cross-appointed in Computer Science, has been researching creative applications of AI for several years.

The lab's initial work led to the creation of a system that learns musical expressions of artists and generates lyrics in their style.

Recently, Vechtomova, along with Waterloo graduate students Gaurav Sahu and Dhruv Kumar, developed technology that relies on various aspects of music such as chord progressions, tempo and instrumentation to synthesize lyrics reflecting the mood and emotions expressed by live music.

As a musician or a band plays instrumental music, the system continuously receives the raw audio clips, which the neural network processes to generate new lyric lines. The artists can then use the lines to compose their own song lyrics.

"The purpose of the system is not to write a song for the artist," Vechtomova explains. "Instead, we want to help artists realize their own creativity. The system generates poetic lines with new metaphors and expressions, potentially leading the artists in creative directions that they haven't explored before."

The neural network designed by the researchers learns what lyrical themes, words and stylistic devices are associated with different aspects of music captured in each audio clip.

For example, the researchers observed that lyrics generated for ambient music are very different than those for upbeat music.

The research team conducted a user study, inviting musicians to play live instruments while using the system.

"One unexpected finding was that participants felt encouraged by the generated lines to improvise," Vechtomova said. "For example, the lines inspired artists to structure chords a bit differently and take their improvisation in a new direction than originally intended. Some musicians also used the lines to check if their improvisation had the desired emotional effect."

Another finding from the study highlighted the co-creative aspect of the experience. Participants commented that they viewed the system as an uncritical jamming partner and felt encouraged to play their musical instruments even if they were not actively trying to write lyrics.

Since LyricJam went live in June this year, over 1,500 users worldwide have tried it out.

Read more at Science Daily

Beige fat 'indispensable' in protecting the brain from dementia

Beige is considered a calming paint color, and scientists have new evidence that beige fat has a similar impact on the brain, bringing down the inflammation associated with the more common white fat and providing protection from dementia.

They have found that beige fat cells, which are typically intermingled with white fat cells in the subcutaneous fat present on "pear-shaped" people, mediate subcutaneous fat's brain protection, Dr. Alexis M. Stranahan and her colleagues report in the journal Nature Communications.

Pear-shaped people, whose weight is generally distributed more evenly, are considered less at risk for cardiometabolic problems like heart disease and diabetes, as well as cognitive decline, than "apple-shaped" individuals, whose fat clusters around their middle and often around internal organs like the liver in the abdominal cavity, says Stranahan, a neuroscientist at the Medical College of Georgia at Augusta University.

Now the scientists have shown that beige fat cells, or adipocytes, are "indispensable" to the neuroprotective and anti-inflammatory effects of subcutaneous fat, says Stranahan, the study's corresponding author.

In fact, without beige adipocytes, in the face of a high-fat diet, they saw subcutaneous fat start acting more like dangerous visceral fat, says Stranahan, who reported last year in The Journal of Clinical Investigation that visceral adiposity sends a message to resident immune cells in the brain to fire up the inflammation, which ultimately damages cognition. "It's a very different signature," she says.

Visceral fat around the organs is mostly white fat cells, which store energy as triglycerides, which are yet another fat type found in the blood, and a risk factor for heart disease and stroke at high levels. Particularly in younger people, subcutaneous fat is a mixture of white and beige fat cells, and these beige cells are more like brown fat cells, which are packed with powerhouses called mitochondria and are efficient at using fat and sugars to produce heat in a process called thermogenesis. Exercise and cold exposure are said to enable the so-called "beiging" of white fat cells.

For some of their studies, the scientists used male mice with a specific gene knocked out that prevents adipocytes in the subcutaneous fat from beiging or browning, effectively resulting in subcutaneous fat that is more like visceral fat.

On a high-fat diet, these mice have already been shown to develop diabetes more rapidly than those with normal amounts of beige fat. It's also known that transplanting subcutaneous fat into an obese mouse will improve its metabolic profile in a few weeks, and Stranahan wanted to know about the potential impact on cognitive problems.

While both the normal and knockout mice gained about the same amount of weight over four weeks, mice without functional beige fat displayed accelerated cognitive dysfunction on testing, and their brains and bodies indicated a strong, rapid inflammatory response to the high-fat diet that included activation of microglial cells, those resident immune cells in the brain, which can further heighten inflammation and contribute to dementia and other brain problems.

Before they ever developed diabetes, the microglia of the mice, whose ages were comparable to a 20-something-year-old, had already turned on numerous inflammatory markers. Interestingly, normal mice studied as controls also turned on these markers, but turned on anti-inflammatory markers as well, apparently to minimize the response.

Normally it takes mice about three months on a high-fat diet to show the kind of responses they saw in the beige-fat knockouts in a single month.

To further explore the impact of beige fat, they also transplanted subcutaneous fat from young, lean healthy mice into the visceral compartment of otherwise normal but now-obese mice who had developed dementia-like behavior after remaining on a high-fat diet for 10 to 12 weeks.

Transplanting the subcutaneous fat resulted in improved memory, restoring essentially normal synaptic plasticity -- the ability of the connections between neurons to adapt so they can communicate -- in the hippocampus, the center of learning and memory deep in the brain. These positive changes were dependent on the beige adipocytes in the donor subcutaneous fat, Stranahan and her colleagues write.

Transplants from the beige-fat knockouts, on the other hand, did not improve cognition in the obese mice, even by objective measures such as increased electrical activity between neurons.

"If we can figure out what it is about beige fat that limits inflammation and maybe what it is about beige fat that improves brain plasticity, then maybe we can mimic that somehow with a drug or with cold-stimulated beiging or even taking out some of your subcutaneous fat when you are young, freezing it and giving it back to you when you are older," Stranahan says.

All fat tends to be packed with immune cells, which can both promote and calm inflammation. They found beige fat interacts continuously with those immune cells, inducing the anti-inflammatory cytokine IL-4 in the subcutaneous fat. IL-4 in turn is required for cold to stimulate the "beiging" of fat, she notes.

Also in turn, the fat induced IL-4 in microglia and T cells, key drivers of the immune response, in the meninges, a sort of multilayer cap that fits over the brain to help protect it. Calming IL-4 was also induced in T cells in the choroid plexus, where cerebrospinal fluid is produced.

Their findings suggest IL-4 is directly involved in communication between beige adipocytes and neurons in the hippocampus, the scientists write.

"It's kind of like "Whisper Down the Lane" if you ever played that at camp," Stranahan says of what appears to be a calming chain of communication.

When Stranahan and her team looked further they found it was the recipient's own T cells in the meninges that were called to positive, protective action by the transplanted beige fat cells, not immune cells from the transplanted fat itself.

There is evidence that in chronic obesity, your own immune cells can reach the brain, and there was no evidence in this case that it was the donor's immune cells making the journey.

"It's exciting because we have a way for peripheral immune cells to interact with the brain in a way that promotes cognition," Stranahan says, noting that there also are many bad things immune cells could do in the brain like contribute to stroke and Alzheimer's.

Her many next goals include learning more about how much it matters where you put the transplanted fat, like whether transferring subcutaneous fat to a subcutaneous area might work even better to protect against cognitive decline; whether transplanting visceral fat to a subcutaneous area decreases its damaging effect; and better understanding how subcutaneous fat sends what appears to be an active anti-inflammatory message. She also wants to explore these issues in female mice since the current studies were limited to males.

But what they and others already are finding underscores the importance of inherent fat distribution, which could be a biomarker for those most at risk for cognitive decline, she says.

The stage of obesity may be another factor: she also has early evidence suggesting that as a high-fat diet is maintained longer and subcutaneous fat increases, its protective powers decrease and visceral fat increases.

Even in a healthy, non-obese young person visceral fat is going to produce higher levels of basal inflammation, Stranahan notes.

Stranahan emphasizes that she does not want her findings to cause excessive concern in overweight individuals or generate more prejudice against them; rather, the work is about better identifying risk factors and different points and methods of intervention to fit the needs of individuals.

Stranahan and her colleagues reported in 2015 in the journal Brain, Behavior, and Immunity that a high-fat diet prompts microglia to become uncharacteristically sedentary and to start eating the connections between neurons.

Read more at Science Daily

Aug 9, 2021

Small stars share similar dynamics to our sun, key to planet habitability

Stars scattered throughout the cosmos look different, but they may be more alike than once thought, according to Rice University researchers.

New modeling work by Rice scientists shows that "cool" stars like the sun share the dynamic surface behaviors that influence their energetic and magnetic environments. This stellar magnetic activity is key to whether a given star hosts planets that could support life.

The work by Rice postdoctoral researcher Alison Farrish and astrophysicists David Alexander and Christopher Johns-Krull appears in The Astrophysical Journal. The research links the rotation of cool stars with the behavior of their surface magnetic flux, which in turn drives the star's coronal X-ray luminosity, in a way that could help predict how magnetic activity affects any exoplanets in their systems.

The study follows another led by Farrish and Alexander that showed a star's space "weather" may make planets in its "Goldilocks zone" uninhabitable.

"All stars spin down over their lifetimes as they shed angular momentum, and they get less active as a result," Farrish said. "We think the sun in the past was more active and that might have affected the early atmospheric chemistry of Earth. So thinking about how the higher energy emissions from stars change over long timescales is pretty important to exoplanet studies."

"More broadly, we're taking models that were developed for the sun and seeing how well they adapt to stars," said Johns-Krull.

The researchers set out to model what far-flung stars are like based on the limited data available. The spin and flux of some stars have been determined, along with their classification -- types F, G, K and M -- which gave information about their sizes and temperatures.

They compared the properties of the sun, a G-type star, with what they knew of other cool stars using the Rossby number, a measure of stellar activity that combines a star's speed of rotation with the subsurface fluid flows that influence the distribution of magnetic flux on its surface. Their models suggest that each star's "space weather" works in much the same way, influencing conditions on their respective planets.
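The basic quantities involved can be sketched in a few lines. This is an illustrative toy calculation, not the Rice team's actual model: the Rossby number is the ratio of rotation period to convective turnover time, and in the unsaturated regime X-ray activity is often modeled as a power law in that number. The saturation threshold, power-law slope, and turnover time below are assumed typical values, not figures from this study.

```python
# Illustrative sketch of Rossby number and the saturated/unsaturated
# activity regimes. All constants are assumptions for illustration.

def rossby_number(rotation_period_days, turnover_time_days):
    """Ro = P_rot / tau_c (dimensionless)."""
    return rotation_period_days / turnover_time_days

def xray_activity(ro, ro_sat=0.13, rx_sat=10**-3.1, beta=2.7):
    """Fractional X-ray luminosity L_X / L_bol.

    Below the saturation Rossby number, activity plateaus; above it,
    activity falls off as a power law. Parameter values are typical
    literature-style estimates, not taken from this paper.
    """
    if ro <= ro_sat:
        return rx_sat                       # saturated regime: plateau
    return rx_sat * (ro / ro_sat) ** -beta  # unsaturated power law

# The sun: ~25-day rotation and an assumed ~12-day turnover time,
# placing it well into the unsaturated regime.
ro_sun = rossby_number(25.0, 12.0)
print(f"Solar Rossby number ~ {ro_sun:.2f}")
print(f"L_X/L_bol ~ {xray_activity(ro_sun):.2e}")
```

In this toy picture, faster rotators (smaller Rossby numbers) are more magnetically active, down to the saturation threshold Farrish describes later, below which extra rotation buys no extra X-ray emission.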

"The study suggests that stars -- at least cool stars -- are not too dissimilar from each other," Alexander said. "From our perspective, Alison's model can be applied without fear or favor when we look at exoplanets around M or F or K stars, as well, of course, as other G stars.

"It also suggests something much more interesting for established stellar physics, that the process by which a magnetic field is generated may be quite similar in all cool stars. That's a bit of a surprise," he said. This could include stars that, unlike the sun, are convective down to their cores.

"All stars like the sun fuse hydrogen and helium in their cores and that energy is first carried in the radiation of photons toward the surface," Johns-Krull said. "But it hits a zone about 60% to 70% of the way that's just too opaque, so it starts to undergo convection. Hot matter moves from below, the energy radiates away, and the cooler matter falls back down.

"But stars with less than a third of the mass of the sun don't have a radiative zone; they're convective everywhere," he said. "A lot of ideas about how stars generate a magnetic field rely on there being a boundary between the radiative and the convection zones, so you would expect stars that don't have that boundary to behave differently. This paper shows that in many ways, they behave just like the sun, once you adjust for their own peculiarities."

Farrish, who recently earned her doctorate at Rice and begins a postdoctoral research assignment at NASA's Goddard Space Flight Center soon, noted the model applies only to unsaturated stars.

"The most magnetically active stars are the ones we call 'saturated,'" Farrish said. "At a certain point, an increase in magnetic activity stops showing the associated increase in high energy X-ray emission. The reason that dumping more magnetism onto the star's surface doesn't give you more emission is still a mystery.

"Conversely, the sun is in the unsaturated regime, where we do see a correlation between magnetic activity and energetic emission," she said. "That happens at a more moderate activity level, and those stars are of interest because they might provide more hospitable environments for planets."

"The bottom line is the observations, which span four spectral types including both fully and partially convective stars, can be reasonably well represented by a model generated from the sun," Alexander said. "It also reinforces the idea that even though a star that is 30 times more active than the sun may not be a G-class star, it's still captured by the analysis that Alison has done."

"We do have to be clear that we're not simulating any specific star or system," he said. "We are saying that statistically, the magnetic behavior of a typical M star with a typical Rossby number behaves in a similar fashion to that of the sun which allows us to assess its potential impact on its planets."

A critical wild card is a star's activity cycle, which can't be incorporated into the models without years of observation. (The sun's cycle is 11 years, evidenced by sunspot activity when its magnetic field lines are most distorted.)

Johns-Krull said the model will still be useful in many ways. "One of my areas of interest is studying very young stars, many of which are, like low-mass stars, fully convective," he said. "Many of these have disc material around them and are still forming planets. How they interact is mediated, we think, by the stellar magnetic field.

"So, Alison's modeling work can be used to learn about the large-scale structure of very magnetically active stars, and that can then allow us to test some ideas about how these young stars and their disks interact."

Read more at Science Daily

Birds’ eye size reflects habitat and diet, may predict sensitivity to environmental change

A new study shows the eye size of birds can reveal broad patterns of their biology and behavior, including where they live, what they eat and how they hunt, providing a potential roadmap for future conservation efforts.

Birds have some of the largest eyes relative to their bodies of all vertebrate land animals, second only to frogs. With a limited range of taste and smell, birds primarily rely on vision to navigate, find food and avoid predators. Yet surprisingly little is known about how eye size in birds influences their behavior compared with other traits, such as beak shape and body size, which scientists have meticulously studied since Charles Darwin's classic work on finches.

"I was really shocked to find out while doing literature searches that there was no definitive publication on how eye size in birds relates to their environment," said Ian Ausprey, a recent doctoral graduate of the Florida Museum of Natural History's Ordway Lab of Ecosystem Conservation.

Previous studies on bird eyes have been limited in scope, typically including only a few dozen species or birds in specific regions. This gap in scientific knowledge was all the more glaring given that a graduate student measured the eyes of more than 4,000 species of birds in museum collections in the late 1970s, creating the largest dataset of its kind.

Ausprey relied on this resource to analyze eye size for 2,777 species -- about one-third of the world's bird diversity -- revealing that this single trait more powerfully predicts where birds live and how they behave than better-studied characteristics such as size, anatomy and movement.

Large eyes increase sensitivity to deforestation

Ausprey had the idea for the study while conducting fieldwork with colleagues in the Andean forests of Peru. Over the course of five years, the researchers measured the eyes of Peruvian birds and attached small light sensors to more than a dozen species of tanagers, finches, wrens and woodpeckers to determine how these birds were coping with increased amounts of forest fragmentation due to agriculture.

Their results were troubling: Birds with large eyes avoided agricultural fields, keeping to diminishing forest habitats. But the researchers could also use eye size to predict where these birds mated and laid eggs and what they were eating, valuable information for future conservation efforts.

Ausprey wanted to know whether this pattern held true for all birds, not just those in Peru. But with over 10,000 species spread out across all seven continents, answering a question as broad as how eye size influences bird behavior would have taken years.

Fortunately, the data Ausprey needed had already been collected in the form of a dissertation, a nearly 2,000-page tome completed by Stanley Ritland during his time as a doctoral student at the University of Chicago.

"He spent his time traveling around museums, extracting eyes out of specimens preserved in alcohol and then measuring them," Ausprey said. "He did it for several thousand species of birds, as well as mammals and reptiles."

Ritland left academia upon graduating, however, and never published his data in a scientific journal. Researchers have used small portions of the massive dataset, initially relegated to the stacks of the University of Chicago library, to answer small-scale questions, but comprehensive analyses have so far been lacking.

Although the data was available, the time-consuming task of digitizing it still remained. Ausprey hired two undergraduate students, Savannah Montgomery and Kristie Perez, who spent five months transcribing Ritland's measurements into spreadsheets so they could be analyzed and shared more broadly with the scientific community.

Because eye size tends to increase with body size, Ausprey standardized all the measurements for each species by mass and intentionally omitted birds that operate at optical extremes, such as far-sighted raptors and nocturnal owls. Scientists already know these species have unusually large eyes.

Instead, he focused on land-dwelling birds that hunt for food close to the ground and are most active during daylight hours.
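The mass-standardization step can be illustrated with a common allometric approach, though the paper's exact method may differ: fit eye size against body mass on log-log axes and treat each species' residual as its relative eye size. All species values below are fabricated for illustration.

```python
# Sketch of mass-standardizing a trait (an assumed approach, not
# necessarily Ausprey's exact method): regress log eye size on log
# body mass, then use residuals as relative eye size. Data fabricated.
import math

species = {  # name: (body_mass_g, eye_diameter_mm) -- made-up values
    "wren":    (10.0, 5.0),
    "thrush":  (70.0, 9.5),
    "tanager": (30.0, 8.0),
    "finch":   (20.0, 5.5),
}

xs = [math.log(m) for m, _ in species.values()]
ys = [math.log(e) for _, e in species.values()]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n

# Ordinary least squares on the log-log data
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

# Positive residual = larger eyes than expected for that body mass
relative_eye_size = {
    name: math.log(e) - (intercept + slope * math.log(m))
    for name, (m, e) in species.items()
}
for name, r in sorted(relative_eye_size.items(), key=lambda kv: -kv[1]):
    print(f"{name:8s} residual = {r:+.3f}")
```

With the fabricated numbers above, the tanager comes out with a positive residual (larger-eyed than its mass predicts) and the finch with a negative one, which is the kind of contrast the study's diet analysis turns on.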

Light and shadow define bird vision

Stark patterns began to take shape as eye size was compared with a host of behavioral traits.

Birds with larger eyes live closer to the equator, where the planet's belt of rainforests creates dark understory habitats. Regardless of latitude, birds that hunt or forage closest to the forest floor have large eyes to take in as much light as possible, while those that spend more time in the sky have correspondingly smaller eyes to reduce glare.

"Bright lights can cause something called disability glare," Ausprey said. "When you shine a light on birds, they change the way they forage. They also respond differently to vocalizations of experimental predators."

Scientists worry that such behavioral changes may negatively affect avian understory specialists, many of which have already been displaced because of deforestation.

"Understory tropical birds may be especially sensitive to fragmentation because they are adapted to dark forested environments and are unable to cope with rapid changes in brightness associated with forest edges and human-modified habitats," Ausprey said.

Eye size is also strongly correlated with diet. Larger eyes not only absorb more light, but they can also confer increased focal length and resolution, the equivalent of upgrading your camera with a longer lens.

Birds that eat insects have larger eyes, which are better suited for spotting prey at long distances, regardless of whether they lived in the forest understory or open habitats. Birds with the smallest eyes relative to body size were often nectar feeders, hinting that they may rely on color more than shape when looking for food.

Ausprey also analyzed how eyes have changed throughout the birds' evolution, finding that once eyes became larger in a particular group, they stayed that way. This meant that closely related groups, such as the hummingbird and swift families, could have eyes of vastly different sizes.

Within a family, however, size didn't change much among species. Flycatchers, for example, spend a lot of time sallying out and catching prey, which requires long-distance, binocular vision, Ausprey said.

"And it turns out, flycatchers tend to have larger eyes, as you'd expect. All the finches and tanagers and such that eat fruits and seeds tend to have very small eyes."

Collections provide tools for understanding the natural world

To Ausprey, the data collected by Ritland decades ago offer an unparalleled glimpse into bird diversity and behavior, which may help conserve species for the future.

"Nearly half a century of time has passed, and yet the same datasets are relevant," Ausprey said.

Ritland relied entirely on alcohol-preserved museum collections, meaning the same specimens he measured are still accessible to scientists stitching together patterns in the natural world.

Some of the birds he encountered during his museum visits were already of considerable antiquity by the time he began taking his measurements, including two birds collected during Captain Cook's first voyage around the world.

"Museum collections are invaluable, indispensable and essentially irreplaceable," Ritland said in an email.

Read more at Science Daily